Guilty Until Proven Innocent, By AI 


As artificial intelligence becomes less akin to a virtual assistant named Siri and more like the judge in a criminal case or the picky landlord, the U.S. government and law enforcement must step in to regulate the development and use of AI. Many issues in this new frontier of technology need regulation, such as those surrounding data privacy and intellectual property rights, but none are more pressing and timely than the human rights violations that can come from bias in AI. These machines can't go to jail or be held accountable, so the companies that create and use AI must be liable and involved in regulating it.

An AI system makes decisions based on data that is fed to it by human beings, and a programmer can unknowingly feed their unconscious biases into the AI. Companies and other entities then use this AI to make decisions on everything from job interviews to prison sentences, and this is where the problem lies. AI has no recognition of its own biases; it only knows how to solve problems based on the data it is given. There are laws and regulations in place to stop people and businesses from discriminating on the basis of bias, but when it comes to regulating AI in this regard, the process is unclear.

The unregulated use of AI, and the biases implemented into it, harms everyone who has to interact with these programs. From the person facing the AI's judgment to the company that developed the program in the first place, nobody wins when AI is playing fast and loose. For example, a recent case alleged that unlawful algorithm-based screening processes were discriminating against tenants who were people of color. Two tenants filed the lawsuit, claiming that an AI screening software caused them to be denied rental housing based on a "SafeRent Score." This score relies on certain factors that disproportionately impact Black and Hispanic applicants. The Justice Department has since stated that it has "filed a Statement of Interest to explain the Fair Housing Act's (FHA) application to algorithm-based tenant screening systems" ("Justice Department").

As AI develops at breakneck speed, cases like the one above will pop up more frequently and result in an increasing need for legal action. The U.S. government must step in to regulate this issue and recognize that AI has the capability to violate the civil rights that the U.S. upholds. The Wall Street Journal covered the beginning of a U.S. investigation into AI bias with an article stating that "they (U.S. law-enforcement officials) are resolved to combat discrimination and bias arising from the use of artificial intelligence in areas such as lending, housing, and hiring" (Tracy). For the U.S. government, the race against AI is on, but this battle is reminiscent of one it has already lost: the long, tumultuous tale of the U.S. versus the internet. In a land where data privacy breaches and more-than-disappointing Senate battles against tech CEOs already run rampant, one has to wonder if AI will only exacerbate these issues and become another misunderstood, unregulated tumbleweed in the Wild West of U.S. legislation.

As for other stakeholders, the everyday people and organizations just beginning to realize that AI will become part of their daily lives need to educate themselves on AI and the bias it can carry. Consumers and applicants who encounter AI when being screened for a job or a house need to know their rights against discrimination. Working up the responsibility ladder, the duty to push for unbiased AI falls on the companies and organizations that embed it into their processes. These entities need to recognize that AI can be biased and use that awareness to self-regulate and determine when the use of AI is acceptable.

The companies that develop AI have an even bigger responsibility: they must work toward creating unbiased AI and ensuring that AI becomes something that can be used for good, rather than the evil, discriminating robot monster it has the potential to be. A Wall Street Journal article stated that "Microsoft and other companies say they are testing the tools for bias before releasing them and constantly making updates to ensure they are used properly" (Tracy). The key to creating unbiased AI lies with companies like Microsoft, which need to implement tools that detect bias at the same pace that they develop AI. Bias detection, along with regulation of AI by the U.S. government, is a step that must happen to protect the people who interact with AI, which will soon be nearly everyone.

According to Forbes, "…almost 100% of organizations will be using AI by 2025" (Marr). AI has the potential to become an incredibly useful tool that generates positive outcomes for organizations and other stakeholders alike. Regulation is an important step in this process, one that will strengthen the integrity of AI and expand what AI can accomplish.