Technology, including AI, can be used as an instrument of discrimination against minorities. With recent Black Lives Matter protests taking place across the US, discussions and debates are underway over how racism is institutionalized, especially in policing and law enforcement. That conversation has now expanded to address the inherent biases in some technologies.
The world is becoming ever more reliant on technology. In fact, the past few months have seen that reliance increase, with technology being used for communications, research and public health awareness during the COVID-19 pandemic. Big data and AI are expected to play a greater role in many industries, including public health and law enforcement. When such big changes happen, it is important to make sure that discrimination of any form doesn't seep into the design of those new systems. Such foundational flaws can happen with technology, and Facebook's recommendation algorithm is a good example of how technology can be divisive by design.
Bias against minorities can also be seen in technologies driven by big data, especially in policing. As MIT Tech Review points out, there are already tech tools in use that have a racist bias, with activists like Hamid Khan fighting to stop their use. Bias in a technological system or infrastructure is nothing new. An early example in the US is Robert Moses' deliberately low bridges on the Long Island parkways leading to Jones Beach. In the acclaimed biography The Power Broker by Robert Caro, and as pointed out by NBC News, it is suggested that Moses built them low to keep buses out, stopping African Americans, Latinos and low-income New Yorkers from using public transport to reach the beach.
Today, researchers and activists are more worried about racist AI than racist roads. One example is the Los Angeles Police Department's predictive-policing program, which aims to predict where property crimes will occur. Because the program uses skewed data collected by police over the years, it is likely to be biased against minorities, with critics arguing that the algorithm has disproportionately targeted minority neighborhoods. However, the LAPD recently announced that it is ending the program, citing a scarcity of funding caused by the COVID-19 crisis.
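To illustrate the concern, here is a minimal sketch of the kind of feedback loop critics describe, using made-up numbers rather than anything from the LAPD's actual system. Two neighborhoods have the same true crime rate, but one starts with more recorded incidents simply because it was patrolled more heavily; a naive hot-spot predictor then keeps sending patrols where past records point:

```python
# Hypothetical simulation of a predictive-policing feedback loop.
# All numbers are synthetic; this is not the LAPD's actual model.
import random

random.seed(0)

# Assume two neighborhoods with the SAME true underlying crime rate.
TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}

# Historical records are skewed: neighborhood A was patrolled twice as
# heavily, so twice as many of its incidents were ever observed and logged.
recorded = {"A": 20, "B": 10}

def predict_patrols(records, total_patrols=100):
    """Allocate patrols in proportion to past recorded incidents --
    the core logic of a naive hot-spot predictor."""
    total = sum(records.values())
    return {hood: total_patrols * count / total for hood, count in records.items()}

# Simulate several years: patrols follow predictions, and new records
# only capture incidents that a patrol happened to be present for.
for year in range(1, 6):
    patrols = predict_patrols(recorded)
    for hood, n_patrols in patrols.items():
        # Observed incidents scale with patrol presence, not true crime.
        observed = sum(random.random() < TRUE_CRIME_RATE[hood]
                       for _ in range(int(n_patrols)))
        recorded[hood] += observed
    share_a = 100 * patrols["A"] / sum(patrols.values())
    print(f"Year {year}: {share_a:.0f}% of patrols sent to neighborhood A")
```

Because new records are only generated where patrols are sent, the initial 2:1 skew never corrects itself, even though the underlying crime rates are identical. Skewed inputs quietly become self-confirming outputs.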
Facial recognition is another family of technologies with a controversial track record on racial and gender bias, due to systems consistently misidentifying non-white faces. Law enforcement agencies commonly use AI-powered facial recognition technology developed by companies like Clearview AI. The development and use of mass-surveillance tools is questionable in itself; a racially biased mass-surveillance technology is simply unacceptable. As highlighted in a 2019 Washington Post article, a federal study confirmed that most facial recognition technologies used in the US are seriously inaccurate at identifying the faces of non-white people. A comparison with algorithms developed by Asian companies, which were far better at identifying Asian faces, suggested that biased training data is the issue.
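To see why a single aggregate accuracy number can hide this kind of problem, here is a minimal sketch with purely synthetic figures (the error rates below are hypothetical, not the federal study's findings). It computes the error rate for each demographic group separately, the sort of disaggregated evaluation that exposed the gap:

```python
# Hypothetical per-group error analysis for a face-matching system.
# The counts and rates are invented for illustration only.
from collections import Counter

# Synthetic match attempts: (demographic_group, correctly_identified)
results = (
    [("white", True)] * 950 + [("white", False)] * 50 +        # 5% errors
    [("non-white", True)] * 800 + [("non-white", False)] * 200  # 20% errors
)

attempts = Counter(group for group, _ in results)
errors = Counter(group for group, ok in results if not ok)

overall_error = sum(errors.values()) / len(results)
print(f"Overall error rate: {overall_error:.1%}")  # 12.5% -- looks tolerable

for group in attempts:
    rate = errors[group] / attempts[group]
    print(f"{group} error rate: {rate:.1%}")  # 5.0% vs 20.0%
```

An overall error rate of 12.5% sounds workable on paper, yet it conceals a fourfold gap between groups, which is why auditing systems group by group matters.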
Following the recent protests, tech companies are taking the matter more seriously, and as reported by CNBC, IBM recently announced it is getting out of the facial recognition business. The company's CEO, Arvind Krishna, said in a letter that IBM also supports many of the reforms proposed in the Democrats' police reform bill. In addition, Amazon recently announced it is placing a one-year hold on police use of its facial recognition technology, Rekognition. These are encouraging signs, and hopefully other major companies invested in facial recognition will follow suit. However, much needs to change in technology and AI research and design in general in order to deal with what is now known as 'techno-racism'.
from ScreenRant - Feed https://ift.tt/2ApRVkF