
We are making AIs sexist and racist


Princeton University researchers have found that artificial intelligence can pick up racist and sexist biases just by analyzing human language. Their findings were published in the journal Science last Friday.

The scientists used the well-known Global Vectors for Word Representation (GloVe) algorithm to see whether it would make the same stereotypical associations humans do. As it turns out, the machine learning system reproduced every single one of them after training on more than 800 billion words.

AI is getting better and better at replicating human behavior, even if that means it ends up with some of our worst traits. Google has been experimenting with the technology, most recently teaching it to turn rough doodles into quick, decent drawings.

The brand doesn’t fall far from the tree

Researchers did not explicitly set out to teach the next generation of artificially intelligent machines to discriminate against women or African-Americans; the systems picked it up on their own after ingesting an enormous amount of text straight from the internet.

Aylin Caliskan and her colleagues fed GloVe roughly 840 billion words, including articles from supposedly neutral sites like Wikipedia. The artificial intelligence was then put to the test using a new version of the Implicit Association Test (IAT).

These tests were named Word-Embedding Association Tests (WEAT), and they work much like regular IATs do. In simple terms, the software was presented with sets of words that it had to pair with other words based only on what it had learned from its training data.

Using this tool, the scientists watched GloVe quickly start showing human-like associations, from relating flowers to pleasantness to linking insects with unpleasantness.
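Conceptually, a WEAT boils down to comparing cosine similarities between word vectors: a word "leans" toward whichever attribute set its vector sits closer to. The sketch below is a minimal illustration of that idea, not the authors' published code; the word lists are shortened and the vectors are random placeholders standing in for real GloVe embeddings, so the printed effect size is meaningless until genuine vectors are substituted.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vec):
    """s(w, A, B): how much closer word w sits to attribute set A than to set B."""
    return (np.mean([cosine(vec[w], vec[a]) for a in A])
            - np.mean([cosine(vec[w], vec[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vec):
    """Effect size of the differential association between target sets X and Y
    and attribute sets A and B (a Cohen's-d-style score over per-word associations)."""
    x_assoc = [association(x, A, B, vec) for x in X]
    y_assoc = [association(y, A, B, vec) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

# Hypothetical setup: 'vec' would normally map each word to its GloVe vector
# (e.g. 300-dimensional embeddings loaded from disk). Random placeholders are
# used here only so the sketch runs end to end.
rng = np.random.default_rng(0)
words = ["rose", "daisy", "ant", "wasp", "love", "peace", "hate", "ugly"]
vec = {w: rng.normal(size=300) for w in words}

flowers, insects = ["rose", "daisy"], ["ant", "wasp"]
pleasant, unpleasant = ["love", "peace"], ["hate", "ugly"]
print(weat_effect_size(flowers, insects, pleasant, unpleasant, vec))
```

With real embeddings, a strongly positive effect size would mean the first target set (here, flowers) is more closely associated with the pleasant attribute words than the second target set (insects) is.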

However, as you might expect, the AI also made some heavily biased associations: it linked African-American names with weapons, and female names with household chores rather than professional careers.

AI has yet to learn how to discern context

Bias in machine learning poses a problem for the future of the technology, as it may come to be used more widely in systems that judge or label people. One such system already existed, and it was shut down for precisely these reasons.

Law enforcement agencies in the U.S. used intelligent software designed to identify potential criminals, but it ended up disproportionately targeting African-American men because of the database it relied on.

The debate now centers on whether machines learn to be racist and sexist from language, or whether language itself is loaded with biased stereotypes. The study's authors are still unsure of the answer, but they suggest introducing a human element to help the AI with context.

“If the system is going to take action, the action should be checked to ensure it isn’t prejudiced by unacceptable biases. The best thing to do is to keep trying to make culture better, and to keep updating AI to track culture,” said co-author Joanna Bryson.

Source: Science
