When we talk about the rise of AI, Tay must be part of the discussion. Tay was an infamous Twitter chatbot launched by Microsoft in 2016, designed to learn by reading tweets and interacting with Twitter users. Tay's biography read, "The more you talk, the smarter Tay gets!" It took only hours before users tricked Tay into tweeting objectionable, sexist, and racist posts, and Microsoft shut the bot down within 24 hours of its launch.

Tay could be dismissed as a one-off programming mistake, but there are many other examples of biased AI, some with far more serious consequences than a chatbot's conversations on a social media platform. AI systems can carry out tasks we could hardly have imagined two decades ago, and their reach keeps growing. Some already make important decisions that affect people's lives in justice, education, policymaking, and healthcare, to name a few, which makes it essential to understand how discrimination enters these systems. In a rapidly changing digital environment, we must ensure that technology does not undo decades of struggle for human rights, dignity, and equality.

AI is Full of Discrimination

AI researcher and YouTuber Yannic Kilcher trained an AI model on 3.3 million threads from 4chan's infamously toxic Politically Incorrect board, known as /pol/. He then unleashed the bot back onto 4chan, with predictable results: the AI was just as vile as the posts it was trained on, spewing racial slurs and engaging with antisemitic threads. The bot, GPT-4chan, which Kilcher dubbed "the most horrible model on the internet" (the name is a nod to GPT-3, a language model developed by OpenAI that uses deep learning to produce human-like text), was shockingly effective at replicating the tone and feel of 4chan posts. "The model was good, in a terrible sense," Kilcher said in a video about the project. "It perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol/."
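
Technically, an experiment like GPT-4chan is straightforward: a pre-trained language model is fine-tuned on the scraped text until it imitates it. The sketch below shows what such a pipeline might look like with the Hugging Face transformers library; the base checkpoint ("gpt2") and the input file name are illustrative assumptions, not Kilcher's actual setup.

```python
# A minimal sketch of fine-tuning a causal language model on a scraped
# text corpus, in the spirit of the experiment described above.
# Assumptions: "gpt2" as the base checkpoint and "threads.txt" (one post
# per line) as the corpus -- neither is Kilcher's actual configuration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load the raw corpus and tokenize it.
raw = load_dataset("text", data_files={"train": "threads.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lm-finetune",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=train_set,
    # mlm=False makes the collator set up plain next-token prediction.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Nothing in this pipeline filters or judges the training text, which is the whole point: the model faithfully reproduces whatever corpus it is fed, slurs included.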

Kilcher argued that 4chan's environment is already so toxic that messages from his bot would have no impact. "I invite you to go and spend some time on /pol/ and ask yourself if a bot that just outputs the same style really changes the experience," he said, adding that "nobody on 4chan was even a bit hurt by this." Kilcher also doubted that GPT-4chan could be deployed at scale for targeted hate campaigns: "It's very hard to make GPT-4chan say something targeted. Usually, it will misbehave in odd ways and is very unsuitable for running targeted anything. Again, vague hypothetical accusations are thrown around, without any actual instances or evidence."

Why Are AI Programs Discriminatory and Extreme?

One reason AI can be discriminatory and extreme is plain to see: the software engineers and designers behind it are mostly white men. In the United Kingdom, more than 90% of coders are men. In Europe, only 11.2% of leadership positions in the STEM fields (science, technology, engineering, and mathematics) are held by women; the figure rises slightly, to a still meager 18.1%, in North America. Because of this lack of diversity in the sector, sexist and racist biases are embedded, intentionally or not, in the algorithms and code that power machine learning and AI systems. The lack of diversity also shapes the design and naming of robots: most humanoid robots have white 'skin' and are highly gendered. One can often guess a robot's function from its 'gender,' since 'male' robots tend to be used by the military while 'female' robots usually serve as personal assistants. Female robots are frequently sexualized, designed with narrow waists, wide hips, and sensual voices. Valkyrie, a humanoid robot built by NASA in 2013, even has breasts, leaving anyone to wonder what function they serve.

Another reason AI can be discriminatory and extreme lies in copyright law, whose impact on AI bias was examined by Amanda Levendowski, Associate Professor of Law and founding Director of the iPIP Clinic at Georgetown University. Machines need access to texts, books, works of art, photographs, videos, films, and other materials in order to learn and analyze patterns, yet copyright law complicates and limits access to those materials. By shrinking the datasets available for training, copyright law also narrows the worldview of AI systems. As a result, a machine can still end up making discriminatory decisions even if it is built by a diverse team with no explicit biases.
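
To see how a shrunken or skewed dataset alone can produce discriminatory outcomes, consider the toy sketch below. The data is entirely synthetic (an illustrative assumption, not drawn from Levendowski's work): a classifier with no biased line of code is trained on a set dominated by one group, and its accuracy collapses for the underrepresented group.

```python
# Toy illustration: a model with no explicit bias in its code still makes
# skewed decisions when one group is underrepresented in the training data.
# All data here is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """One feature; 'shift' mimics a group-specific measurement offset."""
    x = rng.normal(shift, 1.0, size=(n, 1))
    y = (x[:, 0] > shift).astype(int)  # the same underlying rule for both groups
    return x, y

# Group A dominates the training set; group B is barely represented.
xa, ya = make_group(1000, shift=0.0)
xb, yb = make_group(20, shift=2.0)
X, y = np.vstack([xa, xb]), np.concatenate([ya, yb])

clf = LogisticRegression().fit(X, y)

# Evaluate on fresh samples from each group.
xa_test, ya_test = make_group(500, shift=0.0)
xb_test, yb_test = make_group(500, shift=2.0)
print("accuracy on group A:", clf.score(xa_test, ya_test))  # high
print("accuracy on group B:", clf.score(xb_test, yb_test))  # near chance
```

Train on a balanced sample of both groups and the two scores converge, which is exactly why narrowing the pool of material that AI systems may lawfully learn from matters.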

Conclusion

As AI programs become smarter and play a bigger role in our societies, the small field of Human-Computer Interaction must grow into a mainstream subject of study at universities and gain a stronger presence in the human rights sector. Given the consequences of AI exhibited by the chatbot Tay, the international community should act promptly to ensure that human rights-compliant algorithms govern the rapidly expanding digital space.

Mia Woods
Senior Editor

Mia Woods is a senior editor and guest writer at TopTen.AI, specializing in news, topical, and popular science articles. With a strong passion for these subjects, Mia has conducted extensive research over the years, achieved impressive results in these areas, and worked with top media organizations to provide them with high-quality articles. As a key member of TopTen.AI, Mia strives to deliver accurate, timely, and objective information, offering insight through her articles so that readers can follow the latest news and scientific progress, broaden their horizons, and deepen their understanding of the world.