In important decisions, any amount of bias can have detrimental effects. In an attempt to solve this problem, corporations, governments, and police forces have developed artificial intelligence (AI) systems to make important decisions without bias; however, AI systems, which are machines that mimic human thinking and actions, are fallible, and many of their decisions are themselves influenced by bias. These systems are far from perfect, with flaws such as biased samples and performance inaccuracies emerging in major areas of AI development, especially in system training and implementation.
Currently, the most prominent AI systems are large language models, which are fed large amounts of text so that they can analyze and replicate natural language. ChatGPT, for example, is trained by its parent company, OpenAI, on data scraped from individuals on sources such as Reddit and Google. Although AI development companies try to reduce these tendencies, such sources often contain bias, which the AI then learns and reproduces. Along with being biased, this data is often extracted without informed consent, creating many ethical dilemmas around how AI is trained.
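As a purely illustrative sketch, and not OpenAI's actual training pipeline, the toy model below is fit on a handful of hypothetical "scraped" sentences; the skewed word associations in that data are exactly what the model learns to reproduce. The dataset, labels, and library choice (scikit-learn) are all assumptions made for this example.

```python
# Minimal sketch: a toy text model trained on hypothetical scraped comments
# inherits whatever associations appear in its training data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical "scraped" training data: note the skewed pairing of words.
texts = [
    "the nurse was caring and gentle",
    "the nurse helped her patients",
    "the engineer was brilliant and logical",
    "the engineer solved his problem",
]
# Labels the toy model learns to predict (1 = feminine-coded, 0 = masculine-coded),
# mirroring the skew already present in the scraped text.
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# The model now tends to reproduce the occupational stereotype it was fed,
# even though no one explicitly programmed that preference.
print(model.predict(vectorizer.transform(["the nurse fixed the server"])))     # likely [1]
print(model.predict(vectorizer.transform(["the engineer fixed the server"])))  # likely [0]
```

The point of the sketch is simply that nothing in the code mentions gender or occupation; the bias arrives entirely through the data.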
Despite its notable flaws, AI is used for many purposes, including criminal facial recognition and medical diagnosis. These uses have raised multiple issues, especially around personal privacy, such as the potential tampering of smart home devices. Since its implementation in smart home devices, AI has been capable of collecting and using personal information regardless of whether that information was meant to stay private. This information includes images of faces, which are crucial for training facial recognition systems: to train such a model, samples of hundreds of different people's faces are fed to the algorithm to increase its accuracy. Given the risk of data breaches, coupled with the nonconsensual extraction of this data, many people worry about having their identities compromised. Their worries have proven justified, as IBM's X-Force, a specialized cybersecurity group, has reported that cybercriminals have begun to use AI to optimize their attacks.
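To make that training process concrete, the following sketch loosely follows scikit-learn's well-known "eigenfaces" pattern, fitting a small recognizer on the public Labeled Faces in the Wild photos. It is a simplified illustration under those assumptions, not how any commercial or police system is actually built, and the parameter choices are arbitrary.

```python
# A minimal sketch of how a facial recognition model is trained on many
# people's face images (illustrative only; real systems use far larger,
# often scraped, datasets).
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Labeled Faces in the Wild: thousands of face photos of public figures.
faces = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, random_state=42
)

# Compress each face image into a smaller feature vector, then train a
# classifier that maps those features to identities.
pca = PCA(n_components=100, whiten=True, random_state=42).fit(X_train)
clf = SVC(kernel="rbf", class_weight="balanced").fit(pca.transform(X_train), y_train)

# Accuracy depends heavily on who appears in the training data: identities
# (and demographic groups) that are underrepresented are recognized less reliably.
print("held-out accuracy:", clf.score(pca.transform(X_test), y_test))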
Within the medical field, AI-caused data breaches are an even greater concern, as any leak of information to external AI systems could be detrimental both to the company and the individual whose information was stolen.
Beyond concerns about data collection, AI is actively used by governments and corporations to manipulate the masses, raising further ethical concerns. Whether it is to spread political messaging, as in the Cambridge Analytica scandal, or to pressure workers into staying for longer hours, AI is at the forefront of spreading manipulative information in both the workplace and politics. Among the companies that have implemented AI into their systems, Amazon has been in disputes with its workers because its delivery routing AI assigned them impossible routes. Although these systems are still developing and therefore flawed, they have already proven effective at influencing the masses.
Outside of professional environments, AI is also used frequently in schools, mainly by students to finish assignments with ease. To counteract cheating, many AI companies have developed AI spotters, detection tools meant to help teachers identify plagiarized or machine-generated work.
However, these AI spotters are not perfect: even the most accurate ones have roughly a 4% false positive rate, a figure whose practical impact the rough calculation below illustrates. Because of these false positives, the majority of spotters carry a disclaimer explaining that their results cannot be used as evidence for accusations. Despite these disclaimers, many teachers still treat a spotter's output as definitive proof of cheating with AI, leaving countless students to fight an uphill battle. Research papers, among the most commonly assigned papers, are flagged especially often, since there are only so many ways to describe the same subject and variables.
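The numbers below are a hypothetical back-of-the-envelope calculation, assuming the 4% figure above and some plausible class sizes, showing how quickly false flags accumulate even when the detector is right most of the time.

```python
# Rough illustration of what a 4% false positive rate means in practice.
# The class sizes are hypothetical.
false_positive_rate = 0.04

for honest_essays in (30, 150, 600):  # one class, one teacher's course load, a school
    expected_false_flags = honest_essays * false_positive_rate
    p_at_least_one = 1 - (1 - false_positive_rate) ** honest_essays
    print(f"{honest_essays:>4} honest essays -> "
          f"~{expected_false_flags:.1f} expected false flags, "
          f"{p_at_least_one:.0%} chance of at least one false accusation")
```

Even for a single class of thirty honest essays, the chance that at least one student is wrongly flagged is well over half.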
Although spotters are made for a good cause and could be highly beneficial in the future, they are still quite experimental, and should therefore not be used as definitive evidence.
Due to the rise in popularity of AI, even the police use it, especially for facial recognition. However, because of the biases AI develops from its training data, there have been instances where these systems have produced racially biased matches, leading to seven false criminal accusations since facial recognition began being used to identify suspects.
As AI usage has exploded in popularity, many worry that it could drastically alter the job market. Because current AI can easily repeat simple tasks and perform difficult calculations, the World Economic Forum has predicted that by 2025 machines will perform over half of all workplace tasks, compared to 29% currently. Among the jobs predicted to decline, the majority are repetitive in nature, such as accounting.
To counter the growing number of concerns and issues with AI, many have pushed for legislation to regulate the extent to which AI can be used and trained, for example through the creation of a new federal agency specialized in AI. Such legislation would allow AI to be refined and improved more efficiently without jeopardizing people's privacy.
Despite improvements in AI quality and development, all current AI systems are still weak AI, meaning each can outperform a human only at a specific task. If AI were ever to surpass human performance more broadly, major reforms in how systems are trained and created would have to be implemented; those changes would need to optimize accuracy and prevent the systems from being used for criminal purposes.