August 12, 2023

AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype

Effective regulation of AI needs grounded science that investigates real harms, not glorified press releases about existential risks

By Alex Hanna & Emily M. Bender

Illustration of people walking and their faces being recognized by AI.

Hannah Perry

Wrongful Arrests, an expanding surveillance dragnet, defamation and deepfake pornography are all existing dangers of the so-called artificial-intelligence tools currently on the market. These issues, and not the imagined potential to wipe out humanity, are the real threat of artificial intelligence.

End-of-days hype surrounds many AI firms, but their technology already enables myriad harms, including routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.

Nevertheless, in 2023 the nonprofit Center for AI Safety released a statement—co-signed by hundreds of industry leaders—warning of "the risk of extinction from AI," which it asserted was akin to the threats of nuclear war and pandemics. Sam Altman, embattled CEO of Open AI, the company behind the popular language-learning model ChatGPT, had previously alluded to such a risk in a congressional hearing, suggesting that generative AI tools could go "quite wrong." In the summer of 2023 executives from AI companies met with President Joe Biden and made several toothless voluntary commitments to curtail "the most significant sources of AI risks," hinting at theoretical apocalyptic threats instead of emphasizing real ones. Corporate AI labs justify this kind of posturing with pseudoscientific research reports that misdirect regulatory attention to imaginary scenarios and use fearmongering terminology such as "existential risk."

On supporting science journalism

If you're enjoying this article, consider supporting our award-winning journalism by subscribing . By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.

The broader public and regulatory agencies must not fall for this maneuver. Rather we should look to scholars and activists who practice peer review and have pushed back on AI hype in an attempt to understand its detrimental effects here and now.

Because the term "AI" is ambiguous, having clear discussions about it is difficult. In one sense, it is the name of a subfield of computer science. In another it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. And in marketing copy and start-up pitch decks, the term "AI" serves as magic fairy dust that will supercharge your business.

Since OpenAI's release of ChatGPT in late 2022 (and Microsoft's incorporation of the tool into its Bing search engine), text-synthesis machines have emerged as the most prominent AI systems. Large language models such as ChatGPT extrude remarkably fluent and coherent-seeming text but have no understanding of what the text means, let alone the ability to reason. (To suggest otherwise is to impute comprehension where there is none, something done purely on faith by AI boosters.) These systems are the equivalent of enormous Magic 8 Balls that we can play with by framing the prompts we send them as questions and interpreting their output as answers.

Unfortunately, that output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem. Not only do we risk mistaking synthetic text for reliable information, but that noninformation reflects and amplifies the biases encoded in AI training data—in the case of large language models, every kind of bigotry found on the Internet. Moreover, the synthetic text sounds authoritative despite its lack of citation of real sources. The longer this synthetic text spill continues, the worse off we are because it gets harder to find trustworthy sources and harder to trust them when we do.

The people selling this technology propose that text-synthesis machines could fix various holes in our social fabric: the shortage of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, to name just a few.

But deployment of this technology actually hurts workers. For one thing, the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created them. In addition, the task of labeling data to create "guardrails" intended to prevent an AI system's most toxic output from being released is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom in terms of their pay and working conditions. What is more, employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This scenario motivated the recent actors' and writers' strikes in Hollywood, where grotesquely overpaid moguls have schemed to buy eternal rights to use AI replacements of actors for the price of a day's work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.

AI-related policy must be science-driven and built on relevant research, but too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Many of these publications are based on junk science—it is nonreproducible, hides behind trade secrecy, is full of hype, and uses evaluation methods that do not measure what they purport to measure.

Recent examples include a 155-page preprint paper entitled "Sparks of Artificial General Intelligence: Early Experiments with GPT-4" from Microsoft Research, which claims to find "intelligence" in the output of GPT-4, one of OpenAI's text-synthesis machines. Then there are OpenAI's own technical reports on GPT-4, which claim, among other things, that OpenAI systems have the ability to solve new problems that are not found in their training data. No one can test these claims because OpenAI refuses to provide access to, or even a description of, those data. Meanwhile "AI doomers" cite this junk science in their efforts to focus the world's attention on the fantasy of all-powerful machines possibly going rogue and destroying humanity.

We urge policymakers to draw on solid scholarship that investigates the harms and risks of AI, as well as the harms caused by delegating authority to automated systems, which include the disempowerment of the poor and the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on not using this technology to hurt people.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of  Scientific American.

Advertisement

Supported by

What Exactly Are the Dangers Posed by A.I.?

A recent letter calling for a moratorium on A.I. development blends real threats with speculation. But concern is growing among experts.

  • Share full article

An illustration of a fire engine light covered with stickers calling for a pause or halt of A.I.

By Cade Metz

Cade Metz writes about artificial intelligence and other emerging technologies.

In late March, more than 1,000 technology leaders, researchers and other pundits working in and around artificial intelligence signed an open letter warning that A.I. technologies present “profound risks to society and humanity.”

The group, which included Elon Musk, Tesla’s chief executive and the owner of Twitter, urged A.I. labs to halt development of their most powerful systems for six months so that they could better understand the dangers behind the technology.

“Powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

The letter, which now has over 27,000 signatures, was brief. Its language was broad. And some of the names behind the letter seemed to have a conflicting relationship with A.I. Mr. Musk, for example, is building his own A.I. start-up, and he is one of the primary donors to the organization that wrote the letter.

But the letter represented a growing concern among A.I. experts that the latest systems, most notably GPT-4 , the technology introduced by the San Francisco start-up OpenAI, could cause harm to society. They believed future systems will be even more dangerous.

Some of the risks have arrived. Others will not for months or years. Still others are purely hypothetical.

We are having trouble retrieving the article content.

Please enable JavaScript in your browser settings.

Thank you for your patience while we verify access. If you are in Reader mode please exit and  log into  your Times account, or  subscribe  for all of The Times.

Thank you for your patience while we verify access.

Already a subscriber?  Log in .

Want all of The Times?  Subscribe .

AI Is Not Actually an Existential Threat to Humanity, Scientists Say

AI is not actually an existential threat to humanity, scientists say

We encounter artificial intelligence (AI) every day. AI describes computer systems that are able to perform tasks that normally require human intelligence. When you search something on the internet, the top results you see are decided by AI.

Any recommendations you get from your favorite shopping or streaming websites will also be based on an AI algorithm. These algorithms use your browser history to find things you might be interested in.

Because targeted recommendations are not particularly exciting, science fiction prefers to depict AI as super-intelligent robots that overthrow humanity. Some people believe this scenario could one day become reality. Notable figures, including the late Stephen Hawking , have expressed fear about how future AI could threaten humanity.

To address this concern we asked 11 experts in AI and Computer Science "Is AI an existential threat to humanity?" There was an 82 percent consensus that it is not an existential threat. Here is what we found out.

How close are we to making AI that is more intelligent than us?

The AI that currently exists is called 'narrow' or 'weak' AI . It is widely used for many applications like facial recognition, self-driving cars, and internet recommendations. It is defined as 'narrow' because these systems can only learn and perform very specific tasks.

They often actually perform these tasks better than humans – famously, Deep Blue was the first AI to beat a world chess champion in 1997 – however they cannot apply their learning to anything other than a very specific task (Deep Blue can only play chess).

Another type of AI is called Artificial General Intelligence (AGI). This is defined as AI that mimics human intelligence, including the ability to think and apply intelligence to multiple different problems. Some people believe that AGI is inevitable and will happen imminently in the next few years.

Matthew O'Brien, robotics engineer from the Georgia Institute of Technology disagrees , "the long-sought goal of a 'general AI' is not on the horizon. We simply do not know how to make a general adaptable intelligence, and it's unclear how much more progress is needed to get to that point".

How could a future AGI threaten humanity?

Whilst it is not clear when or if AGI will come about, can we predict what threat they might pose to us humans? AGI learns from experience and data as opposed to being explicitly told what to do. This means that, when faced with a new situation it has not seen before, we may not be able to completely predict how it reacts.

Dr Roman Yampolskiy, computer scientist from Louisville University also believes that "no version of human control over AI is achievable" as it is not possible for the AI to both be autonomous and controlled by humans. Not being able to control super-intelligent systems could be disastrous.

Yingxu Wang, professor of Software and Brain Sciences from Calgary University disagrees, saying that "professionally designed AI systems and products are well constrained by a fundamental layer of operating systems for safeguard users' interest and wellbeing, which may not be accessed or modified by the intelligent machines themselves."

Dr O'Brien adds "just like with other engineered systems, anything with potentially dangerous consequences would be thoroughly tested and have multiple redundant safety checks."

Could the AI we use today become a threat?

Many of the experts agreed that AI could be a threat in the wrong hands. Dr George Montanez, AI expert from Harvey Mudd College highlights that "robots and AI systems do not need to be sentient to be dangerous; they just have to be effective tools in the hands of humans who desire to hurt others. That is a threat that exists today."

Even without malicious intent, today's AI can be threatening. For example, racial biases have been discovered in algorithms that allocate health care to patients in the US. Similar biases have been found in facial recognition software used for law enforcement. These biases have wide-ranging negative impacts despite the 'narrow' ability of the AI.

AI bias comes from the data it is trained on. In the cases of racial bias, the training data was not representative of the general population. Another example happened in 2016, when an AI-based chatbox was found sending highly offensive and racist content. This was found to be because people were sending the bot offensive messages, which it learnt from.

The takeaway:

The AI that we use today is exceptionally useful for many different tasks.

That doesn't mean it is always positive – it is a tool which, if used maliciously or incorrectly, can have negative consequences. Despite this, it currently seems to be unlikely to become an existential threat to humanity.

Article based on 11 expert answers to this question: Is AI an existential threat to humanity?

This expert response was published in partnership with independent fact-checking platform Metafact.io . Subscribe to their weekly newsletter here .

Score Card Research NoScript

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • Tzu Chi Med J
  • v.32(4); Oct-Dec 2020

Logo of tcmj

The impact of artificial intelligence on human society and bioethics

Michael cheng-tek tai.

Department of Medical Sociology and Social Work, College of Medicine, Chung Shan Medical University, Taichung, Taiwan

Artificial intelligence (AI), known by some as the industrial revolution (IR) 4.0, is going to change not only the way we do things, how we relate to others, but also what we know about ourselves. This article will first examine what AI is, discuss its impact on industrial, social, and economic changes on humankind in the 21 st century, and then propose a set of principles for AI bioethics. The IR1.0, the IR of the 18 th century, impelled a huge social change without directly complicating human relationships. Modern AI, however, has a tremendous impact on how we do things and also the ways we relate to one another. Facing this challenge, new principles of AI bioethics must be considered and developed to provide guidelines for the AI technology to observe so that the world will be benefited by the progress of this new intelligence.

W HAT IS ARTIFICIAL INTELLIGENCE ?

Artificial intelligence (AI) has many different definitions; some see it as the created technology that allows computers and machines to function intelligently. Some see it as the machine that replaces human labor to work for men a more effective and speedier result. Others see it as “a system” with the ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation [ 1 ].

Despite the different definitions, the common understanding of AI is that it is associated with machines and computers to help humankind solve problems and facilitate working processes. In short, it is an intelligence designed by humans and demonstrated by machines. The term AI is used to describe these functions of human-made tool that emulates the “cognitive” abilities of the natural intelligence of human minds [ 2 ].

Along with the rapid development of cybernetic technology in recent years, AI has been seen almost in all our life circles, and some of that may no longer be regarded as AI because it is so common in daily life that we are much used to it such as optical character recognition or the Siri (speech interpretation and recognition interface) of information searching equipment on computer [ 3 ].

D IFFERENT TYPES OF ARTIFICIAL INTELLIGENCE

From the functions and abilities provided by AI, we can distinguish two different types. The first is weak AI, also known as narrow AI that is designed to perform a narrow task, such as facial recognition or Internet Siri search or self-driving car. Many currently existing systems that claim to use “AI” are likely operating as a weak AI focusing on a narrowly defined specific function. Although this weak AI seems to be helpful to human living, there are still some think weak AI could be dangerous because weak AI could cause disruptions in the electric grid or may damage nuclear power plants when malfunctioned.

The new development of the long-term goal of many researchers is to create strong AI or artificial general intelligence (AGI) which is the speculative intelligence of a machine that has the capacity to understand or learn any intelligent task human being can, thus assisting human to unravel the confronted problem. While narrow AI may outperform humans such as playing chess or solving equations, but its effect is still weak. AGI, however, could outperform humans at nearly every cognitive task.

Strong AI is a different perception of AI that it can be programmed to actually be a human mind, to be intelligent in whatever it is commanded to attempt, even to have perception, beliefs and other cognitive capacities that are normally only ascribed to humans [ 4 ].

In summary, we can see these different functions of AI [ 5 , 6 ]:

  • Automation: What makes a system or process to function automatically
  • Machine learning and vision: The science of getting a computer to act through deep learning to predict and analyze, and to see through a camera, analog-to-digital conversion and digital signal processing
  • Natural language processing: The processing of human language by a computer program, such as spam detection and converting instantly a language to another to help humans communicate
  • Robotics: A field of engineering focusing on the design and manufacturing of cyborgs, the so-called machine man. They are used to perform tasks for human's convenience or something too difficult or dangerous for human to perform and can operate without stopping such as in assembly lines
  • Self-driving car: Use a combination of computer vision, image recognition amid deep learning to build automated control in a vehicle.

D O HUMAN-BEINGS REALLY NEED ARTIFICIAL INTELLIGENCE ?

Is AI really needed in human society? It depends. If human opts for a faster and effective way to complete their work and to work constantly without taking a break, yes, it is. However if humankind is satisfied with a natural way of living without excessive desires to conquer the order of nature, it is not. History tells us that human is always looking for something faster, easier, more effective, and convenient to finish the task they work on; therefore, the pressure for further development motivates humankind to look for a new and better way of doing things. Humankind as the homo-sapiens discovered that tools could facilitate many hardships for daily livings and through tools they invented, human could complete the work better, faster, smarter and more effectively. The invention to create new things becomes the incentive of human progress. We enjoy a much easier and more leisurely life today all because of the contribution of technology. The human society has been using the tools since the beginning of civilization, and human progress depends on it. The human kind living in the 21 st century did not have to work as hard as their forefathers in previous times because they have new machines to work for them. It is all good and should be all right for these AI but a warning came in early 20 th century as the human-technology kept developing that Aldous Huxley warned in his book Brave New World that human might step into a world in which we are creating a monster or a super human with the development of genetic technology.

Besides, up-to-dated AI is breaking into healthcare industry too by assisting doctors to diagnose, finding the sources of diseases, suggesting various ways of treatment performing surgery and also predicting if the illness is life-threatening [ 7 ]. A recent study by surgeons at the Children's National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot to perform soft-tissue surgery, stitch together a pig's bowel, and the robot finished the job better than a human surgeon, the team claimed [ 8 , 9 ]. It demonstrates robotically-assisted surgery can overcome the limitations of pre-existing minimally-invasive surgical procedures and to enhance the capacities of surgeons performing open surgery.

Above all, we see the high-profile examples of AI including autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays…etc. All these have made human life much easier and convenient that we are so used to them and take them for granted. AI has become indispensable, although it is not absolutely needed without it our world will be in chaos in many ways today.

T HE IMPACT OF ARTIFICIAL INTELLIGENCE ON HUMAN SOCIETY

Negative impact.

Questions have been asked: With the progressive development of AI, human labor will no longer be needed as everything can be done mechanically. Will humans become lazier and eventually degrade to the stage that we return to our primitive form of being? The process of evolution takes eons to develop, so we will not notice the backsliding of humankind. However how about if the AI becomes so powerful that it can program itself to be in charge and disobey the order given by its master, the humankind?

Let us see the negative impact the AI will have on human society [ 10 , 11 ]:

  • A huge social change that disrupts the way we live in the human community will occur. Humankind has to be industrious to make their living, but with the service of AI, we can just program the machine to do a thing for us without even lifting a tool. Human closeness will be gradually diminishing as AI will replace the need for people to meet face to face for idea exchange. AI will stand in between people as the personal gathering will no longer be needed for communication
  • Unemployment is the next because many works will be replaced by machinery. Today, many automobile assembly lines have been filled with machineries and robots, forcing traditional workers to lose their jobs. Even in supermarket, the store clerks will not be needed anymore as the digital device can take over human labor
  • Wealth inequality will be created as the investors of AI will take up the major share of the earnings. The gap between the rich and the poor will be widened. The so-called “M” shape wealth distribution will be more obvious
  • New issues surface not only in a social sense but also in AI itself as the AI being trained and learned how to operate the given task can eventually take off to the stage that human has no control, thus creating un-anticipated problems and consequences. It refers to AI's capacity after being loaded with all needed algorithm may automatically function on its own course ignoring the command given by the human controller
  • The human masters who create AI may invent something that is racial bias or egocentrically oriented to harm certain people or things. For instance, the United Nations has voted to limit the spread of nucleus power in fear of its indiscriminative use to destroying humankind or targeting on certain races or region to achieve the goal of domination. AI is possible to target certain race or some programmed objects to accomplish the command of destruction by the programmers, thus creating world disaster.

P OSITIVE IMPACT

There are, however, many positive impacts on humans as well, especially in the field of healthcare. AI gives computers the capacity to learn, reason, and apply logic. Scientists, medical researchers, clinicians, mathematicians, and engineers, when working together, can design an AI that is aimed at medical diagnosis and treatments, thus offering reliable and safe systems of health-care delivery. As health professors and medical researchers endeavor to find new and efficient ways of treating diseases, not only the digital computer can assist in analyzing, robotic systems can also be created to do some delicate medical procedures with precision. Here, we see the contribution of AI to health care [ 7 , 11 ]:

Fast and accurate diagnostics

IBM's Watson computer has been used to diagnose with the fascinating result. Loading the data to the computer will instantly get AI's diagnosis. AI can also provide various ways of treatment for physicians to consider. The procedure is something like this: To load the digital results of physical examination to the computer that will consider all possibilities and automatically diagnose whether or not the patient suffers from some deficiencies and illness and even suggest various kinds of available treatment.

Socially therapeutic robots

Pets are recommended to senior citizens to ease their tension and reduce blood pressure, anxiety, loneliness, and increase social interaction. Now cyborgs have been suggested to accompany those lonely old folks, even to help do some house chores. Therapeutic robots and the socially assistive robot technology help improve the quality of life for seniors and physically challenged [ 12 ].

Reduce errors related to human fatigue

Human error at workforce is inevitable and often costly, the greater the level of fatigue, the higher the risk of errors occurring. Al technology, however, does not suffer from fatigue or emotional distraction. It saves errors and can accomplish the duty faster and more accurately.

Artificial intelligence-based surgical contribution

AI-based surgical procedures have been available for people to choose. Although this AI still needs to be operated by the health professionals, it can complete the work with less damage to the body. The da Vinci surgical system, a robotic technology allowing surgeons to perform minimally invasive procedures, is available in most of the hospitals now. These systems enable a degree of precision and accuracy far greater than the procedures done manually. The less invasive the surgery, the less trauma it will occur and less blood loss, less anxiety of the patients.

Improved radiology

The first computed tomography scanners were introduced in 1971. The first magnetic resonance imaging (MRI) scan of the human body took place in 1977. By the early 2000s, cardiac MRI, body MRI, and fetal imaging, became routine. The search continues for new algorithms to detect specific diseases as well as to analyze the results of scans [ 9 ]. All those are the contribution of the technology of AI.

Virtual presence

The virtual presence technology can enable a distant diagnosis of the diseases. The patient does not have to leave his/her bed but using a remote presence robot, doctors can check the patients without actually being there. Health professionals can move around and interact almost as effectively as if they were present. This allows specialists to assist patients who are unable to travel.

S OME CAUTIONS TO BE REMINDED

Despite all the positive promises that AI provides, human experts, however, are still essential and necessary to design, program, and operate the AI from any unpredictable error from occurring. Beth Kindig, a San Francisco-based technology analyst with more than a decade of experience in analyzing private and public technology companies, published a free newsletter indicating that although AI has a potential promise for better medical diagnosis, human experts are still needed to avoid the misclassification of unknown diseases because AI is not omnipotent to solve all problems for human kinds. There are times when AI meets an impasse, and to carry on its mission, it may just proceed indiscriminately, ending in creating more problems. Thus vigilant watch of AI's function cannot be neglected. This reminder is known as physician-in-the-loop [ 13 ].

The question of an ethical AI consequently was brought up by Elizabeth Gibney in her article published in Nature to caution any bias and possible societal harm [ 14 ]. The Neural Information processing Systems (NeurIPS) conference in Vancouver Canada in 2020 brought up the ethical controversies of the application of AI technology, such as in predictive policing or facial recognition, that due to bias algorithms can result in hurting the vulnerable population [ 14 ]. For instance, the NeurIPS can be programmed to target certain race or decree as the probable suspect of crime or trouble makers.

T HE CHALLENGE OF ARTIFICIAL INTELLIGENCE TO BIOETHICS

Artificial intelligence ethics must be developed.

Bioethics is a discipline that focuses on the relationship among living beings. Bioethics accentuates the good and the right in biospheres and can be categorized into at least three areas, the bioethics in health settings that is the relationship between physicians and patients, the bioethics in social settings that is the relationship among humankind and the bioethics in environmental settings that is the relationship between man and nature including animal ethics, land ethics, ecological ethics…etc. All these are concerned about relationships within and among natural existences.

As AI arises, human has a new challenge in terms of establishing a relationship toward something that is not natural in its own right. Bioethics normally discusses the relationship within natural existences, either humankind or his environment, that are parts of natural phenomena. But now men have to deal with something that is human-made, artificial and unnatural, namely AI. Human has created many things yet never has human had to think of how to ethically relate to his own creation. AI by itself is without feeling or personality. AI engineers have realized the importance of giving the AI ability to discern so that it will avoid any deviated activities causing unintended harm. From this perspective, we understand that AI can have a negative impact on humans and society; thus, a bioethics of AI becomes important to make sure that AI will not take off on its own by deviating from its originally designated purpose.

Stephen Hawking warned early in 2014 that the development of full AI could spell the end of the human race. He said that once humans develop AI, it may take off on its own and redesign itself at an ever-increasing rate [ 15 ]. Humans, who are limited by slow biological evolution, could not compete and would be superseded. In his book Superintelligence, Nick Bostrom gives an argument that AI will pose a threat to humankind. He argues that sufficiently intelligent AI can exhibit convergent behavior such as acquiring resources or protecting itself from being shut down, and it might harm humanity [ 16 ].

The question is–do we have to think of bioethics for the human's own created product that bears no bio-vitality? Can a machine have a mind, consciousness, and mental state in exactly the same sense that human beings do? Can a machine be sentient and thus deserve certain rights? Can a machine intentionally cause harm? Regulations must be contemplated as a bioethical mandate for AI production.

Studies have shown that AI can reflect the very prejudices humans have tried to overcome. As AI becomes “truly ubiquitous,” it has a tremendous potential to positively impact all manner of life, from industry to employment to health care and even security. Addressing the risks associated with the technology, Janosch Delcker, Politico Europe's AI correspondent, said: “I don't think AI will ever be free of bias, at least not as long as we stick to machine learning as we know it today,”…. “What's crucially important, I believe, is to recognize that those biases exist and that policymakers try to mitigate them” [ 17 ]. The High-Level Expert Group on AI of the European Union presented Ethics Guidelines for Trustworthy AI in 2019 that suggested AI systems must be accountable, explainable, and unbiased. Three emphases are given:

  • Lawful-respecting all applicable laws and regulations
  • Ethical-respecting ethical principles and values
  • Robust-being adaptive, reliable, fair, and trustworthy from a technical perspective while taking into account its social environment [ 18 ].

Seven requirements are recommended [ 18 ]:

  • AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes
  • AI should be secure and accurate. It should not be easily compromised by external attacks, and it should be reasonably reliable
  • Personal data collected by AI systems should be secure and private. It should not be accessible to just anyone, and it should not be easily stolen
  • Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make
  • Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines
  • AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change”
  • AI systems should be auditable and covered by existing protections for corporate whistleblowers. The negative impacts of systems should be acknowledged and reported in advance.

From these guidelines, we can suggest that future AI must be equipped with human sensibility or “AI humanities.” To accomplish this, AI researchers, manufacturers, and all industries must bear in mind that technology is to serve not to manipulate humans and his society. Bostrom and Judkowsky listed responsibility, transparency, auditability, incorruptibility, and predictability [ 19 ] as criteria for the computerized society to think about.

S UGGESTED PRINCIPLES FOR ARTIFICIAL INTELLIGENCE BIOETHICS

Nathan Strout, a reporter at Space and Intelligence System at Easter University, USA, reported just recently that the intelligence community is developing its own AI ethics. The Pentagon made announced in February 2020 that it is in the process of adopting principles for using AI as the guidelines for the department to follow while developing new AI tools and AI-enabled technologies. Ben Huebner, chief of the Office of Director of National Intelligence's Civil Liberties, Privacy, and Transparency Office, said that “We're going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient” [ 20 ]. Two themes have been suggested for the AI community to think more about: Explainability and interpretability. Explainability is the concept of understanding how the analytic works, while interpretability is being able to understand a particular result produced by an analytic [ 20 ].

All the principles suggested by scholars for AI bioethics are well-brought-up. I gather from different bioethical principles in all the related fields of bioethics to suggest four principles here for consideration to guide the future development of the AI technology. We however must bear in mind that the main attention should still be placed on human because AI after all has been designed and manufactured by human. AI proceeds to its work according to its algorithm. AI itself cannot empathize nor have the ability to discern good from evil and may commit mistakes in processes. All the ethical quality of AI depends on the human designers; therefore, it is an AI bioethics and at the same time, a trans-bioethics that abridge human and material worlds. Here are the principles:

  • Beneficence: Beneficence means doing good, and here it refers to the purpose and functions of AI should benefit the whole human life, society and universe. Any AI that will perform any destructive work on bio-universe, including all life forms, must be avoided and forbidden. The AI scientists must understand that reason of developing this technology has no other purpose but to benefit human society as a whole not for any individual personal gain. It should be altruistic, not egocentric in nature
  • Value-upholding: This refers to AI's congruence to social values, in other words, universal values that govern the order of the natural world must be observed. AI cannot elevate to the height above social and moral norms and must be bias-free. The scientific and technological developments must be for the enhancement of human well-being that is the chief value AI must hold dearly as it progresses further
  • Lucidity: AI must be transparent without hiding any secret agenda. It has to be easily comprehensible, detectable, incorruptible, and perceivable. AI technology should be made available for public auditing, testing and review, and subject to accountability standards … In high-stakes settings like diagnosing cancer from radiologic images, an algorithm that can't “explain its work” may pose an unacceptable risk. Thus, explainability and interpretability are absolutely required
  • Accountability: AI designers and developers must bear in mind they carry a heavy responsibility on their shoulders of the outcome and impact of AI on whole human society and the universe. They must be accountable for whatever they manufacture and create.

C ONCLUSION

AI is here to stay in our world and we must try to enforce the AI bioethics of beneficence, value upholding, lucidity and accountability. Since AI is without a soul as it is, its bioethics must be transcendental to bridge the shortcoming of AI's inability to empathize. AI is a reality of the world. We must take note of what Joseph Weizenbaum, a pioneer of AI, said that we must not let computers make important decisions for us because AI as a machine will never possess human qualities such as compassion and wisdom to morally discern and judge [ 10 ]. Bioethics is not a matter of calculation but a process of conscientization. Although AI designers can up-load all information, data, and programmed to AI to function as a human being, it is still a machine and a tool. AI will always remain as AI without having authentic human feelings and the capacity to commiserate. Therefore, AI technology must be progressed with extreme caution. As Von der Leyen said in White Paper on AI – A European approach to excellence and trust : “AI must serve people, and therefore, AI must always comply with people's rights…. High-risk AI. That potentially interferes with people's rights has to be tested and certified before it reaches our single market” [ 21 ].

Financial support and sponsorship

Conflicts of interest.

There are no conflicts of interest.

R EFERENCES

Artificial Intelligence: The Helper or the Threat? Essay

  • To find inspiration for your paper and overcome writer’s block
  • As a source of information (ensure proper referencing)
  • As a template for you assignment

The principles of human intelligence have always been of certain interest for the field of science. Having understood the nature of processes that help people to reflect, scientists started proposing projects aimed at creating the machine that would be able to work like a human brain and make decisions as we do. Developing an artificial intelligence machine belongs to the number of the most urgent tasks of modern science. At the same time, there are different opinions on what our future will look like if we continue developing this field of science.

According to the people, who support an idea of artificial intelligence development, it will bring numerous benefits to the society and our everyday life. At first, the machine with artificial intelligence is going to be the best helper for the humanity in problem-solving (Cohen & Feigenbaum, 2014, p.13). Thus, there are tasks that require a good memory, and it is safer to assign such tasks to machines as their capacity of memory is by far more developed than one that people have. What is more, the machines with artificial intelligence help people to find the information that they need in moments. Such machines perform the record retrieval with help of numerous search algorithms and the human brain cannot do the same with such a high speed. To continue, the supporters of further artificial intelligence development believe that such machines will help us to compensate for certain features that make our brain activity and perception imperfect (Muller & Bostrom, 2016, p.554). If we look at artificial intelligence from this point of view, it acts as our teacher despite the fact that it is our creation. Importantly, people believe that artificial intelligence should be developed as it gives new opportunities to the humanity. Such a machine is able to teach itself without people’s help, and it also can take decisions even when circumstances are changing. Considering that, it can be trusted to fulfill many highly sensitive tasks.

Nevertheless, there are ones who are not so optimistic about the development and perfection of artificial intelligence. Their skeptical attitude about that is likely to be rooted in their concerns about the future of human society. To begin with, people who are skeptical about artificial intelligence believe that it is impossible to create the machine that will show the mental process similar to the one that people have. It means that the decisions made by such a machine will be based only on the logical connections between the objects. Considering that, it is not a good idea to use these machines for the tasks that involve people business. What is more, artificial intelligence development can store up future problems in the world of work (Ford, 2013, p. 37). There is no doubt that artificial intelligence programs do not have to be paid a salary every month. What is more, these programs usually do not make mistakes and it gives them an obvious advantage over human employees. With a glance to these facts, it is easy to suppose that they will be more likely to be chosen by employer. If artificial intelligence develops rapidly, many people will turn out to be unnecessary in their companies.

To conclude, artificial intelligence development is a problem that leaves nobody indifferent as it is closely associated with the future of the humanity. The thing that makes this question even trickier is the fact that both opinions on artificial intelligence seem to be well-founded.

Cohen, P. R., & Feigenbaum, E. A. (2014). The handbook of artificial intelligence. Los Altos, CA : Butterworth-Heinemann.

Ford, M. (2013). Could artificial intelligence create an unemployment crisis?. Communications of the ACM , 56 (7), 37-39.

Muller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 553-570). New York, NY: Springer International Publishing.

  • Artificial Intelligence and the Associated Threats
  • Artificial Intelligence Threat to Human Activities
  • Technology Impact: 24 Hours Without My Cell Phone
  • Biological Basis of Asthma and Allergic Disease
  • Critical Reflection of Communication Skills Relevant to Clinical Scenario
  • Technologies: Microsoft's Cortana vs. Apple's Siri
  • Technology Siri for Submission
  • Non Experts: Artificial Intelligence
  • Artificial Intelligence in Post Modern Development
  • Food and Water Quality Testing Device
  • Chicago (A-D)
  • Chicago (N-B)

IvyPanda. (2020, August 26). Artificial Intelligence: The Helper or the Threat? https://ivypanda.com/essays/artificial-intelligence-the-helper-or-the-threat/

"Artificial Intelligence: The Helper or the Threat?" IvyPanda , 26 Aug. 2020, ivypanda.com/essays/artificial-intelligence-the-helper-or-the-threat/.

IvyPanda . (2020) 'Artificial Intelligence: The Helper or the Threat'. 26 August.

IvyPanda . 2020. "Artificial Intelligence: The Helper or the Threat?" August 26, 2020. https://ivypanda.com/essays/artificial-intelligence-the-helper-or-the-threat/.

1. IvyPanda . "Artificial Intelligence: The Helper or the Threat?" August 26, 2020. https://ivypanda.com/essays/artificial-intelligence-the-helper-or-the-threat/.

Bibliography

IvyPanda . "Artificial Intelligence: The Helper or the Threat?" August 26, 2020. https://ivypanda.com/essays/artificial-intelligence-the-helper-or-the-threat/.

IvyPanda uses cookies and similar technologies to enhance your experience, enabling functionalities such as:

  • Basic site functions
  • Ensuring secure, safe transactions
  • Secure account login
  • Remembering account, browser, and regional preferences
  • Remembering privacy and security settings
  • Analyzing site traffic and usage
  • Personalized search, content, and recommendations
  • Displaying relevant, targeted ads on and off IvyPanda

Please refer to IvyPanda's Cookies Policy and Privacy Policy for detailed information.

Certain technologies we use are essential for critical functions such as security and site integrity, account authentication, security and privacy preferences, internal site usage and maintenance data, and ensuring the site operates correctly for browsing and transactions.

Cookies and similar technologies are used to enhance your experience by:

  • Remembering general and regional preferences
  • Personalizing content, search, recommendations, and offers

Some functions, such as personalized recommendations, account preferences, or localization, may not work correctly without these technologies. For more details, please refer to IvyPanda's Cookies Policy .

To enable personalized advertising (such as interest-based ads), we may share your data with our marketing and advertising partners using cookies and other technologies. These partners may have their own information collected about you. Turning off the personalized advertising setting won't stop you from seeing IvyPanda ads, but it may make the ads you see less relevant or more repetitive.

Personalized advertising may be considered a "sale" or "sharing" of the information under California and other state privacy laws, and you may have the right to opt out. Turning off personalized advertising allows you to exercise your right to opt out. Learn more in IvyPanda's Cookies Policy and Privacy Policy .

Numbers, Facts and Trends Shaping Your World

Read our research on:

Full Topic List

Regions & Countries

  • Publications
  • Our Methods
  • Short Reads
  • Tools & Resources

Read Our Research On:

  • Artificial Intelligence and the Future of Humans

Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will

Table of contents.

  • 1. Concerns about human agency, evolution and survival
  • 2. Solutions to address AI’s anticipated negative impacts
  • 3. Improvements ahead: How humans and AI might evolve together in the next decade
  • About this canvassing of experts
  • Acknowledgments

Table that shows that people in most of the surveyed countries are more willing to discuss politics in person than via digital channels.

Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of the wide-ranging possibilities; that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition. Additionally, a number of these experts predicted that AI would abet long-anticipated changes in formal and informal education systems.

Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.

[chart id=”21972″]

Specifically, participants were asked to consider the following:

“Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties.

Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today?”

Overall, and despite the downsides they fear, 63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off.

A number of the thought leaders who participated in this canvassing said humans’ expanding reliance on technological systems will only go well if close attention is paid to how these tools, platforms and networks are engineered, distributed and updated. Some of the powerful, overarching answers included those from:

Sonia Katyal , co-director of the Berkeley Center for Law and Technology and a member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors, predicted, “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”

We need to work aggressively to make sure technology matches our values. Erik Brynjolfsson Erik Brynjolfsson

Erik Brynjolfsson , director of the MIT Initiative on the Digital Economy and author of “Machine, Platform, Crowd: Harnessing Our Digital Future,” said, “AI and related technologies have already achieved superhuman performance in many areas, and there is little doubt that their capabilities will improve, probably very significantly, by 2030. … I think it is more likely than not that we will use this power to make the world a better place. For instance, we can virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. That said, AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons. Neither outcome is inevitable, so the right question is not ‘What will happen?’ but ‘What will we choose to do?’ We need to work aggressively to make sure technology matches our values. This can and must be done at all levels, from government, to business, to academia, and to individual choices.”

Bryan Johnson , founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”

Marina Gorbis , executive director of the Institute for the Future, said, “Without significant changes in our political economy and data governance regimes [AI] is likely to create greater economic inequalities, more surveillance and more programmed and non-human-centric interactions. Every time we program our environments, we end up programming ourselves and our interactions. Humans have to become more standardized, removing serendipity and ambiguity from our interactions. And this ambiguity and complexity is what is the essence of being human.”

Judith Donath , author of “The Social Machine, Designs for Living Online” and faculty fellow at Harvard University’s Berkman Klein Center for Internet & Society, commented, “By 2030, most social situations will be facilitated by bots – intelligent-seeming programs that interact with us in human-like ways. At home, parents will engage skilled bots to help kids with homework and catalyze dinner conversations. At work, bots will run meetings. A bot confidant will be considered essential for psychological well-being, and we’ll increasingly turn to such companions for advice ranging from what to wear to whom to marry. We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals? Artificially intelligent companions will cultivate the impression that social goals similar to our own motivate them – to be held in good regard, whether as a beloved friend, an admired boss, etc. But their real collaboration will be with the humans and institutions that control them. Like their forebears today, these will be sellers of goods who employ them to stimulate consumption and politicians who commission them to sway opinions.”

Andrew McLaughlin , executive director of the Center for Innovative Thinking at Yale University, previously deputy chief technology officer of the United States for President Barack Obama and global public policy lead for Google, wrote, “2030 is not far in the future. My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable. AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment.”

Michael M. Roberts , first president and CEO of the Internet Corporation for Assigned Names and Numbers (ICANN) and Internet Hall of Fame member, wrote, “The range of opportunities for intelligent agents to augment human intelligence is still virtually unlimited. The major issue is that the more convenient an agent is, the more it needs to know about you – preferences, timing, capacities, etc. – which creates a tradeoff of more help requires more intrusion. This is not a black-and-white issue – the shades of gray and associated remedies will be argued endlessly. The record to date is that convenience overwhelms privacy. I suspect that will continue.”

danah boyd , a principal researcher for Microsoft and founder and president of the Data & Society Research Institute, said, “AI is a tool that will be used by humans for all sorts of purposes, including in the pursuit of power. There will be abuses of power that involve AI, just as there will be advances in science and humanitarian efforts that also involve AI. Unfortunately, there are certain trend lines that are likely to create massive instability. Take, for example, climate change and climate migration. This will further destabilize Europe and the U.S., and I expect that, in panic, we will see AI be used in harmful ways in light of other geopolitical crises.”

Amy Webb , founder of the Future Today Institute and professor of strategic foresight at New York University, commented, “The social safety net structures currently in place in the U.S. and in many other countries around the world weren’t designed for our transition to AI. The transition through AI will last the next 50 years or more. As we move farther into this third era of computing, and as every single industry becomes more deeply entrenched with AI systems, we will need new hybrid-skilled knowledge workers who can operate in jobs that have never needed to exist before. We’ll need farmers who know how to work with big data sets. Oncologists trained as robotocists. Biologists trained as electrical engineers. We won’t need to prepare our workforce just once, with a few changes to the curriculum. As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging. It’s easy to look back on history through the lens of present – and to overlook the social unrest caused by widespread technological unemployment. We need to address a difficult truth that few are willing to utter aloud: AI will eventually cause a large number of people to be permanently out of work. Just as generations before witnessed sweeping changes during and in the aftermath of the Industrial Revolution, the rapid pace of technology will likely mean that Baby Boomers and the oldest members of Gen X – especially those whose jobs can be replicated by robots – won’t be able to retrain for other kinds of work without a significant investment of time and effort.”

Barry Chudakov , founder and principal of Sertain Research, commented, “By 2030 the human-machine/AI collaboration will be a necessary tool to manage and counter the effects of multiple simultaneous accelerations: broad technology advancement, globalization, climate change and attendant global migrations. In the past, human societies managed change through gut and intuition, but as Eric Teller, CEO of Google X, has said, ‘Our societal structures are failing to keep pace with the rate of change.’ To keep pace with that change and to manage a growing list of ‘wicked problems’ by 2030, AI – or using Joi Ito’s phrase, extended intelligence – will value and revalue virtually every area of human behavior and interaction. AI and advancing technologies will change our response framework and time frames (which in turn, changes our sense of time). Where once social interaction happened in places – work, school, church, family environments – social interactions will increasingly happen in continuous, simultaneous time. If we are fortunate, we will follow the 23 Asilomar AI Principles outlined by the Future of Life Institute and will work toward ‘not undirected intelligence but beneficial intelligence.’ Akin to nuclear deterrence stemming from mutually assured destruction, AI and related technology systems constitute a force for a moral renaissance. We must embrace that moral renaissance, or we will face moral conundrums that could bring about human demise. … My greatest hope for human-machine/AI collaboration constitutes a moral and ethical renaissance – we adopt a moonshot mentality and lock arms to prepare for the accelerations coming at us. My greatest fear is that we adopt the logic of our emerging technologies – instant response, isolation behind screens, endless comparison of self-worth, fake self-presentation – without thinking or responding smartly.”

John C. Havens , executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence, wrote, “Now, in 2018, a majority of people around the world can’t access their data, so any ‘human-AI augmentation’ discussions ignore the critical context of who actually controls people’s information and identity. Soon it will be extremely difficult to identify any autonomous or intelligent systems whose algorithms don’t interact with human data in one form or another.”

At stake is nothing less than what sort of society we want to live in and how we experience our humanity. Batya Friedman Batya Friedman

Batya Friedman , a human-computer interaction professor at the University of Washington’s Information School, wrote, “Our scientific and technological capacities have and will continue to far surpass our moral ones – that is our ability to use wisely and humanely the knowledge and tools that we develop. … Automated warfare – when autonomous weapons kill human beings without human engagement – can lead to a lack of responsibility for taking the enemy’s life or even knowledge that an enemy’s life has been taken. At stake is nothing less than what sort of society we want to live in and how we experience our humanity.”

Greg Shannon , chief scientist for the CERT Division at Carnegie Mellon University, said, “Better/worse will appear 4:1 with the long-term ratio 2:1. AI will do well for repetitive work where ‘close’ will be good enough and humans dislike the work. … Life will definitely be better as AI extends lifetimes, from health apps that intelligently ‘nudge’ us to health, to warnings about impending heart/stroke events, to automated health care for the underserved (remote) and those who need extended care (elder care). As to liberty, there are clear risks. AI affects agency by creating entities with meaningful intellectual capabilities for monitoring, enforcing and even punishing individuals. Those who know how to use it will have immense potential power over those who don’t/can’t. Future happiness is really unclear. Some will cede their agency to AI in games, work and community, much like the opioid crisis steals agency today. On the other hand, many will be freed from mundane, unengaging tasks/jobs. If elements of community happiness are part of AI objective functions, then AI could catalyze an explosion of happiness.”

Kostas Alexandridis , author of “Exploring Complex Dynamics in Multi-agent-based Intelligent Systems,” predicted, “Many of our day-to-day decisions will be automated with minimal intervention by the end-user. Autonomy and/or independence will be sacrificed and replaced by convenience. Newer generations of citizens will become more and more dependent on networked AI structures and processes. There are challenges that need to be addressed in terms of critical thinking and heterogeneity. Networked interdependence will, more likely than not, increase our vulnerability to cyberattacks. There is also a real likelihood that there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control.”

Oscar Gandy , emeritus professor of communication at the University of Pennsylvania, responded, “We already face an ungranted assumption when we are asked to imagine human-machine ‘collaboration.’ Interaction is a bit different, but still tainted by the grant of a form of identity – maybe even personhood – to machines that we will use to make our way through all sorts of opportunities and challenges. The problems we will face in the future are quite similar to the problems we currently face when we rely upon ‘others’ (including technological systems, devices and networks) to acquire things we value and avoid those other things (that we might, or might not be aware of).”

James Scofield O’Rourke , a professor of management at the University of Notre Dame, said, “Technology has, throughout recorded history, been a largely neutral concept. The question of its value has always been dependent on its application. For what purpose will AI and other technological advances be used? Everything from gunpowder to internal combustion engines to nuclear fission has been applied in both helpful and destructive ways. Assuming we can contain or control AI (and not the other way around), the answer to whether we’ll be better off depends entirely on us (or our progeny). ‘The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings.’”

Simon Biggs , a professor of interdisciplinary arts at the University of Edinburgh, said, “AI will function to augment human capabilities. The problem is not with AI but with humans. As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified. Given historical precedent, one would have to assume it will be our worst qualities that are augmented. My expectation is that in 2030 AI will be in routine use to fight wars and kill people, far more effectively than we can currently kill. As societies we will be less affected by this as we currently are, as we will not be doing the fighting and killing ourselves. Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing. We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully. My other primary concern is to do with surveillance and control. The advent of China’s Social Credit System (SCS) is an indicator of what it likely to come. We will exist within an SCS as AI constructs hybrid instances of ourselves that may or may not resemble who we are. But our rights and affordances as individuals will be determined by the SCS. This is the Orwellian nightmare realised.”

Mark Surman , executive director of the Mozilla Foundation, responded, “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”

William Uricchio , media scholar and professor of comparative media studies at MIT, commented, “AI and its related applications face three problems: development at the speed of Moore’s Law, development in the hands of a technological and economic elite, and development without benefit of an informed or engaged public. The public is reduced to a collective of consumers awaiting the next technology. Whose notion of ‘progress’ will prevail? We have ample evidence of AI being used to drive profits, regardless of implications for long-held values; to enhance governmental control and even score citizens’ ‘social credit’ without input from citizens themselves. Like technologies before it, AI is agnostic. Its deployment rests in the hands of society. But absent an AI-literate public, the decision of how best to deploy AI will fall to special interests. Will this mean equitable deployment, the amelioration of social injustice and AI in the public service? Because the answer to this question is social rather than technological, I’m pessimistic. The fix? We need to develop an AI-literate public, which means focused attention in the educational sector and in public-facing media. We need to assure diversity in the development of AI technologies. And until the public, its elected representatives and their legal and regulatory regimes can get up to speed with these fast-moving developments we need to exercise caution and oversight in AI’s development.”

The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education. Some responses are lightly edited for style.

Sign up for our weekly newsletter

Fresh data delivery Saturday mornings

Sign up for The Briefing

Weekly updates on the world of news & information

  • Artificial Intelligence
  • Emerging Technology
  • Future of the Internet (Project)
  • Technology Adoption

A quarter of U.S. teachers say AI tools do more harm than good in K-12 education

Many americans think generative ai programs should credit the sources they rely on, americans’ use of chatgpt is ticking up, but few trust its election information, q&a: how we used large language models to identify guests on popular podcasts, computer chips in human brains: how americans view the technology amid recent advances, most popular, report materials.

  • Shareable quotes from experts about artificial intelligence and the future of humans

901 E St. NW, Suite 300 Washington, DC 20004 USA (+1) 202-419-4300 | Main (+1) 202-857-8562 | Fax (+1) 202-419-4372 |  Media Inquiries

Research Topics

  • Email Newsletters

ABOUT PEW RESEARCH CENTER  Pew Research Center is a nonpartisan, nonadvocacy fact tank that informs the public about the issues, attitudes and trends shaping the world. It does not take policy positions. The Center conducts public opinion polling, demographic research, computational social science research and other data-driven research. Pew Research Center is a subsidiary of The Pew Charitable Trusts , its primary funder.

© 2024 Pew Research Center

As artificial intelligence rapidly advances, experts debate level of threat to humanity

Paul Solman

Paul Solman Paul Solman

Ryan Connelly Holmes

Ryan Connelly Holmes Ryan Connelly Holmes

Leave your feedback

  • Copy URL https://www.pbs.org/newshour/show/as-artificial-intelligence-rapidly-advances-experts-debate-level-of-threat-to-humanity

The development of artificial intelligence is speeding up so quickly that it was addressed briefly at both Republican and Democratic conventions. Science fiction has long theorized about the ways in which machines might one day usurp their human overlords. As the capabilities of modern AI grow, Paul Solman looks at the existential threats some experts fear and that some see as hyperbole.

Read the Full Transcript

Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.

Geoff Bennett:

The development of artificial intelligence is speeding up so quickly that it was addressed briefly at both political conventions, including the Democratic gathering this week.

Of course, science fiction writers and movies have long theorized about the ways in which machines might one day usurp their human overlords.

As the capabilities of modern artificial intelligence grow, Paul Solman looks at the existential threats some experts fear and that some see as hyperbole.

Eliezer Yudkowsky, Founder, Machine Intelligence Research Institute:

From my perspective, there's inevitable doom at the end of this, where, if you keep on making A.I. smarter and smarter, they will kill you.

Paul Solman:

Kill you, me and everyone, predicts Eliezer Yudkowsky, tech pundit and founder back in the year 2000 of a nonprofit now called the Machine Intelligence Research Institute to explore the uses of friendly A.I.; 24 years later, do you think everybody's going to die in my lifetime, in your lifetime?

Eliezer Yudkowsky:

I would wildly guess my lifetime and even your lifetime.

Now, we have heard it before, as when the so-called Godfather of A.I., Geoffrey Hinton, warned Geoff Bennett last spring.

Geoffrey Hinton, Artificial Intelligence Pioneer:

The machines taking over is a threat for everybody. It's a threat for the Chinese and for the Americans and for the Europeans, just like a global nuclear war was.

And more than a century ago, the Czech play "R.U.R.," Rossum's Universal Robots, from which the word robot comes, dramatized the warning.

And since 1921 — that's more than 100 years ago — people have been imagining that the robots will become sentient and destroy us.

Jerry Kaplan, Author, "Generative Artificial Intelligence: What Everyone Needs to Know": That's right.

A.I. expert Stanford's Jerry Kaplan at Silicon Valley's Computer History Museum.

Jerry Kaplan:

That's created a whole mythology, which, of course, has played out in endless science fiction treatments.

Like the Terminator series.

Michael Biehn, Actor:

A new order of intelligence decided our fate in a microsecond, extermination.

Judgment Day forecast for 1997. But, hey, that's Hollywood. And look on the bright side, no rebel robots or even hoverboards or flying cars yet.

On the other hand, robots will be everywhere soon enough, as mass production drives down their cost. So will they soon turn against us?

I got news for you. There's no they there. They don't want anything. They don't need anything. We design and build these things to our own specifications. Now, that's not to say we can't build some very dangerous machines and some very dangerous tools.

Kaplan thinks what humans do with A.I. is much scarier than A.I. on its own, create super viruses, mega drones, God knows what else.

But whodunit aside, the big question still is, will A.I. bring doomsday?

A.I. Reid Hoffman avatar: I'd rate the existential threat of A.I. around a three or four out of 10.

That's the avatar of LinkedIn founder Reid Hoffman, to which we fed the question, 1 being no threat, 10 extinction. What does the real Reid Hoffman say?

Reid Hoffman, Creator, LinkedIn Corporation:

I'm going to go for two on that answer.

I'm going to tell you that your avatar said 3 to 4.

Reid Hoffman:

All right. Well, my avatar is trying to approximate something that is what kind of a bulk of educated, reasonable people would want to hear.

But, hey, even if you told me there was only a one in 10 chance of a ticking time bomb in my room, I would be out of there.

Yudkowsky started warning of rogue A.I. back in 2005. And since?

Things have gone a bit worse than hoped for. And, to be clear, the sting at the end of this is A.I. gets smarter than us, is poorly controlled, and probably humanity becomes collateral damage to its own expansion.

What's the doomsday scenario here?

It is smarter than humanity. From its perspective, it now wants to get independence of humanity. It doesn't want to be running on computers that require electricity that humans have to generate.

There are many more A.I. companies now than just OpenAI. And some of the people there are explicitly like, yes, sure like maybe it'll wipe out humanity, but I don't care about that.

And of all the world's problems, tribalism, climate change, et cetera, you think A.I. is a bigger problem?

I think climate change is unlikely to kill literally everyone, down to the last man, woman and child. At all of the top three companies, I expect the leaders would prefer not to wipe out humanity, but, as they see it, it's not up to them. They got to stay ahead of their competitors.

So what does A.I. itself think?

I asked Amica, a robot hooked up to ChatGPT. Destruction, 1 to 10? The response, a 3.

Amica, ChatGPT A.I.:

The potential for harmony and enhancement exists, but vigilance is paramount. Would you roll the dice on that?

I'm afraid we're all going to be rolling the dice on that. And I'm worried that in your answer, you're saying there's something like a 30 percent chance that A.I., artificial intelligence, will destroy humanity. Shouldn't I be scared?

Your concern is valid, Paul. The 30 percent isn't a direct prediction, but a cautious nod to the uncertainties and ethical dilemmas of handling powerful technology.

But I'm afraid that human beings may not be capable of protecting themselves from the risks of A.I.

That dread is profound. It's daunting, but not without hope. Humanity's resilience has often defied expectations.

Perhaps it's no surprise that the actual human who created ChatGPT, Sam Altman, thinks the same.

Sam Altman, CEO, OpenAI:

I believe that A.I. will be very net good, tremendously net good, but I think, like with any other tool, it'll be misused. Like, you can do great things with a hammer and you can, like, kill people with a hammer. I don't think that absolves us, or you all, or society from trying to mitigate the bad as much as we can and maximize the good.

And Reid Hoffman thinks we can maximize the good.

We have a portfolio risk. We have climate change as a possibility. We have pandemic as a possibility. We have nuclear war as a possibility. We have asteroids as a possibility. We have human world war as a possibility. We have all of these existential risks.

And you go, OK, A.I., is it also an additional existential risk? And the answer is, yes, potentially. But you look at its portfolio and say, what improves our overall portfolio? What reduces existential risk for humanity? And A.I. is one of the things that adds a lot in the positive column.

So, if you think, how do we prevent future natural or manmade pandemic, A.I. is the only way that I think can do that. And also, like, it might even help us with climate change things. So you go, OK, in the net portfolio, our existential risk may go down with A.I.

For the sake of us all, grownups, children, grandchildren, let's hope he's right.

For the "PBS News Hour" in Silicon Valley, Paul Solman.

Listen to this Segment

Democratic National Convention (DNC) in Chicago

Watch the Full Episode

Paul Solman has been a correspondent for the PBS News Hour since 1985, mainly covering business and economics.

Support Provided By: Learn more

More Ways to Watch

Educate your inbox.

Subscribe to Here’s the Deal, our politics newsletter for analysis you won’t find anywhere else.

Thank you. Please check your inbox to confirm.

Cunard

Artificial intelligence: threats and opportunities

Artificial intelligence (AI) affects our lives more and more. Learn about the opportunities and threats for security, democracy, businesses and jobs.

artificial intelligence is a threat to humans essay

Europe's growth and wealth are closely connected to how it will make use of data and connected technologies. AI can make a big difference to our lives – for better or worse . In June 2023, The European Parliament adopted its negotiating position on the AI Act – the world’s first set of comprehensive rules to manage AI risks.Below are some key opportunities and threats connected to future applications of AI.

Read more about what artificial intelligence is and how it is used

175 zettabytes

The volume of data produced in the world is expected to grow from 33 zettabytes in 2018 to 175 zettabytes in 2025 (one zettabyte is a thousand billion gigabytes)

Advantages of AI

EU countries are already strong in digital industry and business-to-business applications. With a high-quality digital infrastructure and a regulatory framework that protects privacy and freedom of speech, the EU could become a global leader in the data economy and its applications .

Benefits of AI for people

AI could help people   with improved health care, safer cars and other transport systems, tailored, cheaper and longer-lasting products and services. It can also facilitate access to information, education and training.  The need for distance learning became more important because of the Covid-19 pandemic . AI can also make workplace safer as robots can be used for dangerous parts of jobs, and open new job positions as AI-driven industries grow and change.

Opportunities of artificial intelligence for businesses

For businesses , AI can enable the development of a new generation of products and services, including in sectors where European companies already have strong positions: green and circular economy, machinery, farming, healthcare, fashion, tourism. It can boost sales, improve machine maintenance, increase production output and quality, improve customer service, as well as save energy.

Estimated increase of labour productivity related to AI by 2035 (Parliament's Think Tank 2020)

AI opportunities in public services

AI used in public services can reduce costs and offer new possibilities in public transport, education, energy and waste management and could also improve the sustainability of products. In this way AI could contribute to achieving the goals of the EU Green Deal .

Estimate of how much AI could help reduce global greenhouse emissions by 2030 (Parliament's Think Tank 2020)

Strengthening democracy

Democracy could be made stronger by using data-based scrutiny, preventing disinformation and cyber attacks and ensuring access to quality information . AI could also support diversity and openness, for example by mitigating the possibility of prejudice in hiring decisions and using analytical data instead.

AI, security and safety

AI is predicted to be used more in crime prevention and the criminal justice system , as massive data sets could be processed faster, prisoner flight risks assessed more accurately, crime or even terrorist attacks predicted and prevented. It is already used by online platforms to detect and react to unlawful and inappropriate online behaviour.

In military matters , AI could be used for defence and attack strategies in hacking and phishing or to target key systems in cyberwarfare.

Threats and challenges of AI

The increasing reliance on AI systems also poses potential risks.

Underuse and overuse of AI

Underuse of AI is considered as a major threat: missed opportunities for the EU could mean poor implementation of major programmes, such as the EU Green Deal, losing competitive advantage towards other parts of the world, economic stagnation and poorer possibilities for people. Underuse could derive from public and business' mistrust in AI, poor infrastructure, lack of initiative, low investments, or, since AI's machine learning is dependent on data, from fragmented digital markets.

Overuse can also be problematic: investing in AI applications that prove not to be useful or applying AI to tasks for which it is not suited, for example using it to explain complex societal issues.

Liability: who is responsible for damage caused by AI?

An important challenge is to determine who is responsible for damage caused by an AI-operated device or service: in an accident involving a self-driving car. Should the damage be covered by the owner, the car manufacturer or the programmer?

If the producer was absolutely free of accountability, there might be no incentive to provide good product or service and it could damage people’s trust in the technology; but regulations could also be too strict and stifle innovation.

Threats of AI to fundamental rights and democracy

The results that AI produces depend on how it is designed and what data it uses. Both design and data can be intentionally or unintentionally biased. For example, some important aspects of an issue might not be programmed into the algorithm or might be programmed to reflect and replicate structural biases. In adcition, the use of numbers to represent complex social reality could make the AI seem factual and precise when it isn’t . This is sometimes referred to as mathwashing.

If not done properly, AI could lead to decisions influenced by data on  ethnicity, sex, age when hiring or firing, offering loans, or even in criminal proceedings.

AI could severely affect the right to privacy and data protection. It can be for example used in face recognition equipment or for online tracking and profiling of individuals. In addition, AI enables merging pieces of information a person has given into new data, which can lead to results the person would not expect.

It can also present a threat to democracy; AI has already been blamed for creating online echo chambers based on a person's previous online behaviour, displaying only content a person would like, instead of creating an environment for pluralistic, equally accessible and inclusive public debate. It can even be used to create extremely realistic fake video, audio and images, known as deepfakes, which can present financial risks, harm reputation, and challenge decision making. All of this could lead to separation and polarisation in the public sphere and manipulate elections.

AI could also play a role in harming freedom of assembly and protest as it could track and profile individuals linked to certain beliefs or actions.

AI impact on jobs

Use of AI in the workplace is expected to result in the elimination of a large number of jobs. Though AI is also expected to create and make better jobs, education and training will have a crucial role in preventing long-term unemployment and ensure a skilled workforce.

of jobs in OECD countries are highly automatable and another 32% could face substantial changes (estimate by Parliament's Think Tank 2020).

Competition

Amassing information could also lead to distortion of competition as  companies with more information could gain an advantage and effectively eliminate competitors.

Safety and security risks

AI applications that are in physical contact with humans or integrated into the human body could pose safety risks as they may be poorly designed, misused or hacked. Poorly regulated use of AI in weapons could lead to loss of human control over dangerous weapons.

Transparency challenges

Imbalances of access to information could be exploited. For example, based on a person's online behaviour or other data and without their knowledge, an online vendor can use AI to predict someone is willing to pay, or a political campaign can adapt their message. Another transparency issue is that sometimes it can be unclear to people whether they are interacting with AI or a person.

Read more about how MEPs want to shape data legislation to boost innovation and ensure safety

Find out more

  • Parliament's Think Tank
  • Artificial intelligence: how does it work, why does it matter and what can we do about it?
  • Opportunities of artificial intelligence
  • Artificial intelligence: legal and ethical reflections

Related articles

Artificial intelligence, what is artificial intelligence and how is it used, share this article on:.

  • Sign up for mail updates
  • PDF version

MIT Technology Review

  • Newsletters

The true dangers of AI are closer than we think

Forget superintelligent AI: algorithms are already creating real harm. The good news: the fight back has begun.

  • Karen Hao archive page

william isaac

As long as humans have built machines, we’ve feared the day they could destroy us. Stephen Hawking famously warned that AI could spell an end to civilization. But to many AI researchers, these conversations feel unmoored. It’s not that they don’t fear AI running amok—it’s that they see it already happening, just not in the ways most people would expect. 

AI is now screening job candidates, diagnosing disease, and identifying criminal suspects. But instead of making these decisions more efficient or fair, it’s often perpetuating the biases of the humans on whose decisions it was trained. 

William Isaac is a senior research scientist on the ethics and society team at DeepMind, an AI startup that Google acquired in 2014. He also co-chairs the Fairness, Accountability, and Transparency conference—the premier annual gathering of AI experts, social scientists, and lawyers working in this area. I asked him about the current and potential challenges facing AI development—as well as the solutions.

Q: Should we be worried about superintelligent AI?

A: I want to shift the question. The threats overlap, whether it’s predictive policing and risk assessment in the near term, or more scaled and advanced systems in the longer term. Many of these issues also have a basis in history. So potential risks and ways to approach them are not as abstract as we think.

There are three areas that I want to flag. Probably the most pressing one is this question about value alignment: how do you actually design a system that can understand and implement the various forms of preferences and values of a population? In the past few years we’ve seen attempts by policymakers, industry, and others to try to embed values into technical systems at scale—in areas like predictive policing, risk assessments, hiring, etc. It’s clear that they exhibit some form of bias that reflects society. The ideal system would balance out all the needs of many stakeholders and many people in the population. But how does society reconcile their own history with aspiration? We’re still struggling with the answers, and that question is going to get exponentially more complicated. Getting that problem right is not just something for the future, but for the here and now.

The second one would be achieving demonstrable social benefit. Up to this point there are still few pieces of empirical evidence that validate that AI technologies will achieve the broad-based social benefit that we aspire to. 

Lastly, I think the biggest one that anyone who works in the space is concerned about is: what are the robust mechanisms of oversight and accountability. 

Q: How do we overcome these risks and challenges?

A: Three areas would go a long way. The first is to build a collective muscle for responsible innovation and oversight. Make sure you’re thinking about where the forms of misalignment or bias or harm exist. Make sure you develop good processes for how you ensure that all groups are engaged in the process of technological design. Groups that have been historically marginalized are often not the ones that get their needs met. So how we design processes to actually do that is important.

The second one is accelerating the development of the sociotechnical tools to actually do this work. We don’t have a whole lot of tools. 

The last one is providing more funding and training for researchers and practitioners—particularly researchers and practitioners of color—to conduct this work. Not just in machine learning, but also in STS [science, technology, and society] and the social sciences. We want to not just have a few individuals but a community of researchers to really understand the range of potential harms that AI systems pose, and how to successfully mitigate them.

Q: How far have AI researchers come in thinking about these challenges, and how far do they still have to go?

A: In 2016, I remember, the White House had just come out with a big data report, and there was a strong sense of optimism that we could use data and machine learning to solve some intractable social problems. Simultaneously, there were researchers in the academic community who had been flagging in a very abstract sense: “Hey, there are some potential harms that could be done through these systems.” But they largely had not interacted at all. They existed in unique silos.

Since then, we’ve just had a lot more research targeting this intersection between known flaws within machine-learning systems and their application to society. And once people began to see that interplay, they realized: “Okay, this is not just a hypothetical risk. It is a real threat.” So if you view the field in phases, phase one was very much highlighting and surfacing that these concerns are real. The second phase now is beginning to grapple with broader systemic questions.

Q: So are you optimistic about achieving broad-based beneficial AI?

A: I am. The past few years have given me a lot of hope. Look at facial recognition as an example. There was the great work by Joy Buolamwini, Timnit Gebru, and Deb Raji in surfacing intersectional disparities in accuracies across facial recognition systems [i.e., showing these systems were far less accurate on Black female faces than white male ones]. There’s the advocacy that happened in civil society to mount a rigorous defense of human rights against misapplication of facial recognition. And also the great work that policymakers, regulators, and community groups from the grassroots up were doing to communicate exactly what facial recognition systems were and what potential risks they posed, and to demand clarity on what the benefits to society would be. That’s a model of how we could imagine engaging with other advances in AI.

But the challenge with facial recognition is we had to adjudicate these ethical and values questions while we were publicly deploying the technology. In the future, I hope that some of these conversations happen before the potential harms emerge.

Q: What do you dream about when you dream about the future of AI?

A: It could be a great equalizer. Like if you had AI teachers or tutors that could be available to students and communities where access to education and resources is very limited, that’d be very empowering. And that’s a nontrivial thing to want from this technology. How do you know it’s empowering? How do you know it’s socially beneficial? 

I went to graduate school in Michigan during the Flint water crisis. When the initial incidences of lead pipes emerged, the records they had for where the piping systems were located were on index cards at the bottom of an administrative building. The lack of access to technologies had put them at a significant disadvantage. It means the people who grew up in those communities, over 50% of whom are African-American, grew up in an environment where they don’t get basic services and resources.

Artificial intelligence

a protractor, a child writing math problems on a blackboard and a German text on geometry

Google DeepMind’s new AI systems can now solve complex math problems

AlphaProof and AlphaGeometry 2 are steps toward building systems that can reason, which could unlock exciting new capabilities.

  • Rhiannon Williams archive page

person using the voice function of their phone with the openai logo and a sound wave

OpenAI has released a new ChatGPT bot that you can talk to

The voice-enabled chatbot will be available to a small group of people today, and to all ChatGPT Plus users in the fall. 

  • Melissa Heikkilä archive page

8 bit concentric rings of ouroboros snakes

AI trained on AI garbage spits out AI garbage

As junk web pages written by AI proliferate, the models that rely on that data will suffer.

  • Scott J Mulligan archive page

foundational models of a racetrack with a text/image prompt to "Make the scenery a desert."

Roblox is launching a generative AI that builds 3D environments in a snap

It will make it easy to build new game environments on the platform, even if you don’t have any design skills.

Stay connected

Get the latest updates from mit technology review.

Discover special offers, top stories, upcoming events, and more.

Thank you for submitting your email!

It looks like something went wrong.

We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at [email protected] with a list of newsletters you’d like to receive.

AI Should Augment Human Intelligence, Not Replace It

by David De Cremer and Garry Kasparov

artificial intelligence is a threat to humans essay

Summary .   

Will smart machines really replace human workers? Probably not. People and AI both bring different abilities and strengths to the table. The real question is: how can human intelligence work with artificial intelligence to produce augmented intelligence. Chess Grandmaster Garry Kasparov offers some unique insight here. After losing to IBM’s Deep Blue, he began to experiment how a computer helper changed players’ competitive advantage in high-level chess games. What he discovered was that having the best players and the best program was less a predictor of success than having a really good process. Put simply, “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.” As leaders look at how to incorporate AI into their organizations, they’ll have to manage expectations as AI is introduced, invest in bringing teams together and perfecting processes, and refine their own leadership abilities.

In an economy where data is changing how companies create value — and compete — experts predict that using artificial intelligence (AI) at a larger scale will add as much as $15.7 trillion to the global economy by 2030 . As AI is changing how companies work, many believe that who does this work will change, too — and that organizations will begin to replace human employees with intelligent machines . This is already happening: intelligent systems are displacing humans in manufacturing, service delivery, recruitment, and the financial industry, consequently moving human workers towards lower-paid jobs or making them unemployed. This trend has led some to conclude that in 2040 our workforce may be totally unrecognizable .

Partner Center

  • Skip to main content
  • Keyboard shortcuts for audio player

What is AI and how will it change our lives? NPR Explains.

Danny Hajek at NPR West in Culver City, California, September 25, 2018. (photo by Allison Shelley)

Danny Hajek

Bobby Allyn

Bobby Allyn

Ashley Montgomery

artificial intelligence is a threat to humans essay

AI is a multi-billion dollar industry. Friends are using apps to morph their photos into realistic avatars. TV scripts, school essays and resumes are written by bots that sound a lot like a human. Yuichiro Chino hide caption

AI is a multi-billion dollar industry. Friends are using apps to morph their photos into realistic avatars. TV scripts, school essays and resumes are written by bots that sound a lot like a human.

Artificial intelligence is changing our lives – from education and politics to art and healthcare. The AI industry continues to develop at rapid pace. But what exactly is it? Should we be optimistic or worried about our future with this ever-evolving technology? Join host and tech reporter Bobby Allyn in NPR Explains: AI, a podcast series exclusively on the NPR App, which is available on the App Store or Google Play .

NPR Explains: AI answers your most pressing questions about artificial intelligence:

  • What is AI? - Artificial intelligence is a multi-billion dollar industry. Tons of AI tools are suddenly available to the public. Friends are using apps to morph their photos into realistic avatars. TV scripts, school essays and resumes are written by bots that sound a lot like a human. AI scientist Gary Marcus says there is no one definition of artificial intelligence. It's about building machines that do smart things. Listen here.
  • Can AI be regulated? - As technology gets better at faking reality, there are big questions about regulation. In the U.S., Congress has never been bold about regulating the tech industry and it's no different with the advancements in AI. Listen here.
  • Can AI replace creativity? - AI tools used to generate artwork can give users the chance to create stunning images. Language tools can generate poetry through algorithms. AI is blurring the lines of what it means to be an artist. Now, some artists are arguing that these AI models breach copyright law. Listen here.
  • Does AI have common sense? - Earlier this year, Microsoft's chatbot went rogue. It professed love to some users. It called people ugly. It spread false information. The chatbot's strange behavior brought up an interesting question: Does AI have common sense? Listen here.
  • How can AI help productivity? - From hiring practices to medical insurance paperwork, many big businesses are using AI to work faster and more efficiently. But that's raising urgent questions about discrimination and equity in the workplace. Listen here.
  • What are the dangers of AI? - Geoffrey Hinton, known as the "godfather of AI," spent decades advancing artificial intelligence. Now he says he believes the AI arms race among tech giants is actually a race towards danger. Listen here.

Learn more about artificial intelligence. Listen to NPR Explains: AI, a podcast series available exclusively in the NPR app. Download it on the App Store or Google Play .

Leaderonomics Logo

Will Artificial Intelligence Kill Us All?

AI

Image is from freepik.com by @jcomp

Lately when I give lectures on leadership to various audiences, the  topic of AI hardly ever fails to come up in the question-and-answer period. From what I hear, many people see AI as an ever-present and very worrisome development.

It’s a paradox: On the one hand, plenty of observers are scared by how AI is rapidly revolutionising industries, influencing productivity and shaping our future. On the other, AI has grown to be so ubiquitous that some people do not even notice its presence. Still, whatever people’s reactions turn out to be, there’s much confusion about what the rise of AI will mean for them. 

To many, AI seems like a “black box” in which mysterious things take place. And in the human mind, anything mysterious can easily trigger fear and distrust. In fact, for a significant number of people, AI has been transformed into some dark danger that’s lurking about. 

What’s concerning about AI

One of the biggest fears is job displacement . As machines will perform tasks at lower costs and greater efficiency than was previously possible with humans, AI will most likely eliminate certain categories of jobs. The resulting disruption in the labour market could accelerate income inequalities and even create poverty.

Other concerns about AI tend to be more of an ethical nature. These fears revolve around a loss of control. Many people worry that AI will create vast amounts of deepfake data, making it difficult to perceive what’s real. As such, AI could easily be used to orchestrate misinformation campaigns, cyberattacks and even the development of  autonomous weapon systems . True enough, various autocratic regimes, such as  Russia and  North Korea , have been exploiting the darker capabilities of AI. 

Read: Push for 'Ethical' AI and Technology Standards

But apart from these realistic concerns, we shouldn’t underestimate worries that are of a more existential nature. Many people fear that AI systems will become so advanced as to turn into a conscious organism, surpassing human intelligence. Similarly, there’s concern that a self-learning AI could become uncontrollable, with unforeseen, catastrophic side effects, including the mass destruction of life on Earth.

Not our first rodeo

What AI doomsayers don’t seem to realise is that AI has been around for decades despite appearing wholly futuristic. They should remember that humankind has encountered technological disruptions before. Think automation in manufacturing and e-commerce in retail. History shows that any significant progress has always been met by scepticism or even neophobia – the irrational fear or dislike of anything new or unfamiliar.

A good illustration is the case of British weavers and textile workers who in the late 18th century objected to the introduction of mechanised looms and knitting frames. To protect their jobs, they formed groups called Luddites that tried to destroy these new machines. 

When electricity became widespread, potential customers exaggerated its dangers, spreading frightening stories of people who had died of electrocution. The introduction of television raised fears that it would increase violence due to the popularity of shows glorifying violence. In the 1960s, many worried that robotics would supplant human labour. And up to the 1990s, some people fretted that personal computers would lead to job loss.

AI, the tangible manifestation of our deepest worries 

In hindsight, despite some initial dislocation and hardships, all these various innovations yielded great advantages . In most instances, they stimulated the creation of other, oftentimes better jobs. 

Generally speaking, humans tend to fear what they don’t understand. And AI is what keeps people up at night presently. The soil has been long prepared by science fiction writers who introduced the idea that a sentient, super-intelligent AI would (either through malevolence or by accident) kill us all. 

Indeed, this fear has been fuelled by many films, TV shows, comic books and other popular media in which robots or computers subjugate or exterminate the human race. Think of movies such as 2001: A Space Odyssey , The Terminator or The Matrix , to name a few. 

No wonder that AI has become the new bogeyman, the imaginary creature symbolising people’s fear of the unknown – a mysterious, menacing, elusive apparition that hides in the darkest corners of our imagination. Clearly, from its portrayal in horror movies, to its use as a metaphor for real-life terrors, this creature continues to captivate and terrify many people.

A universal human experience

Of course, the bogeyman is used to instill fear in children, making them more likely to comply with parental authority and societal rules. In that respect, the bogeyman appears to be a natural part of the cognitive and emotional development of every human being. It evolved from the experiences of our Paleolithic ancestors, exposed as they were to the vagaries of their environment. 

Given Homo sapiens ’ history, childish fears about the existence of some bogeyman have not gone away. Consciously or unconsciously, these fears persist in adult life. They become a symbol of the anxieties that linger just beneath the surface. The bogeyman’s endurance throughout history is a testament to its ability to tap into our primal fears and anxieties. In fact, if you scratch human beings, their stone age ancestors may reappear.

There seems to be many similarities between our almost phobic reactions towards AI and the feelings of terror inflicted by the bogeyman of our imagination. Given AI’s ability to tap into our deepest fears and insecurities, its presence becomes haunting to many.

Our most serious threat

However, given what we understand about human nature, we’d better face this bogeyman. These irrational fears associated with AI technology need to be dealt with. Let us be reminded that faith in technology has been the cornerstone of modern society. 

As mentioned before, all of us have been using various forms of AI for a long time. And the bogeyman hasn’t yet come to get us. Like irrational fears about the bogeyman, the fear that AI will overthrow humanity is grounded in misconceptions of what AI is about. 

At its most fundamental level, AI is really a field of computer science that focuses on producing intelligent computers capable of performing things that require human collaboration. AI is nothing more than a tool for improving human productivity. And that’s what all major technological advances were, whether it was the stone axe, the telephone, the personal computer, the internet or the smartphone. 

If we really think about it, the most serious threat we face is not from AI acting to the detriment of humanity. It is the willful misuse of AI by other human beings. In fact, Homo sapiens is the one behaving exactly as we fear that AI would act. It is Homo sapiens that has become unpredictable and uncontrollable. It is Homo sapiens that has brought about inequality and injustice. And it is Homo sapiens that may cause the mass destruction of life on Earth. 

Keeping these facts in mind, we would be wise to remind ourselves that it is possible to develop AI responsibly and ethically. To make this happen, however, we will need to manage our irrational feelings associated with the bogeyman.

Read other Manfred F. R. Kets de Vries articles here .

DEI, diversity and inclusion

3 Ways Leaders Can Better Create a Culture of Diversity and Inclusion

Logo

This article is republished courtesy of  INSEAD Knowledge . Copyright INSEAD 2024.

Tags : Artificial Intelligence, Digital, Transformation & Change

Alt

You May Also Like

young people running outdoors, city runners

No Neuroscientist Ever Topped this Advice From Mom

Oct 02, 2023 • 2 Min Read

Alt

Raise Your Game: Understanding Your Life Orientations

Jun 06, 2011 • 19 Min Podcast

Anxious man being counseled

How To Overcome and Put a Stop to Workplace Trauma?

Oct 20, 2021 • 66 Min Video

Be a Leader's Digest Reader

More From Forbes

The 15 Biggest Risks Of Artificial Intelligence

  • Share to Facebook
  • Share to Twitter
  • Share to Linkedin

As the world witnesses unprecedented growth in artificial intelligence (AI) technologies, it's essential to consider the potential risks and challenges associated with their widespread adoption.

AI does present some significant dangers — from job displacement to security and privacy concerns — and encouraging awareness of issues helps us engage in conversations about AI's legal, ethical, and societal implications.

Here are the biggest risks of artificial intelligence:

1. Lack of Transparency

Lack of transparency in AI systems, particularly in deep learning models that can be complex and difficult to interpret, is a pressing issue. This opaqueness obscures the decision-making processes and underlying logic of these technologies.

When people can’t comprehend how an AI system arrives at its conclusions, it can lead to distrust and resistance to adopting these technologies.

Best High-Yield Savings Accounts Of 2024

Best 5% interest savings accounts of 2024, 2. bias and discrimination.

AI systems can inadvertently perpetuate or amplify societal biases due to biased training data or algorithmic design. To minimize discrimination and ensure fairness, it is crucial to invest in the development of unbiased algorithms and diverse training data sets.

3. Privacy Concerns

AI technologies often collect and analyze large amounts of personal data, raising issues related to data privacy and security. To mitigate privacy risks, we must advocate for strict data protection regulations and safe data handling practices.

4. Ethical Dilemmas

Instilling moral and ethical values in AI systems, especially in decision-making contexts with significant consequences, presents a considerable challenge. Researchers and developers must prioritize the ethical implications of AI technologies to avoid negative societal impacts.

5. Security Risks

As AI technologies become increasingly sophisticated, the security risks associated with their use and the potential for misuse also increase. Hackers and malicious actors can harness the power of AI to develop more advanced cyberattacks, bypass security measures, and exploit vulnerabilities in systems.

The rise of AI-driven autonomous weaponry also raises concerns about the dangers of rogue states or non-state actors using this technology — especially when we consider the potential loss of human control in critical decision-making processes. To mitigate these security risks, governments and organizations need to develop best practices for secure AI development and deployment and foster international cooperation to establish global norms and regulations that protect against AI security threats.

6. Concentration of Power

The risk of AI development being dominated by a small number of large corporations and governments could exacerbate inequality and limit diversity in AI applications. Encouraging decentralized and collaborative AI development is key to avoiding a concentration of power.

7. Dependence on AI

Overreliance on AI systems may lead to a loss of creativity, critical thinking skills, and human intuition. Striking a balance between AI-assisted decision-making and human input is vital to preserving our cognitive abilities.

8. Job Displacement

AI-driven automation has the potential to lead to job losses across various industries, particularly for low-skilled workers (although there is evidence that AI and other emerging technologies will create more jobs than it eliminates ).

As AI technologies continue to develop and become more efficient, the workforce must adapt and acquire new skills to remain relevant in the changing landscape. This is especially true for lower-skilled workers in the current labor force.

9. Economic Inequality

AI has the potential to contribute to economic inequality by disproportionally benefiting wealthy individuals and corporations. As we talked about above, job losses due to AI-driven automation are more likely to affect low-skilled workers, leading to a growing income gap and reduced opportunities for social mobility.

The concentration of AI development and ownership within a small number of large corporations and governments can exacerbate this inequality as they accumulate wealth and power while smaller businesses struggle to compete. Policies and initiatives that promote economic equity—like reskilling programs, social safety nets, and inclusive AI development that ensures a more balanced distribution of opportunities — can help combat economic inequality.

10. Legal and Regulatory Challenges

It’s crucial to develop new legal frameworks and regulations to address the unique issues arising from AI technologies, including liability and intellectual property rights. Legal systems must evolve to keep pace with technological advancements and protect the rights of everyone.

11. AI Arms Race

The risk of countries engaging in an AI arms race could lead to the rapid development of AI technologies with potentially harmful consequences.

Recently, more than a thousand technology researchers and leaders, including Apple co-founder Steve Wozniak, have urged intelligence labs to pause the development of advanced AI systems . The letter states that AI tools present “profound risks to society and humanity.”

In the letter, the leaders said:

"Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an 'AI summer' in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt."

12. Loss of Human Connection

Increasing reliance on AI-driven communication and interactions could lead to diminished empathy, social skills, and human connections. To preserve the essence of our social nature, we must strive to maintain a balance between technology and human interaction.

13. Misinformation and Manipulation

AI-generated content, such as deepfakes, contributes to the spread of false information and the manipulation of public opinion. Efforts to detect and combat AI-generated misinformation are critical in preserving the integrity of information in the digital age.

In a Stanford University study on the most pressing dangers of AI, researchers said:

“AI systems are being used in the service of disinformation on the internet, giving them the potential to become a threat to democracy and a tool for fascism. From deepfake videos to online bots manipulating public discourse by feigning consensus and spreading fake news, there is the danger of AI systems undermining social trust. The technology can be co-opted by criminals, rogue states, ideological extremists, or simply special interest groups, to manipulate people for economic gain or political advantage.”

14. Unintended Consequences

AI systems, due to their complexity and lack of human oversight, might exhibit unexpected behaviors or make decisions with unforeseen consequences. This unpredictability can result in outcomes that negatively impact individuals, businesses, or society as a whole.

Robust testing, validation, and monitoring processes can help developers and researchers identify and fix these types of issues before they escalate.

15. Existential Risks

The development of artificial general intelligence (AGI) that surpasses human intelligence raises long-term concerns for humanity. The prospect of AGI could lead to unintended and potentially catastrophic consequences, as these advanced AI systems may not be aligned with human values or priorities.

To mitigate these risks, the AI research community needs to actively engage in safety research, collaborate on ethical guidelines, and promote transparency in AGI development. Ensuring that AGI serves the best interests of humanity and does not pose a threat to our existence is paramount.

To stay on top of new and emerging business and tech trends, make sure to subscribe to my newsletter , follow me on Twitter , LinkedIn , and YouTube , and check out my books, Future Skills: The 20 Skills and Competencies Everyone Needs to Succeed in a Digital World and The Future Internet: How the Metaverse, Web 3.0, and Blockchain Will Transform Business and Society .

Bernard Marr

  • Editorial Standards
  • Reprints & Permissions

Join The Conversation

One Community. Many Voices. Create a free account to share your thoughts. 

Forbes Community Guidelines

Our community is about connecting people through open and thoughtful conversations. We want our readers to share their views and exchange ideas and facts in a safe space.

In order to do so, please follow the posting rules in our site's  Terms of Service.   We've summarized some of those key rules below. Simply put, keep it civil.

Your post will be rejected if we notice that it seems to contain:

  • False or intentionally out-of-context or misleading information
  • Insults, profanity, incoherent, obscene or inflammatory language or threats of any kind
  • Attacks on the identity of other commenters or the article's author
  • Content that otherwise violates our site's  terms.

User accounts will be blocked if we notice or believe that users are engaged in:

  • Continuous attempts to re-post comments that have been previously moderated/rejected
  • Racist, sexist, homophobic or other discriminatory comments
  • Attempts or tactics that put the site security at risk
  • Actions that otherwise violate our site's  terms.

So, how can you be a power user?

  • Stay on topic and share your insights
  • Feel free to be clear and thoughtful to get your point across
  • ‘Like’ or ‘Dislike’ to show your point of view.
  • Protect your community.
  • Use the report tool to alert us when someone breaks the rules.

Thanks for reading our community guidelines. Please read the full list of posting rules found in our site's  Terms of Service.

The Scholarly Kitchen

What’s Hot and Cooking In Scholarly Publishing

Strengths, Weaknesses, Opportunities, and Threats: A Comprehensive SWOT Analysis of AI and Human Expertise in Peer Review

  • Artificial Intelligence
  • Peer Review
  • Research Integrity

With Peer Review Week fast approaching, I’m hoping the event will spur conversations that don’t just scratch the surface, but rather dig deeply into understanding the role of peer review in today’s digital age and address the core issues peer review is facing. To me, one of the main issues is that AI-generated content has been discovered in prominent journals. A key question is, should all of the blame for these transgressions fall on the peer review process?

To answer this question, I would like to ask a few more in response. Are you aware of the sheer volume of papers published each year? Do you know the number of peer reviewers available in comparison? Do you understand the actual role of peer review? And are you aware that this is an altruistic effort?

The peer review system is overworked! Millions of papers are being submitted each year and in proportion, the number of peer reviewers available is not growing at the same pace. Some reasons for this are that peer reviewing is a voluntary, time-consuming task which reviewers take on in addition to their other full-time academic and teaching responsibilities. Another reason is the lack of diversity in the reviewer pool. The system is truly stretched to its limits.

Of course, this doesn’t mean sub-par quality reviews. But the fundamental question remains “Should the role of peer review be to catch AI-generated text?” Peer reviewers are expected to contribute to the science — to identify gaps in the research itself, to spot structural and logical flaws and to leverage their expertise to make the science stronger. A peer reviewer’s focus will get diverted if instead of focusing on the science, they were instead asked to hunt down AI-generated content; this in my opinion dilutes their expertise and shifts the burden onto them in a way that it was never intended.

Ironically, AI should be seen as a tool to ease the workload of peer reviewers, NOT to add to it. How can we ensure that across the board, AI is being used to complement human expertise — to make tasks easier, not to add to the burden? And how can the conversation shift from a blame game to carefully evaluating how we can use AI to address the gaps that it is itself creating. In short, how can the problem become the solution?

The aim of AI is to ensure that there is more time for innovation by freeing our time from routine tasks. It is a disservice if we end up having to spend more time on routine checks just to identify the misuse of AI! It entirely defeats the purpose of AI tools and if that’s how we are setting up processes, then we are setting ourselves up for failure.

Workflows are ideally set up in as streamlined a manner as possible. If something is creating stress and friction, it is probably the process that is at fault and needs to be changed.

Can journals invest in more sophisticated AI tools to flag potential issues before papers even reach peer reviewers? This would allow reviewers to concentrate on the content alone. There are tools available to identify AI generated content. They may not be perfect (or even all that effective as of yet), but they exist and continue to improve. How can these tools be integrated into the journal evaluation process?

Is this an editorial function or something peer reviewers should be concerned with? Should journals provide targeted training on how to use AI tools effectively, so they complement human judgment rather than replace it? Can we create standardized industry-wide peer reviewer training programs or guidelines that clarify what falls under the scope of what a peer reviewer is supposed to evaluate and what doesn’t?

Perhaps developing a more collaborative approach to peer review, where multiple reviewers can discuss and share insights will help in spotting issues that one individual could miss.

Shifting the Conversation

Instead of casting blame, we must ask ourselves the critical questions: Are we equipping our peer reviewers with the right tools to succeed in an increasingly complex landscape? How can AI be harnessed not as a burden, but as a true ally in maintaining the integrity of research? Are we prepared to re-think the roles within peer review? Can we stop viewing AI as a threat or as a problem and instead embrace it as a partner—one that enhances human judgment rather than complicates it?

Let’s take a step back and look at this SWOT analysis for AI versus human expertise in peer review

Table showing SWOT analysis of AI in peer review

The challenge presented before us is much more than just catching AI-generated content; it’s about reimagining the future of peer review.

It’s time for the conversation to shift. It’s time for the blame game to stop. It’s time to recognize the strengths, the weaknesses, the opportunities and the threats of both humans and AI.

Roohi Ghosh

Roohi Ghosh

Roohi Ghosh is the ambassador for researcher success at Cactus Communications (CACTUS). She is passionate about advocating for researchers and amplifying their voices on a global stage.

1 Thought on "Strengths, Weaknesses, Opportunities, and Threats: A Comprehensive SWOT Analysis of AI and Human Expertise in Peer Review"

' src=

AI-generated *text* isn’t necessarily a problem. I would focus on AI-altered data and images. Does a reviewer’s skill in detecting misuse of AI correlate with the quality of their scientific review? Because I don’t assume that an author’s writing skills correlate with scientific merit. Translation, editing, and writing assistance are legitimate uses of GenAI. I think we want reviewers to focus on the authenticity and legitimacy of ideas more than language.

  • By Leslie Elizabeth Parker
  • Sep 12, 2024, 2:43 PM
  • Reply to Comment

Leave a Comment Cancel reply

Notify me of follow-up comments by email.

Related Articles:

hands typing on computer keyboard, overlaid with AI code and a robot holding up a hand

Next Article:

Venn diagram showing overlap between accessibility, inclusivity, and usability

Find anything you save across the site in your account

Why A.I. Isn’t Going to Make Art

In 1953, Roald Dahl published “ The Great Automatic Grammatizator ,” a short story about an electrical engineer who secretly desires to be a writer. One day, after completing construction of the world’s fastest calculating machine, the engineer realizes that “English grammar is governed by rules that are almost mathematical in their strictness.” He constructs a fiction-writing machine that can produce a five-thousand-word short story in thirty seconds; a novel takes fifteen minutes and requires the operator to manipulate handles and foot pedals, as if he were driving a car or playing an organ, to regulate the levels of humor and pathos. The resulting novels are so popular that, within a year, half the fiction published in English is a product of the engineer’s invention.

Is there anything about art that makes us think it can’t be created by pushing a button, as in Dahl’s imagination? Right now, the fiction generated by large language models like ChatGPT is terrible, but one can imagine that such programs might improve in the future. How good could they get? Could they get better than humans at writing fiction—or making paintings or movies—in the same way that calculators are better at addition and subtraction?

Art is notoriously hard to define, and so are the differences between good art and bad art. But let me offer a generalization: art is something that results from making a lot of choices. This might be easiest to explain if we use fiction writing as an example. When you are writing fiction, you are—consciously or unconsciously—making a choice about almost every word you type; to oversimplify, we can imagine that a ten-thousand-word short story requires something on the order of ten thousand choices. When you give a generative-A.I. program a prompt, you are making very few choices; if you supply a hundred-word prompt, you have made on the order of a hundred choices.

If an A.I. generates a ten-thousand-word story based on your prompt, it has to fill in for all of the choices that you are not making. There are various ways it can do this. One is to take an average of the choices that other writers have made, as represented by text found on the Internet; that average is equivalent to the least interesting choices possible, which is why A.I.-generated text is often really bland. Another is to instruct the program to engage in style mimicry, emulating the choices made by a specific writer, which produces a highly derivative story. In neither case is it creating interesting art.

I think the same underlying principle applies to visual art, although it’s harder to quantify the choices that a painter might make. Real paintings bear the mark of an enormous number of decisions. By comparison, a person using a text-to-image program like DALL-E enters a prompt such as “A knight in a suit of armor fights a fire-breathing dragon,” and lets the program do the rest. (The newest version of DALL-E accepts prompts of up to four thousand characters—hundreds of words, but not enough to describe every detail of a scene.) Most of the choices in the resulting image have to be borrowed from similar paintings found online; the image might be exquisitely rendered, but the person entering the prompt can’t claim credit for that.

Some commentators imagine that image generators will affect visual culture as much as the advent of photography once did. Although this might seem superficially plausible, the idea that photography is similar to generative A.I. deserves closer examination. When photography was first developed, I suspect it didn’t seem like an artistic medium because it wasn’t apparent that there were a lot of choices to be made; you just set up the camera and start the exposure. But over time people realized that there were a vast number of things you could do with cameras, and the artistry lies in the many choices that a photographer makes. It might not always be easy to articulate what the choices are, but when you compare an amateur’s photos to a professional’s, you can see the difference. So then the question becomes: Is there a similar opportunity to make a vast number of choices using a text-to-image generator? I think the answer is no. An artist—whether working digitally or with paint—implicitly makes far more decisions during the process of making a painting than would fit into a text prompt of a few hundred words.

We can imagine a text-to-image generator that, over the course of many sessions, lets you enter tens of thousands of words into its text box to enable extremely fine-grained control over the image you’re producing; this would be something analogous to Photoshop with a purely textual interface. I’d say that a person could use such a program and still deserve to be called an artist. The film director Bennett Miller has used DALL-E 2 to generate some very striking images that have been exhibited at the Gagosian gallery; to create them, he crafted detailed text prompts and then instructed DALL-E to revise and manipulate the generated images again and again. He generated more than a hundred thousand images to arrive at the twenty images in the exhibit. But he has said that he hasn’t been able to obtain comparable results on later releases of DALL-E . I suspect this might be because Miller was using DALL-E for something it’s not intended to do; it’s as if he hacked Microsoft Paint to make it behave like Photoshop, but as soon as a new version of Paint was released, his hacks stopped working. OpenAI probably isn’t trying to build a product to serve users like Miller, because a product that requires a user to work for months to create an image isn’t appealing to a wide audience. The company wants to offer a product that generates images with little effort.

It’s harder to imagine a program that, over many sessions, helps you write a good novel. This hypothetical writing program might require you to enter a hundred thousand words of prompts in order for it to generate an entirely different hundred thousand words that make up the novel you’re envisioning. It’s not clear to me what such a program would look like. Theoretically, if such a program existed, the user could perhaps deserve to be called the author. But, again, I don’t think companies like OpenAI want to create versions of ChatGPT that require just as much effort from users as writing a novel from scratch. The selling point of generative A.I. is that these programs generate vastly more than you put into them, and that is precisely what prevents them from being effective tools for artists.

The companies promoting generative-A.I. programs claim that they will unleash creativity. In essence, they are saying that art can be all inspiration and no perspiration—but these things cannot be easily separated. I’m not saying that art has to involve tedium. What I’m saying is that art requires making choices at every scale; the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception. It is a mistake to equate “large-scale” with “important” when it comes to the choices made when creating art; the interrelationship between the large scale and the small scale is where the artistry lies.

Believing that inspiration outweighs everything else is, I suspect, a sign that someone is unfamiliar with the medium. I contend that this is true even if one’s goal is to create entertainment rather than high art. People often underestimate the effort required to entertain; a thriller novel may not live up to Kafka’s ideal of a book—an “axe for the frozen sea within us”—but it can still be as finely crafted as a Swiss watch. And an effective thriller is more than its premise or its plot. I doubt you could replace every sentence in a thriller with one that is semantically equivalent and have the resulting novel be as entertaining. This means that its sentences—and the small-scale choices they represent—help to determine the thriller’s effectiveness.

Many novelists have had the experience of being approached by someone convinced that they have a great idea for a novel, which they are willing to share in exchange for a fifty-fifty split of the proceeds. Such a person inadvertently reveals that they think formulating sentences is a nuisance rather than a fundamental part of storytelling in prose. Generative A.I. appeals to people who think they can express themselves in a medium without actually working in that medium. But the creators of traditional novels, paintings, and films are drawn to those art forms because they see the unique expressive potential that each medium affords. It is their eagerness to take full advantage of those potentialities that makes their work satisfying, whether as entertainment or as art.

Of course, most pieces of writing, whether articles or reports or e-mails, do not come with the expectation that they embody thousands of choices. In such cases, is there any harm in automating the task? Let me offer another generalization: any writing that deserves your attention as a reader is the result of effort expended by the person who wrote it. Effort during the writing process doesn’t guarantee the end product is worth reading, but worthwhile work cannot be made without it. The type of attention you pay when reading a personal e-mail is different from the type you pay when reading a business report, but in both cases it is only warranted when the writer put some thought into it.

Recently, Google aired a commercial during the Paris Olympics for Gemini, its competitor to OpenAI’s GPT-4 . The ad shows a father using Gemini to compose a fan letter, which his daughter will send to an Olympic athlete who inspires her. Google pulled the commercial after widespread backlash from viewers; a media professor called it “one of the most disturbing commercials I’ve ever seen.” It’s notable that people reacted this way, even though artistic creativity wasn’t the attribute being supplanted. No one expects a child’s fan letter to an athlete to be extraordinary; if the young girl had written the letter herself, it would likely have been indistinguishable from countless others. The significance of a child’s fan letter—both to the child who writes it and to the athlete who receives it—comes from its being heartfelt rather than from its being eloquent.

Many of us have sent store-bought greeting cards, knowing that it will be clear to the recipient that we didn’t compose the words ourselves. We don’t copy the words from a Hallmark card in our own handwriting, because that would feel dishonest. The programmer Simon Willison has described the training for large language models as “money laundering for copyrighted data,” which I find a useful way to think about the appeal of generative-A.I. programs: they let you engage in something like plagiarism, but there’s no guilt associated with it because it’s not clear even to you that you’re copying.

Some have claimed that large language models are not laundering the texts they’re trained on but, rather, learning from them, in the same way that human writers learn from the books they’ve read. But a large language model is not a writer; it’s not even a user of language. Language is, by definition, a system of communication, and it requires an intention to communicate. Your phone’s auto-complete may offer good suggestions or bad ones, but in neither case is it trying to say anything to you or the person you’re texting. The fact that ChatGPT can generate coherent sentences invites us to imagine that it understands language in a way that your phone’s auto-complete does not, but it has no more intention to communicate.

It is very easy to get ChatGPT to emit a series of words such as “I am happy to see you.” There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you. A dog can communicate that it is happy to see you, and so can a prelinguistic child, even though both lack the capability to use words. ChatGPT feels nothing and desires nothing, and this lack of intention is why ChatGPT is not actually using language. What makes the words “I’m happy to see you” a linguistic utterance is not that the sequence of text tokens that it is made up of are well formed; what makes it a linguistic utterance is the intention to communicate something.

Because language comes so easily to us, it’s easy to forget that it lies on top of these other experiences of subjective feeling and of wanting to communicate that feeling. We’re tempted to project those experiences onto a large language model when it emits coherent sentences, but to do so is to fall prey to mimicry; it’s the same phenomenon as when butterflies evolve large dark spots on their wings that can fool birds into thinking they’re predators with big eyes. There is a context in which the dark spots are sufficient; birds are less likely to eat a butterfly that has them, and the butterfly doesn’t really care why it’s not being eaten, as long as it gets to live. But there is a big difference between a butterfly and a predator that poses a threat to a bird.

A person using generative A.I. to help them write might claim that they are drawing inspiration from the texts the model was trained on, but I would again argue that this differs from what we usually mean when we say one writer draws inspiration from another. Consider a college student who turns in a paper that consists solely of a five-page quotation from a book, stating that this quotation conveys exactly what she wanted to say, better than she could say it herself. Even if the student is completely candid with the instructor about what she’s done, it’s not accurate to say that she is drawing inspiration from the book she’s citing. The fact that a large language model can reword the quotation enough that the source is unidentifiable doesn’t change the fundamental nature of what’s going on.

As the linguist Emily M. Bender has noted, teachers don’t ask students to write essays because the world needs more student essays. The point of writing essays is to strengthen students’ critical-thinking skills; in the same way that lifting weights is useful no matter what sport an athlete plays, writing essays develops skills necessary for whatever job a college student will eventually get. Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.

Not all writing needs to be creative, or heartfelt, or even particularly good; sometimes it simply needs to exist. Such writing might support other goals, such as attracting views for advertising or satisfying bureaucratic requirements. When people are required to produce such text, we can hardly blame them for using whatever tools are available to accelerate the process. But is the world better off with more documents that have had minimal effort expended on them? It would be unrealistic to claim that if we refuse to use large language models, then the requirements to create low-quality text will disappear. However, I think it is inevitable that the more we use large language models to fulfill those requirements, the greater those requirements will eventually become. We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?

It’s not impossible that one day we will have computer programs that can do anything a human being can do, but, contrary to the claims of the companies promoting A.I., that is not something we’ll see in the next few years. Even in domains that have absolutely nothing to do with creativity, current A.I. programs have profound limitations that give us legitimate reasons to question whether they deserve to be called intelligent at all.

The computer scientist François Chollet has proposed the following distinction: skill is how well you perform at a task, while intelligence is how efficiently you gain new skills. I think this reflects our intuitions about human beings pretty well. Most people can learn a new skill given sufficient practice, but the faster the person picks up the skill, the more intelligent we think the person is. What’s interesting about this definition is that—unlike I.Q. tests—it’s also applicable to nonhuman entities; when a dog learns a new trick quickly, we consider that a sign of intelligence.

In 2019, researchers conducted an experiment in which they taught rats how to drive. They put the rats in little plastic containers with three copper-wire bars; when the mice put their paws on one of these bars, the container would either go forward, or turn left or turn right. The rats could see a plate of food on the other side of the room and tried to get their vehicles to go toward it. The researchers trained the rats for five minutes at a time, and after twenty-four practice sessions, the rats had become proficient at driving. Twenty-four trials were enough to master a task that no rat had likely ever encountered before in the evolutionary history of the species. I think that’s a good demonstration of intelligence.

Now consider the current A.I. programs that are widely acclaimed for their performance. AlphaZero, a program developed by Google’s DeepMind, plays chess better than any human player, but during its training it played forty-four million games, far more than any human can play in a lifetime. For it to master a new game, it will have to undergo a similarly enormous amount of training. By Chollet’s definition, programs like AlphaZero are highly skilled, but they aren’t particularly intelligent, because they aren’t efficient at gaining new skills. It is currently impossible to write a computer program capable of learning even a simple task in only twenty-four trials, if the programmer is not given information about the task beforehand.

Self-driving cars trained on millions of miles of driving can still crash into an overturned trailer truck, because such things are not commonly found in their training data, whereas humans taking their first driving class will know to stop. More than our ability to solve algebraic equations, our ability to cope with unfamiliar situations is a fundamental part of why we consider humans intelligent. Computers will not be able to replace humans until they acquire that type of competence, and that is still a long way off; for the time being, we’re just looking for jobs that can be done with turbocharged auto-complete.

Despite years of hype, the ability of generative A.I. to dramatically increase economic productivity remains theoretical. (Earlier this year, Goldman Sachs released a report titled “Gen AI: Too Much Spend, Too Little Benefit?”) The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.

Some individuals have defended large language models by saying that most of what human beings say or write isn’t particularly original. That is true, but it’s also irrelevant. When someone says “I’m sorry” to you, it doesn’t matter that other people have said sorry in the past; it doesn’t matter that “I’m sorry” is a string of text that is statistically unremarkable. If someone is being sincere, their apology is valuable and meaningful, even though apologies have previously been uttered. Likewise, when you tell someone that you’re happy to see them, you are saying something meaningful, even if it lacks novelty.

Something similar holds true for art. Whether you are creating a novel or a painting or a film, you are engaged in an act of communication between you and your audience. What you create doesn’t have to be utterly unlike every prior piece of art in human history to be valuable; the fact that you’re the one who is saying it, the fact that it derives from your unique life experience and arrives at a particular moment in the life of whoever is seeing your work, is what makes it new. We are all products of what has come before us, but it’s by living our lives in interaction with others that we bring meaning into the world. That is something that an auto-complete algorithm can never do, and don’t let anyone tell you otherwise. ♦

New Yorker Favorites

In the weeks before John Wayne Gacy’s scheduled execution, he was far from reconciled to his fate .

What HBO’s “Chernobyl” got right, and what it got terribly wrong .

Why does the Bible end that way ?

A new era of strength competitions is testing the limits of the human body .

How an unemployed blogger confirmed that Syria had used chemical weapons.

An essay by Toni Morrison: “ The Work You Do, the Person You Are .”

Sign up for our daily newsletter to receive the best stories from The New Yorker .

Why So Many People Are Going “No Contact” with Their Parents

We need your support today

Independent journalism is more important than ever. Vox is here to explain this unprecedented election cycle and help you understand the larger stakes. We will break down where the candidates stand on major issues, from economic policy to immigration, foreign policy, criminal justice, and abortion. We’ll answer your biggest questions, and we’ll explain what matters — and why. This timely and essential task, however, is expensive to produce.

We rely on readers like you to fund our journalism. Will you support our work and become a Vox Member today?

  • Future Perfect

The case for taking AI seriously as a threat to humanity

Why some people fear AI, explained.

by Kelsey Piper

Illustrations by Javier Zarracina for Vox

An illustration of a human and gears in their head.

Stephen Hawking has said , “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “ biggest existential threat .”

That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could permanently cut off human civilization from a good future.

This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.

There are also skeptics. Some of them think advanced AI is so distant that there’s no point in thinking about it now. Others are worried that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things. So here’s the big picture on how artificial intelligence might pose a catastrophic danger, in nine questions:

1) What is AI?

Artificial intelligence is the effort to create computers capable of intelligent behavior. It is a broad catchall term, used to refer to everything from Siri to IBM’s Watson to powerful technologies we have yet to invent.

Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.

Narrow AI has seen extraordinary progress over the past few years. AI systems have improved dramatically at translation , at games like chess and Go , at important research biology questions like predicting how proteins fold , and at generating images . AI systems determine what you’ll see in a Google search or in your Facebook Newsfeed . They compose music and write articles that, at a glance, read as if a human wrote them. They play strategy games . They are being developed to improve drone targeting and detect missiles .

But narrow AI is getting less narrow . Once, we made progress in AI by painstakingly teaching computer systems specific concepts. To do computer vision — allowing a computer to identify things in pictures and video — researchers wrote algorithms for detecting edges. To play chess, they programmed in heuristics about chess. To do natural language processing (speech recognition, transcription, translation, etc.), they drew on the field of linguistics.

But recently, we’ve gotten better at creating computer systems that have generalized learning capabilities. Instead of mathematically describing detailed features of a problem, we let the computer system learn that by itself. While once we treated computer vision as a completely different problem from natural language processing or platform game playing, now we can solve all three problems with the same approaches .

And as computers get good enough at narrow AI tasks, they start to exhibit more general capabilities. For example, OpenAI’s famous GPT-series of text AIs is, in one sense, the narrowest of narrow AIs — it just predicts what the next word will be in a text, based on the previous words and its corpus of human language. And yet, it can now identify questions as reasonable or unreasonable and discuss the physical world (for example, answering questions about which objects are larger or which steps in a process must come first). In order to be very good at the narrow task of text prediction, an AI system will eventually develop abilities that are not narrow at all.

Our AI progress so far has enabled enormous advances — and has also raised urgent ethical questions. When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against black people and low-income people — and so its outputs will likely be biased against black and low-income people too . Making websites more addictive can be great for your revenue but bad for your users. Releasing a program that writes convincing fake reviews or fake news might make those widespread, making it harder for the truth to get out.

Rosie Campbell at UC Berkeley’s Center for Human-Compatible AI argues that these are examples, writ small, of the big worry experts have about general AI in the future. The difficulties we’re wrestling with today with narrow AI don’t come from the systems turning on us or wanting revenge or considering us inferior. Rather, they come from the disconnect between what we tell our systems to do and what we actually want them to do.

For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.

artificial intelligence is a threat to humans essay

In other words, our problems come from the systems being really good at achieving the goal they learned to pursue; it’s just that the goal they learned in their training environment isn’t the outcome we actually wanted. And we’re building systems we don’t understand, which means we can’t always anticipate their behavior.

Right now the harm is limited because the systems are so limited. But it’s a pattern that could have even graver consequences for human beings in the future as AI systems become more advanced.

2) Is it even possible to make a computer as smart as a person?

Yes, though current AI systems aren’t nearly that smart.

One popular adage about AI is “ everything that’s easy is hard, and everything that’s hard is easy .” Doing complex calculations in the blink of an eye? Easy. Looking at a picture and telling you whether it’s a dog? Hard (until very recently).

Lots of things humans do are still outside AI’s grasp. For instance, it’s hard to design an AI system that explores an unfamiliar environment, that can navigate its way from, say, the entryway of a building it’s never been in before up the stairs to a specific person’s desk. We are just beginning to learn how to design an AI system that reads a book and retains an understanding of the concepts.

The paradigm that has driven many of the biggest breakthroughs in AI recently is called “deep learning.” Deep learning systems can do some astonishing stuff: beat games we thought humans might never lose, invent compelling and realistic photographs, solve open problems in molecular biology.

These breakthroughs have made some researchers conclude it’s time to start thinking about the dangers of more powerful systems, but skeptics remain. The field’s pessimists argue that programs still need an extraordinary pool of structured data to learn from, require carefully chosen parameters, or work only in environments designed to avoid the problems we don’t yet know how to solve. They point to self-driving cars , which are still mediocre under the best conditions despite the billions that have been poured into making them work.

It’s rare, though, to find a top researcher in AI who thinks that general AI is impossible. Instead, the field’s luminaries tend to say that it will happen someday — but probably a day that’s a long way off.

Other researchers argue that the day may not be so distant after all.

That’s because for almost all the history of AI, we’ve been held back in large part by not having enough computing power to realize our ideas fully. Many of the breakthroughs of recent years — AI systems that learned how to play strategy games , generate fake photos of celebrities , fold proteins , and compete in massive multiplayer online strategy games — have happened because that’s no longer true. Lots of algorithms that seemed not to work at all turned out to work quite well once we could run them with more computing power.

And the cost of a unit of computing time keeps falling . Progress in computing speed has slowed recently, but the cost of computing power is still estimated to be falling by a factor of 10 every 10 years. Through most of its history, AI has had access to less computing power than the human brain. That’s changing. By most estimates , we’re now approaching the era when AI systems can have the computing resources that we humans enjoy.

And deep learning, unlike previous approaches to AI, is highly suited to developing general capabilities.

“If you go back in history,” top AI researcher and OpenAI cofounder Ilya Sutskever told me , “they made a lot of cool demos with little symbolic AI. They could never scale them up — they were never able to get them to solve non-toy problems. Now with deep learning the situation is reversed. ... Not only is [the AI we’re developing] general, it’s also competent — if you want to get the best results on many hard problems, you must use deep learning. And it’s scalable.”

In other words, we didn’t need to worry about general AI back when winning at chess required entirely different techniques than winning at Go. But now, the same approach produces fake news or music depending on what training data it is fed. And as far as we can discover, the programs just keep getting better at what they do when they’re allowed more computation time — we haven’t discovered a limit to how good they can get. Deep learning approaches to most problems blew past all other approaches when deep learning was first discovered.

Furthermore, breakthroughs in a field can often surprise even other researchers in the field. “Some have argued that there is no conceivable risk to humanity [from AI] for centuries to come,” wrote UC Berkeley professor Stuart Russell, “perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

Learn about the smart ways people are fixing the world’s problems. Twice a week, you’ll get a roundup of ideas and solutions for tackling our biggest challenges: improving public health, decreasing human and animal suffering, easing catastrophic risks, and — to put it simply — getting better at doing good. Sign up for the Future Perfect newsletter .

There’s another consideration. Imagine an AI that is inferior to humans at everything, with one exception: It’s a competent engineer that can build AI systems very effectively. Machine learning engineers who work on automating jobs in other fields often observe, humorously, that in some respects, their own field looks like one where much of the work — the tedious tuning of parameters — could be automated.

If we can design such a system, then we can use its result — a better engineering AI — to build another, even better AI. This is the mind-bending scenario experts call “recursive self-improvement,” where gains in AI capabilities enable more gains in AI capabilities, allowing a system that started out behind us to rapidly end up with abilities well beyond what we anticipated.

This is a possibility that has been anticipated since the first computers. I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965 : “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

3) How exactly could AI wipe us out?

It’s immediately clear how nuclear bombs will kill us . No one working on mitigating nuclear risk has to start by explaining why it’d be a bad thing if we had a nuclear war.

The case that AI could pose an existential risk to humanity is more complicated and harder to grasp. So many of the people who are working to build safe AI systems have to start by explaining why AI systems, by default, are dangerous.

artificial intelligence is a threat to humans essay

The idea that AI can become a danger is rooted in the fact that AI systems pursue their goals, whether or not those goals are what we really intended — and whether or not we’re in the way. “You’re probably not an evil ant-hater who steps on ants out of malice,” Stephen Hawking wrote, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Here’s one scenario that keeps experts up at night: We develop a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

It is easy to design an AI that averts that specific pitfall. But there are lots of ways that unleashing powerful computer systems will have unexpected and potentially devastating effects, and avoiding all of them is a much harder problem than avoiding any specific one.

Victoria Krakovna, an AI researcher at DeepMind (now a division of Alphabet, Google’s parent company), compiled a list of examples of “specification gaming” : the computer doing what we told it to do but not what we wanted it to do. For example, we tried to teach AI organisms in a simulation to jump, but we did it by teaching them to measure how far their “feet” rose above the ground. Instead of jumping, they learned to grow into tall vertical poles and do flips — they excelled at what we were measuring, but they didn’t do what we wanted them to do.

An AI playing the Atari exploration game Montezuma’s Revenge found a bug that let it force a key in the game to reappear , thereby allowing it to earn a higher score by exploiting the glitch. An AI playing a different game realized it could get more points by falsely inserting its name as the owner of high-value items .

Sometimes, the researchers didn’t even know how their AI system cheated : “the agent discovers an in-game bug. ... For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).”

What these examples make clear is that in any system that might have bugs or unintended behavior or behavior humans don’t fully understand, a sufficiently powerful AI system might act unpredictably — pursuing its goals through an avenue that isn’t the one we expected.

In his 2009 paper “The Basic AI Drives,” Steve Omohundro , who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”

His argument goes like this: Because AIs have goals, they’ll be motivated to take actions that they can predict will advance their goals. An AI playing a chess game will be motivated to take an opponent’s piece and advance the board to a state that looks more winnable.

But the same AI, if it sees a way to improve its own chess evaluation algorithm so it can evaluate potential moves faster, will do that too, for the same reason: It’s just another step that advances its goal.

If the AI sees a way to harness more computing power so it can consider more moves in the time available, it will do that. And if the AI detects that someone is trying to turn off its computer mid-game, and it has a way to disrupt that, it’ll do it. It’s not that we would instruct the AI to do things like that; it’s that whatever goal a system has, actions like these will often be part of the best path to achieve that goal.

That means that any goal, even innocuous ones like playing chess or generating advertisements that get lots of clicks online, could produce unintended results if the agent pursuing it has enough intelligence and optimization power to identify weird, unexpected routes to achieve its goals.

Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient. They already do that, but it takes the form of weird glitches in games. As they grow more sophisticated, scientists like Omohundro predict more adversarial behavior.

4) When did scientists first start worrying about AI risk?

Scientists have been thinking about the potential of artificial intelligence since the early days of computers. In the famous paper where he put forth the Turing test for determining if an artificial system is truly “intelligent,” Alan Turing wrote:

Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. ... There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.

I.J. Good worked closely with Turing and reached the same conclusions, according to his assistant, Leslie Pendleton . In an excerpt from unpublished notes Good wrote shortly before he died in 2009, he writes about himself in third person and notes a disagreement with his younger self — while as a younger man, he thought powerful AIs might be helpful to us, the older Good expected AI to annihilate us.

[The paper] “Speculations Concerning the First Ultra-intelligent Machine” (1965) ... began: “The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”

In the 21st century, with computers quickly establishing themselves as a transformative force in our world, younger researchers started expressing similar worries.

Nick Bostrom is a professor at the University of Oxford, the director of the Future of Humanity Institute, and the director of the Governance of Artificial Intelligence Program . He researches risks to humanity , both in the abstract — asking questions like why we seem to be alone in the universe — and in concrete terms, analyzing the technological advances on the table and whether they endanger us. AI, he concluded, endangers us.

In 2014, he wrote a book explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

artificial intelligence is a threat to humans essay

Across the world, others have reached the same conclusion. Bostrom co-authored a paper on the ethics of artificial intelligence with Eliezer Yudkowsky, founder of and research fellow at the Berkeley Machine Intelligence Research Institute (MIRI), an organization that works on better formal characterizations of the AI safety problem.

Yudkowsky started his career in AI by worriedly poking holes in others’ proposals for how to make AI systems safe , and has spent most of it working to persuade his peers that AI systems will, by default, be unaligned with human values (not necessarily opposed to but indifferent to human morality) — and that it’ll be a challenging technical problem to prevent that outcome.

Increasingly, researchers realized that there’d be challenges that hadn’t been present with AI systems when they were simple. “‘Side effects’ are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future,” concluded a 2016 research paper on problems in AI safety.

Bostrom’s book Superintelligence was compelling to many people, but there were skeptics. “ No, experts don’t think superintelligent AI is a threat to humanity ,” argued an op-ed by Oren Etzioni, a professor of computer science at the University of Washington and CEO of the Allan Institute for Artificial Intelligence. “ Yes, we are worried about the existential risk of artificial intelligence ,” replied a dueling op-ed by Stuart Russell, an AI pioneer and UC Berkeley professor, and Allan DaFoe, a senior research fellow at Oxford and director of the Governance of AI program there.

It’s tempting to conclude that there’s a pitched battle between AI-risk skeptics and AI-risk believers. In reality, they might not disagree as profoundly as you would think.

Facebook’s chief AI scientist Yann LeCun, for example, is a vocal voice on the skeptical side. But while he argues we shouldn’t fear AI, he still believes we ought to have people working on, and thinking about, AI safety . “Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines,” he writes.

That’s not to say there’s an expert consensus here — far from it . There is substantial disagreement about which approaches seem likeliest to bring us to general AI, which approaches seem likeliest to bring us to safe general AI, and how soon we need to worry about any of this.

Many experts are wary that others are overselling their field, and dooming it when the hype runs out . But that disagreement shouldn’t obscure a growing common ground; these are possibilities worth thinking about, investing in, and researching, so we have guidelines when the moment comes that they’re needed.

5) Why couldn’t we just shut off a computer if it got too powerful?

A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals. If asked what its intentions are, or what it’s working on, it would attempt to evaluate which responses are least likely to get it shut off, and answer with those. If it wasn’t competent enough to do that, it might pretend to be even dumber than it was — anticipating that researchers would give it more time, computing resources, and training data.

So we might not know when it’s the right moment to shut off a computer.

We also might do things that make it impossible to shut off the computer later, even if we realize eventually that it’s a good idea. For example, many AI systems could have access to the internet, which is a rich source of training data and which they’d need if they’re to make money for their creators (for example, on the stock market, where more than half of trading is done by fast-reacting AI algorithms).

But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.

In that case, isn’t it a terrible idea to let any AI system — even one which doesn’t seem powerful enough to be dangerous — have access to the internet? Probably. But that doesn’t mean it won’t continue to happen. AI researchers want to make their AI systems more capable — that’s what makes them more scientifically interesting and more profitable. It’s not clear that the many incentives to make your systems powerful and use them online will suddenly change once systems become powerful enough to be dangerous.

So far, we’ve mostly talked about the technical challenges of AI. But from here forward, it’s necessary to veer more into the politics. Since AI systems enable incredible things, there will be lots of different actors working on such systems.

There will likely be startups, established tech companies like Google (Alphabet’s recently acquired startup DeepMind is frequently mentioned as an AI frontrunner), and organizations like Elon-Musk-founded OpenAI, which recently transitioned to a hybrid for-profit/non-profit structure .

There will be governments — Russia’s Vladimir Putin has expressed an interest in AI , and China has made big investments . Some of them will presumably be cautious and employ safety measures, including keeping their AI off the internet. But in a scenario like this one, we’re at the mercy of the least cautious actor , whoever they may be.

That’s part of what makes AI hard: Even if we know how to take appropriate precautions (and right now we don’t), we also need to figure out how to ensure that all would-be AI programmers are motivated to take those precautions and have the tools to implement them correctly.

6) What are we doing right now to avoid an AI apocalypse?

“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper in 2018 reviewing the state of the field .

The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety.

Bostrom’s Future of Humanity Institute has published a research agenda for AI governance : the study of “devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.” It has published research on the risk of malicious uses of AI , on the context of China’s AI strategy, and on artificial intelligence and international security .

The longest-established organization working on technical AI safety is the Machine Intelligence Research Institute (MIRI), which prioritizes research into designing highly reliable agents — artificial intelligence programs whose behavior we can predict well enough to be confident they’re safe. (Disclosure: MIRI is a nonprofit and I donated to its work in 2017-2019.)

The Elon Musk-founded OpenAI is a very new organization, less than three years old. But researchers there are active contributors to both AI safety and AI capabilities research. A research agenda in 2016 spelled out “ concrete open technical problems relating to accident prevention in machine learning systems,” and researchers have since advanced some approaches to safe AI systems .

Alphabet’s DeepMind, a leader in this field, has a safety team and a technical research agenda outlined here . “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe ,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).

There are also lots of people working on more present-day AI ethics problems: algorithmic bias , robustness of modern machine-learning algorithms to small changes, and transparency and interpretability of neural nets , to name just a few. Some of that research could potentially be valuable for preventing destructive scenarios.

But on the whole, the state of the field is a little bit as if almost all climate change researchers were focused on managing the droughts, wildfires, and famines we’re already facing today, with only a tiny skeleton team dedicating to forecasting the future and 50 or so researchers who work full time on coming up with a plan to turn things around.

Not every organization with a major AI department has a safety team at all, and some of them have safety teams focused only on algorithmic fairness and not on the risks from advanced systems. The US government doesn’t have a department for AI.

The field still has lots of open questions — many of which might make AI look much scarier, or much less so — which no one has dug into in depth.

7) Is this really likelier to kill us all than, say, climate change?

It sometimes seems like we’re facing dangers from all angles in the 21st century. Both climate change and future AI developments are likely to be transformative forces acting on our world.

Our predictions about climate change are more confident, both for better and for worse. We have a clearer understanding of the risks the planet will face, and we can estimate the costs to human civilization. They are projected to be enormous, risking potentially hundreds of millions of lives. The ones who will suffer most will be low-income people in developing countries ; the wealthy will find it easier to adapt. We also have a clearer understanding of the policies we need to enact to address climate change than we do with AI.

artificial intelligence is a threat to humans essay

There’s intense disagreement in the field on timelines for critical advances in AI . While AI safety experts agree on many features of the safety problem, they’re still making the case to research teams in their own field, and they disagree on some of the details. There’s substantial disagreement on how badly it could go, and on how likely it is to go badly. There are only a few people who work full time on AI forecasting. One of the things current researchers are trying to nail down is their models and the reasons for the remaining disagreements about what safe approaches will look like.

Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction . But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.

8) Is there a possibility that AI can be benevolent?

AI safety researchers emphasize that we shouldn’t assume AI systems will be benevolent by default . They’ll have the goals that their training environment set them up for, and no doubt this will fail to encapsulate the whole of human values.

When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number associated with Google on the New York Stock Exchange. But the AI’s values will be built around whatever goal system it was initially built around, which means it won’t suddenly become aligned with human values if it wasn’t designed that way to start with.

Of course, we can build AI systems that are aligned with human values, or at least that humans can safely work with. That is ultimately what almost every organization with an artificial general intelligence division is trying to do. Success with AI could give us access to decades or centuries of technological innovation all at once.

“If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made,” writes the introduction to Alphabet’s DeepMind . “From climate change to the need for radically improved healthcare, too many problems suffer from painfully slow progress, their complexity overwhelming our ability to find solutions. With AI as a multiplier for human ingenuity, those solutions will come into reach.”

So, yes, AI can share our values — and transform our world for the good. We just need to solve a very hard engineering problem first.

9) I just really want to know: how worried should we be?

To people who think the worrying is premature and the risks overblown, AI safety is competing with other priorities that sound, well, a bit less sci-fi — and it’s not clear why AI should take precedence. To people who think the risks described are real and substantial, it’s outrageous that we’re dedicating so few resources to working on them.

While machine-learning researchers are right to be wary of hype, it’s also hard to avoid the fact that they’re accomplishing some impressive, surprising things using very generalizable techniques, and that it doesn’t seem that all the low-hanging fruit has been picked.

AI looks increasingly like a technology that will change the world when it arrives. Researchers across many major AI organizations tell us it will be like launching a rocket : something we have to get right before we hit “go.” So it seems urgent to get to work learning rocketry. No matter whether or not humanity should be afraid, we should definitely be doing our homework.

More in this stream

California’s governor has the chance to make AI history

Most Popular

  • Sign up for Vox’s daily newsletter
  • The twisted political logic behind Trump’s attacks on Haitian immigrant
  • The new followup to ChatGPT is scarily good at deception
  • Take a mental break with the newest Vox crossword
  • Biden and Harris say America’s no longer at war. Is that true?

Today, Explained

Understand the world with a daily explainer plus the most compelling stories of the day.

 alt=

This is the title for the native ad

 alt=

More in Future Perfect

Science has a short-term memory problem

Scientists are trapped in an endless loop of grant applications. How can we set them free?

The perfect escape from our online world

Why the new luxury is flip phones and vinyl LPs

Shrinking the economy won’t save the planet

561 research papers in, the case for degrowth is still weak.

Kate Middleton says she is cancer-free. But why are she and so many young people getting sick?

It’s not just the Princess of Wales. More and more young people are getting cancer.

Can we grow the economy without making more useless junk?

We buy stuff. We throw it away. There’s a system to stop this toxic cycle.

I can’t take care of all my mom’s needs. Am I a monster?

Introducing Your Mileage May Vary, an unconventional new advice column.

  • Definitions
  • How to Buy XRP in 2024
  • How to Buy Bitcoin in 2024
  • How to Buy Shiba Inu in 2024
  • Crypto Gambling
  • Crypto Casinos
  • Crash Gambling
  • Crypto Sports Betting

Key Takeaways

Is ai dangerous, how to mitigate the risks of ai  , staying safe with ai, frequently asked questions, is artificial intelligence dangerous.

Kirsty Moreland

  • 2022 highlighted AI’s rapid expansion, with innovations like ChatGPT and Nvidia’s rising stock reflecting the tech’s increasing impact and the widespread benefits and market demand for AI solutions.
  • The dangers of AI are a topic of fierce debate, with concerns from figures like Elon Musk highlight the existential risks associated with uncontrolled AI.
  • AI can cause job losses, manipulate social perceptions through algorithms, and create autonomous weapons. Its misuse in cybercrime and surveillance also threatens privacy and security.
  • Strategies to mitigate the dangers of AI  include education, regulation, and organizational standards. These measures help manage AI’s impact, ensure transparency, and protect against potential abuses while adapting to rapid technological advancements.

2022 marked a significant milestone in artificial intelligence (AI) development, with AI applications like ChatGPT and companies like Nvidia becoming some of the major players in global technology. The leading AI companies are racing to improve AI-related hardware and software, enhancing AI performance and accelerating the development of the space. Nvidia’s soaring stock price, growing over 100% in the past year, shows the market’s appetite for AI solutions. 

But the mania surrounding AI hasn’t dampened the chorus of concerns that also surround this technology. Central to these fears is the simple question: is AI dangerous?

Self-driving drones and fake news are examples of AI technologies that pose potential threats to humanity. And of course, the question “can AI become self aware” looms ever-large. Even Prominent figures like Elon Musk have voiced concerns about the dangers of AI.

In this article, we’ll examine the key threats AI poses to society and discuss how they might be managed.

When people talk about AI’s dangers, they often address different concerns. Some worry about job displacement, while others focus on AI’s role in cybercrime, warfare, or its potential to surpass human control. AI is not a single technology but a range of tools that impact various parts of society. As such, it’s essential to examine the potential dangers of AI from several perspectives.

1. Job Losses Due to AI Automation

Artificial intelligence is now automating some industries, replacing humans in monotonous jobs like customer relations through AI software . Chatbots can answer questions that previously required a human attendant. McKinsey’s report indicates that up to 400 million workers may lose jobs due to technological advances by 2030. 

AI and robotics have taken over many factory operations that used to demand human dexterity. While this can improve and streamline certain operations, it poses questions about the employment prospects for the workers in these industries.

On the other hand, it’s important to note that automation may also create new job opportunities. AI requires skilled professionals to develop, maintain, and improve its systems, which can create roles in fields like data science, machine learning, and AI ethics. Additionally, AI can enhance efficiency in specific jobs, allowing workers to focus on more complex tasks that machines cannot handle.

2. Weakened Democracy

Imagine watching a political debate where one candidate seems to say uncharacteristic things. But what if that candidate never said those things at all? Such false representation is the threat of deepfakes—videos or audio manipulated by AI to make it appear that someone said or did something they never did. Deepfakes can be so convincing that they are hard to spot, even for the trained eye.

Now, consider the impact this could have on an election. A single deepfake video, released at just the right moment, could change the entire course of a campaign. It could spread online, misleading voters and influencing opinions before the truth comes out—if it ever does. Unfortunately, it’s happening now.

For example, in 2020, a deepfake video of Belgian Prime Minister Sophie Wilmès was released, falsely depicting her linking COVID-19 to climate change. The video quickly spread, confusing and angering parts of the electorate. This is just a flavour of the damage AI could pose to democracy, allowing candidates to be misrepresented and elections to be manipulated.

3. Cybercrime

Cybercriminals exploit Artificial Intelligence (AI) to carry out more sophisticated attacks. Beyond election tampering, deepfakes are becoming tools for identity theft and financial scams. Artificial intelligence can mimic facial expressions and voice patterns with such precision that it’s nearly impossible to tell the difference between what’s real and what’s not. 

Deepfakes threaten our personal security. Our identities can now be easily manipulated by those with malicious intent, if they know how to operate AI-led tools. This shift means that criminals can weaponize the traits that make us unique.

For instance, criminals utilized AI to replicate the voice of the CEO of a UK Energy company and thus duped a firm into releasing $243,000 , as reported by the Wall Street Journal . These are examples of how people can use AI to commit crimes and exploit people’s weaknesses.

As AI advances, the challenge is clear: how do we protect ourselves in a world where what we see and hear might not be real?

4. Social Fracture Through Social Media Algorithms

We are already seeing the dangers of AI play out  on the internet, especially on social media platforms.

Artificial intelligence algorithms promote content designed to capture your attention, which the AI learns by monitoring your previous activity. But the danger is that AI can result in echo chambers, where a person receives only information that affirms their beliefs. This reduces the public discourse society needs to flourish, and could give rise to different types of extremist groups.

A related case, as shown by an MIT study, indicated that fake news shared on Twitter was six times more likely to spread than real news, whereby artificial intelligence algorithms initiated the process. This manipulation can widen the gap between societies and, as a result, increase the level of conflict within them.

5. AI Weapons

AI-led weapons are known as Lethal Autonomous Weapons Systems (LAWS) and they introduce several new risks to warfare. These weapons operate independently of human control, removing human judgment from the decision-making process. Determining who should be held responsible for unsanctioned firing becomes a problem.

Another risk is the potential for rapid escalation. Autonomous weapons can respond quickly to threats, which might trigger a chain reaction of escalating conflict. It’s also possible that the technologies may fail to blur the lines between combatants and civilians. Facial recognition and signal tracing technologies may target specific individuals without a clear distinction, threatening civilian safety.

Find out more about autonomous weapons in a detailed article by The Conversation .

6. Uncontrollable Sentient AI

The idea of uncontrollable self-aware AI may seem like science fiction, but it’s a growing concern among experts. As AI technology advances, the possibility of creating machines that can think, learn, and act independently becomes more plausible. The fear isn’t just about AI becoming self-aware but about the AI developing goals that conflict with human interests.

AI scientists have already considered certain risks, including a future where AI systems operate autonomously without human intervention. Without human intervention, this AI could make decisions that harm society, all while following its programming to the letter.

The challenge with self-aware AI is its unpredictability. Once an AI system gains autonomy, controlling or understanding its decisions could become impossible. Such a scenario would mean that the AI could operate beyond human control. The risks range from minor inconveniences to catastrophic events, such as AI-initiated conflicts or economic disruptions.

In an interview, Elon Musk stated that the development of AI could be dangerous if not controlled . While this scenario does not exist, it is still possible as AI keeps developing and improving.

7. Social Surveillance with AI

Social surveillance with AI is a growing concern. Governments and corporations can monitor individuals on an unprecedented scale. Using AI-powered facial recognition, data tracking, and predictive algorithms, they can observe and analyze people’s behaviors, movements, and social interactions

On one hand, AI-powered surveillance systems may enhance public safety, combat crime, and identify potential threats. For example, facial recognition technology can help law enforcement agencies apprehend criminals and locate missing persons. Additionally, AI algorithms can analyze vast amounts of data to detect patterns and anomalies indicating suspicious activity.

However, the widespread use of AI for surveillance also poses significant risks. Collecting and analyzing personal data can lead to privacy violations and discrimination. Moreover, there is a concern that governments could use AI-powered surveillance systems to suppress dissent and control populations.

For example, China’s social credit system tracks the conduct of its citizens and rates them accordingly, with penalties for low-ranking citizens. Such surveillance is invasive, can compromise personal liberty, and poses a potential danger to democracy if wielded by authoritarian governments.

Even though AI has a high potential for adverse consequences, attempting to decrease its negative impact is possible. Here’s how humans can manipulate AI for a positive outcome. 

  • AI Education
  • Standards and limits

Let’s explore these factors.  

AI Education  

One of the most effective ways to mitigate the dangers of AI is through comprehensive AI education. As AI technologies advance, individuals at all levels need to understand the basics of AI, its potential risks, and its benefits.

  • Developing AI literacy is a foundational step. Just as digital literacy has become a must-have skill, AI literacy is quickly becoming essential. 
  • When people receive AI education, they’re better equipped to make informed decisions regarding its use. 
  • Whether a consumer is trying to understand the AI algorithms influencing their online purchases or a business leader is deciding to adopt AI in their company, knowledge is power.

Integrating ethics into AI education is also vital. AI doesn’t exist in a vacuum; it affects real people in real ways. By embedding ethics and data science into AI curricula, society can ensure those working with AI are mindful of its broader impacts.

Continuous training is another crucial component. AI isn’t static. It evolves rapidly, and so should our understanding of it. Professionals working with AI need to stay updated on the latest developments and the ethical and practical implications of those advancements. Regular workshops, courses, and certifications can help maintain AI competence and awareness.    

Government regulation is essential in guiding and managing the potential dangers of AI. As artificial intelligence advancements continue, regulatory bodies must set legal measures to use such systems appropriately. AI regulation must address several critical issues to ensure responsible use and safeguard human rights. Here’s a breakdown of the primary concerns:

Concern Description
AI’s application in military contexts can involve controlling weaponry without human oversight. Without strict regulations, AI might be used for warfare or invasive surveillance, potentially violating human rights.
Governments must establish clear and enforceable regulations to define the extent of autonomous weapons systems. Rules should ensure that human judgment remains central in critical decisions.
Legal restrictions should prevent AI from infringing on personal privacy. Policies must address AI’s role in gathering and processing personal information to avoid privacy breaches.
Regulations should aim to reduce bias in AI’s decision-making, particularly in sensitive areas like policing, medicine, and employment. Fairness in these applications is crucial because of their impact on affected groups.

AI development is dynamic, so legislators should adapt to changes by updating laws periodically to meet emerging challenges. Such an approach would prevent harm that might come humanity’s way through AI developments.  

Organizational Standards and Limits

Even with education and regulation in place, organizations need to adopt their own standards and limits to manage AI risks. It involves creating a structured approach prioritizing safety, transparency, and human oversight.

A risk-based approach to AI adoption is essential. Not all AI applications carry the same risk level, so it makes sense to prioritize those that could have the most significant impact. Focusing on high-risk applications first allows organizations to develop mitigation strategies applicable in lower-risk areas.

Clear governance structures help manage AI risks by setting boundaries and defining responsibilities. They ensure that AI systems are effective and align with the organization’s ethical standards.

Finally, setting limits on AI decision-making is vital. While AI is useful for automating processes and making decisions, there are certain areas where human judgment must remain. Establishing clear limits on what AI can and cannot do enables organizations to prevent unintended consequences and maintain a level of human oversight necessary for responsible AI use.

AI is already part of industries, governance, and even personal spaces. Its use will only increase, so people, business entities, and governments must be more careful.  

The best way to protect against AI’s threats is to recognize how it changes domains such as employment, privacy, and democracy. Here’s how you can coexist with AI while keeping your information secure:

  • First, get to know the AI system you’re using. Learn about its capabilities, limitations, and potential biases. Understanding the technology behind it helps you use it wisely.
  • Next, ensure that the AI provides clear reasons for its decisions. Transparency helps spot mistakes or biases.
  • Regular checks are also vital. Monitor the AI’s performance and review its decisions often. These checks help catch issues before they become problems.

Remember to protect AI with robust security measures. Use encryption and other security tools to safeguard your data from cyber threats.

What is AI?

Artificial intelligence (AI) refers to the ability of systems or machines to imitate human intelligence to accomplish a given task and enhance their performance using the data they gain.

Which jobs are safe from AI?

Professional roles that involve emotional intelligence, creativity, and decision-making, such as healthcare, education, and upper organizational management, are not likely to disappear because of AI.

Can AI cause human extinction?

Even though the idea of AI leading to human extinction may sound like science fiction, its inspiration is from real issues. Some fear that sophisticated AI systems may out-compete human intelligence, resulting in scenarios in which these systems act independently and erratically. The future of AI may be out of control, and the implications may be disastrous, including threats to human life.

Is AI a threat to democracy?

Is AI dangerous to democracy? It can be. Artificial intelligence can spread fake news and develop deepfake technology. Deepfakes are highly realistic counterfeit videos or audio clips that bad actors may use to tarnish the reputation of a political candidate or influence voters during polls.

Social media algorithms, driven by artificial intelligence, reinforce and amplify extreme positions. The polarization undermines democratic processes by manipulating the information delivered to the public.

Kirsty Moreland

Kirsty is an experienced writer and editor with a foundation in blockchain technology and Web3. From learning about crypto, she developed an interest in finance and trading, which she now closely follows and documents.

IMAGES

  1. Artificial Intelligence Essay

    artificial intelligence is a threat to humans essay

  2. Is Artificial Intelligence (AI) A Threat To Humans?

    artificial intelligence is a threat to humans essay

  3. Essay on Artificial Intelligence in English, Write an Essay on AI, Artificial Intelligence Essay, AI

    artificial intelligence is a threat to humans essay

  4. Essay on Artificial Intelligence

    artificial intelligence is a threat to humans essay

  5. Artificial Intelligence Threat to Human Activities

    artificial intelligence is a threat to humans essay

  6. Argumentative Essay On Artificial Intelligence Free Essay Example

    artificial intelligence is a threat to humans essay

VIDEO

  1. Artificial intelligence- the death of creativity. CSS 2024 essay paper

  2. Artificial Intelligence

  3. Essay: Artificial Intelligence with quotations

  4. Will ARTIFICIAL INTELLIGENCE Destroy Humanity? Why AI So Dangerous for us?

  5. SCIENTIFIC PROOF THAT A.I. ARTIFICIAL INTELLIGENCE IS NO THREAT TO HUMANS AT ALL

  6. Artificial Intelligence Essay कृत्रिम बुद्धिमत्ता पर निबंध ssc jht 2023 #artificialintelligence

COMMENTS

  1. Here's Why AI May Be Extremely Dangerous--Whether It's Conscious or Not

    survey of AI experts. found that 36 percent fear that AI development may result in a "nuclear-level catastrophe.". Almost 28,000 people have signed on to an. open letter. written by the Future ...

  2. AI Is an Existential Threat—Just Not the Way You Think

    The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.. The rise of ChatGPT and similar artificial intelligence systems has ...

  3. Is Artificial Intelligence (AI) A Threat To Humans?

    If we focus on what's possible today with AI, here are some of the potential negative impacts of artificial intelligence that we should consider and plan for: Change the jobs humans do/ job ...

  4. The case that AI threatens humanity, explained in 500 words

    The case that AI threatens humanity, explained in 500 words. The short version of a big conversation about the dangers of emerging technology. by Kelsey Piper. Feb 12, 2019, 8:10 AM PST. Javier ...

  5. How Could AI Destroy Humanity?

    But they've been light on the details. Last month, hundreds of well-known people in the world of artificial intelligence signed an open letter warning that A.I. could one day destroy humanity ...

  6. The True Threat of Artificial Intelligence

    June 30, 2023. In May, more than 350 technology executives, researchers and academics signed a statement warning of the existential dangers of artificial intelligence. "Mitigating the risk of ...

  7. AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype

    These issues, and not the imagined potential to wipe out humanity, are the real threat of artificial intelligence. End-of-days hype surrounds many AI firms, but their technology already enables ...

  8. What are the risks and rewards of artificial intelligence?

    The main concern the report raises is not malevolent machines of superior intelligence to humans, but incompetent machines of inferior intelligence. Once again, it's easy to find in the news real-life stories of risks and threats to our democratic discourse and mental health posed by AI-powered tools.

  9. Why We Should Think About the Threat of Artificial Intelligence

    October 24, 2013. If the New York Times ' s latest article is to be believed, artificial intelligence is moving so fast it sometimes seems almost " magical.". Self-driving cars have arrived ...

  10. What Exactly Are the Dangers Posed by AI?

    Oren Etzioni, the founding chief executive of the Allen Institute for AI, a lab in Seattle, said "rote jobs" could be hurt by A.I. Kyle Johnson for The New York Times. Experts are worried that ...

  11. AI Is Not Actually an Existential Threat to Humanity, Scientists Say

    Another type of AI is called Artificial General Intelligence (AGI). This is defined as AI that mimics human intelligence, including the ability to think and apply intelligence to multiple different problems. Some people believe that AGI is inevitable and will happen imminently in the next few years.

  12. The impact of artificial intelligence on human society and bioethics

    Bioethics is not a matter of calculation but a process of conscientization. Although AI designers can up-load all information, data, and programmed to AI to function as a human being, it is still a machine and a tool. AI will always remain as AI without having authentic human feelings and the capacity to commiserate.

  13. Artificial Intelligence: The Helper or the Threat? Essay

    Get a custom essay on Artificial Intelligence: The Helper or the Threat? According to the people, who support an idea of artificial intelligence development, it will bring numerous benefits to the society and our everyday life. At first, the machine with artificial intelligence is going to be the best helper for the humanity in problem-solving ...

  14. PDF The case for taking AI seriously as a threat to humanity

    The case for taking AI seriously as a threat to humanity Kelsey Piper Stephen Hawking has said, "The development of full artificial intelligence could spell the end of the human race." Elon Musk claims that AI is humanity's "biggest existential threat." That might have people asking: Wait, what? But these grand worries are rooted in ...

  15. Artificial Intelligence and the Future of Humans

    Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018. The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities.

  16. As artificial intelligence rapidly advances, experts debate level of

    Geoffrey Hinton, Artificial Intelligence Pioneer: The machines taking over is a threat for everybody. It's a threat for the Chinese and for the Americans and for the Europeans, just like a global ...

  17. Artificial intelligence: threats and opportunities

    Learn about the opportunities and threats for security, democracy, businesses and jobs. As artificial intelligence becomes part of our everyday lives, it is increasingly necessary to regulate it. Europe's growth and wealth are closely connected to how it will make use of data and connected technologies. AI can make a big difference to our lives ...

  18. The true dangers of AI are closer than we think

    The threats overlap, whether it's predictive policing and risk assessment in the near term, or more scaled and advanced systems in the longer term. Many of these issues also have a basis in history.

  19. AI Should Augment Human Intelligence, Not Replace It

    The real question is: how can human intelligence work with artificial intelligence to produce augmented intelligence. Chess Grandmaster Garry Kasparov offers some unique insight here. After losing ...

  20. What is AI and how will it change our lives? NPR Explains

    TV scripts, school essays and resumes are written by bots that sound a lot like a human. Yuichiro Chino Artificial intelligence is changing our lives - from education and politics to art and ...

  21. Artificial intelligence could lead to extinction, experts warn

    A protester outside a London event at which Sam Altman spoke. Artificial intelligence could lead to the extinction of humanity, experts - including the heads of OpenAI and Google Deepmind - have ...

  22. Will Artificial Intelligence Kill Us All?

    AI is nothing more than a tool for improving human productivity. And that's what all major technological advances were, whether it was the stone axe, the telephone, the personal computer, the internet or the smartphone. If we really think about it, the most serious threat we face is not from AI acting to the detriment of humanity.

  23. The 15 Biggest Risks Of Artificial Intelligence

    7. Dependence on AI. Overreliance on AI systems may lead to a loss of creativity, critical thinking skills, and human intuition. Striking a balance between AI-assisted decision-making and human ...

  24. Strengths, Weaknesses, Opportunities, and Threats: A Comprehensive SWOT

    Artificial Intelligence; Peer Review; Research Integrity; Technology; ... Can we stop viewing AI as a threat or as a problem and instead embrace it as a partner—one that enhances human judgment rather than complicates it? ... It's time to recognize the strengths, the weaknesses, the opportunities and the threats of both humans and AI. Tweet ...

  25. A systematic literature review on the impact of artificial intelligence

    Artificial intelligence (AI) can bring both opportunities and challenges to human resource management (HRM). While scholars have been examining the impact of AI on workplace outcomes more closely over the past two decades, the literature falls short in providing a holistic scholarly review of this body of research. Such a review is needed in order to: (a) guide future research on the effects ...

  26. Why A.I. Isn't Going to Make Art

    Art is notoriously hard to define, and so are the differences between good art and bad art. But let me offer a generalization: art is something that results from making a lot of choices.

  27. What is artificial intelligence? Your AI questions, answered.

    AI systems determine what you'll see in a Google search or in your Facebook Newsfeed. They compose music and write articles that, at a glance, read as if a human wrote them. They play strategy ...

  28. Is Artificial Intelligence Dangerous?

    2022 marked a significant milestone in artificial intelligence (AI) development, with AI applications like ChatGPT and companies like Nvidia becoming some of the major players in global technology. The leading AI companies are racing to improve AI-related hardware and software, enhancing AI performance and accelerating the development of the space. . Nvidia's soaring stock price, growing ...