Try Googling "is AI the antichrist?" or "will AI take over the world?" and you will find no shortage of articles from a wide range of perspectives.
This is how it always is with new technology: Some people fiercely resist it as a dangerous innovation while others rush to embrace it. In the end, most people come to terms with the new technology, and life goes on.
But at the same time, the more technology advances, the greater the chance it could develop beyond our ability to control it. "Can AI become sentient?" remains an open question for society as a whole.
A story spread quickly in late May and early June about a US Air Force drone that supposedly went rogue and killed its operator. The AI was trained to take out surface-to-air missile sites, but only with the go-ahead of a human overseer. When that approval was withheld, so the story goes, the drone turned on its handler: a dark confirmation of our worst fears about AI.
But the thing is, as New Scientist explains, the story was a fake. And yet many people were ready to believe it.
Of course, the fact that one story was a fake does not mean there is nothing to worry about.
The Future of Life Institute published an open letter in March that asked some hard questions and called for a six-month pause in work on highly advanced forms of artificial intelligence.
"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," the letter begins. "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening."
AI could pose a threat to our very civilization, the letter warns: "Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk the loss of control of our civilization?"
"AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal… Humanity can enjoy a flourishing future with AI."
And the letter is not signed by random nobodies or fringe weirdos, but by some true tech heavyweights: Elon Musk, Steve Wozniak, Jaan Tallinn, Evan Sharp, and many more. For them, the problem is not AI in and of itself, but how it is used.
(Though it should be noted that Musk has spoken about AI in more hyperbolic terms in the past. At the MIT Aeronautics and Astronautics Department's Centennial Symposium in 2014, he said: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. With artificial intelligence, we are summoning the demon.")
While the open letter itself largely takes a more middle-of-the-road approach, it has sparked much debate and given rise to a number of articles about "worrying wisely" about AI.
Tech outlet Make Use Of published "How to Resist AI Anxiety While Following the Fast-Developing Technology," and The Economist published "How to worry wisely about artificial intelligence."
The Economist notes that it is so-called "large language models" (LLMs) that have caused the most concern. An LLM stands behind the hugely popular ChatGPT, which has surprised even its own creators with its abilities: "everything from solving logic puzzles and writing computer code to identifying films from plot summaries written in emoji."
For some, the fact that AI is already outpacing what its creators envisioned is a sign of the coming apocalypse, where machines will take over the world and enslave humanity. Or at least, machines will make human employees redundant.
But proponents of AI cannot help but note its enormous potential for solving massive problems, for example, "by developing new drugs, designing new materials to help fight climate change, or untangling the complexities of fusion power."
Unlike the AI of a decade ago, today's systems no longer rely on carefully human-labeled data—in essence, LLMs, such as ChatGPT, have the entire internet available to them. But then again, the internet is also full of junk—both information that is factually incorrect, and the opinions and teachings of ill-intentioned or mentally disturbed people.
ChatGPT is enjoyable to chat with, but at least for now, it cannot be relied on completely. Its answers are often a mixture of truth and fiction. If users know that the chat tool has given them false information, they can correct it, and the program will typically acknowledge the mistake and come back with correct information. But users asking about a topic they know little or nothing about will not be able to discern what is true and what is false.
There are stories of high school and college students using ChatGPT or similar programs to write essays for them, but this is most likely only useful for introductory-level essays. The program requires very specific prompts in order to go beyond the surface level of a given topic. And if a user knows how to formulate such specific prompts, they probably know the topic well enough to be writing an essay themselves.
That is, at least for the moment, AI has not advanced to the level where we need to fear that it will take over all jobs. It is certainly a helpful tool, but any of its output still needs to be checked by a living, breathing human being.
Headlines even from reputable outlets like BBC News can spark fear: "AI could replace equivalent of 300 million jobs." But the few who actually read the article behind the headline also found the author saying that while AI could replace a quarter of the work done in the US and Europe, it "may also mean new jobs and a productivity boom." That is, while new technology can replace some jobs, it also creates new jobs for the people who have to interact with it. And it can increase demand for jobs that the technology cannot handle.
But just how much of a threat AI poses to the workforce remains a topic of heated debate. According to The Economist:
In a survey of AI researchers carried out in 2022, 48% thought there was at least a 10% chance that AI's impact would be "extremely bad (e.g., human extinction)." But 25% said the risk was 0%; the median researcher put the risk at 5%.
The nightmare is that an advanced AI causes harm on a massive scale, by making poisons or viruses, or persuading humans to commit terrorist acts. It need not have evil intent: researchers worry that future AIs may have goals that do not align with those of their human creators.
While such scenarios cannot be ruled out altogether, there is little doubt that they are still far in the future, if they ever come. The technology simply isn't there yet. Cooler heads note much more mundane dangers of AI: concerns about bias, privacy, and intellectual property rights. But these are, quite frankly, not something the ordinary user even needs to worry about.
The key task, for governments, tech companies, and individual users, is to have a proper understanding of both the promise and the risks of AI and to be ready to adapt.
To accomplish this task, it helps to have thorough and even-keeled AI news sources. Some good examples include:
- The Batch: A weekly newsletter on AI news and breakthroughs from the education technology company DeepLearning.AI.
- MIT News: Includes a dedicated AI news section that delves into the true tech side of AI without fear-mongering.
- ScienceDaily: Breaking AI news and reliable information.
- The Verge: While not exclusively focused on AI, The Verge covers technology news extensively and often includes AI-related articles.
- AI News: A dedicated news platform that focuses specifically on AI-related topics.