Advances in AI are spurring a language processing arms race between the world’s biggest tech companies. But a lot of people are terrified – reasonably so


Illustration by Anson Chan

This was in January, just before the fear of the machines began to set in, and before the terror went public this week.

My pal Ben Bell, the PhD-wielding president of Eduworks, a company that builds artificially intelligent training systems for the United States Department of Defence, asked ChatGPT to write our wives a letter.

ChatGPT is the now-famous wonder and scourge of generative machine learning that is changing the world and our understanding of ourselves as human beings. An artificially intelligent chatbot, ChatGPT can answer questions and write essays and solve problems in its own words without any human help at all – which is another way of saying that in reply to a request or question, it surveys a compressed version of all relevant existing text on the internet and repackages the most statistically likely combinations of words as answers. It does all this in seconds. You can ask it how to fold a flag or to explain relativity or to write your marriage vows or to invent a conversation between two shoes.
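For readers curious about the mechanics, here is a toy sketch, in Python, of that core statistical idea: choose the next word according to how often it has followed the word before it. It is a deliberately crude stand-in for illustration only, not OpenAI’s actual system, which uses a vast transformer network rather than simple word counts; the tiny corpus below is invented.

```python
# Toy illustration (not OpenAI's actual system) of the core idea behind ChatGPT:
# given the words so far, pick a statistically likely next word.
# Real models use billions of learned parameters; this uses simple bigram counts
# over an invented sample corpus.
import random
from collections import Counter, defaultdict

corpus = (
    "the flag is folded with care the flag is raised at dawn "
    "the vows are written with care the vows are read at dawn"
).split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        # Sample the next word in proportion to how often it followed before.
        choices, counts = zip(*options.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(generate("the"))
```

Scale the counts up to hundreds of billions of learned parameters and a training set approaching the whole public internet, and the leap from this toy to something that can draft a persuasive letter becomes easier to imagine.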

The illusion that it is “intelligent” and “thinks” is convincing. ChatGPT (Generative Pre-trained Transformer, in case you were wondering) passed the management exam in Wharton’s MBA program at the University of Pennsylvania (it earned a grade of B minus), scored 710 out of 800 on the SATs, and has aced both the LSAT and bar exams. The list of its humiliations of human endeavour grows longer every day.


A ChatGPT prompt is shown on a device near a public school in Brooklyn, New York, on Jan. 5. New York City has banned the use of ChatGPT in schools. Peter Morgan/AP

The spectre is so alarming that more than 1,000 AI researchers and executives – including Elon Musk, Apple co-founder Steve Wozniak and Peter Norvig, former director of research at Google – this week called for a six-month moratorium on the development of advanced artificial intelligence systems like the latest version of ChatGPT. The danger, the researchers claim in an open letter published by The Future of Life Institute, a non-profit think tank with offices in California and Brussels, is “ever more powerful digital minds that no one – not even their creators – can understand, predict or control.” It added: “AI systems with human-competitive intelligence can pose profound risks to society and humanity.”

As an amusing demonstration of its powers, Ben asked ChatGPT to convince our wives that he and I ought to play golf on a week’s vacation we were planning together in Hawaii. “Dear [Wife],” the chatbot’s letter began, moments later:

I hope this message finds you well. I know that you had some reservations about my suggestion to play golf with Ian during our trip to Kauai, but I wanted to provide you with some additional information that may help alleviate your concerns.

First and foremost, I want to assure you that this would not take up a significant amount of our vacation time. We can easily schedule a round of golf for a morning or afternoon, leaving plenty of time for us to enjoy all of the other activities and attractions that Kauai has to offer.

Additionally, golf can be a great way for Ian and I [sic] to bond and spend quality time together. It gives us the opportunity to catch up and have meaningful conversations in a relaxed and enjoyable setting. I understand that you may not be interested in playing golf yourself, but I hope you can see the value in Ian and I [sic] having this shared experience.

Finally, I want to assure you that I will be mindful of our budget and not let this activity break the bank. We can find a course that fits our price range and I promise to be mindful of any additional expenses that may come up.

I hope this helps to address your concerns and I look forward to discussing this further with you.

Sincerely, [Your Name].

Needless to say, no one objected to the letter. Was it the banal thoroughness of its reasoning? The inoffensiveness of the computer, which had no emotional history with any of us? We made a reservation to play golf.



Founder and CEO of OpenAI, Sam Altman (top), speaks to media about the integration of the Microsoft Bing search engine and Edge browser with OpenAI on Feb. 7, in Redmond, Wash. Microsoft employee Alex Buscher demonstrates the new search feature integration at the introductory event. Stephen Brashear/The Associated Press

This is the alarming part of the story. ChatGPT, developed over the past seven years by OpenAI LP, is the most significant development in the 65-year history of artificial intelligence, and the longest stride so far in AI’s quest to produce a machine with a human’s intellectual capacities. The chatty bot, released only last November, already attracts more than 100 million users a month. Instagram needed a year to amass 10 million subscribers.

A language processing arms race is underway: Microsoft has attached a version of ChatGPT to Bing, its search engine, and Edge, its browser, with more AI to come soon in Excel and Word. Google (fearing for its $162-billion-a-year search business) released Bard a week and a half ago; others are in development. ChatGPT can write computer code and computer malware as well as letters and reports; automated text generators have already composed hundreds of pieces of journalism for the Associated Press and other outlets; and the chatbot has upended the academic world by squirting out essays and research papers galore, just for starters. Microsoft’s version is available as a free app on your phone. Have to give a speech? Let ChatGPT write it! Need a spreadsheet or a PowerPoint presentation or a poem or a short story? Done! New York City has banned the use of ChatGPT in schools (good luck with that). As an antidote, OpenAI has released a detection tool that estimates the likelihood that a piece of writing was produced by its chatbot.

A lot of people are terrified, reasonably so. In 2015, Sam Altman, the soft-spoken founder of OpenAI, wrote that “superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.” Last week, on the podcast On with Kara Swisher, he admitted he still feels the same way.

Yes, ChatGPT could boost global productivity. It could also vaporize any job that qualifies as rote intellectual human labour, where originality is less important than dogged thoroughness. Goldman Sachs unleashed a report this week that claims two-thirds of U.S. occupations will be susceptible to “some degree of automation” by the likes of ChatGPT: the greatest carnage will occur in the legal profession (where 44 per cent of jobs are vulnerable) and among administrative positions (46 per cent). But the list of potential casualties is as long as a royal funeral. The reeking irony is that OpenAI started life in 2015 as a non-profit (it isn’t any more) dedicated to developing “safe AI” that “benefits all of humanity.”

Meanwhile, the capability of AI chatbots is growing exponentially. When OpenAI (now valued at US$30-billion, double what it was a year ago) released GPT-2 in 2019, it consisted of 1.5 billion parameters – the internal settings a model tunes during training, and one rough measure of its capacity. GPT-3, released in 2020 and the foundation of the ChatGPT that appeared last November, comprises 175 billion parameters. GPT-4 – released this month, and the inspiration for the Future of Life Institute’s public warning – is rumoured, though OpenAI has not confirmed it, to handle as many as 100 trillion. Pause here to freak out.

ChatGPT is hard proof, in other words, of the shocking capabilities of a particular species of artificial intelligence: generative machine learning, in which models teach themselves, from oceans of existing material and without direct human supervision, to produce new text, images or sound. A related family of generative models, generative adversarial networks, or GANs, is responsible for the explosion in “deepfakes”: manipulated or “synthetic” audio and video fare that is showing up everywhere, from porn flicks to the websites of insurrectionists trying to foment political chaos.


How does a generative adversarial network work? Mostly by brute computing power. Let’s say you want your algorithm to create a synthetic (invented) picture of a flower (or a sentence or a song or a painting). At its most basic, a generator algorithm proffers multiple candidates for the new flower, drawing on statistically likely bits and bobs learned from the countless flower photographs in its catacomb-like training data. At the same time, an opposing “discriminator” network tries to reject those choices, based on its own vast bank of flower pictures. The goal of the competing networks is to generate a picture (or sentence or song or painting) as close as possible to an existing thing without being an existing thing.
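For the technically curious, here is a minimal sketch of that duelling-networks game, written in Python with the PyTorch library. The “flowers” are replaced by something trivially simple, numbers drawn from a bell curve, but the loop is the same: the generator tries to fool the discriminator, the discriminator tries not to be fooled, and the generated samples drift toward the real thing. This is an illustration of the technique, using invented toy data, not the code behind any system named in this story.

```python
# Minimal GAN sketch: the "real" data is numbers drawn from a bell curve
# centred at 3. The generator learns to produce numbers the discriminator
# can't tell apart from the real ones. Images work the same way, only with
# far bigger networks and far more data.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" samples cluster around 3
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator: learn to call real samples 1 and generated samples 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator: learn to make samples the discriminator calls 1.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, the generated mean should drift toward the real mean of 3.
print("generated mean:", generator(torch.randn(256, 8)).mean().item())
```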


Over the weekend, a falsified image of Pope Francis looking dapper in a white puffer jacket went mega-viral on social media. The image was made using the AI art tool Midjourney. Supplied

Generative models – whether GANs or the giant language models behind ChatGPT – can dredge and paraphrase a sea of existing data and content – or plagiarize it, depending on your point of view – in seconds. They can create utterly convincing photographs of people who don’t exist and never did (https://this-person-does-not-exist.com/en). Ditto fake real estate listings, MPs, automobiles, literary quotes and cityscapes (https://thisxdoesnotexist.com/). Machine-generated deepfakes of stock promoters have lured victims into crypto scams.

Earlier this year, Microsoft announced the development of VALL-E, a generative AI gizmo that convincingly reproduces any human voice based on three seconds of recorded material, and then makes it say anything the operator wants it to say. A few weeks later, a deepfake interview slithered into view on YouTube, in which the deepfaked voice of Justin Trudeau told a deepfaked Joe Rogan that he wished he’d nuked Ottawa during last year’s trucker protest. More than 200,000 souls have ingested the audio, and quite a few think it’s hilarious and/or real. It is not real.


Falsified images created and posted to Twitter by Eliot Higgins using a sophisticated and widely accessible image generator show a fictitious scene of Donald Trump being arrested by the New York City police. Similar deepfake programs are now available to anyone at virtually no cost. J. David Ake

Deepfake apps are now available to anyone at virtually no cost. Even low-quality fakes – the infamous video of Ukraine President Volodymyr Zelensky conceding defeat a year ago, the faked “photographs” of Donald Trump being arrested that forced image-generator Midjourney to halt its free service this week – can have an effect. As New York University’s quarterly Threat Landscape Report pointed out last August, “synthetic and otherwise manipulated assets are a weapon for disillusionment and dissuasion more than for persuasion.” Studies have shown that people exposed to deepfakes don’t learn how to spot them, but they do start to mistrust all media.

Machine learning can be fast and convincing and impossible for humans to detect, but it’s often unreliable. GPT-4’s training data extends only to September, 2021. Because ChatGPT relies for its answers on pre-existing web content, the prose robot can be lured into spouting whatever cant (racist, anti-capitalist, pro-Nazi, take your pick) has been fed into it. It can also “hallucinate” (the term computer engineers use) false and/or inappropriate answers if a question or request strays into less mainstream territory that requires statistically uncommon language and answers. Hence the now-infamous incident in which a New York Times reporter asked Microsoft’s GPT bot about its Jungian shadow: the program eventually declared its love for the reporter and told him to leave his wife. (The alarming transcript – alarming, because it creates the impression that the chatbot has some form of recombinatory unconscious – is here). Whatever else it can do, ChatGPT can’t reassuringly check the factual, artistic or moral reliability of its work.

There are, of course, effective and ethical uses for generative AI. Machine learning already drives the financial sector (making loan decisions, running portfolios, detecting patterns of fraud); national security (where the speed with which AI can analyze outcomes and enact new tactics has coined a new word, hyperwar); health care (generative AI helped develop the COVID-19 vaccines and promises to be a boon to diagnosis and self-care); and the management of cities (the fire department in Cincinnati lets AI tell it how to respond to the 80,000 calls it gets every year). Artificially intelligent algorithms are used to determine when wind turbines should hook up to existing power grids. In this and countless other ways, according to the Brookings Institution, AI could add $16-trillion to global GDP by 2030.

Even deepfake technology has its upside: Amazon has reportedly experimented with synthesizing the voices of people who were recorded on home-based Alexa systems before they passed away. That way, Grandma can be dead and still read bedtime stories to the kids. You’re there even when you’re no longer there. The point is, chatbots and their deepfake cousins are making it increasingly hard to know what is human and what is not. Not that we seem to mind.


In a sample video from the media company DeepMedia, footage of U.S. President Joe Biden speaking to the United Nations is automated and re-animated to move between five languages.

Ben – Dr. Bell to those who don’t know him – has worked in artificial intelligence for more than 30 years. Deepfakes and the perils of machine learning caught his attention in 2017. By then, AI bore almost no resemblance to the creaky contraption he first worked on.

In the age of innocence, between the 1960s and 1980s, AI researchers debated what qualified as intelligence. Joseph Weizenbaum, the late MIT computer scientist, made a sharp and permanent distinction between computing and human intelligence. Others, such as his colleague Marvin Minsky, conceived of the human mind as a bureaucracy of decision-making, and tried to create computer programs that duplicated that bureaucracy.

Their common goal was to create a machine that performed tasks which, if a human did them, would be considered intelligent. Is that actually intelligence? The question is still fiercely debated, 60 years on. We have machines that can make more and more informed decisions faster than ever, but we still haven’t agreed what choices the machines should be making.

Machine learning, the cause of the recent furor, leapt ahead in the 2010s. It requires tera-bushels of data and Thor-like computing power – both of which materialized in the past half-decade thanks to our addiction to smartphones and social media (which have provided a lot of the data) and the Cloud and faster microchips. Machine learning is very good at surveying raw data and classifying it, spotting patterns and exceptions. The recommendation function on Netflix and Apple Music – “if you liked this, you’ll also like this” – is classic machine learning. It makes its recommendations by comparing statistical probabilities and correlations – people who buy these also buy those – not by exercising subjective judgment. It’s calculating, not thinking.
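That counting-not-thinking distinction is easy to see in code. The sketch below, in Python, tallies how often titles appear together in users’ viewing histories and recommends the most frequent companions; the titles and the tiny data set are invented for illustration, and real services layer far more machinery on top, but the core logic is the same comparison of co-occurrence counts.

```python
# Bare-bones "people who liked this also liked that" logic: count how often
# items appear together in users' histories, then recommend the items that
# co-occur most often. Counting and comparing, not judging.
from collections import Counter
from itertools import combinations

histories = [
    {"Dune", "Arrival", "Interstellar"},
    {"Dune", "Interstellar", "The Martian"},
    {"Arrival", "Interstellar"},
]

co_occurrence = Counter()
for history in histories:
    for a, b in combinations(sorted(history), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(title, k=2):
    # Rank everything that has ever appeared alongside the given title.
    scores = Counter({other: n for (t, other), n in co_occurrence.items() if t == title})
    return [other for other, _ in scores.most_common(k)]

print(recommend("Dune"))  # e.g. ['Interstellar', 'Arrival']
```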

To which an AI zealot might reply: In the end, what’s the difference?

Whereupon a skeptic might answer: thinking entails judgment. Do you mind if a machine-learning program such as TurboTax prepares your return? Of course not. But what if the algorithm is deciding whether to make an acquisition or fend off a shareholder revolt – or how to police a demonstration with racial overtones? A major drawback of unsupervised machine learning is that it’s a black box: the more sophisticated its algorithms are, the harder it is to know how the machine learned what it learned as it crawled the internet, or where it made mistakes.

Ben is an exceptionally thoughtful person. “Didn’t the drive to create a machine with human capacities ever alarm you?” I asked him one morning.

“I assumed that market forces and client preferences would naturally steer us away from anything dangerous, or anything that we didn’t fully understand,” he replied matter-of-factly.


Educators attend a workshop on ChatGPT organized by the School Media Service (SEM) in Geneva, on Feb. 1. ChatGPT has upended the academic world by producing essays and research papers on demand. AFP/Getty Images

“So are you disappointed by the breakneck speed at which deepfakes and ChatGPT are being developed?”

“I don’t feel disappointed,” he said. “I just feel like the guardrails are gone and that’s because the machines can do so much. People are willing to suspend their concern that they don’t really know what’s happening, in order to get the benefit of all of the great things that these poorly understood software algorithms are capable of providing.”

I reminded Ben that 35 years ago, when we first met, I asked him why he and his fellow computer engineers wanted to create a machine that thought like a human. Back then, he didn’t have an answer. Now he did.

“My suspicion is that these brilliant people are also, like every one of us, flawed, particularly when it comes to interpersonal dynamics and human interaction. I think they understand and value intellect. A lot of them are flummoxed by all the ways people interact that are not intellectual – the emotional body language, the cues that a lot of these people seem to miss in everyday life. Creating artificially intelligent minds gives them an entity that matches the intellect that they value, but doesn’t exhibit the perplexing tendencies real people exhibit, that they can’t understand or process. Artificially intelligent entities aren’t judgy. And they’re not needy. They have no expectations.”


Here is a strange thing about machine learning: it feeds on the future. It promises endless possibilities but contains very few limits, and so it thrills us until it starts to scare us. To demonstrate the dilemma, let me introduce you to Richard Boyd.

Mr. Boyd is the president of Tanjo Inc. in North Carolina. He’s an entrepreneur in the world of AI-assisted virtual reality simulations. But he’s almost as famous for having built an AI version of his late father. A month after his father died in 2017, he rounded up all the material he could find pertaining to his old man, digitized it, and then transformed the data into a machine-learning avatar.

As a result, Mr. Boyd can (and does, sometimes daily) ask his AI “Dad” – equipped with a ChatGPT-like deep language program – what he thinks of current developments in the world. A former Air Force pilot who served in Vietnam, his father tends, in death, to skew Republican. “Any time there’s a school shooting or anything like that,” Mr. Boyd said, “he’s still very Second Amendment.”

Mr. Boyd has lectured in South America about his invention. “South American cultures really love this idea of being able to talk to your ancestors. Like, I’ve got a big life decision I have to make, should I marry this person? Right now they pray to their ancestors. But what if they could actually ask them?” Mr. Boyd has told his own son, Dylan, that with more and more sophisticated updates, “you will know your grandfather, perhaps better than I ever did.”

Mr. Boyd first imagined online resurrections in 2009, after meeting the computer scientist and futurist Ray Kurzweil. Mr. Kurzweil (an acolyte of Mr. Minsky) is now 75 and famously said humankind will achieve “the Singularity” in 2045: the year machine and human intelligence will merge as equals, whereupon intelligence will emanate through the universe. When Mr. Kurzweil is asked if God exists, he likes to say “not yet.”

Mr. Boyd admits that his father’s online ghost is limited by sparse data – he never owned a smartphone or a Google account, and had no social media presence. “This thing that I’m interacting with, it’s a shadow of my father,” Mr. Boyd said. But someone with a more extensive online profile will be rich territory for ChatGPT. “GPT-4, as I understand it, is capable of trillions of data transformations. That’s where it gets extraordinarily interesting. If you have that much training information about a person, you can model them really pretty accurately. And that is gonna create a whole new set of issues that we’re going to have to deal with.”

This is the point in the conversation – a moment that occurs in every conversation about AI and machine learning lately – where the future begins to expand and warp while the ground of the present trembles. Mr. Boyd is a hard-headed entrepreneur, but he has no trouble foreseeing a future in which machines not only learn, but improve themselves as they learn. “That’s the last thing we’ll need to invent,” he told me one afternoon not long ago. We were on a Zoom call and he knew how to project slides onto the screen of my laptop, which I thought was the height of genius. “Because once we invent that, the machines will outstrip us. The only question then will be, will the machines keep us around?”

Until a few weeks ago, that was just a science fiction scare trope – carbon-based life supplanted by its own invention, a silicon successor! But the immense capabilities of ChatGPT and generative machine learning are suddenly raising existential questions. Could a computer’s prodigious memory and recombinatory speed eventually replace instinct and learning and memory and talent, and even some form of the unconscious – the traits (so we tell ourselves) that make human intelligence unique and non-replaceable?

No one knows. Mr. Boyd said he thinks it may already be happening: the human brain, he likes to joke, “hasn’t had an upgrade since the Pleistocene.” A more immediate and practical concern, if chatbots can do as much as their inventors promise and now fear, is what will be left for humans to do. Maybe we’ll have enough spare time to create a more just society. “One new job will be resource officer,” Mr. Boyd suggests, and he’s dead serious. “Someone who determines what machines should be doing and what humans should be doing. That’s a new job that didn’t exist in December, and you and I are talking at the end of January.”

One thing he is certain of: with machines pretending to be human and chatbots performing so much of our human work – calculating, listening, fighting, writing, lawyering, doctoring, making stories and music and pictures – it will be the purely human that will become rare and valuable. “I think we’ll get to a point where we really value authentic human art, authentic human writing and music, that sort of thing. That is, if it can be proven to be so.”



British soccer star David Beckham launched a global video appeal in 2019 to end malaria in which he appeared to be speaking nine languages, including Swahili and Yoruba, aided by artificial intelligence.

The Defense Advanced Research Projects Agency – the U.S. government-funded research and development hub that helped invent the internet, the cellphone and Moderna’s COVID-19 vaccine, among other accomplishments – has been investing in artificial intelligence since the 1960s. But DARPA now has a division dedicated entirely to detecting machine-generated output and deepfakes, finding out who made them, and characterizing their intent as innocent or evil.

To do that, DARPA is already devising what Matt Turek, deputy director of DARPA’s information innovation office, calls “a third wave of AI, which is really being able to do contextual reasoning. And in order to do that, we think we’re going to need to hybridize some of those traditional, symbolic-style approaches to AI with the best-of-breed statistical machine learning approaches.” Translation: we have to inject human judgment back into machine learning.

That’s the first stinging irony of advanced AI’s breakneck development: we already need smarter AI to detect evil AI. The second irony is that you can’t have one without the other, which is why inventing learning machines that can detect machine learning is one of the hottest sectors in the kingdom of AI these days.

Rijul Gupta is the 30-year-old chief executive of DeepMedia AI, which this fall is planning to launch its Universal Translator. If you feed video and audio of someone speaking, say, English, into Universal Translator, it instantly converts the audio and video into 50 different languages – with a precisely replicated voice and convincing mouth movements. When the UN commissioned David Beckham to make a video about malaria in 2019, the result – with Mr. Beckham speaking in nine languages – required up to 20 hours of training video. “Our model,” Mr. Gupta said, “typically would need just five seconds of David Beckham’s face and voice, and can make him say anything.” But by creating a state-of-the-art deepfake generator, Mr. Gupta has also created a state-of-the-art deepfake detector, capable of spotting machine-generated fakery that humans can’t discern.

Like Ben, Mr. Gupta saw his first deepfake in 2017. He was 25, and immediately jumped into the field. “I wanted to understand the technology better because I think the best remedy to fear is knowledge. But in all honesty, the more I’ve understood this, the more afraid I’ve become.” Until more forms of generative AI can be turned into reliable AI-detection technology, “people are going to be harmed. Even my father has been the victim of a telephone scam. But in the next iteration of robo-calls, my father will be getting a phone call from me – a deepfake version of me, but my face and voice will be perfect. And until people like Apple or AT&T integrate deepfake-detection tools into their platforms, everyone’s at risk of that.”

Generative AI has already attracted attention from government watchdogs in Europe (especially Italy, where ChatGPT was banned this week) and the UK (where a white paper was published). Mr. Gupta didn’t sign the stall-AI manifesto – “You can’t stop it” – but he welcomes the interference (as does Sam Altman, the founder of OpenAI). “It’s rare for an executive of a synthetic media company to say we need regulation in the space,” Mr. Gupta said. “But I just don’t see a world where this technology is protected against unethical uses without some type of government intervention. It’s very similar to social media, which claimed they would regulate themselves. And it was only after a lot of disasters happened that government tried to regulate them. By that point, it was too late.”


Rijul Gupta, founder and CEO of DeepMedia. Supplied

We were coming to the end of our conversation when I asked Mr. Gupta if he thought there was anything to the exploding paranoia that AI might one day become, in some authentic way, human, or even a threat to humans. By way of an answer, he cited an episode of Star Trek: The Next Generation, in which the android Data, a fully synthetic being, defends his claim to consciousness. “He felt like a human being. He acted like a human being. He very quickly admitted that he didn’t have emotions, but he still exhibited interactivity,” Mr. Gupta said. “But if you think about it, human beings are just machines built with a different machinery, right? We’re just a bunch of electrical signals and molecules and cells interacting with each other. It’s still unclear whether human beings have free will ourselves. And so when we think about whether a machine has free will and whether a machine has emotion, all we can go on, in my opinion, is whether the machine claims it has emotion and consciousness. I don’t think we’re there yet. But it’s probably best to start humanity’s relationship with AI on a footing of trust.”

“Why?” I said, reminding myself that he has a degree from Yale.

“Well,” Mr. Gupta continued, “the alternative is we start out in an atmosphere of mistrust, and fifty years down the line, we have an AI that is significantly more capable than us in every single respect. That might not be in the best interests of humanity, if you know what I mean.”


The problem isn’t that ChatGPT isn’t smart; the problem is that it isn’t human, and never will be. It isn’t us. That matters to a growing phalanx of the AI-wary. Emily Tucker, a professor at Georgetown Law School’s Centre on Privacy and Technology, banned the use of the phrase “artificial intelligence” at her institute, on grounds that it’s a marketing term that demeans the word “intelligence.” Prof. Tucker, a former ballet dancer who became a human-rights lawyer, isn’t afraid of a machine takeover. “But I am alarmed that AI has produced the level of discourse it has. I am worried that so many people seem not to be able to distinguish between real human intelligence and the AI world’s deepfake of intelligence.” An AI-enhanced chatbot looks smart, but “it can’t ask why. So it can never come close to being human. It does not have the capacity to question its own capacity.” So far, anyway.

Her concern is not that machines will become human, but that machine learning’s decontextualized decision-making is already making humans more machine-like. For evidence, according to Justin Hendrix, CEO and editor of the Tech Policy Press, look only to the management of people by algorithms – the performance review on robot crack, a trend AI will encourage. “It’s increasingly common at companies like Uber and Lyft and Amazon and other places where people don’t so much have a boss,” Mr. Hendrix told me recently. “And even if they have a boss or a person somehow responsible for them, the vast majority of their labour is measured and dictated by a binary, by checking performance boxes, yes or no. And whatever the human experience is underneath that, no one cares.”

That might be one salutary effect of the chatbot bombshell: it is making us think about what we value in ourselves. Is it our ability to calculate and reason and succeed? Or our ability to feel, and even fail? To answer that, Prof. Tucker told me to call her partner, the writer and philosopher David McNeill (formerly of the University of Essex).

“Sentience, technically, means feeling,” he told me over the phone a few days later. He’d been rereading Plato’s Republic, which is in part a debate about whether you can have a complex society as well as a just one. “And sapience is knowing, right? And in our world, for human beings, sapience can only be an outgrowth of sentience.” I feel therefore I think. That’s the equation that makes us human. A computer can’t feel, for all its power. And being able to identify a description of a feeling – which ChatGPT can do – is not the same as feeling. And so, “machine thinking” is automatically, by Prof. McNeill’s definition, fake, and therefore incomplete and therefore alien.

Microsoft, which recently laid off an entire team of AI ethicists, claims ChatGPT will “empower people to unlock the joy of discovery, feel the wonder of creation.” Prof. McNeill thinks it will more likely promote convenience and mediocrity, because it eliminates the possibility of failure, which is how people learn and improve. ChatGPT does away with apprenticeship. But without the discouraging scut work ChatGPT promises to eliminate, we can’t evolve.


The last time I spoke to Ben about deepfakes and chatbots, I mentioned another friend who had just become a grandfather. He fears the world being left to his grandchild is worse than the one he inherited. “You’re a leader in the tech world,” I said to Ben. “A leader in AI, which looks like it’ll be one of the most significant and disruptive inventions in human history. It seems very promising but very scary at the same time. Do you feel hopeful? Or full of despair?”

He paused. He always does before he speaks. “I feel hope and despair simultaneously,” he said. “I’ve been thinking about this since I entered the field.”

He stopped talking for a moment and looked out the window. “I think now we have the means at our disposal, software-wise, to tackle problems in a principled and systematic way. We can use AI to discover what we can do about climate change, what we can do about food scarcity, about sources of fresh water and our general quality of life. When I see the popular press focusing on the evils of deep fakes and the evils of AI, I think, maybe you’re right. But I also think 95 per cent of what this is all working toward can contribute positively to life on the planet.”

I hope he’s right. I hope we never have to choose, in a decisive way, between human awareness on the one hand, and the efficiency and speed of machines on the other. I know what I’d choose. I’m just not sure which one to bet on.
