How generative AI could spur original thinking — with the right guardrails

Artificially intelligent systems are unconstrained by the norms that guide our own thoughts. That’s both good and bad.

The Pope wore Balenciaga. More specifically: He wore a snow-white puffer jacket, buttons snapped and waist cinched, stalking the streets of Rome in late March with a giant silver cross thumping against his chest. He looked cozy; he looked super-duper fly. OK, he looked too good to be true, and, of course, he was — the photo had been created by a 30-something construction worker tripping on shrooms, who’d plugged a couple keywords into an image generator called Midjourney and posted the result on Reddit. But faster than a gust of white smoke escaping from the chimney of the Sistine Chapel, the Pope in a Coat went viral, hoodwinking not only gullible Boomers on Facebook but members of the media and terminally online celebs.

It’s possible that this AI-generated hoax slipped past our bounds of credulity because its deception wasn’t political (like the pretend pictures, days earlier, of New York City police officers perp-walking Donald Trump) or historical (more faked moon-landing shots) but merely sartorial. The stakes here were pretty low. But the stylish Pope caught the tail end of a wave of anxious think pieces about the potential perils of generative AI, a buzzy breed of artificial intelligence that can produce reams of content — including text, audio, images, video and code — from fairly basic commands.

Microsoft’s speech synthesizer, VALL-E, convincingly mimics a voice from just three seconds of recorded material, which can then be used to say anything at all. Left to engage long enough with a user (maybe two hours), Microsoft’s AI-powered search engine, Bing, displays an alarming tendency to express dark fantasies of sabotage — complete with purple devil emoji — and has even tried to dismantle one reporter’s marriage. There’s been talk of ChatGPT, the extremely easy-to-use chatbot developed by OpenAI, disrupting academia, upending journalism and coming for all manner of white-collar work. This would all be unnerving enough, but chatbots also have a weakness for what AI researchers call “hallucinations,” which is a slightly dreamy way to say that they flat-out make up facts.

On March 22, the Future of Life Institute, a global non-profit that aims to reduce “extreme risks from transformative technologies,” released an open letter, signed by the likes of Elon Musk and Montreal-based deep learning pioneer Yoshua Bengio, that called for a six-month pause on the training of powerful AI systems until safety protocols could be developed. As if on cue: several of the signatures on the letter were quickly revealed to be fake.

It’s unclear what exactly a six-month hiatus would accomplish or how it might be enforced. (Just a few weeks later, Musk went ahead and launched his new OpenAI competitor, X.AI.) Besides, the problem isn’t necessarily the technology — as usual, the bigger problem is us.

Breaking through conventions

Chatbots learn by identifying statistical patterns in a trove of data hoovered up from Wikipedia and news articles, blog posts and books. Thanks to our own considerable shortcomings, that data can be wrong, biased or otherwise problematic. As a result, when The Intercept asked ChatGPT for airline-screening code that flagged risky passengers, the bot came back with a program for racial profiling, identifying anyone who was born in (or had visited) Afghanistan, Syria, North Korea and Iraq as a security threat. Lensa, an AI portrait generator built on Stable Diffusion, depicts men as astronauts and explorers. Its pictures of women are far more likely to be nude.
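
The mechanics behind those failures are mundane. As a toy illustration (the tiny corpus and occupations below are invented for the example), a model that merely counts which words appear together in a skewed set of sentences will echo the skew right back:

```python
from collections import Counter, defaultdict

# A deliberately skewed toy corpus, invented purely for illustration.
sentences = [
    "the doctor said he was ready",
    "the doctor said he was late",
    "the engineer said he was done",
    "the nurse said she was ready",
]

# Count which pronoun appears alongside each occupation.
pronoun_counts = defaultdict(Counter)
for sentence in sentences:
    words = sentence.split()
    for occupation in ("doctor", "engineer", "nurse"):
        if occupation in words:
            for pronoun in ("he", "she"):
                if pronoun in words:
                    pronoun_counts[occupation][pronoun] += 1

# The "model" has no intent; it simply mirrors its training data.
for occupation, counts in pronoun_counts.items():
    print(occupation, "->", counts.most_common(1)[0][0])
```

Scale that counting up to billions of web pages and you get the same behaviour with far higher stakes: correlations in, correlations out.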

“These models are not self-aware, and they’re not trying to do anything nefarious,” says Graham Taylor, research director of the Vector Institute for Artificial Intelligence. “They’re trying to solve something in the most efficient way possible, based on what they’ve been given.” The system doesn’t know it’s trampling on social decorum — it’s just math picking up on correlations. But AI’s propensity to bust conventions can also be harnessed for creative and fruitful uses. We can use it to push us past the constraints of our own thinking.

Researchers like to bring up the second match of a 2016 best-of-five series between Go grandmaster Lee Sedol and AlphaGo, an artificially intelligent system designed by researchers at DeepMind. People typically learn the game through a combination of practice and theory, with considerable weight given to the application of proverbs like “the second line is the route to defeat.” However, AlphaGo, which learned by playing tens of millions of games against itself, doesn’t care a whit for proverbs and, on the 37th move, did something so bananas it would never — had never — occurred to a human, leading Sedol to compose himself outside the room for 15 minutes and AlphaGo to take the match.

“We learn to do things a certain way from our teachers or our mentors, and we can be locked into that approach,” Taylor says. “AI gives us a way to break free of the limits of our experiences. It’s this amazing tool to power up.” That has implications for everything from the food we consume and the buildings we live in to how we heal and explore worlds beyond our own.

Recipe creation is no sweat for generative AI — simply plug in a few ingredients, or ask it for suggestions based on what’s in season, and ChatGPT will spit out ideas for dinner. (Right now, the menu can be a bit light on flavour, or a little optimistic about cooking time, but let’s be honest: recipe writers have always fibbed about how long it takes to caramelize onions.) GPT-4, OpenAI’s latest and most advanced language model, goes further. With it, you can just upload a photo of the contents of your fridge or pantry and AI will generate recipes that take advantage of what you’ve got.
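
In practice, that “plug in a few ingredients” step is just a prompt. Here is a minimal sketch using the OpenAI Python SDK; the model name, system prompt and ingredient list are assumptions made for illustration, not a workflow drawn from the article:

```python
# A minimal sketch of the "plug in a few ingredients" workflow, using the
# OpenAI Python SDK (v1+). The model name, prompt wording and ingredient
# list are illustrative assumptions, not a workflow from the article.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

ingredients = ["leeks", "white beans", "lemon", "day-old bread"]

response = client.chat.completions.create(
    model="gpt-4",  # any capable chat model would do
    messages=[
        {"role": "system", "content": "You are a pragmatic home cook."},
        {
            "role": "user",
            "content": (
                "Suggest three weeknight dinner ideas using only: "
                + ", ".join(ingredients)
                + ". Keep each idea to two sentences and be honest "
                "about cooking times."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

Tweak the wording of that request and the menu changes, which is exactly the tinkering the technology invites.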

But there’s greater potential in this technology to help solve a huge problem in food security. The livestock industry is a nightmare for our planet. Roughly 40 per cent of the world’s habitable land is used to raise farmed animals or grow the food they eat, and when you add up the carbon dioxide, methane and nitrous oxide caused by meat and dairy production, you get nearly 15 per cent of all greenhouse gas emissions. Plant-based and lab-grown alternatives can help dramatically, but there’s one big problem: far too few people actually like the way they taste.

In the spring, Dana McCauley, CEO of the Canadian Food Innovation Network, attended the Future Food-Tech conference in San Francisco. “I tried all this fish and dairy made from cellular agriculture, and even the very best of them fell short of where they should be,” she says. While the texture of the lab-grown smoked salmon wasn’t terrible, its flavour skewed awfully fishy. “It just wasn’t what a consumer would accept as an alternative.” McCauley is bullish on generative AI’s ability to tap into a chef’s discerning palate and vastly improve what we’ve been able to produce. “Chefs will get involved in the creation of foundational ingredients,” she predicts. “We’re closing in on the opportunity for them to say, ‘I would like a lab-grown pork chop where the flesh tastes like Dutch tulip–fed pork and the fat tastes like Spanish Iberico pork.’ ” That pork chop won’t be an instant knockout, of course — as with all things generated by AI, it’ll get better as humans tinker with their commands to finesse the output — but it would mean we’d no longer be limited by the quality and availability of our ingredients. It could usher in a new world of taste.

Generating new possibilities

Already, generative AI is helping researchers make enormous strides in the field of medical discovery. Our bodies naturally produce some 20,000 proteins, which start out as strings of amino acids before folding into three-dimensional shapes that determine their function — building muscle and body tissue, breaking down nutrients, transporting oxygen through the bloodstream, attacking and removing unwanted intruders. Using generative models akin to image generators like DALL-E, engineers can produce blueprints for proteins that aren’t found in nature at all but instead take on new shapes to serve critical tasks, like binding to specific cell types to fight cancer or ward off the flu.

In February, Insilico Medicine announced it had applied generative AI to design a COVID drug that helped immune systems stop the spread of the virus and reduced inflammation in the lungs. The drug is slated to start clinical trials in China soon. “AI has the potential to powerfully accelerate science and engineering,” Taylor says. “And as a scientist and an engineer, I’m very excited about that.”

Architects and builders are also tapping into the potential of generative design. They’re using specialized AI, such as genetic algorithms, to develop optimized structures that find the precise sweet spot among a host of competing demands, including sustainability, cost-efficiency, visual impact and profit. “It gives you the ability to make the least number of trade-offs with how you design the best-performing building across the factors you care about, which gets infinitely more difficult when you consider everything contrasting with everything else,” says Sol Amour, a senior product manager at Autodesk, a software company for architects, manufacturers, designers, builders, engineers, 3D artists and production teams. (Autodesk helped create an incredibly detailed three-dimensional model of Notre-Dame Cathedral that is supporting conservation and restoration efforts after the devastating 2019 fire.)
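
To give a flavour of the trade-off hunting involved, here is a stripped-down genetic algorithm. The design variables, objectives and weights are invented for illustration; this sketches the general technique, not Autodesk’s software:

```python
import random

# Stripped-down genetic algorithm in the spirit of generative design.
# The "design" is three made-up variables (glazing ratio, floor depth,
# facade articulation) and the weights are invented for illustration.

def fitness(design):
    glazing, depth, articulation = design
    daylight = glazing * 10 - depth * 2          # more glass, shallower floors
    energy = -glazing * 6 - articulation * 1.5   # more glass, worse insulation
    cost = -articulation * 4 - glazing * 2       # ornament and glass cost money
    # Weighted sum of competing objectives; real tools expose these trade-offs.
    return 0.4 * daylight + 0.4 * energy + 0.2 * cost

def mutate(design, scale=0.1):
    return [min(1.0, max(0.0, x + random.uniform(-scale, scale))) for x in design]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

# Evolve a small population of candidate designs for a few generations.
population = [[random.random() for _ in range(3)] for _ in range(30)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # keep the fittest designs
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)
    ]
    population = parents + children

best = max(population, key=fitness)
print("best design:", [round(x, 2) for x in best], "score:", round(fitness(best), 2))
```

Real generative-design tools search far richer design spaces, but the principle is the same: the algorithm keeps whatever scores well, whether or not a human would have drawn it.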

Sometimes, those structures — much like the cancer-fighting proteins — will look like nothing we’ve ever seen before. Humans have a bias for symmetry and beauty, Amour notes, and our designs reflect that. But when NASA used this kind of generative design to create strong, lightweight mission hardware, the results turned out to be eerie and strange, like the frame of some sort of alien ATV. “It wasn’t aesthetically pleasing, and it wasn’t what a human designer would think of, but practically, which is the most important thing for NASA, it performed way, way better,” Amour says — three times better, in fact, according to the space agency.

Brushing up on critical thinking

To harness all that technological potential, however, humans will need to get very, very good at crafting inventive and specific prompts. When it comes to image generation, there’s a world of difference between the results for “fish in tuxedo” and “two slugs in wedding attire getting married, stunning editorial photo for bridal magazine shot at golden hour,” a command concocted by American technologist Andy Baio. Still, it can be daunting to open up DALL-E or ChatGPT and see a blank window blinking back at you: when the creative possibilities are endless, it’s easy to bump up against the brick wall of your imaginative powers. “The most popular lab that we have ever offered at Vector Institute is in prompt engineering,” Taylor says. “My colleague says that English is the hottest new programming language.”
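
Those two prompts translate directly into code. Here is a sketch that sends both to an image-generation endpoint through the OpenAI Python SDK; the model name and image size are assumptions for illustration, and any image generator would make the same point:

```python
# Sends both prompts mentioned above to an image-generation endpoint via the
# OpenAI Python SDK (v1+). The model name and image size are assumptions for
# illustration; any image generator would make the same point.
from openai import OpenAI

client = OpenAI()

prompts = [
    "fish in tuxedo",
    "two slugs in wedding attire getting married, stunning editorial photo "
    "for bridal magazine shot at golden hour",
]

for prompt in prompts:
    result = client.images.generate(
        model="dall-e-3", prompt=prompt, n=1, size="1024x1024"
    )
    print(prompt, "->", result.data[0].url)
```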

Similarly, we’re going to require tools to determine the credibility and accuracy of any content generated by AI. Traditionally, high schools and universities have largely relied on student essays to develop critical thinking skills — they’re where young people learn to research, reflect, evaluate and write. “When that becomes automated, we get a huge gap in our ability both to teach and assess critical thinking,” says Robert Clapperton, an associate professor at Toronto Metropolitan University. Banning ChatGPT outright from the classroom is not going to cut it: some kids will obey, while others will find a workaround, which creates equity issues. So instructors must come up with new strategies for students to learn those skills, through a process that Clapperton suspects will have to engage with AI. He’s partial to Ametros Learning, a company he co-founded that uses AI-powered interactive scenarios to help people make strong arguments and develop skills like persuasiveness and empathy. (You can consider Ametros a virtual workplace, which is a tagline ChatGPT devised.) “I do think AI can become a phenomenal teaching tool,” Clapperton says, “if we figure out how to use it properly and effectively.”

Generative AI models may not be inherently nefarious, but it can be scarily hard to anticipate all the nefarious ways humans might put them to use. At a time when fewer and fewer people trust the media, for example, ChatGPT has come along to fabricate newspaper articles or academic studies and misrepresent facts with cheerful abandon. That tendency is only likely to get worse as the data used to train generative AI skews more and more heavily toward content that AI itself has produced. “I do think it’s a danger, and could make it very difficult to find the information we take for granted now,” Taylor says. “In the future, it will be harder to separate truth from hallucination.”

A toasty pope in a flashy coat is just the beginning of convincing AI-generated trickery. We’ll need to be equally creative dreaming up ways to corral the damage it could unleash.

Illustrations: Made by Emblem