You’re designing an office environment for 250 software workers. Parameters? Many. Time and resources? Limited. Number of employees who would like to have their desk near a window? All of them.
And then your creative collaborator, having considered all the options, comes back to you with 10,000 possible floor plans.
That’s a lot, unless the collaborator is a program, not a person.
In fact, this surfeit of choices was just what Autodesk, a global creator of 3D design, engineering and entertainment software, wanted from Project Discover, the artificial-intelligence program it developed to help design its new two-storey office space at MaRS, an innovation hub located in downtown Toronto.
Organizations have long relied on a boardroom filled with people powered by coffee and surrounded by flip charts to generate ideas. But old-fashioned brainstorming has fallen into some disrepute. The fit and feasibility of the ideas it produces often can’t be assessed quickly or easily, given the ethos of “every idea is a good idea.”
What’s more, the brainwaves are limited to the ingenuity of whoever happens to be in the room.
Azam Khan, director of complex systems research at Autodesk, initiated the project. He says “the team’s intention in designing the system was for them to discover something that they couldn’t do on their own.”
Beyond that, he adds, the system assesses the ideas it produces, narrows them down “to a manageable set of outcomes,” and then presents those to the human designers from The Living, the Autodesk studio that wrote the original code, “to help them understand the tradeoffs that you wouldn’t normally experience unless you did 10,000 designs.”
In the process, Project Discover, well, discovered that a “generative” design system can see things we can’t. And because the algorithm “uses concepts found in natural evolution,” according to its creators, it can also ensure survival of the fittest, by “gradually promoting the best options” for serious consideration.
What Autodesk has done reveals the immense potential of using AI to turbo-charge the creative process: It can come up with a huge number of possible solutions, but then winnow them down to a practical number. By rapidly running through virtual prototypes of solutions, a learning machine becomes a kind of serendipity engine that powers creativity.
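Autodesk has not published the internals of Project Discover here, but the generate-score-promote loop the article describes is the core of any evolutionary design system. The sketch below is a minimal, hypothetical illustration: desk positions on a toy grid stand in for floor plans, a made-up objective rewards desks near a side window (echoing the article's opening constraint) while penalizing crowding, and each generation keeps the best layouts and mutates copies of them. All names and parameters are invented for illustration.

```python
import random

# Hypothetical floor grid and design parameters (not Autodesk's).
GRID_W, GRID_H = 20, 10
N_DESKS = 12
POP_SIZE = 50
GENERATIONS = 40

def random_layout():
    # A "design" is a set of distinct desk positions on the grid.
    cells = [(x, y) for x in range(GRID_W) for y in range(GRID_H)]
    return random.sample(cells, N_DESKS)

def score(layout):
    # Toy objective: reward desks near a side window (left/right wall),
    # penalize pairs of desks crowded next to each other.
    window_bonus = sum(1.0 / (1 + min(x, GRID_W - 1 - x)) for x, y in layout)
    crowding = sum(1 for a in layout for b in layout
                   if a != b and abs(a[0] - b[0]) + abs(a[1] - b[1]) < 2)
    return window_bonus - 0.1 * crowding

def mutate(layout):
    # Move one desk to a random free cell.
    child = list(layout)
    i = random.randrange(N_DESKS)
    free = [(x, y) for x in range(GRID_W) for y in range(GRID_H)
            if (x, y) not in child]
    child[i] = random.choice(free)
    return child

population = [random_layout() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # "Gradually promoting the best options": keep the top half,
    # refill the population with mutated copies of the survivors.
    population.sort(key=score, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=score)
print(f"best layout score: {score(best):.2f}")
```

Real generative design systems evaluate far richer objectives (daylight, adjacency, circulation) and present a spread of high-scoring candidates rather than a single winner, but the promote-and-mutate structure is the same.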
But does this serendipity engine still need a driver?
The creative approach adopted by Project Discover, says Khan, has sparked “ravenous interest” in all sorts of applications of the software the project will produce. He sees it being used “to optimize, for example, factory layouts, electronics designs and even entire neighbourhoods.” But AI-driven creation is far from a turnkey operation. There are still some big philosophical hurdles ahead.
For one thing, what exactly is creativity?
Machine-learning specialist Inmar Givoni calls it “an abstract concept that is hard to nail down and properly define. It’s one of those things where we know it when we see it. Or more accurately, we think we know it when we see it.”
Givoni, who works for Kindred, a Toronto company trying to “enable robots to understand and participate in our world,” says that, although AI can already produce poetry, pop music or movie scripts, “there’s the sense that something is fundamentally missing. In a way, the algorithm doesn’t really get it.
“It doesn’t understand what makes sense and what doesn’t, and when is it interesting to not make sense.”
Sanja Fidler at the University of Toronto says the advent of AI that “gets it” may just be a matter of time. The assistant professor of computer science and her colleagues have a program — they call it a “neural karaoke” — that generates pop music from photos, as well as another program focused on neuro-esthetics (it can calculate what’s in fashion on the basis of thousands of clothing images from social media).
Now she is trying to determine if such neural networks can go “beyond the data you give them.” For example, “if you are asking AI to generate, say, 50 to 100 new stories based on data from thousands of books, what it creates is going to be somewhat biased because it’s taking stuff from that existing pool of information.” Because there is no AI that is “embodied” — able to “just go around and build its own stories” — all that technology can do “is read about other people’s experiences, and kind of blend them together.”
But that won’t last forever, Fidler says.
Today, the story — perhaps the most fundamental “unit” of human creativity — is too complex to be reduced to narrative patterns or data sets that a computer can use to generate satisfying results. But in the future, “there are going to be embodied AI agents that build their own world of experiences,” she predicts. And when that happens — when machines can explore the world and gather the stuff of stories — we may see work that isn’t generated by the human imagination.
Meanwhile, as scientists struggle to endow a computer with that ethereal quality, the imagination, AI continues to expand its assistive reach into practical realms, from refining Netflix recommendations to detecting email spam and transcribing interviews. Highly useful, but not creative.
But back at Autodesk, moviemaking veteran Hilmar Koch looks ahead and envisions a creative middle ground: AI that can do more than assist human storytellers — it can empower them.
Koch recently joined the company, after a career as a visual-effects pioneer on such films as Avatar, Star Wars: The Force Awakens and Jurassic World, in a new position: director of research and development for the future of storytelling.
He, like Fidler, sees technology’s limitations. “Are computers becoming creative? That is questionable,” he says. “They don’t know about the human condition. They don’t know what it’s like to be jobless, or a parent, or to swim in the ocean.” As a result, “I do not look at [AI] at all as here to replace people’s jobs.”
What it can do, however, “is open up possibilities, as the canvas gets stretched out beyond the edges of what we know right now.”
So, the team Koch leads at Autodesk, called Project Narrativa, is exploring what lies ahead for the story in any art form. And with the emergence of AI that can generate endless choices, “the role of the creative, of the artist, has to change,” he says. “You are going to have to guide the process. You need to prune this decision tree at a very rapid pace, or it could very much overwhelm you. How are we going to behave in the presence of too many possibilities?”
In other words, 10,000 machine-generated screenplays aren’t going to advance the art of cinema, but Koch can foresee using AI to develop what he calls “the story-information model — everything that you need to know about your story.”
An example: Television and cinema are now home to hugely complex story worlds — think of The Walking Dead and its multi-platform spinoffs, or the many layers of Star Wars productions, now so dense with plots, characters and “rules” that Lucasfilm has to employ archivists just to keep track. One of them holds the title Keeper of the Holocron, the database that now houses more than 80,000 entries.
Such ambitious properties, with so many humans involved in their creation, could benefit from the all-seeing technology that is to come, Koch says. AI could be a valued partner “as we search along a guided path through otherwise overwhelming data sets, and carve the story out of that big mass of marble, the potential story that’s already in there.” Before computers can do anything creative with all this information, he adds, “we need to shape the data so that it is better for humans to deal with.”
He says the magic-button allure of AI doesn’t cloud his faith in, and passion for, the human creative exchange. “What makes me tick is that … creative spark, the handing over of an idea, one person talking to another.” From that spark, “our ideas multiply and build on one another, and they become that much stronger.”
As long as the creative force is liberated, Koch says, “I do not care what technology is running behind the curtain.”