How three Toronto companies are changing the way we interact with our computers

Note: This post originally appeared in Yonge Street Media. It has been reposted here with permission. 

“Okay, Ubi, say my name.”

“Hi Heisenberg—how are you today?”

“Okay, Ubi—how’s the weather today?”

“Toronto, Ontario: it is currently partly cloudy.”

Leor Grebler, the CEO and co-founder of Unified Computer Intelligence Corporation, is speaking to The Ubiquitous Computer—Ubi for short. The Ubi is a small, WiFi-enabled, voice-activated computer. It represents the company’s first foray into the realm of natural language computing. The Ubi can complete a variety of tasks, including looking up information on the Internet, sending quick messages to contacts, and playing songs on request.

What’s amazing about it is not that it can do any one of these functions—after all, these are all things your computer can do today—it’s that the user doesn’t need to fiddle with an abstracted interface to access those functions. That is to say, almost every one of the Ubi’s features can be accessed by simply speaking to it.

It turns out, however, that speaking to a computer is merely one of the new ways we will be interacting with our devices in the future. Computer interface design is currently undergoing a renaissance that will result in a multitude of new user interaction paradigms. And that future will, in no small part, be determined by individuals working out of Toronto.

Ubi

The aforementioned Ubi was born out of a frustration Grebler and his co-founders felt with the technology they were using in their day-to-day lives.

“We thought people had become really distracted by technology,” he says. “So we asked ourselves, ‘When is technology going to get to the point where it’s simply going to do what we need it to do and only chime in when we need it?'”

He goes on to say, “We wanted to make it extremely easy to access different services and connected devices. Currently each device that someone owns requires that they download several apps, and that they learn how to use each one of those apps.”

The Ubi is the company’s first attempt at solving that perceived problem, and it’s an ambitious first attempt at that. Of course, anyone who has used Siri or a Kinect knows how finicky computer voice recognition can be. It is an immense undertaking to program a machine built on binary to understand natural language, and while the Ubi shares some of the quirks of its predecessors, it does a remarkably good job of parsing human speech.

There’s also something vaguely utopian about the device.

Not everyone can use a traditional computer interface. Whether it’s because of a visual impairment, lack of fine motor control, or simply because an interface is unintuitive or convoluted, there are people for whom the utility of the Internet has been locked away due to previous user interface paradigms.

It’s probably why voice-based computing has been a part of popular culture and science fiction since the inception of the computer. It’s also one of the reasons Grebler thinks voice-based computing will see a lot of iteration and experimentation in the next five to ten years.

xTouch

Grebler and company aren’t the only ones using sound to push forward user interface design. A company based out of the University of Toronto called xTouch is using sound detection to accomplish something that, at first glance, seems almost magical.

Almost everyone is familiar with the concept of sound triangulation. Place three microphones in an environment, and when a sound is made in that environment, its location can be pinpointed with a high degree of accuracy. Professor Parham Aarabi and several of his former graduate students have developed a way to detect and locate a sound using only one microphone.
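The article doesn’t disclose how xTouch’s single-microphone method works, but the classic multi-microphone idea is simple enough to sketch: the differences in when a sound reaches each microphone constrain where it could have come from. The toy example below is purely illustrative—hypothetical microphone layout, brute-force search—and is not xTouch’s code.

```typescript
// Toy 2-D sound localization from time differences of arrival (TDOA).
// Hypothetical layout and numbers, for illustration only.

type Point = { x: number; y: number };

const SPEED_OF_SOUND = 343; // metres per second in air at room temperature

// Three microphones placed around the listening area.
const mics: Point[] = [
  { x: 0, y: 0 },
  { x: 1, y: 0 },
  { x: 0, y: 1 },
];

const dist = (a: Point, b: Point): number => Math.hypot(a.x - b.x, a.y - b.y);

// For a candidate source position, predict each microphone's arrival time
// relative to microphone 0.
function predictedTdoas(source: Point): number[] {
  const t0 = dist(source, mics[0]) / SPEED_OF_SOUND;
  return mics.slice(1).map((m) => dist(source, m) / SPEED_OF_SOUND - t0);
}

// Brute-force grid search: return the position whose predicted TDOAs best
// match the measured ones. Real systems use closed-form or least-squares
// solvers; the grid search just keeps the idea visible.
function locate(measured: number[], size = 2, step = 0.01): Point {
  let best: Point = { x: 0, y: 0 };
  let bestErr = Infinity;
  for (let x = -size; x <= size; x += step) {
    for (let y = -size; y <= size; y += step) {
      const pred = predictedTdoas({ x, y });
      const err = pred.reduce((sum, p, i) => sum + (p - measured[i]) ** 2, 0);
      if (err < bestErr) {
        bestErr = err;
        best = { x, y };
      }
    }
  }
  return best;
}

// Simulate a tap at (0.6, 0.4) and recover its position from TDOAs alone.
const measured = predictedTdoas({ x: 0.6, y: 0.4 });
console.log(locate(measured)); // a point close to (0.6, 0.4), up to grid resolution
```

Doing the same with a single microphone removes the arrival-time differences the sketch above relies on, which is what makes Aarabi’s approach so surprising.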

That single-microphone feat is impressive in itself. However, it’s the application they’ve devised for their technology that is novel and will have implications for how we interact with our devices.

“The essential promise [of xTouch] is that it can take any surface that you put your device on and turn it into a touch surface,” says Dr. Aarabi. “For example, instead of using the tiny keyboard on your iPhone, you can place the iPhone on a flat surface and use the entire surface as a keyboard.”

Of course, the technology isn’t perfect yet: in a non-test environment with a user who hasn’t had time to acclimate to its quirks, the company’s software correctly inputs about 80 per cent of commands. That’s lower than the almost 100 per cent correct input rate one can get with a touchscreen, but for certain applications it’s more than enough.

That said, finding the applications that make the best use of xTouch’s software, given its accuracy, has been the biggest challenge the company has faced thus far. Replicating the iPhone’s onscreen keyboard on a desk surface, for example, is not a good use of xTouch’s software; at 80 per cent per-keystroke accuracy, a five-letter word would come out right only about a third of the time (0.8^5 ≈ 0.33), and most users would quickly give up. However, a game like Hungry Hungry Hippos, where rapid inputs are more important than precise ones, is one of the software’s ideal applications; there’s the added advantage of not having four children fight over a single iPad screen.

Dr. Aarabi says, “We’re heading into a future where every device will be in some way touch sensitive.” He adds, “What’s unique about xTouch is that it can make a surface touch sensitive without any extra hardware. I think it’s one step in the right direction, but there are many other steps that are being taken.”

Verold

Of course, voice- and sound-based interfaces aren’t the only ones receiving thoughtful iteration; the classic visual interface is still being worked on and improved.

That’s where Ross McKegney and his team at Verold come in. They’ve spent the better part of the last three years working on a platform they hope will bring the interactivity of a native app—that is, an app one can download from an app store and install on a smartphone or computer—to web-based applications.

Their platform allows web developers to create 3D content for web-based apps. If the company is successful, users will start accessing most of their apps from a web browser, which means they will see greater parity between apps on different platforms. Gone will be the days when Android users enviously eye the apps their iPhone-using peers have access to.
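The article doesn’t detail Verold’s own toolchain, but a minimal sketch using the open-source three.js library (the scene below is hypothetical and is not Verold’s API) gives a sense of what creating 3D content for the web looks like in practice: a few dozen lines that run in any modern browser, with nothing to install.

```typescript
// A minimal in-browser 3D scene using the open-source three.js library:
// a lit, spinning cube, the "hello world" of web 3D.
import * as THREE from 'three';

const scene = new THREE.Scene();

const camera = new THREE.PerspectiveCamera(
  75,                                      // vertical field of view, degrees
  window.innerWidth / window.innerHeight,  // aspect ratio
  0.1,                                     // near clipping plane
  1000                                     // far clipping plane
);
camera.position.z = 3;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0x2194ce })
);
scene.add(cube);
scene.add(new THREE.AmbientLight(0xffffff, 0.3));
scene.add(new THREE.DirectionalLight(0xffffff, 1));

// Re-render every frame, rotating the cube a little each time.
function animate(): void {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```

Because the same code runs on a Chromebook, an iPhone, or a desktop, the platform-parity argument above follows directly: there is only one app to build.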

Photo: Ross McKegney of Verold; Verold office in Toronto.

For the developers and companies that build apps, a robust and powerful platform for creating web-based content will mean a streamlined development pipeline. “People expect the interactivity of a native app, but no one really wants to build an app,” says McKegney. “You don’t want to have separate web and app teams, and you don’t want to have separate iOS and Android teams. Verold will allow developers to use one skill set to create applications.”

Moreover, the company’s success will mean that sectors like education, for which app developers have traditionally shied away from building native apps, will see a flourishing of interactive visual content.

“In the education sector, the fastest growing platform is the Chromebook [a laptop running Google’s Chrome OS, on which almost all apps are web-based],” says McKegney. “There’s good reason for that: a school system doesn’t want to manage machines. They want to put content on the cloud where it can be managed centrally.”

In addition, with technologies like virtual reality on the cusp of going mainstream, powerful content creation platforms like Verold will help web-based virtual experiences proliferate.

McKegney imagines a future where users will be able to see a hotel room prior to renting it by surfing over to a hotel’s website, donning an Oculus VR headset, and walking through the room in a virtual reality app.

“UI design is going to come off your screen to mean something more.”

Story by Igor Bonifacic. Photos by Tanja Tiziana.
