Who Needs Rosie the Robot? January 27, 2014 | by Sean Lorenz
Robots. My 80s childhood memories are littered with them: the Jetsons’ sassy Rosie the Robot, C-3PO and R2-D2, Twiki, Transformers, GoBots, Voltron, Vicki from Small Wonder, Data…yeah, I watched a lot of TV as a kid. Couple Hollywood’s take on robots with the futurist fantasies of Omni magazine’s April 1983 special robot edition, and it was hard not to have a well-defined anthropomorphic view of robots as matronly maids or silicon servants. If robots weren’t cute, wisecracking helper bots, their only occupational alternative was being hell-bent on the demise of humanity (see every robot movie in existence for reference). Cylons and Terminators aside, the American myth of robotics is a story of human and machine working side by side to bring about a more efficient tomorrow. I tend to imagine the cultural perception of robot/human interaction as ancient Greek philosophers casually discussing whether water or fire was more awesome, because their slaves did, well, everything else. Substitute philosophical discussions with catching up on our DVR backlog and you’ve just described dozens of articles in the past several years discussing a new burst of well-funded robotic company startups.
Why the sudden renewed interest in robotics? There seem to be three primary industry drivers at play here: 1) cheap, powerful CPUs and GPUs, 2) cheaper, more accurate sensors, and 3) smarter intelligence algorithms. The first two explain the truly astonishing amount of hardware packed into an iPhone. In fact, numerous robot companies such as Double Robotics and Romotive realized that a smartphone or tablet could serve as a robot’s brain. Give your iPhone a body and some smarts, and you’ve got a pretty impressive, commercially intriguing robot.
During my time at Neurala, a TechStars-backed company building “brains for bots”, I frequently dreamt up viable autonomous robotic software business plans, wondering what in the world people would want to buy. Neurala’s vision is to build an Intelligence Engine that takes sensor inputs, processes them in a way similar to the human brain, then allows a robot to adapt to its current environment. The key word here is adapt. As robots move beyond wheels to end effectors such as legs or arms, coordinating motor and sensory inputs becomes exponentially more complex.
But what if all the complexity required to build an adaptive robot that looks and moves like us is unnecessary? What if 80s television steered us Gen X-turned-startup kids down the wrong path? Maybe the future of robots in the home isn’t a metal canister with arms and legs wearing an apron. Rather than one robot being instructed to wash dishes, sweep floors, and clean windows, what if your home environment sensed the areas needing a good scrub, then just did it? Who needs Rosie when you have the Internet of Things (IoT)?
Obviously there will still need to be something robot-like folding the laundry, which is why robotics is considered a key application vertical in the IoT landscape. Unlike the future so many once assumed, however, the alternative might look like a new home ecosystem of “things” that perform their jobs independently while bettering the environment as a whole. If each home had even a few dozen interconnected sensors of various types attached to windows, lights, carpets, washing machines, toasters, refrigerators, dog bowls, etc., each object would have information about every other object’s state when deciding what to do next. In other words, we’d have a system whereby machine-to-machine communication centralizes information flow, sending next steps back not to just one robot but to a multitude of IoT devices. This is basically Neurala’s whole-brain modeling perspective, only extrapolated from “many sensors/one body” to “many sensors/many bodies”. With numerous sensors talking back and forth with a central home base, cognitive models of optimal homekeeping might become a more realistic robo-maid future.
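To make the idea concrete, here is a minimal sketch of that “central home base” pattern: devices publish their state to a hub, and the hub dispatches next steps to whichever device should act. The device names, the dirt threshold, and the one-rule policy are all hypothetical illustrations, not anything Neurala or any IoT platform actually ships.

```python
class HomeHub:
    """Toy central coordinator: collects device state reports and
    queues next-step commands for other devices (many sensors/many bodies)."""

    def __init__(self):
        self.states = {}    # device name -> latest reported state
        self.commands = []  # (target device, command) pairs queued for dispatch

    def report(self, device, state):
        """A device publishes its current state to the hub."""
        self.states[device] = state
        self._decide(device, state)

    def _decide(self, device, state):
        """Hypothetical policy: a dirty floor sensor triggers the sweeper.
        A real hub would hold many such rules (or a learned model)."""
        if state.get("kind") == "floor_sensor" and state.get("dirt", 0) > 0.7:
            self.commands.append(("sweeper", f"clean:{device}"))

hub = HomeHub()
hub.report("kitchen_floor", {"kind": "floor_sensor", "dirt": 0.9})
hub.report("window_1", {"kind": "window_sensor", "open": False})
print(hub.commands)  # [('sweeper', 'clean:kitchen_floor')]
```

The point of the sketch is the shape of the system, not the rule itself: every device’s state lands in one shared place, so the decision logic can consider the whole home rather than a single robot’s sensors.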
This may sound like yet another failed futurist forecast, but the technology to build an IoT-based, self-assessing home environment already exists. IoT platform companies like Xively serve as the backbone to connect multiple sensors in a simple way. Two critical elements are still missing: 1) an abundance of sensors and products for the home that blend in with the environment itself, and 2) centralized cognitive software for making sense of disparate sensors communicating with one another in an ever-growing flood of data. Sure, these are no small tasks, but the opportunities are ripe for the picking.