
This app lets you program objects by drawing lines

Like something out of science fiction, the Reality Editor lets you connect and manipulate the functionality of physical objects. 

Back in 2013, a team from MIT Media Lab’s Fluid Interfaces Group developed a method of creating Spatially-Aware Embodied Manipulation of Actuated Objects through augmented reality. The project was an effort to extend a user’s touchscreen interactions into the real world. Earlier this year, the crew released libraries and examples that allow others to do the same. With Open Hybrid, you can directly map a digital interface onto a physical thing and program hybrid objects using Arduino and other popular hardware/software environments.


Now, the researchers have taken the project to a whole new level. The Reality Editor is a futuristic tool that empowers you to connect and manipulate the functionality of any gizmo or gadget. Just point your smartphone camera at an item and an overlay with its invisible capabilities will appear on the screen for you to edit. Drag a virtual line from one object to another to form a new relationship between the two.

Although the ultimate goal of the IoT is to make the ordinary objects in our lives smart, most things are still pretty ‘dumb.’ They don’t communicate with one another, and most are capable of only one function. Take a smart bulb, for instance: it can dim and brighten, but it can’t change the channel on your TV. This is where the Fluid Interfaces Group’s app comes in.


The Reality Editor lets you define simple actions, change the functionality of objects around you, and remix how things work and interact. Essentially, the app gives you the power to turn something that is virtual into something that is physical and vice versa. The best part? It’s as easy as connecting dots.

“That light switch in your bedroom you always need to stand up in order to turn off — just point the Reality Editor at an object next to your bed and draw a line to the light. You have just customized your home to serve your convenience,” the team writes. “From now on you will use your spatial coordination and muscle memory to easily operate the object next to your bed as a tool for controlling the light.”


What’s more, you can ‘borrow’ functionalities from one object and use them on another. For example, you could employ your TV’s sleep timer as a way to switch your lights on and off, or even have the air conditioning at your house adjust the temperature when you hop into your car to head home. The possibilities are endless.

At the moment, the Reality Editor uses QR-like codes to identify smart devices. It works by loading an HTML webpage and overlaying a particular object’s functionalities onto the smartphone screen so you can program it. Soon, however, it will be able to recognize objects simply by viewing them through the app.

The Reality Editor can be downloaded and used along with the group’s open source platform Open Hybrid to build a new generation of Hybrid Objects. It isn’t geared solely towards designers and engineers, but towards Makers and other high-tech enthusiasts as well. Safe to say, a Minority Report-like future is quickly approaching.


This open source platform turns your physical world into a digital interface

The brainchild of MIT Media Lab’s Fluid Interfaces Group, Open Hybrid is an augmented reality platform for physical computing and the Internet of Things.

The Xerox Star was the first commercially available computer to feature a Graphical User Interface (GUI). Since its debut in 1981, many of the concepts it introduced have remained the same, especially with regards to how we interact with our digital world: a pointing device for input, some sort of keyboard for commands and a GUI for interaction. However, with many of today’s physical objects becoming increasingly connected to the Internet, Valentin Heun of MIT Media Lab’s Fluid Interfaces Group believes that the GUI has hit its limit when it comes to extending its reach beyond the borders of the screen.


This problem is nothing new, though. Dating back to the days of text-only command lines, interface designers have always been challenged by the imbalance between the countless commands a computer can interpret and the number a person can hold in their head at one time.

As Heun points out, physical things have been crafted and shaped by designers over centuries to fit the human body. Because of their shape and appearance, we can access and control them intuitively. So wouldn’t an ideal solution be one in which the digital and physical worlds come together in seamless fashion? That’s the idea behind what he and his MIT Media Lab collaborators call Open Hybrid. The project enables users to directly map a digital interface right onto a physical item. By doing so, you would never need to memorize a drop-down menu or app again.


Think about it: using these so-called smart objects isn’t all that easy. Take a smart light bulb, for instance, which might offer millions of color options, thousands of brightness settings and various hue-changing patterns to select from. But to adjust the light, you first need to take your phone out of your pocket, enter a passcode to unlock it, open an app and search for the bulb within its main menu before finally reaching its controls. A task that once required tapping a wall switch now takes half a dozen steps. Aside from that, the more objects one has throughout their home or office, the harder it becomes to find each one in the app’s drop-down menu.

In an effort to solve this conundrum, Heun has developed the Reality Editor, which offers designers a simple solution for creating connected objects by using web standards and Arduino, in addition to a streamlined way to customize the objects’ behavior with an augmented-reality interface that eliminates complicated, and often unnecessary, steps.


“The amount of apps and drop-down menus in your phone will become so numerous that it will become impossible for you to memorize what app and what menu name is connected with each device. In this case, you might find yourself standing in the kitchen and all you want to do is switch on a light in front of you,” he writes.

These new tangible things are known as Hybrid Objects because they share the best characteristics of virtual and physical UIs: a virtual interface for occasionally modifying, connecting and learning about them, and a physical interface for everyday operations. In effect, the system treats the actual physical world as a transparent window, while the smartphone in your pocket acts as a magnifying glass that can be used to edit reality when necessary.

How it works is pretty straightforward: Hold your phone up so the camera points at the object, whether it’s a drone, a lamp, a kitchen appliance, a radio or even an entertainment system, and the app displays a virtual control panel hovering over the item, with its settings and other menu options appearing as if by magic.


You’ll also see nodes corresponding to the physical controls the gadget offers, and can then create interactions between devices by drawing a line from the origin I/O to the destination I/O. And voilà!

“Traditionally, you would create some kind of standard that knows every possible representation of the relevant objects so that every interface can be defined. For example, say you have two objects, a toaster and a food processor, and now you would need to create a standard that knows how to connect these two objects.”

With Open Hybrid, you have a visual representation of your object’s functionalities augmented onto the physical object. Where before an abstract standard needed to be devised, you can now just visually break an object down into all its components. Using the same example from above, the toaster now consists of a heating element, a setup button, a push slider and a timing rotation dial. All of these elements are represented with a number between 0.0 and 1.0. The same simple representation applies to the food processor. If you want to connect two things, you are really only pairing the numbers associated with each given item, never the objects themselves.
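The idea can be sketched in a few lines of JavaScript. This is only an illustration of the concept, not the actual Open Hybrid API: every component is a node holding one value normalized to 0.0–1.0, and “drawing a line” between two nodes just forwards the source node’s number to the destination node. The object and node names are hypothetical.

```javascript
// Each component ("node") exposes a single value in the range 0.0–1.0.
function makeNode(name) {
  return { name, value: 0.0, listeners: [] };
}

// Linking two nodes pairs only their numbers, never the objects themselves.
function link(source, destination) {
  source.listeners.push((v) => setValue(destination, v));
}

function setValue(node, v) {
  node.value = Math.min(1.0, Math.max(0.0, v)); // clamp to 0.0–1.0
  node.listeners.forEach((fn) => fn(node.value));
}

// Hypothetical objects broken down into their components:
const toasterTimingDial = makeNode("toaster/timingDial");
const processorSpeed = makeNode("foodProcessor/speed");

// "Draw a line" from the toaster's dial to the food processor's speed:
link(toasterTimingDial, processorSpeed);
setValue(toasterTimingDial, 0.75);
console.log(processorSpeed.value); // 0.75
```

Because everything reduces to a normalized number, a toaster node and a food processor node can be connected without either device knowing anything about the other.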

“This is the power of Open Hybrid. Now that the interface allows you to break down every object to its components, you only need to deal with the smallest entity of a message: a number. As such, Open Hybrid is compatible with every Hybrid Object that has been created, and any object that will be built,” Heun adds.


What’s nice is that all of the data about the interfaces and connections are stored on the object itself, and each one communicates directly with handheld devices or with one another, so there’s never a need for any centralized hubs or cloud servers.

The Reality Editor is built on the same open standards that are fundamental to today’s Internet, such as HTML, JavaScript and openFrameworks. It runs on low-cost, low-power hardware, in this case the Arduino Yún (ATmega32U4), and is easily portable to other platforms. The system requires at least a 400MHz processor, 32MB of RAM, 100MB of storage, as well as TCP/IP and UDP networking capabilities.

“Wherever you can run node.js you can run the Hybrid Object platform. We have successfully experimented with MIPS, ARM, x86 and x64 systems on Windows, Linux and OSX,” Heun notes. “If you have the latest head-mounted, projected or holographic interfaces, feel free to compile the code for your platform and share your findings with the community.”

Safe to say, it’s always exciting to see new projects come out of MIT’s Fluid Interfaces Group. While we’ve seen several attempts in bridging the gap between the physical and digital worlds before, this one is certainly among the most unique. Intrigued? Head over to Open Hybrid’s detailed page here to learn more, or watch Heun’s recent Solid 2015 presentation below.