Tag Archives: GUI

This open source platform turns your physical world into a digital interface


The brainchild of MIT Media Lab’s Fluid Interfaces Group, Open Hybrid is an augmented reality platform for physical computing and the Internet of Things.


The Xerox Star was the first commercially available computer to ship with a Graphical User Interface (GUI). Since its debut in 1981, many of the concepts it introduced have remained the same, especially with regards to how we interact with our digital world: a pointing device for input, some sort of keyboard for commands and a GUI for interaction. However, with many of today’s physical objects becoming increasingly connected to the Internet, Valentin Heun of MIT Media Lab’s Fluid Interfaces Group believes that the GUI has hit its limit when it comes to extending its reach beyond the borders of the screen.


This problem is nothing new, though. Dating back to the days of text-only command lines, interface designers have always been challenged by the imbalance between the countless commands a computer can interpret and the number a person can hold in their head at one time.

As Heun points out, physical things have been crafted and shaped by designers over centuries to fit the human body. Because of their shape and appearance, we can access and control them intuitively. So wouldn’t an ideal solution be one in which both the digital and physical worlds come together in seamless fashion? That’s the idea behind what he and his MIT Media Lab collaborators call Open Hybrid. The project enables users to map a digital interface directly onto a physical item. By doing so, you would never need to memorize a drop-down menu or app again.


Think about it: using these so-called smart objects isn’t all that easy. Take a smart light bulb, for instance, which might have millions of color options, thousands of brightness settings and various hue-changing patterns to select from. But in order to adjust the light, you need to first take your phone out of your pocket, enter a passcode to unlock it, open an app and search for the bulb within its main menu, all before finally reaching its controls. A task that once required tapping a wall switch now takes half a dozen steps. And the more connected objects one has throughout their home or office, the harder it becomes to find each of them in the app’s drop-down menu.

In an effort to solve this conundrum, Heun has developed the Reality Editor, which offers designers a simple solution for creating connected objects by using web standards and Arduino, in addition to a streamlined way to customize the objects’ behavior with an augmented-reality interface that eliminates complicated, and often unnecessary, steps.


“The amount of apps and drop-down menus in your phone will become so numerous that it will become impossible for you to memorize what app and what menu name is connected with each device. In this case, you might find yourself standing in the kitchen and all you want to do is switch on a light in front of you,” he writes.

These new tangible things are known as Hybrid Objects, as they share the best characteristics of virtual and physical UIs: a virtual interface for occasionally modifying, connecting and learning about them, as well as a physical interface for everyday operations. In other words, the system treats the actual physical world as a transparent window, while the smartphone in your pocket acts as a magnifying glass that can be used to edit reality when necessary.
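To make that concrete, here is a minimal sketch of what the embedded side of a Hybrid Object might look like. It uses only the stock Bridge library that ships with the Arduino Yún, not Open Hybrid’s actual API; the “brightness” key and its 0.0–1.0 encoding are assumptions for illustration. The idea is simply a lamp that exposes its brightness as a single normalized value.

    // A lamp as a minimal Hybrid-Object-style device on an Arduino Yun.
    // Illustrative only: this uses the Yun's stock Bridge key/value store,
    // not Open Hybrid's actual API; the "brightness" key is an assumption.
    #include <Bridge.h>

    const int LED_PIN = 9;                  // PWM pin driving the lamp

    void setup() {
      pinMode(LED_PIN, OUTPUT);
      Bridge.begin();                       // start the MCU <-> Linux bridge
      Bridge.put("brightness", "0.0");      // publish the node's initial value
    }

    void loop() {
      char buf[8];
      Bridge.get("brightness", buf, sizeof(buf));   // fetch the latest value
      float value = constrain(atof(buf), 0.0, 1.0); // clamp to the 0.0-1.0 scale
      analogWrite(LED_PIN, (int)(value * 255.0f));  // scale to 8-bit PWM
      delay(50);
    }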

How it works is pretty straightforward: hold your phone up so the camera is pointed towards the object, whether it’s a drone, a lamp, a kitchen appliance, a radio or even an entertainment system, and the app displays a virtual control panel hovering over the item, with its settings and other menu options appearing as if by magic.


You’ll also see nodes corresponding to the physical controls the gadget offers, and can then create interactions between devices by drawing a line from the origin I/O to the destination I/O. And voilà!

“Traditionally, you would create some kind of standard that knows every possible representation of the relevant objects so that every interface can be defined. For example, say you have two objects, a toaster and a food processor, and now you would need to create a standard that knows how to connect these two objects.”

With Open Hybrid, you have a visual representation of your object’s functionalities augmented onto the physical object. Where before an abstract standard needed to be devised, you can now just visually break down an object into all of its components. Using the same example from above, the toaster now consists of a heating element, a setup button, a push slider and a timing rotation dial. Each of these elements is represented by a number between 0.0 and 1.0. The same simple representation applies to the food processor. If you want to connect two things, you are really only pairing the numbers associated with each item, never the objects themselves.
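As a rough sketch of why that normalization matters (the device names, ranges and scaling below are invented for the toaster/food-processor example, not taken from the platform), connecting two otherwise incompatible objects reduces to passing a single float between them:

    // Why a shared 0.0-1.0 scale makes arbitrary objects linkable.
    // Ranges and names are invented for illustration.
    #include <cstdio>

    // Each device maps its own native units to and from the shared scale.
    float toasterDialToUnit(int degrees) { return degrees / 270.0f; }     // 0-270 degree dial
    int unitToProcessorRpm(float unit)   { return (int)(unit * 12000); }  // 0-12000 rpm motor

    int main() {
      int dial = 135;                        // user turns the toaster's dial
      float unit = toasterDialToUnit(dial);  // origin node emits 0.5
      int rpm = unitToProcessorRpm(unit);    // destination node receives 0.5
      std::printf("%d degrees -> %.2f -> %d rpm\n", dial, unit, rpm);
      return 0;                              // only the number ever travels
    }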

“This is the power of Open Hybrid. Now that the interface allows you to break down every object to its components, you only need to deal with the smallest entity of a message: a number. As such, Open Hybrid is compatible with every Hybrid Object that has been created, and any object that will be built,” Heun adds.


What’s nice is that all of the data about the interfaces and connections is stored on the object itself, and each object communicates directly with handheld devices or with its peers, so there’s never a need for centralized hubs or cloud servers.
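The article doesn’t spell out the discovery mechanism, but a hub-free design implies that objects announce themselves on the local network. Purely as an illustration of the idea (the port number and message format below are invented, not Open Hybrid’s protocol), a UDP broadcast beacon might look like this:

    // Illustrative UDP broadcast beacon for a hubless object (POSIX sockets).
    // The port number and JSON payload are invented, not Open Hybrid's protocol.
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
      int sock = socket(AF_INET, SOCK_DGRAM, 0);
      int yes = 1;
      setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

      sockaddr_in addr{};
      addr.sin_family = AF_INET;
      addr.sin_port = htons(52316);                   // invented port
      addr.sin_addr.s_addr = htonl(INADDR_BROADCAST); // whole local subnet

      const char msg[] = "{\"id\":\"lamp-01\",\"nodes\":[\"brightness\"]}";
      for (;;) {
        sendto(sock, msg, sizeof(msg) - 1, 0,
               (const sockaddr*)&addr, sizeof(addr));
        std::puts("announced lamp-01");
        sleep(2);                                     // re-announce every 2 s
      }
    }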

The Reality Editor is built on the same open standards that are fundamental to today’s Internet, such as HTML, JavaScript and openFrameworks. It runs on low-cost, low-power hardware (in this case, the Arduino Yún with its ATmega32U4) and is easily portable to other platforms. The system requires at least a 400MHz processor, 32MB of RAM and 100MB of storage, as well as TCP/IP and UDP networking capabilities.

“Wherever you can run node.js you can run the Hybrid Object platform. We have successfully experimented with MIPS, ARM, x86 and x64 systems on Windows, Linux and OSX,” Heun notes. “If you have the latest head-mounted, projected or holographic interfaces, feel free to compile the code for your platform and share your findings with the community.”

Safe to say, it’s always exciting to see new projects come out of MIT’s Fluid Interfaces Group. While we’ve seen several attempts at bridging the gap between the physical and digital worlds before, this one is certainly among the most unique. Intrigued? Head over to Open Hybrid’s detailed page here to learn more, or watch Heun’s recent Solid 2015 presentation below.

Students develop chess set for the visually impaired

Charles Buxton once said, “In life, as in chess, forethought wins.” Somaiya College’s forethought with its automated chess design provides a clear win for all parties involved. The automated chess table includes Braille pieces, voice recaps of every move, and textural contrasts between white and black squares. Together, these features make the game accessible to players with visual impairments.


“There are already board games available in the market that can be used by the visually-impaired, but our board game involves technology that allows one to play the game online as well as on a physical board. The board automatically plays the moves depending on the keys pressed,” explains Gaurang Shetty, Head of the College’s Innovation Center.

The project’s web connectivity enables individuals to play each other from across the globe: the board pairs with a Graphical User Interface (GUI) over the Internet, so one person can play via the GUI from any corner of the world while the other plays on the physical chess board.

At the heart of the 64-key membrane keyboard lies an Atmel-powered Arduino that allows the board to communicate with a connected computer. The team behind automated chess will be demonstrating their board at Maker Faire Rome next month.
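The team’s firmware isn’t published in the article, but the board’s core job, scanning a 64-key matrix and reporting each pressed square to the connected computer, might look roughly like the Arduino sketch below (the pin assignments, baud rate and message format are all assumptions):

    // Hypothetical sketch of the chess board's key scanning: an 8x8 membrane
    // matrix read row by row, with each press reported over serial in
    // algebraic notation. Pins and framing are assumptions.
    const int rowPins[8] = {2, 3, 4, 5, 6, 7, 8, 9};
    const int colPins[8] = {10, 11, 12, 13, A0, A1, A2, A3};

    void setup() {
      Serial.begin(9600);
      for (int r = 0; r < 8; r++) {
        pinMode(rowPins[r], OUTPUT);
        digitalWrite(rowPins[r], HIGH);       // idle rows high
      }
      for (int c = 0; c < 8; c++) pinMode(colPins[c], INPUT_PULLUP);
    }

    void loop() {
      for (int r = 0; r < 8; r++) {
        digitalWrite(rowPins[r], LOW);        // drive one row at a time
        for (int c = 0; c < 8; c++) {
          if (digitalRead(colPins[c]) == LOW) {      // key at (r, c) pressed
            Serial.print((char)('a' + c));           // file, e.g. 'e'
            Serial.println(8 - r);                   // rank, e.g. 2
            delay(200);                              // crude debounce
          }
        }
        digitalWrite(rowPins[r], HIGH);
      }
    }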

“While the project is ready, we are also trying to incorporate other features to this chess board for even better results,” Shetty concluded.


Walltech SmartWatch tick-tocks with Atmel



After successfully designing and extensively documenting the open source OLED Watch (v 4.2), Walltech founder John Wall has moved on to version 6.0 of the Atmel-powered smartwatch.


The latest wearable device is built around the FemtoduinoBLE, which features an ATmega32u4 microcontroller (bootloaded as an Arduino Leonardo) paired with a BlueGiga Bluetooth 4.0 low energy module to link devices and receive notifications.
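Wall hasn’t published the notification-handling code at this stage, but on a Leonardo-bootloaded ATmega32U4 the BLE module would typically hang off the hardware UART (Serial1). A minimal sketch of that path, with the baud rate and one-notification-per-line framing as assumptions:

    // Hypothetical sketch: receiving notification text from the BLE module
    // on the ATmega32U4's hardware UART. Baud rate and framing are assumptions.
    void setup() {
      Serial.begin(9600);     // USB serial, for debugging
      Serial1.begin(115200);  // UART wired to the Bluetooth 4.0 module
    }

    void loop() {
      if (Serial1.available()) {
        String line = Serial1.readStringUntil('\n');  // one notification per line
        Serial.print("notification: ");
        Serial.println(line);  // the watch itself would draw this on the OLED
      }
    }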

The newest Walltech also boasts a 1.5-inch full color OLED display as well as an on-board microSD card slot.

“A step up from the monochrome 0.96″ OLED display of v4.2, this screen also consumes very little power thanks to the OLED technology behind it and can show beautiful images that will be the GUI for the smart watch,” John explained in a recent blog post.

“Now that there’s an SD card on board, I can use fancy graphics and make it look professional and keep the code to the MCU, enabling more to be coded instead of storing images too.”

In addition, says the designer, the DS1307 real-time clock and its accompanying regulator make an appearance again, with the same battery charging IC from the previous model topping up the 500mAh lithium-ion battery.
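For the timekeeping itself, the DS1307 speaks plain I2C with a well-documented register map, so reading it looks much the same in any firmware. A minimal sketch follows; this reflects the chip’s standard interface from its datasheet, not Wall’s actual code:

    // Reading the time from a DS1307 RTC over I2C. The register map
    // (BCD values starting at address 0x00) is from the chip's datasheet;
    // this is not the watch's actual firmware.
    #include <Wire.h>

    const uint8_t DS1307_ADDR = 0x68;

    uint8_t bcdToDec(uint8_t b) { return (b >> 4) * 10 + (b & 0x0F); }

    void setup() {
      Serial.begin(9600);
      Wire.begin();
    }

    void loop() {
      Wire.beginTransmission(DS1307_ADDR);
      Wire.write((uint8_t)0x00);              // point at the seconds register
      Wire.endTransmission();
      Wire.requestFrom(DS1307_ADDR, (uint8_t)3);
      uint8_t sec  = bcdToDec(Wire.read() & 0x7F);  // mask the clock-halt bit
      uint8_t min  = bcdToDec(Wire.read());
      uint8_t hour = bcdToDec(Wire.read() & 0x3F);  // 24-hour mode
      Serial.print(hour); Serial.print(':');        // a watch face would draw
      Serial.print(min);  Serial.print(':');        // these on the OLED instead
      Serial.println(sec);
      delay(1000);
    }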

“To make selections, there will be a surface mount three-way navigation switch in the top right that you can flick up, down and push in to make selections and scroll through faces and apps,” he added.

Interested in learning more? You can check out John’s completed OLED Watch (v 4.2) here and the Walltech Smart Watch v6.0 introductory blog post here.

Atmel accelerates automotive design (Part 2)

Yesterday, Bits & Pieces took a closer look at how Atmel is helping accelerate automotive design by closely collaborating with Vector Informatik to fully support our 32-bit AUTOSAR-compliant devices.

Essentially, AUTOSAR provides an abstraction layer between hardware and application – allowing hardware-independent development and testing of the application software. It also permits the reuse of a validated application from previous designs for a new one.

“And that is precisely why Atmel has developed a microcontroller (MCU) abstraction layer (MCAL) for its 32-bit AVR automotive family devices,” Atmel engineering rep Eric Tinlot told Bits & Pieces.


“These MCAL modules and Vector’s LIN/CAN communication layers are integrated into Vector’s complete MICROSAR environment (including the OS, real-time environment, diagnostics, etc.). Using Vector’s DaVinci, Atmel has also developed a complete set of graphical user interfaces (GUIs) for each MCAL module to help users configure all the features needed in the application.”

According to Tinlot, all MCAL modules have to be configured using their respective GUI screens. The user generates the required configuration files (.h and .c files) with a single click of the ‘generate’ toolbar icon (green triangle) at the top. These configuration files, the MCAL module, and the MICROSAR package can be compiled with any AUTOSAR application onto a 32-bit AVR automotive device to design an AUTOSAR-compliant ECU node.
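As a rough illustration of what that buys the application developer (the symbolic channel handle below is the kind of name the GUI step generates; this particular one is invented), application code calls the same standardized driver API on any AUTOSAR MCAL:

    /* Illustrative AUTOSAR-style application code using the standardized
     * DIO driver API. The symbolic channel handle would normally come from
     * the generated configuration files; this one is invented. */
    #include "Dio.h"   /* standardized AUTOSAR DIO driver header */

    #define DioConf_DioChannel_StatusLed  ((Dio_ChannelType)0x05u)  /* generated */

    void StatusLed_On(void)
    {
        /* Dio_WriteChannel is identical on every AUTOSAR MCAL, which is
         * what keeps the application hardware-independent. */
        Dio_WriteChannel(DioConf_DioChannel_StatusLed, STD_HIGH);
    }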

The following list details the specific MCALs and GUIs developed by Atmel, with the CAN and LIN drivers provided by Vector Informatik.

  • General-purpose timer driver
  • Watchdog driver
  • Microcontroller unit driver
  • Flash drivers
  • EEPROM drivers
  • Serial protocol interface drivers
  • ICU drivers
  • Pulse width modulation (PWM) drivers
  • Analog-digital (A/D) converter drivers
  • Digital input/output drivers
  • Port drivers

“Simply put, the complete AUTOSAR solution, available via Vector Informatik, allows designers to develop their own ECU firmware using an Atmel 32-bit automotive device,” Tinlot added. “Networking communication via LIN or CAN buses is also available. Meaning, the included firmware fulfills AUTOSAR spec requirements.”