Tag Archives: Media Lab

SensorTape is a sensor network in the form factor of masking tape

Sensor deployment made as simple as cutting and attaching strips of tape.

Developed by students from MIT Media Lab’s Responsive Environments group, SensorTape is a sensor network in the form factor of masking tape. Inspired by the emergence of modular platforms throughout the Maker community, it consists of interconnected and programmable sensor nodes on a flexible electronics substrate. In other words, it’s pretty much a roll of circuits that can be cut, rejoined and affixed to various surfaces.


And what’s even cooler is that it’s a completely self-aware network, capable of feeling itself bend and twist. It can automatically determine the location of each of its nodes and the length of the tape, as it is cut and reattached.

As neighboring nodes talk to one another, they can combine their readings to assemble an accurate, real-time 3D model of the tape's current shape. Tapes with different sensors can also be connected for mixed functionality.
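The team's paper describes full 3D reconstruction from each node's IMU data. As a rough illustration of the idea only, here is a minimal 2D sketch, not the team's actual code, that chains per-node bend angles into node positions (the function names and the 3.5 cm segment length are assumptions):

```python
import math

def reconstruct_positions(bend_angles_deg, segment_len=0.035):
    """Chain per-node bend angles (2D simplification) into node positions.

    Each tape segment contributes a fixed length; each node reports how
    much the tape bends at its location (e.g. derived from differences in
    accelerometer tilt between neighbors). Summing the bends gives each
    segment's heading; walking segment by segment gives the shape.
    """
    x, y, heading = 0.0, 0.0, 0.0
    positions = [(x, y)]
    for angle in bend_angles_deg:
        heading += math.radians(angle)      # accumulate bend into heading
        x += segment_len * math.cos(heading)
        y += segment_len * math.sin(heading)
        positions.append((round(x, 4), round(y, 4)))
    return positions

# A flat tape of four segments lies along the x-axis;
# four 90-degree bends close the tape back onto its start point.
flat = reconstruct_positions([0, 0, 0, 0])
square = reconstruct_positions([90, 90, 90, 90])
```

Extending this to 3D amounts to replacing the scalar heading with a per-segment rotation (e.g. a quaternion) accumulated from both bend and twist readings.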

SensorTape’s architecture is made up of daisy-chained slave nodes and a master. The master coordinates communication and shuttles data to a computer, while each slave node features an ATmega328P, three on-board sensors (an ambient light sensor, an accelerometer and a time-of-flight distance sensor), two voltage regulators and LEDs. The master contains the same AVR MCU, as well as a serial-to-USB converter and a Bluetooth transceiver. The tape can be clipped to the master without soldering using a flexible circuit connector.


In terms of communication protocol, the team chose a combination of I²C and peer-to-peer serial. Whereas I²C handles most of the data transmission from the master to the slaves, addresses are ‘assigned dynamically’ over peer-to-peer serial. This enables a 100 kHz I²C transfer rate, with a protocol initialization sequence that accommodates chains of various lengths, up to 128 nodes long. (For testing, the MIT Media Lab crew developed a 2.3-meter prototype with 66 sensor nodes.)
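The exact initialization sequence is detailed in the team's paper; in principle, a daisy chain can enumerate itself by having each node take the next free address and pass the count downstream. A hedged Python simulation of that idea, with hypothetical class and method names:

```python
class Node:
    """One slave node in the daisy chain (simulated, not firmware)."""
    def __init__(self):
        self.address = None
        self.downstream = None  # next Node in the chain, or None at the end

    def enumerate(self, next_free_address):
        """Claim the next free I2C address, then enumerate downstream.

        Returns the highest address used, which equals the chain length
        when enumeration starts at address 1.
        """
        self.address = next_free_address
        if self.downstream is not None:
            return self.downstream.enumerate(next_free_address + 1)
        return next_free_address

def build_chain(n):
    """Wire up n nodes head-to-tail, mimicking a freshly cut tape."""
    nodes = [Node() for _ in range(n)]
    for a, b in zip(nodes, nodes[1:]):
        a.downstream = b
    return nodes

nodes = build_chain(5)
chain_length = nodes[0].enumerate(1)  # master kicks off enumeration
```

Because the count propagates hop by hop, the same sequence works for any chain length after the tape is cut or rejoined, which is presumably why per-hop serial (rather than the shared I²C bus) is used for this step.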

Aside from its hardware, SensorTape has black lines that indicate where it’s okay to cut and break the circuits with a pair of scissors. As you can see in the image above, cuts can be made either straight across or on a diagonal, which allows you to piece the tape together into 2D shapes, just as you would when mitering a picture frame.

Although still in its infancy, sample use cases of SensorTape include everything from posture-monitoring wearables to inventory tracking to home activity sensing. What’s more, the team has created an intuitive graphical interface for programming the futuristic tape, and it’s all Arduino-friendly so Makers will surely love getting their hands on it and letting their imaginations run wild. You can read all about the project in the MIT group’s paper, as well as on Fast Company.

This app lets you program objects by drawing lines

Like something out of science fiction, the Reality Editor lets you connect and manipulate the functionality of physical objects. 

Back in 2013, a team from MIT Media Lab’s Fluid Interfaces Group developed a method of creating Spatially-Aware Embodied Manipulation of Actuated Objects through augmented reality. The project was an effort to extend a user’s touchscreen interactions into the real world. Earlier this year, the crew released libraries and examples that could also allow others to do the same. With Open Hybrid, you could directly map a digital interface onto a physical thing and program hybrid objects using Arduino and other popular hardware/software environments.


Now, the researchers have taken the project to a whole new level. The Reality Editor is a futuristic tool that empowers you to connect and manipulate the functionality of any gizmo or gadget. Just point your smartphone camera at an item and an overlay with its invisible capabilities will appear on the screen for you to edit. Drag a virtual line from one to another and form a new relationship between the two.

Although the ultimate goal of the IoT is to make the ordinary objects in our lives smart, most things are still pretty ‘dumb.’ They don’t communicate with one another, and most are only capable of one function. Take a smart bulb, for instance: it can dim and brighten, but it can’t change the channel on your TV. This is where the Fluid Interfaces Group’s app comes in.


The Reality Editor lets you define simple actions, change the functionality of objects around you, and remix how things work and interact. Essentially, the app gives you the power to turn something that is virtual into something that is physical and vice versa. The best part? It’s as easy as connecting dots.
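Conceptually, each ‘drawn line’ is just a mapping from one object's output to another object's input. A toy Python sketch of that idea (all class, method and key names here are hypothetical illustrations, not the Open Hybrid API):

```python
class HybridObject:
    """Toy model of a connected object exposing named inputs/outputs."""
    def __init__(self, name):
        self.name = name
        self.state = {}   # current values of this object's I/O points
        self.links = []   # (source_key, target_object, target_key) tuples

    def link(self, source_key, target, target_key):
        """The 'drawn line': route one of our outputs into another's input."""
        self.links.append((source_key, target, target_key))

    def emit(self, key, value):
        """Publish an output value and propagate it across matching links."""
        self.state[key] = value
        for src, target, dst in self.links:
            if src == key:
                target.state[dst] = value

# Point the camera at a bedside switch and a lamp, then draw one line:
bedside = HybridObject("bedside-switch")
lamp = HybridObject("ceiling-lamp")
bedside.link("pressed", lamp, "power")
bedside.emit("pressed", True)  # pressing the switch now drives the lamp
```

The appeal of the model is that neither object needs to know about the other in advance; the link is created (and can be erased) entirely from the editing layer.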

“That light switch in your bedroom you always need to stand up in order to turn off — just point the Reality Editor at an object next to your bed and draw a line to the light. You have just customized your home to serve your convenience,” the team writes. “From now on you will use your spatial coordination and muscle memory to easily operate the object next to your bed as a tool for controlling the light.”


What’s more, you can ‘borrow’ functionalities from one object and use them on another. For example, you could employ your TV’s sleep timer as a way to switch your lights on and off, or even have the air conditioning at your house adjust the temperature when you hop into your car to head home. The possibilities are endless.

At the moment, the Reality Editor utilizes QR-like codes to identify smart devices. It works by loading an HTML webpage and overlaying a particular object’s functionalities onto the smartphone screen so you can program it. However, it will soon be able to recognize objects directly as they are viewed with the app.

The Reality Editor can be downloaded and used along with the group’s open source platform Open Hybrid to build a new generation of Hybrid Objects. This isn’t solely geared towards designers and engineers, but Makers and other high-tech enthusiasts as well. Safe to say, a Minority Report-like future is quickly approaching.


Hands full? KickSoul lets you answer calls with your feet

KickSoul is an embedded insole that maps natural foot movements into inputs for digital devices.

Have you ever tried to answer a call, respond to a text or look something up on your phone when your hands are full? Thanks to a team of MIT Media Lab researchers, you can try using your feet instead. Introducing KickSoul — an insole that simply slips inside of your shoe and enables you to wirelessly control your mobile devices and appliances with a flick of your foot.


“Most of [today’s] devices have visual interfaces that rely on hand gestures and touch interaction, as they are easy and natural for us. However, there are occasions when our hands are busy or it is not acceptable to make use of them, preventing us from interacting with our devices,” the group led by Xavier Benavides writes.

To bring their idea to life, the Media Lab crew sewed several electronic components onto a spongy insole. These included an accelerometer and a gyroscope to track motion, an ATmega328 to help collect data and a Bluetooth module for wireless communication. The six-axis IMU registers the movements and transmits them to the MCU. From there, the information is analyzed by a special algorithm and relayed to an accompanying mobile app.


The system supports two types of interactions: pushing an imaginary object away with your foot and pulling one closer. The idea is that, with just these two simple foot movements, you can scroll, zoom in and out on a document, turn on a light, accept or reject a phone call, and save or delete a file. Whenever either gesture is detected, KickSoul will search for the nearest compatible device and determine which one the user wants to operate.
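The team's actual classification algorithm isn't described in detail here. As an illustration only, a naive stand-in might classify push vs. pull from the sign of the dominant acceleration peak along the foot's forward axis (the function name, axis convention and threshold are all assumptions):

```python
def classify_kick(forward_accel, threshold=0.5):
    """Classify a foot gesture from forward-axis accelerometer samples (m/s^2).

    A plausible simplification of such a classifier: the dominant
    acceleration peak points away from the body for a 'push' and toward
    it for a 'pull'; peaks below the threshold are treated as noise.
    """
    peak = max(forward_accel, key=abs, default=0.0)
    if abs(peak) < threshold:
        return "none"          # incidental foot motion, ignore
    return "push" if peak > 0 else "pull"
```

A real implementation would also need to segment gestures in time and reject walking, which is presumably where the gyroscope data and the "special algorithm" mentioned above come in.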

“Most of these interactions are short in time and not very complex. As a consequence, feet become a suitable substitute or complement to hands, as they tend to be free when our hands are not,” the researchers conclude.

Intrigued? Check out the project’s official paper here, and see it in action below.

Will Makers change Shenzhen?

Writing for the EE Times, Junko Yoshida says local culture in Shenzhen is rapidly changing, with a growing number of hi-tech workers reportedly joining the burgeoning Maker Movement (chuang ke).

Indeed, RPTechWorks founder Yang Yango told Yoshida that “labor intensive” Shenzhen will eventually become a city known for fast prototyping with “shortened development” cycles. 

Qifeng Yan, ex-director of the Nokia Research Center in Shenzhen and currently director and chief researcher at Media Lab (Shenzhen) of Hunan University, expressed similar sentiments in an interview with Yoshida.

However, Yan noted that many individuals in Shenzhen lack free time and space. As such, the Maker Movement in Shenzhen (and China as a whole) is evolving into something quite distinct. 

More specifically, it is intertwined with the existing electronics ecosystem in Shenzhen, as Makers help local companies open DIY workshops, kick off fresh projects and even open new startups.

“The electronics market on Huaqiang Road has always been a destination for every EE. But its importance is increasing for the rest of us, with the Maker Movement catching on,” Yoshida concluded.

As we’ve previously discussed on Bits & Pieces, hardware development is becoming a more agile process with the aid of prototyping tools like Atmel-powered RepRap and Arduino boards – both of which are helping to facilitate innovation across the world and particularly in China.

“MakerSpaces will likely enable a new wave of tech startups in China as in the U.S,” Seeed Studio founder Eric Pan told Bits & Pieces during a recent interview. “To be sure, Makers working with their peers are now able to more easily realize their goals, while bringing products to market with new platforms such as e-commerce sites and crowdfunding.”

Interested in learning more about China and the Maker Movement? Previous Bits & Pieces articles on the subject are available here. Atmel will also be at Maker Faire Shenzhen 2014 in April, so be sure to stop by and see us if you are in the area!