Tag Archives: EEG

Clara is a smart lamp that helps you stay focused


Working on a project? Cramming for an exam? This brain-sensing, environment-augmenting lamp uses EEG technology to tell how focused you are and block out distractions.


We’ve all been there: It’s late at night and you’re cramming for an exam when suddenly you’re interrupted by the simplest thing. How cool would it be to have a desktop accessory that could give you a kick in the right direction and increase your intensity as you try to finish your studying? Thanks to a group of Makers from the School of Visual Arts, that will soon be a reality.

The brainchild of developers Mejía Cobo, Belen Tenorio, and Josh Sucher, Clara is a brain-sensing lamp that employs EEG technology to tell how focused you are on the task at hand. Embedded with a speaker and LEDs, the scene-augmenting device is capable of responding to changes in brainwaves, reacting to your level of concentration by turning up the ambient music and shifting the light levels.

To bring this idea to fruition, the team used a combination of an Arduino Uno (ATmega328), an MP3 shield, several Adafruit NeoPixels, a SparkFun Bluetooth modem and a NeuroSky MindWave Mobile EEG headset to wirelessly measure your “attention” and map it to the lamp’s color temperature, thereby subtly altering your environment.

As you begin homing in on a specific idea, the light will become crisper and cooler as the volume of the ambient noise emitted from the speaker slowly rises. This helps to enhance your ninja-like focus and block out other distractions.

“The basic structure of the Arduino code is straightforward. The NeoPixel strip is instantiated, then the Music Maker shield is instantiated, then we take advantage of interrupts to listen for, receive and act on Bluetooth serial data while the music is playing,” its creators reveal. “When the MindWave detects ‘activity’ (a number from 0-100 generated via some proprietary algorithm on the Neurosky chip), we initiate the ‘fade’ of the music and the light.”
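
Clara’s full sources aren’t reprinted here, but a minimal Arduino sketch of that structure might look something like the snippet below. The pin assignments, track name, volume/color mappings and the one-byte “attention” framing are illustrative assumptions; the real project parses NeuroSky’s ThinkGear packets rather than the stub shown.

    #include <SPI.h>
    #include <SD.h>
    #include <Adafruit_VS1053.h>
    #include <Adafruit_NeoPixel.h>
    #include <SoftwareSerial.h>

    // Adafruit Music Maker shield wiring (the library's defaults for an Uno)
    #define SHIELD_RESET  -1
    #define SHIELD_CS      7
    #define SHIELD_DCS     6
    #define CARDCS         4
    #define DREQ           3

    #define PIXEL_PIN      8   // assumed NeoPixel data pin
    #define PIXEL_COUNT   16   // assumed strip length

    Adafruit_VS1053_FilePlayer player(SHIELD_RESET, SHIELD_CS, SHIELD_DCS, DREQ, CARDCS);
    Adafruit_NeoPixel strip(PIXEL_COUNT, PIXEL_PIN, NEO_GRB + NEO_KHZ800);
    SoftwareSerial bt(2, 5);   // RX, TX to the Bluetooth modem (assumed pins)

    void setup() {
      strip.begin();                   // instantiate the NeoPixel strip
      strip.show();                    // all pixels off
      player.begin();                  // instantiate the Music Maker shield
      SD.begin(CARDCS);
      bt.begin(57600);                 // the MindWave Mobile's default baud rate
      // Feed the VS1053 from the DREQ interrupt so loop() stays free to
      // listen for, receive and act on Bluetooth data while music plays
      player.useInterrupt(VS1053_FILEPLAYER_PIN_INT);
      player.startPlayingFile("/ambient.mp3");   // hypothetical track name
    }

    void loop() {
      int attention = readAttention();           // 0-100 "activity" value, or -1
      if (attention >= 0) {
        // "Fade" the music and the light toward the level of concentration.
        // VS1053 volume is attenuation, so a smaller number means louder.
        uint8_t vol = map(attention, 0, 100, 80, 20);
        player.setVolume(vol, vol);
        uint8_t cool = map(attention, 0, 100, 0, 120);   // crisper, cooler light
        for (int i = 0; i < PIXEL_COUNT; i++)
          strip.setPixelColor(i, strip.Color(255 - cool, 230, 135 + cool));
        strip.show();
      }
    }

    // Stub: a real sketch would parse the MindWave's ThinkGear packets here
    int readAttention() {
      return bt.available() ? bt.read() % 101 : -1;
    }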

Looking ahead, don’t be too surprised if you see Clara on Kickstarter in the coming months. Plus, the team hints that they may even migrate to an Arduino Mega (ATmega2560) for the next iteration. Until then, check out this rather unique project on its page here.

This lower-limb exoskeleton is controlled by staring at flickering LEDs


Scientists have developed a brain-computer interface for controlling a lower limb exoskeleton.


As recent experiments have shown, exoskeletons hold great promise in helping those who have lost the use of their legs to walk again. However, for those who are quadriplegic, diagnosed with a motor neuron disease or living with a spinal cord injury, hand control is not an option. To overcome this barrier, researchers at Korea University and TU Berlin have developed a brain-computer interface that can command a lower limb exoskeleton by decoding specific signals from within the user’s mind.

This is achieved by wearing an electroencephalogram (EEG) cap, which enables the user to move forwards, turn left and right, sit and stand simply by staring at one of five flickering LEDs, each representing a different action. Each of the lights flickers at a different frequency, and when the user focuses their attention on a specific LED, that frequency is reflected in the EEG readout. This signal is then identified and used to control the exoskeleton.

The exoskeleton control system consists of three parts: the exoskeleton itself, an ATmega128 MCU-powered visual stimulus generator and a signal processing unit. As the team notes, a PC receives EEG data from the wireless EEG interface, analyzes the frequency information, and provides the instructions to the robotic exoskeleton.
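
The firmware isn’t published with this article, but the stimulus side is simple enough to sketch: one timer tick toggles five LEDs, each at its own frequency. Below is a hypothetical AVR C snippet for an ATmega128 at 16 MHz, with the pins and frequencies chosen purely for illustration.

    #include <avr/io.h>
    #include <avr/interrupt.h>

    #define TICK_HZ 1000   // 1 kHz timebase from Timer1

    // One stimulus frequency per action (values assumed; integer rounding
    // makes these approximate, and a real stimulator needs exact timing)
    static const uint16_t freq_hz[5] = {9, 11, 13, 15, 17};
    static uint16_t half_period[5];
    static uint16_t ticks[5];

    ISR(TIMER1_COMPA_vect) {
      for (uint8_t i = 0; i < 5; i++) {
        if (++ticks[i] >= half_period[i]) {
          ticks[i] = 0;
          PORTB ^= (1 << i);   // toggle LED i, flickering it at freq_hz[i]
        }
      }
    }

    int main(void) {
      DDRB |= 0x1F;            // PB0..PB4 drive the five LEDs
      for (uint8_t i = 0; i < 5; i++)
        half_period[i] = TICK_HZ / (2 * freq_hz[i]);

      // Timer1 in CTC mode: 16 MHz / 64 / 250 = 1 kHz compare interrupts
      TCCR1B = (1 << WGM12) | (1 << CS11) | (1 << CS10);
      OCR1A  = 249;
      TIMSK |= (1 << OCIE1A);  // the ATmega128 uses TIMSK rather than TIMSK1
      sei();

      for (;;) { }             // everything happens in the ISR
    }

On the decoding side, identifying which LED the user is watching then amounts to finding which of those five frequencies (or their harmonics) dominates the EEG’s frequency spectrum.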

This method is suitable even for those with no capacity for voluntary body control apart from eye movements, who otherwise would not be able to operate a standard exoskeleton. The researchers believe their system offers a much better signal-to-noise ratio, separating the brain’s control signals from the surrounding noise of ordinary brain activity for more accurate exoskeleton operation.

“Exoskeletons create lots of electrical ‘noise,’” explains Professor Klaus Müller, an author on the paper, which has been published in the Journal of Neural Engineering. “The EEG signal gets buried under all this noise — but our system is able to separate not only the EEG signal, but the frequency of the flickering LED within this signal.”

The control system could serve as a technically simple and feasible add-on to other devices, with EEG caps and hardware now emerging on the consumer market. According to the researchers, it only took volunteers a few minutes to get the hang of using the exoskeleton. Because of the flickering LEDs, participants were carefully screened and those suffering from epilepsy were excluded from the study. The team is now working to reduce the ‘visual fatigue’ associated with long-term use.

“We were driven to assist disabled people, and our study shows that this brain control interface can easily and intuitively control an exoskeleton system — despite the highly challenging artefacts from the exoskeleton itself,” Müller concludes.

Those wishing to learn more can read the entire paper here, or watch the brain-controlled exoskeleton in action below.

[Images: Korea University / TU Berlin]

This fiber optic dress is amplified by a wearer’s thoughts


This EEG-powered dress shines red when alert and green when relaxed.


Rain Ashford has been tinkering with EEG-enabled wearable devices for quite some time now. In fact, she is in the midst of wrapping up her doctoral thesis. As part of the process, the Maker has created a rather slick, interactive dress as a fun way to display engagement and moods in crowded situations, particularly those so noisy that hearing someone speak is virtually impossible.

The aptly named ThinkerBelle EEG Amplifying Dress uses a NeuroSky Mindwave Mobile EEG headset to collect brain information and relay that data to her garment to non-verbally communicate with those nearby. Ultimately, this leaves it up to observers to make their own interpretations from the brilliant spectacle.

“I created this dress in response to a subsection of feedback data from my field trials and focus groups, which investigated the functionality, aesthetics and user experience of wearables and in particular wearer and observer feedback on experiences with my EEG Visualising Pendant,” Ashford writes.

The dress was constructed from satin fabric and fiber optic filament woven into organza. The EEG headset collects and amplifies data in the form of two separate streams, attention and meditation, which are sent via Bluetooth and visualized on the top layer of the dress through a series of LEDs. The illumination is controlled by an Adafruit Pro Trinket (ATmega328): red light signifies attention, while green denotes a state of relaxation.

“The dress is constructed so the two streams of data light overlap and interweave. The fiber optic filament is repositionable allowing the wearer to make their own lighting arrangements and dress design,” she adds.

What’s more, the wearable features a variety of modes, one of which lets the user record and play back the data. This means someone can capture a combination of color and light on the dress, then replay it after taking off the EEG headpiece, enabling the wearer to appear concentrated or relaxed to those around them.
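
Ashford’s own code isn’t reproduced here, but the record-and-replay idea fits in a few lines of Pro Trinket (Arduino) code. In the rough sketch below, the pins, the two-byte [attention, meditation] framing, the buffer size and the sample period are all assumptions.

    #include <SoftwareSerial.h>

    #define RED_PIN     9      // assumed PWM pin for the "attention" (red) fibers
    #define GREEN_PIN  10      // assumed PWM pin for the "meditation" (green) fibers
    #define SAMPLES   128      // small log, to fit the ATmega328's RAM

    SoftwareSerial bt(4, 5);   // RX, TX to the Bluetooth module (assumed pins)

    uint8_t attLog[SAMPLES], medLog[SAMPLES];
    uint16_t head = 0;
    bool playback = false;     // would be toggled by a mode switch on the garment

    void show(uint8_t att, uint8_t med) {
      analogWrite(RED_PIN,   map(att, 0, 100, 0, 255));
      analogWrite(GREEN_PIN, map(med, 0, 100, 0, 255));
    }

    void setup() {
      pinMode(RED_PIN, OUTPUT);
      pinMode(GREEN_PIN, OUTPUT);
      bt.begin(57600);         // the MindWave Mobile's default baud rate
    }

    void loop() {
      if (!playback) {
        // Record mode: light the dress live and log each sample
        if (bt.available() >= 2) {   // hypothetical [attention, meditation] frame
          uint8_t att = bt.read(), med = bt.read();
          attLog[head] = att;
          medLog[head] = med;
          head = (head + 1) % SAMPLES;
          show(att, med);
        }
      } else {
        // Playback mode: replay the logged session with no headset on
        show(attLog[head], medLog[head]);
        head = (head + 1) % SAMPLES;
        delay(500);                  // assumed sample period
      }
    }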

“Why would someone want to do that? Think of this much like a lie detector test. Sometimes you want people to know how you feel, and other times you would rather keep your thoughts to yourself. So, in this case if you want to appear calm even though you are really agitated, you can just have the dress display a previous calm time period,” the Adafruit crew explains.

Pretty cool, right? Check the project out in its entirety on Ashford’s page here. Not for nothing, the blend of these two colors makes for one heck of a Christmas outfit!

Wearable cap lets amputees grasp objects with their mind


Researchers at the University of Houston have built a brain-machine interface to control prosthetic hands.


When it comes to brain-controlled interfaces, the field has come a long way since its earliest days of research at UCLA in the 1970s. Under a grant from the National Science Foundation, followed by a contract from DARPA, the papers published following that study marked the first appearance of the expression BCI in scientific literature. Fast forward nearly 40 years, and scientists are pursuing a wide range of possibilities, including enabling amputees to command robotic limbs with their minds.

That’s exactly what one team from the University of Houston has done. The researchers developed an algorithm that allowed a man to grasp a bottle and other objects with a prosthetic hand, powered merely by his thoughts. Instead of implants, this non-invasive method uses a wearable EEG cap that monitors brain activity externally through the scalp. During its demonstration, a 56-year-old man whose right hand had been amputated was able to successfully clutch selected items 80% of the time, including a water bottle, a small coin, a credit card and even a screwdriver.

While the ability to command prosthetics through brainwaves has been around for some time, earlier studies centered on either surgically implanted electrodes or myoelectric control, which relies upon electrical signals from muscles in the arm. Beyond demonstrating that prosthetic control is possible using non-invasive EEG, the researchers said the study offers a new understanding of the neuroscience of grasping and will be applicable to rehabilitation for other types of injuries, including stroke and spinal cord injury.
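
The Houston team’s published decoder is considerably more sophisticated than anything that fits here, but the general shape of noninvasive EEG decoding can be sketched as a linear model: a weighted sum of per-channel features compared against a threshold to produce a command. The channel count, weights and threshold below are illustrative assumptions, not values from the study.

    #include <array>
    #include <cstdio>
    #include <numeric>

    constexpr int kChannels = 8;   // assumed number of scalp electrodes

    // Decode one window of low-frequency EEG features into a grasp score:
    // the dot product of the features with calibrated weights, plus a bias.
    double decode(const std::array<double, kChannels>& features,
                  const std::array<double, kChannels>& weights, double bias) {
      return std::inner_product(features.begin(), features.end(),
                                weights.begin(), bias);
    }

    int main() {
      // Toy calibration: real weights would be fit from training trials
      std::array<double, kChannels> w{0.4, -0.1, 0.3, 0.2, -0.2, 0.1, 0.5, 0.0};
      std::array<double, kChannels> eeg{0.8, 0.1, 0.6, 0.4, 0.2, 0.3, 0.9, 0.5};

      double score = decode(eeg, w, -0.5);
      std::printf("%s\n", score > 0.5 ? "close hand" : "open hand");
    }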

Reportedly, the work marked the first time an amputee has been able to use EEG-based BMI control of a multi-fingered prosthetic hand to grab objects, and could potentially lead to the development of improved artificial limbs. Interested in learning more? The team’s findings were recently published in the journal Frontiers in Neuroscience. You can also check out the study’s official page here.

‘Telepathic’ communication achieved for first time

Alright, so maybe it’s not entirely “mental telepathy,” but an international group of researchers is reporting that they have successfully achieved brain-to-brain communication. The scientists, from the United States, France and Spain, leveraged several technologies, including computers and the Internet, to relay information between test subjects separated by approximately 5,000 miles, all without carrying out any invasive procedures on the subjects.

Words such as “hola” and “ciao” were telepathically transmitted from a location in India to a location in France using an Internet-connected electroencephalogram (EEG) and robot-assisted and image-guided transcranial magnetic stimulation (TMS) technologies. When one study participant merely thought of a greeting, the recipient thousands of miles away was aware of the thought occurring, according to the report published in PLOS One.

“We wanted to find out if one could communicate directly between two people by reading out the brain activity from one person and injecting brain activity into the second person, and do so across great physical distances by leveraging existing communication pathways,” revealed Alvaro Pascual-Leone, a Harvard Medical School neurology professor.

Generally speaking, previous studies on EEG-based brain-computer interaction (BCI) have used communication between a human brain and computer. In these studies, electrodes attached to a person’s scalp record electrical currents in the brain as a person realizes an action-thought, such as consciously thinking about moving the arm or leg. The computer then interprets that signal and translates it to a control output, such as a robot or wheelchair, the study explains.

However, in this new study, the research team, comprising Pascual-Leone, Giulio Ruffini and Carles Grau of Starlab Barcelona, Spain, and Michel Berg, leading a group from Axilum Robotics in Strasbourg, France, added a second human brain on the other end of the system. Four healthy participants, aged 28 to 50, took part in the study.

“One of the four subjects was assigned to the brain-computer interface (BCI) branch and was the sender of the words; the other three were assigned to the computer-brain interface (CBI) branch of the experiments and received the messages and had to understand them.”

By utilizing an electroencephalogram connected to the Internet and transcranial magnetic stimulation (TMS), in which electromagnetic induction is used to externally stimulate the brain, the researchers proved that it was possible to communicate information from one human brain to another. To facilitate this, a computer translated simple words into binary code, represented as a series of 1s and 0s. The message was then emailed from India to France and delivered via robot-assisted TMS to the receiver, who perceived flashes of light in their peripheral vision. The subjects receiving the message did not hear or see the words themselves, but were able to correctly report the flashes of light that corresponded to the message.
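
The report’s exact coding scheme isn’t detailed above, so the toy C++ snippet below only illustrates the packing and unpacking stage: the sender’s word becomes a bit stream (each 1 ultimately delivered as a phosphene flash), and the receiver reassembles the reported flashes into text. The 8-bit ASCII framing is an assumption made for illustration.

    #include <iostream>
    #include <string>
    #include <vector>

    // Sender side: translate each character into 8 bits (1 = flash, 0 = none)
    std::vector<bool> encode(const std::string& word) {
      std::vector<bool> bits;
      for (char c : word)
        for (int i = 7; i >= 0; i--)
          bits.push_back((c >> i) & 1);
      return bits;
    }

    // Receiver side: reassemble the reported flashes into characters
    std::string decode(const std::vector<bool>& bits) {
      std::string word;
      for (std::size_t i = 0; i + 8 <= bits.size(); i += 8) {
        char c = 0;
        for (int b = 0; b < 8; b++)
          c = static_cast<char>((c << 1) | bits[i + b]);
        word += c;
      }
      return word;
    }

    int main() {
      std::vector<bool> bits = encode("hola");  // one of the words from the study
      std::cout << decode(bits) << "\n";        // round-trips back to "hola"
    }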

This experiment suggests the possibility of supplementing or bypassing the traditional methods of language-based or motor-based communication, and could have a number of applications. “We hope that in the longer term this could radically change the way we communicate with each other,” the researchers concluded.

Over the past several years, there have been a number of studies surrounding brain-controlled activity, with researchers using these signals to control everything from drones to prosthetics. As we’ve explored on Bits & Pieces, Atmel chips (like the ATmega328) have been at the heart of several brain-computer interface (BCI) innovations. With the emergence of new technologies and a passionate Maker crowd, we can only imagine the endless possibilities the future holds.

Open Sauce

By Steve Castellotti

CTO, Puzzlebox

North Beach, San Francisco’s Italian neighborhood, is famous for the quality and wide variety of its many restaurants. From colorful marquees scattered up and down Columbus to the hushed, more dimly lit grottos hidden down side streets and back alleys, there is no lack of choice for the curious patron.

Imagine then, having chosen from all these options, you sit down and order your favorite dish. When the plate arrives the waiter places next to it a finely embossed card printed on thick stock. A closer examination reveals the complete recipe for your meal, including hand-written notations made by the chef. Tips for preparation and the rationale for selecting certain ingredients over others are cheerfully included.

Flipping the card over reveals a simple message:

“Thank you for dining with us this evening. Please accept this recipe with our regards. You may use it when cooking for friends and family, or just to practice your own culinary skills. You may even open your own restaurant and offer this very same dish. We only ask that you include this card with each meal served, and include any changes or improvements you make.”

Sharing the “Secret” Sauce

Having been raised in an Italian family myself, I can assure you that there is no more closely guarded secret than the recipe for our pasta gravy (the sauce). But I can’t help but wonder how such open sharing might affect the landscape of a place such as North Beach. If every chef were obliged to share their techniques and methods, surely each would learn from the others? Customers would benefit from this atmosphere of collaboration in terms of the taste and quality of their dinners.

These many restaurants, packed as tightly together as they are, would still be forced to compete on the dining experience itself. The service of their wait-staff, the ambience, and cost would count for everything.

For the majority of customers, knowledge of the recipe would simply be a novelty. In most cases they would still seek a professional chef to prepare it for them. But to the aspiring amateur, this information would contribute to their education. A new dish could be added to their repertoire.

An experienced restaurateur could no doubt correct me on any number of points as to why such a scenario would be a poor business model and never could or should be attempted. But just across town, throughout Silicon Valley and indeed across the globe, in the realm of technology, this exact model has been thriving for decades.

Open Source in the Software World

In the software world, developers have been sharing their source code (the recipe for the programs they write) under licenses similar to the one outlined above on a grand scale and to great success. The Internet itself was largely constructed using open platforms and tools. Mobile phones running Google’s Android operating system are now the most popular in the world, with complete source material available online. And in 2012 Red Hat became the first open source company to achieve a billion dollars in revenue, with customers from IBM to Disney and Pixar among their roster.

The benefits are many. Developers can leverage each other’s work, both for knowledge and to save time. If you want to build a new web site, there’s no need to write the web server or common routines such as user management from scratch. You can take open versions and start from there. Even better, if you have questions or run into trouble, more likely than not someone else has, too, and the answer is only a search away. Most importantly, if the problem you found indicates a flaw in the software (a bug), then a capable coder is empowered to examine the source and fix it himself or herself. And the result can be shared with the entire community.

There are parallels here to several fields. Similar principles form the basis of the scientific method. Without the sharing of procedures and data, independent verification of results would be impossible. And many discoveries result from iterating on proven techniques. A burgeoning do-it-yourself community, a veritable Maker Movement, has grown around magazines like Make and websites such as Instructables.com. New inventions and modifications to popular products are often documented in meticulous detail, permitting even casual hardware hackers to follow along. Electronics kits and prototyping boards from companies like Arduino are based on Atmel microcontrollers plus open circuit designs, and are often used to power such projects.

Puzzlebox Brain Controlled Helicopter in Flight

Brain-Controlled Helicopter

Recently, our company, Puzzlebox, released the Orbit, a brain-controlled helicopter. The user begins by setting a display panel to the desired level of concentration and/or mental relaxation they wish to achieve. A mobile device or our custom Pyramid peripheral processes data collected by a NeuroSky EEG headset. When that target is detected in the user’s brainwaves, flight commands are issued to the Orbit using infrared light. One can practice maintaining focus or clarity of thought using visual and physical feedback.
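
Puzzlebox publishes the Orbit’s actual firmware and software online; purely to sketch the loop just described, a stripped-down Arduino version might read as follows, where the target threshold, the one-byte serial framing and the NEC-style IR code are stand-ins rather than the Orbit’s real protocol.

    #include <IRremote.h>

    #define TARGET_ATTENTION 70          // assumed concentration target (0-100)
    #define THROTTLE_CODE 0x10A05FUL     // hypothetical IR command word

    IRsend irsend;   // classic IRremote 2.x API; transmits on pin 3 of an Uno

    void setup() {
      Serial.begin(57600);               // NeuroSky headset's default baud rate
    }

    void loop() {
      if (Serial.available()) {
        int attention = Serial.read();   // simplified: one attention byte per reading
        // Only issue flight commands while the user holds the target focus level
        if (attention >= TARGET_ATTENTION) {
          irsend.sendNEC(THROTTLE_CODE, 32);
        }
        delay(100);
      }
    }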

Puzzlebox Brain-Controlled Helicopter with Atmel AVR

Beyond novelty, however, lies the true purpose of the Puzzlebox Orbit. All source code, hardware designs, schematics, and 3D models are published freely online. Step-by-step guides for hacking the software and electronics are included. Methods for decoding infrared signals and extending mechanisms to operate additional toys and devices are shared. Creative modification is encouraged. The goal is to promote the product as a teaching aid for middle and high school science classes and in university-level programming and electrical engineering courses.

Puzzlebox in the classroom: early adoption of technology for education

This business model is itself a bit of an experiment, much like the restaurant described above. There is little preventing a competitor from producing a knock-off and leveraging our own recipes to do it. They might even open their doors just across the street from ours. We’ll need to work hard to keep our customers coming back for seconds. But so long as everyone abides by the rules, openly publishing any modifications or improvements made on our recipe, we’re not afraid to share the secrets of our sauce. We only ask that they include the original material with each dish they serve, along with any changes or improvements made along the way. We’re willing to compete on cost and dining experience. In this way we hope to improve the quality and flavor for everyone.

Puzzlebox with Arduino and Atmel AVR

Puzzlebox Software IDE Interface

Openness and The Internet of Things

Today, communities such as Kickstarter, tapping into the power of openness and crowdsourcing, are fueling a great deal of technological innovation. The next era for enterprise revolves around the Internet of Things (#IoT), machine-to-machine (#M2M) communications and even the Industrial Internet (#IndustrialInternet).

One strong proponent of this kind of innovation, Chris Anderson, is renowned for having his fingerprints and vision on trends as they bloom into movements. Anderson is committed to, and energized by, this Make-infused world. His latest book, “Makers: The New Industrial Revolution,” eloquently outlines the “right now” moment for makers: “hardware is the new software,” putting us on the brink of the next age of the Internet, where devices and machines become connected. Cloud services, agile apps and embedded hardware (systems-on-chips, microcontrollers and smart devices) are converging, paving the way for the next generation of integrated products across the fabric of devices.

“The real revolution here is not in the creation of the technology, but the democratization of the technology. It’s when you basically give it to a huge expanded group of people who come up with new applications, and you harness the ideas and the creativity and the energy of everybody. That’s what really makes a revolution.

…What we’re seeing here with the third industrial revolution is the combination of the two [technology and manufacturing]. It’s the computer meets manufacturing, and it’s at everybody’s desktop.”

Excerpt from Chris Anderson’s “Makers: The New Industrial Revolution”

With that said, we enter the next age, where hardware is the new software.