Category Archives: Arduino

Watch a LEGO band cover Daft Punk’s ‘Da Funk’


Billed as “the world’s first robotic LEGO band,” each member of Toa Mata is made of Bionicle pieces and powered by Arduino.


Last year at this time, Italian sound artist Giuseppe Acito caught our attention with his innovative take on Depeche Mode’s anthemic 1983 single “Everything Counts.” What made it so different, you ask? The rearranged tune wasn’t performed by him, but instead by his entirely LEGO-based, ATmega328-powered band that he calls Toa Mata.

Billed as the world’s first LEGO robotic group, the Toa Mata Band is controlled by an Arduino Uno hooked up to a MIDI sequencer. For his latest project, Acito wired the Bionicle bunch to several servos, each driven by the Arduino.

With a little programming via MIDI, the band was able to play Daft Punk’s hit song “Da Funk” using a range of instruments and synthesizers including Fender Jazz Bass, Ableton Push/Live, Coron Drum, Korg DS10 synth, Finger BassLine, Boss HC-2, Moog Animoog, and a Nintendo DS.
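Acito hasn’t published his code, but the control flow he describes (a MIDI note-on arrives and the Arduino fires the matching robot’s servo) can be sketched in Python. The note-to-servo mapping and message format here are purely illustrative assumptions:

```python
# Hypothetical note-to-servo table: which LEGO drummer answers which MIDI note.
NOTE_TO_SERVO = {36: 0, 38: 1, 42: 2, 46: 3}  # e.g. kick, snare, hats

def handle_midi(message):
    """Return (servo_index, strike_strength 0.0-1.0) for a note-on, else None."""
    status, note, velocity = message
    # Only react to note-on messages; note-on with velocity 0 means note-off.
    if status & 0xF0 != 0x90 or velocity == 0:
        return None
    servo = NOTE_TO_SERVO.get(note)
    if servo is None:
        return None
    return servo, velocity / 127
```

A sketch on the Uno itself would perform the same lookup on incoming serial MIDI bytes and convert the strike strength into a servo pulse.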

Pretty cool, right? Watch Acito’s Toa Mata Band recreate Daft Punk’s legendary track below! Meanwhile, you can browse some of his other work here.

This wearable fluid status sensor could lead to a new vital sign


A new wearable sensor from the University of Michigan will provide more accurate and continuous fluid status data streams.


A team of University of Michigan researchers has developed a wearable sensor that could one day provide doctors with a simple, portable and completely non-invasive way to measure fluid status — the volume of blood that’s traveling through a patient’s blood vessels at any given time.

This sensor could be the answer to an age-old problem that has perplexed physicians: how to precisely determine the right circulatory volume for an individual. Fluid status is a diagnostic measure much like heart rate or blood pressure. It can alert doctors when a cardiac patient has excess fluid that prevents their heart from pumping efficiently, or provide a more precise measure of how much waste fluid to filter out of a dialysis patient’s blood. Additionally, it can tell medical staff how much fluid to give a trauma patient who has lost blood or a septic patient with an overwhelming infection.

At the moment, though, getting an accurate measure of fluid status requires an ultrasound or the insertion of a specialized catheter that measures the pressure of blood flowing through a blood vessel. Both tests are expensive and complex, and must be administered in a hospital by an expert. The University of Michigan’s wearable sensor could change that by making measuring fluid status as simple as strapping a smartphone-sized device to a patient’s arm or leg and asking them to take a deep breath. And because it can be worn for extended periods of time, the unit could provide doctors and caregivers with an unprecedented amount of real-time data about fluid status.

The device uses a process called Dynamic Respiratory Impedance Volume Evaluation, also known as DRIVE, to measure the changes in “bioimpedance,” or electrical conductivity, of the wearer’s limb as they breathe. Blood is an excellent conductor of electricity, so a patient with more blood will have greater conductivity. It’s quite similar to the ultrasound method of measuring fluid status, which directly captures the changes in the vena cava, the body’s largest vein. But instead of using the vein size to calculate fluid status, the new device gets the same information by measuring bioimpedance. While they may not be the first to use this approach, the team is the first to incorporate fluid status measurement into a wearable gadget.

“You can absolutely, with DRIVE, track how much circulating volume someone has by taking this new vital sign and combining it with the treatment outcomes we expect. We can use it as a new way of homing in on where we want a patient to be and where they are currently,” says Barry Belmont, a biomedical engineering doctoral student at the University of Michigan.

According to Belmont, the new sensor is easy to use and requires minimal expertise, making it an ideal option for the intensive care unit, a small clinic, an ambulance, an accident scene or even the battlefield.

What’s more, the researchers say their technology could effectively make fluid status another vital sign. Current measurements like heart rate and blood pressure are diagnostic staples that have been in place for decades or more. However, these methods can’t capture the amount of blood flowing through a patient’s blood vessels, a gap that commonly affects patients experiencing trauma, undergoing dialysis or fighting sepsis.

“This could turn fluid status into a routine diagnostic tool, the way we measure heart rate and blood pressure today,” reveals Kevin Ward, executive director of the UM Center for Integrative Research in Critical Care (MCIRCC). “It has the potential to improve care and lower costs for millions of patients, and I think it’s a great example of how collaboration between fields like engineering and medicine can have a direct benefit on the lives of patients.”

The team has been testing a benchtop version of the sensor, built from off-the-shelf components, for more than a year. At the heart of the wearable itself lies an ATmega1280 MCU, while an Arduino Mega was employed for much of the benchtop validation process. The system works by sending a small amount of electricity around the limb. As it moves through the limb, the current travels faster or slower based on the amount of blood volume. This also enables the team to count the number of respirations and gauge how deeply a wearer is breathing.
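In rough terms, the measurement reduces to Ohm’s law: inject a known small current, measure the voltage across the limb, and watch the resulting impedance trace rise and fall with each breath. A toy model of that respiration-counting idea (the numbers and threshold are invented for illustration, not from the Michigan team’s device):

```python
def impedance(v_measured, i_injected):
    """Ohm's law: impedance of the limb segment from measured V and known I."""
    return v_measured / i_injected

def count_breaths(z_samples, threshold):
    """Count breaths as rising crossings of a threshold in an impedance trace."""
    breaths = 0
    above = z_samples[0] > threshold
    for z in z_samples[1:]:
        if z > threshold and not above:
            breaths += 1  # trace just rose through the threshold: one breath
        above = z > threshold
    return breaths
```

A real device would band-pass filter the trace and adapt the threshold, but the principle of extracting a respiration count from the impedance signal is the same.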

A real-time stream of fluid status data could even help doctors provide better treatment to patients who need additional fluid, like sepsis patients. The researchers predict their current round of testing will continue through the end of this year. If the trials are successful, the device will go to the FDA for approval.

“We’ve gone from something that’s fairly large with a computer and a tabletop to something that resembles an iPod Nano that you can wear on your arm,” Ward explains.

Intrigued? Head over to the University of Michigan’s official page to learn more, or listen to a more elaborate overview of the project in the video below!

Parse for IoT launches four new SDKs


Parse for IoT has expanded its SDK lineup with four new kits built with Atmel and other industry leaders.


The Internet of Things is one of the most exciting new platforms for app development, especially as more and more people interact with connected devices every day. But it also poses a host of challenges for developers, as they must wrestle with the complex task of maintaining a backend with a whole new set of constraints. Many IoT devices also need to be personalized and paired with a mobile companion app. Cognizant of this, the Parse team is striving to make it simpler.

At F8 this year, Parse for IoT was announced — an official new line of SDKs for connected devices, starting with an SDK targeted for the Arduino Yún (ATmega32U4). Now, Parse has shared that they are expanding their lineup with four new SDKs built with Atmel, Broadcom, Intel and TI. This will make it easier than ever to use Parse with more types of hardware and a broader range of connected devices. For example, you can build an app for the Atmel | SMART SAM D21 and WINC1500 — and connect it to the Parse cloud in minutes, with nothing more than a few lines of code.

“We’ve been excited to see the creative and innovative things our developer community has built since we first launched Parse for IoT at F8. Already, hundreds of apps for connected devices have been created with the new SDKs,” explains Parse software engineer Damian Kowalewski. “Our tools have been used to build exciting and diverse products like a farm-to-table growing system that lets farmers remotely control their equipment with an app (Freight Farms); a smart wireless HiFi system that syncs music, lighting and more (Musaic); and even a smart BBQ smoker that can sense when meat is perfectly done (Trignis). Here at Parse, we had fun building a connected car and a one-click order button. And we’ve heard that our SDKs are even being used as teaching tools in several college courses.”

As for what’s ahead, that lies in the hands and minds of Makers. From a garage hacker’s weekend project to a production-ready connected product manufactured at scale — Parse can power them all. Ready to get started? You can download the new SDKs and access QuickStart guides here.

AMQUMO is a Xively ambient quality monitor


Based on an ATmega328, this monitor logs ambient noise, temperature, humidity and brightness data on Xively.


Created by Davide Gironi, AMQUMO is an indoor ambient quality monitor powered by the versatile ATmega328. The DIY device works by logging the data of four environment parameters on the Xively platform: ambient noise, temperature, humidity and brightness. This information is displayed through four bi-color LEDs, labeled with an N, T, H and B, respectively.

The device is built on the Xively Logger ATmega328 Library, and Gironi used a web-based interface to set up the network parameters and the Xively tokens. The network can be configured using a static IP, gateway and netmask, or via DHCP.

Aside from the ATmega328 at its core, AMQUMO is equipped with an ENC28J60 Ethernet controller to handle communication, a DHT22 sensor to measure temperature and humidity, an analog noise sensor with an electret microphone and op-amp to monitor ambient noise, and a BH1750 board to detect brightness. Ambient noise and brightness are sampled twice every second to provide instant LED feedback, while humidity and temperature are sampled at a slower rate, with ambient levels computed and posted to Xively each minute.
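The two sampling rates described above amount to a simple scheduler: fast channels update the LEDs on every loop tick, while slow channels accumulate readings and post an average once a minute. Here is a toy model of that pattern (the scheduling details are assumptions for illustration, not Gironi’s actual firmware):

```python
TICKS_PER_MINUTE = 120  # main loop ticks at 2 Hz, the LED-feedback rate

def run(samples):
    """samples: one (noise, temp) reading per 0.5 s tick.
    Returns (led_updates, posted): instant noise readings shown on the
    LEDs every tick, and per-minute temperature averages posted to Xively."""
    led_updates, posted, window = [], [], []
    for i, (noise, temp) in enumerate(samples):
        led_updates.append(noise)  # fast channel: immediate feedback
        window.append(temp)        # slow channel: accumulate for averaging
        if (i + 1) % TICKS_PER_MINUTE == 0:
            posted.append(sum(window) / len(window))  # one datapoint a minute
            window = []
    return led_updates, posted
```

Averaging the slow channels smooths out sensor noise and keeps the device well under Xively’s API rate limits.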

“The PCB is quite simple, it’s just a bridge board for a low cost Arduino Mini board and all the sensors board. The main board and all [of the] sensors can be, of course, designed as a single board,” Gironi notes. “The temperature and humidity sensor need to be exposed outside the main electronics board, because both the ENC28J60 chip and voltage regulator heat up to almost 40°C. And to solve this issue, a step down switching regulator should be used.”

Interested? Check out the AMQUMO’s original page here.

Creating the world’s first Android autonomous vehicle


One team of students turned an RC car into a self-driving vehicle capable of following street lanes, parking and overcoming obstacles.


A future full of driverless cars is just around the corner, with reports predicting that over 10 million will hit the roads over the next five years. And while many of today’s vehicles already boast high-tech capabilities like self-parking and automatic braking, their price tag often keeps them out of reach of most young folks like University of Gothenburg student Dimitris Platis.

Platis, in collaboration with his classmates Yilmaz Caglar, Aurélien Hontabat, David Jensen, Simeon Ivanov, Ibtissam Karouach, Jiaxin Li and Petroula Theodoridou — who collectively go by the name Team Pegasus — decided to develop an impressive autonomous vehicle of their own. The only difference? It’s much smaller than the ones you’d find on the road, and unfortunately, won’t be able to give any of them a lift to class.

Originally conceived as a school project, the scaled-down, self-driving vehicle utilizes machine vision algorithms along with data fed by its on-board sensors to follow street lanes, perform parking maneuvers and stay clear of obstacles in its way. Basically, the unit is an RC car that they hacked, replacing its ESC, DC motor and servo with several electronic parts housed inside various compartments and affixed to the shell.

An Android phone handles the image processing, decision-making and wireless transmission of steering instructions (via Bluetooth) to an Arduino Mega (ATmega2560) embedded inside its chassis. This board connects to three ultrasonic distance sensors — two of which are mounted to the front and another to its rear. A trio of IR sensors are linked to the Arduino as well, while a speed encoder is attached to one wheel.

Aside from that, a 9-DOF Razor IMU board (ATmega328P) is fitted to the front bumper to provide feedback on the car’s movement, though it is not too reliable due to magnetic interference from the motors. The vehicle is also equipped with LEDs that act as head and brake lights, along with an ATtiny85-based driver board that receives signals over serial and blinks the lights.

An electronic speed controller, powered by a 7.2V battery, is tasked with driving the motors according to a PWM signal that it receives from the Arduino. The servo motor determines the angle of the vehicle’s front wheels.
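The steering chain described above (an angle decided on the phone, sent over Bluetooth, turned into a servo pulse by the Arduino) boils down to a clamped linear mapping. A minimal sketch, where the ±30° steering range and 1000-2000 µs pulse band are assumed values rather than Team Pegasus specs:

```python
def arduino_map(x, in_min, in_max, out_min, out_max):
    """Integer mapping equivalent to Arduino's map() function."""
    return (x - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

def steering_pulse_us(angle_deg):
    """Clamp a steering angle to +/-30 degrees and convert it to a
    standard 1000-2000 microsecond servo pulse width."""
    angle = max(-30, min(30, angle_deg))
    return arduino_map(angle, -30, 30, 1000, 2000)
```

On the board itself this would feed `Servo.writeMicroseconds()`, with the same mapping applied to the ESC’s throttle channel.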

The Makers created an Android app called CARduino that communicates via Bluetooth with the on-board MCU, drives the motors and parses the sensor data.

“On the software dimension of the physical layer, an Arduino library was created, which encapsulated the usage of the various sensors and permits us to handle them in an object oriented manner. The API sports a high abstraction level, targeting primarily novice users who ‘just want to get the job done.’ The components exposed should, however, also be enough for more intricate user goals,” Platis explains. “This library was developed to be used with the following components in mind: an ESC, a servo motor for steering, HC-SR04 ultrasonic distance sensors, SHARP GP2D120 infrared distance sensors, an L3G4200D gyroscope, a speed encoder, [and] a Razor IMU.”

Want to learn more? You can race on over to the project’s detailed log, check out its latest write-up in MAKE: Magazine, or see it in action below!

MIT researchers have created a 3D printer for molten glass


Think of G3DP as the next generation of glassblowing. 


Remember the days when 3D printers were only capable of using plastic filament? Well, the times have changed. Chocolate, ceramics, metal, living tissue — these are just some of the materials now being spit out to make an assortment of things, from the practical to the absurd. Next on that ever-growing list? Glass, thanks to a team of researchers at MIT’s Mediated Matter Group.

That’s because the group has developed an unbelievable 3D printer that can print glass objects. The device, called the G3DP, consists of two heated chambers. The upper chamber is a crucible kiln that operates at a temperature of around 1900°F and funnels the molten material through an alumina-zircon-silica nozzle, while the bottom chamber works to anneal the structures.

The machine doesn’t create glass from scratch, but instead works with the preexisting substance, layering and building out beautifully-constructed geometric shapes according to designs drawn up in a 3D CAD program. This printing method shares many of the same principles as fused deposition modeling (FDM), which is employed by most 3D printers today, except that it operates at much higher temperatures and uses molten glass as the medium, as opposed to plastic filament.

How does it all work, you ask? The glass is first melted at an extremely high temperature over a period of roughly four hours. For another two hours, it undergoes a fining process, in which helium may be introduced to the molten material to enlarge and carry small bubbles to the surface, eliminating them. During this stage, the extruder has to be kept cool so that the glass doesn’t begin flowing. Once fining is complete, the crucible and nozzle are set to temperatures of 1904°F and 1850°F, respectively, and the extrusion process begins. The G3DP is controlled by three independent stepper motors, along with the combination of an Arduino (presumably based on an ATmega2560) and a RAMPS 1.4 shield.

At this time, the researchers have used G3DP to craft things like vases, prisms, and other small decorations, some of which will be on display at the Cooper Hewitt, Smithsonian Design Museum next year.

“Two trends in additive manufacturing highlight the value we expect from additive manufacturing of molten glass. First, the freedom that this process provides in terms of the forms that can be created in glass,” its creators explain. “Second, bespoke creation of glass objects provides the opportunity for complex scaffolds, fluidics and labware custom made for individual applications. Moving forward, the simultaneous development of the printer and the design of the printed glass objects will yield both a higher performance system and increasingly complex novel objects.”

As impressive as this may sound, it’s even more mesmerizing to watch it in action. It will surely be interesting to see how the G3DP will influence art, architecture and product design in the future. Intrigued? You can read the team’s entire paper here.

[Images: MIT’s Mediated Matter Group]

These lights will let you control your smart devices through gestures


LiSense uses the shadows the human body creates by blocking light to reconstruct 3D human skeletal postures in real-time.


As our homes become increasingly smarter, what if we could use the light around us for more than just illumination? In other words, imagine if the light in your room could sense you waving your hand as you enter, or was able to trigger your smart coffee machine, unlock the door and turn on your entertainment center. While it sounds like something straight out of a sci-fi novel, it may soon all be possible thanks to a new project from researchers at Dartmouth College.

The team is looking to transform ubiquitous light into a medium that integrates communication with human sensing. LiSense turns everyday lighting into sensors that can recognize and respond to what we do. This is achieved through visible light communication (VLC), which encodes data into light intensity changes at a high frequency invisible to the human eye.

Not only does LiSense use light to sense people’s movements, but it also allows them to control devices in their environment with simple gestures, employing light to transmit the information. The hope is that you will be able to gesture and engage with objects in a room via nothing more than light, similar to how you’d use a Kinect or Wii gaming system to interact with your TV.

For LiSense to track a person’s movements, the researchers built a three-meter by three-meter light-sensing testbed with five off-the-shelf Cree LEDs in the ceiling and 324 photodiodes on the floor. A total of 29 microcontrollers, a mix of Arduino Due (SAM3X8E) and Uno (ATmega328) boards, were embedded as well. The system uses the shadows created by a person standing on the testbed to reconstruct their 3D human skeletal posture in real-time (at 60 Hz).

To get their shadow-based human sensing to work, the researchers had to overcome two critical challenges. Since multiple ceiling lights lead to diminished and complex shadow patterns on the floor, they had to devise light beacons to separate the light rays of individual LEDs from ambient light. Additionally, they came up with an algorithm capable of taking the limited-resolution 2D shadow maps collected from the photodiodes in the floor and reconstructing a person’s posture in 3D.
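Separating each LED’s contribution at a photodiode is essentially single-frequency detection: if each ceiling LED is modulated at its own beacon frequency, a narrow-band filter such as the Goertzel algorithm can measure how strongly that LED reaches a given floor sensor. The sketch below illustrates the idea; the beacon frequencies are hypothetical, and the paper’s actual beaconing scheme may differ:

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Power of one frequency bin via the Goertzel algorithm, letting a
    photodiode isolate one LED's beacon from other lights and ambient light."""
    n = len(samples)
    k = round(n * freq / sample_rate)      # nearest DFT bin for this beacon
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2   # second-order recurrence
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2
```

Goertzel is a common choice on microcontrollers because it detects one bin in O(n) time with two state variables, rather than computing a full FFT.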

By waving your hand, LiSense lets you freely control things, play games and track behavior without the need for cameras or on-body devices. One day, the team says it may even respond to your feelings. Compared to existing methods that use wireless radio signals such as Wi-Fi to track user gestures, VLC has several appealing properties and advantages. For starters, light-based sensing is secure, doesn’t penetrate walls, and isn’t limited to classifying a pre-defined set of gestures and activities. On top of that, it’s energy efficient, operates at a bandwidth 10,000 times greater than the radio frequency spectrum, and reuses existing lighting infrastructure.

“Light is everywhere and we are making light very smart,” says Xia Zhou, lead author and researcher on the project. “Imagine a future where light knows and responds to what we do. We can naturally interact with surrounding smart objects such as drones and smart appliances and play games, using purely the light around us. It can also enable a new, passive health and behavioral monitoring paradigm to foster healthy lifestyles or identify early symptoms of certain diseases. The possibilities are unlimited.”

Sounds intriguing, right? See it all in action below, and be sure to read the team’s entire paper here.

Maker creates a Tron and Star Wars-inspired control panel for his computer


This fully-functional, overhead control panel will be the most awesome thing you see today.


Most of us rely on a keyboard and mouse to perform tasks on our computers. Not Redditor “smashcuts.” Instead, the Maker has built a fully-functional overhead control panel for his PC, complete with 100 programmable buttons and switches that trigger all kinds of actions, from the useful to the absurd.

As you can imagine, constructing such a complex device was no easy task. To make this a reality, the Maker employed the combination of a USB hub and controllers, LEDs for the backlighting, an Arduino Mega (ATmega2560) for the blinking lights and a HAL unit from Think Geek. These electronics are all housed inside an enclosure made from a metal junction box and laser-etched acrylic panels.

While the project itself was a pretty elaborate endeavor with some serious functionality, it was all done in good humor. There’s a green ‘Main Systems’ section which turns on his most frequently used programs, such as Chrome, Photoshop, Premiere, After Effects and iTunes. Meanwhile, a central unit controls all of his main OS shortcuts like open, save and close.

He’s also included a category that he calls ‘Panic Control,’ with three toggles for stress management. According to smashcuts, ‘Don’t Panic’ cues a Hitchhiker’s Guide YouTube video, ‘Serenity Now’ cues a Firefly clip, and ‘Hold Steady’ plays songs from his favorite band. As if that wasn’t enough, there’s a ‘Wave Collider’ panel that allows him to activate various iTunes playlists and choose ‘More Rock’ or ‘Less Rock’ depending on his mood.

Beyond that, buttons in the bottom left-hand corner type a variety of laughter into open chat windows, including the common ‘HA,’ ‘HAHA,’ or ‘HAHAHA’ for extremely funny moments. There’s even a ‘Weapons System,’ which emits humorous sound effects. Despite some of its comedic features, this was surely an impressive build!

If you’ve ever dreamt of using a Star Wars/Tron-like control panel, you’ll want to check out the Maker’s project in its entirety here.

This 3D-printed robot can navigate inside confined spaces


OctaWorm is a 3D-printed, Arduino-based robot that may be the future of search-and-rescue missions. 


When disaster strikes, one of the biggest challenges that rescue teams encounter is locating and reaching survivors amid the rubble. Unfortunately, even with today’s advanced technologies, there are times when humans are unable to slip into a tight space and extract an individual. But what if there was a robotic device that could? That is the idea behind a recent project by Juan Cristóbal Zagal.

Developed in collaboration with researchers from the University of Chile and the University of Akron, OctaWorm is a 3D-printed octahedral robot that is capable of morphing its body to squeeze through holes, gaps and debris. The latest version, now the third prototype, is composed mostly of 3D-printed parts and some aluminum rods for enhanced durability. It employs pneumatically driven servo motors for movement and is operated via a wired controller, though the team hopes to make this wireless in the near future.

Aside from that, the robot is equipped with an Arduino board, an Arduino-compatible shield to control the relays, and three pneumatic solenoid valves. Since the OctaWorm is pneumatically driven, Zagal used high-quality rapid pneumatic connectors and plastic tubing to attach it to the controller.

The robot also features 3D-printed ball joints, which enable it to grip onto and traverse through any type of terrain. These rubbery balls are tasked with handling the deformation motion, and allow it to assume a variety of shapes and configurations as it slips into a crack or crevice.
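With a pneumatic drive, locomotion reduces to stepping through a repeating valve-firing sequence: the Arduino advances through a gait table and the relays open or close each solenoid accordingly. The gait table below is a made-up illustration of that pattern, not the actual OctaWorm sequence:

```python
# Hypothetical gait table for three solenoid valves (1 = open, 0 = closed).
# Each row is one phase of a worm-like extend/anchor/contract cycle.
GAIT = [
    (1, 0, 0),  # pressurize front axis: body extends forward
    (1, 1, 0),  # anchor the middle while the front stays extended
    (0, 1, 1),  # vent the front, pressurize the rear
    (0, 0, 1),  # contract, pulling the rear up behind the body
]

def valve_states(step):
    """Valve states for a given gait step, cycling through the table."""
    return GAIT[step % len(GAIT)]
```

In firmware, each tuple would simply be written out to the three relay pins on a timer, so changing the robot’s gait means editing a table rather than rewriting control logic.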

“The current version of the robot is capable of traveling inside a pipe. It is also capable of dealing with changes on the internal diameter of the pipe. The functional symmetry of the robot allows it to travel along T, L and Y joints in pipelines. Traditional in-pipe robots have many problems for dealing with these types of junctions. In contrast, the deformable octahedral robot can simply squeeze into junctions,” Zagal tells 3DPrint.com.

The goal of the project was to develop a new way to use robotic motion to access and navigate confined spaces typically found in disaster situations, as well as pipes and air ducts. In the future, Zagal envisions an even tinier version that could be used for medical applications, such as going through blood vessels.

Until then, you can watch the OctaWorm in action below!

[h/t 3DPrint.com]

This LED installation mimics the movements of fireflies


This 2,000-plus LED installation reacts to the movement of its visitors, placing them inside a colorful 3D environment.


Austrian arts collective Neon Golden recently created an immersive light installation designed to mimic the movements of fireflies. The project, aptly named SWARM, consists of over 2,000 LEDs that are suspended at various heights from an overhead metal grid and arranged in a series of 40 modules throughout a dark room.

The lights use motion-sensing technology, controlled by a Raspberry Pi and an Arduino running Processing, to replicate the motion of lightning bugs. The hanging LEDs change position horizontally in response to the movements of nearby visitors. The team also employed Cinema 4D to generate SWARM’s advanced 3D effects.

“Through the movement of the visitors within the installation the LEDs are lighting up and the static, chaotic structure transforms into a vibrant, three-dimensional swarm one can visually but also acoustically experience,” Neon Golden explains.

According to its creators, SWARM is adaptable to meet different space requirements, as the configuration of light modules can be adjusted to fit smaller or larger areas. The piece made its debut back at the Olympus Photography Playground in Vienna in February 2015.

You can see it for yourself in the video below as dancer Máté Czakó makes his way through the luminescent creatures, revealing the LEDs’ reactivity.

[h/t Dezeen]