Tag Archives: Cornell University

Robobarista can learn how to make your morning latte


The best part of waking up is a robot filling your cup! 


Developed by researchers at Cornell University, the aptly-named Robobarista may appear to be just an ordinary robot, but it packs the skills of a talented Starbucks barista. Impressively, it is capable of learning how to intuitively operate machines by following the same methods a human would when introduced to a device, like a coffeemaker.


The Robobarista can autonomously make an espresso, as well as carry out other mundane tasks, using instructions provided by Internet users. To do this, the team first had to collect enough crowdsourced information from online volunteers to teach the robot how to manipulate objects it had never seen before. The Robobarista then reads these instructions — such as “hold the cup of espresso below the hot water nozzle and push down the handle to add hot water” — and carries out the command using deep learning algorithms trained on that crowdsourced database.

“In order for robots to interact within household environments, robots should be able to manipulate a large variety of objects and appliances in human environments, such as stoves, coffee dispensers, juice extractors, and so on,” the team writes. “Consider the espresso machine above — even without having seen the machine before, a person can prepare a cup of latte by visually observing the machine and by reading the instruction manual. This is possible because humans have vast prior experience of manipulating differently-shaped objects.”

Robobarista’s functionality is based on a two-step process. Generally speaking, the idea is to get the robot to recognize certain things — including buttons, handles, nozzles and levers — and produce results similar to those of its human counterparts. This way, when it sees a knob, for instance, the robot can scan through its database of known objects and properly identify it. Once it has confirmed that the control is indeed a knob, it can figure out how to physically operate it based on all of the similar gizmos in its database, the device’s online instruction manual, and its understanding of how a person uses the gadget.
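To make that "scan the database for the closest known part" step a bit more concrete, here's a heavily simplified sketch of our own (not the Cornell team's deep learning model): each known part is reduced to a small feature vector, and the closest match lends its stored manipulation trajectory. All names and numbers below are hypothetical.

```c
/* Simplified illustration (not the Cornell team's actual model): given a
 * feature descriptor extracted from a scanned part, find the most similar
 * part in a small database and reuse its stored manipulation trajectory. */
#include <stdio.h>

#define FEAT_DIM  4          /* toy descriptor length         */
#define NUM_KNOWN 3          /* parts already in the database */

typedef struct {
    const char *label;           /* e.g. "knob", "lever"         */
    double feat[FEAT_DIM];       /* hypothetical shape features  */
    const char *trajectory;      /* stored manipulation recipe   */
} known_part_t;

static double sq_dist(const double *a, const double *b)
{
    double d = 0.0;
    for (int i = 0; i < FEAT_DIM; i++)
        d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

int main(void)
{
    known_part_t db[NUM_KNOWN] = {
        { "knob",   {0.9, 0.1, 0.2, 0.0}, "grasp, rotate clockwise" },
        { "lever",  {0.2, 0.8, 0.1, 0.3}, "grasp, pull down"        },
        { "button", {0.1, 0.1, 0.9, 0.0}, "press with fingertip"    },
    };

    /* Descriptor of a part the robot has never seen before. */
    double unseen[FEAT_DIM] = {0.85, 0.15, 0.25, 0.05};

    int best = 0;
    for (int i = 1; i < NUM_KNOWN; i++)
        if (sq_dist(unseen, db[i].feat) < sq_dist(unseen, db[best].feat))
            best = i;

    printf("Closest known part: %s -> reuse trajectory: %s\n",
           db[best].label, db[best].trajectory);
    return 0;
}
```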

The team notes that their focus was on generalizing manipulation trajectories through part-based transfer using point-clouds, without knowing the objects a priori and without assuming sub-steps like approaching and grasping to be given.

“We formulate the manipulation planning as a structured prediction problem and design a deep learning model that can handle large noise in the manipulation demonstrations and learns features from three different modalities: point-clouds, language and trajectory,” the team explains.


To help instruct an action, users select one of the preset steps and then navigate a series of options to control the robot’s movements. Ultimately, every user will complete the task slightly differently, so the robot builds up its skill set as it draws on hundreds of these instructions. As this database grows, so does its potential to carry out more chores in and around the house.

For each item, the team captures raw RGB-D images with a Microsoft Kinect camera and laser rangefinder, then stitches them together with Kinect Fusion to form a denser point-cloud that incorporates different viewpoints of the object. The crowdsourced instructions are translated into coordinates, which the robot uses to plan the trajectory of its arm to control a new machine.
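As a rough illustration of what "translated into coordinates" can involve, and assuming a simple rigid transform between the camera and the arm (with made-up numbers), a crowdsourced waypoint picked in the point-cloud's frame might be mapped into the robot's base frame like this:

```c
/* Minimal sketch with hypothetical numbers: apply a rigid transform so a
 * point picked in the stitched point-cloud's (camera) frame can be
 * expressed in the robot arm's base frame. */
#include <stdio.h>

typedef struct { double x, y, z; } vec3_t;

/* 3x3 rotation plus translation describing where the camera sits
 * relative to the robot base (values here are made up). */
static const double R[3][3] = {
    {1.0, 0.0,  0.0},
    {0.0, 0.0, -1.0},
    {0.0, 1.0,  0.0},
};
static const vec3_t T = {0.40, 0.00, 0.25};   /* metres */

static vec3_t camera_to_base(vec3_t p)
{
    vec3_t out;
    out.x = R[0][0]*p.x + R[0][1]*p.y + R[0][2]*p.z + T.x;
    out.y = R[1][0]*p.x + R[1][1]*p.y + R[1][2]*p.z + T.y;
    out.z = R[2][0]*p.x + R[2][1]*p.y + R[2][2]*p.z + T.z;
    return out;
}

int main(void)
{
    /* e.g. "the handle" as clicked by an online volunteer */
    vec3_t handle_cam  = {0.10, 0.05, 0.60};
    vec3_t handle_base = camera_to_base(handle_cam);
    printf("waypoint in base frame: %.2f %.2f %.2f\n",
           handle_base.x, handle_base.y, handle_base.z);
    return 0;
}
```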

“Instead of trying to figure out how to operate an espresso machine, we figure out how to operate each part of it,” the team adds. “In various tests on various machines so far, the robot has performed with 60% accuracy when operating a device that it has never seen before.”

Don’t drink coffee? No need to fret. Since Robobarista can master directions over the Internet via Amazon’s Mechanical Turk, the friendly bot can do a lot more than just make a mean cup o’ joe. In fact, it can fill up a water bottle or pour a bowl of cereal as well. Talk about the perfect Rosie-like robot for the morning rush!

Up until now, robots have typically been configured to complete the same command repeatedly, like the recently-unveiled gadget capable of whipping up dinner by following a set of preprogrammed recipes. However, Cornell’s latest creation has been built to intuitively account for variables and work around them.

If you’ve come by any of our event booths in the past, you know how much we love coffee. Perhaps we should call upon Robobarista for our next shows! Interested in learning more? Be sure to read the Cornell team’s paper. The student researchers are still working with crowdsourcing to educate their robot, and you can sign up to assist in their efforts here.

These Arduino-based outfits flash to the beat of music


Created by a team of Cornell students, these smart garments have the front page of Adafruit written all over them.


Smart garments are one of the wearable categories that Gartner has billed as having the greatest potential for growth. A testament to the limitless possibilities of that space is a recent project by a group of undergrads from Cornell University: the students have created a set of embedded outfits with vivid, luminescent panels that pulse to the beat of music.

(Source: Cornell Chronicle)


“This collection is inspired by the future – and present – of wearable technology being more and more integrated into fashion and daily life,” explains co-creator Eric Beaudette. “These garments depict our vision of fashion of the future, having increased function and compatibility with devices, such as smartphones.”

Surely, anyone wearing these fabricated pieces would turn some heads, with their optical fiber cloth illuminated by controllable RGB LEDs and strips of electroluminescent tape. An Arduino (which we assume would be an ATmega32U4-based LilyPad) sewn into each garment enables the lights to pulse in time with the music.
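For anyone wondering what "pulse in time with the music" might look like in firmware, here's a bare-bones AVR C sketch of our own (not the students' code): sample a microphone envelope on ADC0 and copy it to an LED's PWM duty cycle. Register names assume an ATmega328P/32U4-class part, and the exact OC0A pin varies by chip.

```c
/* Hedged sketch: read an audio envelope on ADC0 and map it to LED
 * brightness with Timer0 fast PWM on OC0A. Assumes a 16 MHz AVR. */
#include <avr/io.h>

static void pwm_init(void)
{
    DDRD  |= _BV(PD6);            /* OC0A output (PD6 on a 328P; differs on 32U4) */
    TCCR0A = _BV(COM0A1) | _BV(WGM01) | _BV(WGM00);  /* fast PWM, clear on match  */
    TCCR0B = _BV(CS01) | _BV(CS00);                  /* clk/64 prescaler          */
}

static void adc_init(void)
{
    ADMUX  = _BV(REFS0);          /* AVcc reference, channel 0 */
    ADCSRA = _BV(ADEN) | _BV(ADPS2) | _BV(ADPS1) | _BV(ADPS0); /* enable, /128 */
}

static uint16_t adc_read(void)
{
    ADCSRA |= _BV(ADSC);          /* start conversion */
    while (ADCSRA & _BV(ADSC)) ;  /* wait until done  */
    return ADC;
}

int main(void)
{
    pwm_init();
    adc_init();
    for (;;) {
        uint16_t level = adc_read();       /* 0..1023 envelope */
        OCR0A = (uint8_t)(level >> 2);     /* scale to 0..255  */
    }
}
```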

(Source: Cornell Chronicle)


The team noted that maintaining harmony between the materials, technologies and construction can be a difficult task. “Garments with circuitry and other technologies add layers of complexity, especially since these technologies were not originally designed for use with clothing.”

Nellie is a 3D-printed weed-picking robot


This Arduino-powered bot may one day help farmers stay weed-free. 


Other than shoveling several inches of snow, there’s one outdoor chore for which anyone would surely welcome robotic assistance: weeding. While there are already a number of plowing bots in existence today, thanks to one Maker, that daunting lawn care task may soon be taken care of as well.


In a recent entry in MAKE: Magazine and Cornell University’s Pitch Your Prototype competition, Maker Mike Rigsby has developed a 3D-printed robot capable of, you guessed it, pulling out weeds! While at first this may sound like yet another mechanism to increase laziness, weeds are actually a serious problem for farmers all around the world — and it’s only getting worse. Take, for instance, pigweed, which grows up to three inches per day and has become resistant to the dominant weed killers, threatening the nation’s soybean and corn crops.

“This is a serious attempt to address an agricultural problem,” Rigsby told the magazine. “I suspected that robots could handle the weeds and that the time to start working on such a solution is now, before the weeds develop further resistance to chemicals.”

And so Nellie was born. The robot spots and plucks weeds the old-fashioned way, one at a time. The current proof-of-concept is powered by a trio of Arduino Unos (ATmega328), a pair of Arduino motor shields, a Pixy camera, a Ping ultrasonic sensor, eleven AA NiMH batteries, a servo motor and a four-wheel-drive base, along with some custom 3D-printed parts constructed on two AVR-powered MakerBot Replicator 2 printers.


How it works is relatively simple: the Pixy camera spots a weed, then feeds the data over to the Arduino boards, which relay commands to the motor controller module to activate the grabber and close the pincer. Meanwhile, the Arduino-controlled motor shield enables the robot to move about the land in the right direction. At the moment, the device is only designed to roll over carpet.
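Since Rigsby's firmware isn't published in the article, here's a purely conceptual C sketch of that spot-steer-pluck loop, with hypothetical stubs standing in for the Pixy camera, motor shield and gripper:

```c
/* Conceptual sketch of Nellie's control loop, not the real firmware. */
#include <stdbool.h>
#include <stdint.h>

#define FRAME_CENTER 160      /* Pixy frames are 320 px wide                */
#define CENTER_BAND   20      /* how close to centered counts as "lined up" */

/* Hypothetical stand-ins for the Pixy camera, drive base and gripper. */
static bool weed_detected(int16_t *x_px) { *x_px = 150; return true; }
static void drive(int8_t left, int8_t right) { (void)left; (void)right; }
static void gripper_grab_and_pull(void) { }

/* One pass of the spot-steer-pluck loop. */
static void nellie_step(void)
{
    int16_t x;

    if (!weed_detected(&x)) {          /* nothing weed-like in view */
        drive(40, 40);                 /* keep rolling forward      */
        return;
    }

    if (x < FRAME_CENTER - CENTER_BAND)
        drive(20, 40);                 /* weed is to the left: veer left   */
    else if (x > FRAME_CENTER + CENTER_BAND)
        drive(40, 20);                 /* weed is to the right: veer right */
    else {
        drive(0, 0);                   /* lined up: stop and pluck */
        gripper_grab_and_pull();
    }
}

int main(void)
{
    for (int i = 0; i < 10; i++)       /* a few passes of the loop */
        nellie_step();
    return 0;
}
```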

Should the Maker win the contest’s grand prize, however, Rigsby hopes to use the winnings to devise another working prototype with a little more oomph, which can navigate a farm’s terrain. And who knows, perhaps in the coming months, everyday gardeners will be able to take advantage of Nellie, too.


“To advance the project requires money for parts. Nellie’s daughters and sons will need a heavy duty chassis that will run between rows of plants, reaching to the side to eliminate offensive weeds. They need multiple cameras and better vision to pinpoint the target. Weeds will be eliminated by pulling, burning, cutting, digging, electrocuting or some combination of methods,” Rigsby adds.

Until then, you can watch it in action below. Now this would make for a great Hackaday Prize entry as well. Just sayin’.

This DIY trainer is like a Whack-a-Mole for boxers

Back in the 1985 classic Rocky IV, Balboa’s rival Ivan Drago was shown utilizing a futuristic electronic punch meter in his quest for triumph. Inspired by the flick, a group of Cornell students recently designed their own electronic boxing trainer system for beginners and well-seasoned athletes alike. The device is capable of teaching both basic and advanced combinations, while also providing users with a gauge of timing and accuracy.


“While we desired to have ‘built-in’ sequences to train the user, we also desired to allow the user to self-program and store their own practice combination sequences. We attempted to do this without exceeding a budget of $100,” the team writes.

The project is comprised of five square pads organized in the shape of a human head and upper body, with each square surrounded by bright LED ribbons that light up based on a pre-configured pattern. Think of the system as a life-size Simon or Whack-A-Mole: when a pad lights up, the trainee must hit it. If the user fails to strike the pad within a set deadline, the sequence simply proceeds to the next punch. The game, which is controlled by an ATmega1284P, keeps tabs on the user’s activity in terms of reaction, accuracy and other relevant statistics.
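To picture that Simon-style game loop, here's a hedged sketch (not the students' ATmega1284P firmware) that lights a pad, waits for a hit or the deadline, and tallies accuracy. pad_light(), pad_hit() and millis() are hypothetical wrappers, stubbed out so the example compiles and runs.

```c
/* Sketch of a Whack-a-Mole-style training loop; hardware calls are stubs. */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define DEADLINE 1500u                 /* ms allowed per punch */

/* Hypothetical hardware wrappers, stubbed for illustration. */
static uint32_t fake_clock;
static uint32_t millis(void)              { return fake_clock += 10; }
static void     pad_light(int p, bool on) { (void)p; (void)on; }
static bool     pad_hit(int p)            { (void)p; return (rand() % 100) == 0; }

int main(void)
{
    const int sequence[] = {0, 3, 1, 4, 2};   /* a stored combination */
    const int len = sizeof sequence / sizeof sequence[0];
    int hits = 0;

    for (int i = 0; i < len; i++) {
        int pad = sequence[i];
        uint32_t start = millis();

        pad_light(pad, true);
        while (millis() - start < DEADLINE) {
            if (pad_hit(pad)) {               /* struck in time */
                printf("pad %d hit after %lu ms\n", pad,
                       (unsigned long)(millis() - start));
                hits++;
                break;
            }
        }
        pad_light(pad, false);                /* move on regardless */
    }
    printf("accuracy: %d/%d\n", hits, len);
    return 0;
}
```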

In an effort to keep costs to a minimum, the students created their own force sensors, each consisting of two square pieces of fine aluminum window screen with foam structures placed in between. One of the foam pieces was made from conducting ESD foam, while the other was a piece of half-inch-thick insulating foam rubber with quarter-inch slits cut approximately half an inch apart, for roughly 12 slits per sensor.


“In each of the screen pieces, we thread (copper) wire through the (aluminum) screen’s holes in a ‘Z’ pattern and bond the metals together with conducting glue. The jailbar foam is then hot-glued to the ESD foam, and then the composite foam is sandwiched via hot glue between two of the wire-threaded screens. The screens are then covered on their exposed sides with cardboard that has been rolled with a wine bottle to reduce its brittleness (by popping any trapped air pockets). We used four 10” by 10” sensors to create the ‘body’ of a figure, and a 9” by 9” sensor for the head.”

The sensors are then mounted against a wall, or in the Cornell project’s case, the back of a metal exterior door. When a user punches a sensor, it compresses and, as a result, the resistance between the two pieces of aluminum screen changes. The greater the change in resistance, the more powerful the hit.
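Assuming the sensor sits in a simple voltage divider read by the ADC (the component values below are ours, not the team's), grading a hit from that resistance change could look something like this:

```c
/* Rough illustration of grading a hit from the sensor's resistance change
 * in a voltage divider; values are hypothetical. */
#include <stdio.h>

#define VREF       5.0      /* ADC reference, volts   */
#define R_FIXED 10000.0     /* divider resistor, ohms */
#define ADC_MAX  1023.0

/* The ADC measures the voltage across the sensor in the divider:
 * Vout = VREF * R_sensor / (R_sensor + R_FIXED)                  */
static double sensor_ohms(unsigned adc)
{
    double v = VREF * adc / ADC_MAX;
    return R_FIXED * v / (VREF - v);
}

int main(void)
{
    unsigned rest = 900, punched = 350;       /* example ADC readings */
    double drop = sensor_ohms(rest) - sensor_ohms(punched);

    printf("resistance dropped by %.0f ohms -> %s hit\n",
           drop, drop > 20000.0 ? "hard" : "light");
    return 0;
}
```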

Meanwhile, feedback to the user and the programming interface is handled over serial communication; the team interfaced with a PC using a USB-RS232 cable and PuTTY. The code for the system can be divided into three distinct categories: initialization, user input via PuTTY (or another serial terminal) and gameplay.

Interested in learning more about the project as well as the team’s outcome? You can read the entire log of their build here, or watch a demo of the boxing trainer below.

 

This embedded ukulele can teach you to play chords and songs

Of course, just moments after completing our list of Maker musical masterpieces, we came across the nifty ukule-LED, an LED-embedded ukulele that uses lights to show you how to play chords and songs.


Designed by Cornell students Raghav Subramaniam and Jeff Tian, ukule-LED is equipped with 16 NeoPixels situated along the first four positions of the fretboard, allowing players to easily see how to finger each chord. All 16 LEDs are connected in series to a single pin on the ATmega1284P, which sits on a board mounted to the bottom of the ukulele along with power and serial connections.


ukule-LED has two modes of operation: “play” and “practice.” In “play” mode, the user can feed the system a song file, a text file that contains the tempo, time signature and an ordered listing of the chords in a song. The ukulele will then light up the correct chords at the correct times in the song. (Think of it like Guitar Hero.) In “practice” mode, the user can specify a single chord, which is lit up indefinitely. For more experienced musicians, the ukule-LED can still serve as an excellent chord reference.
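A toy version of "practice" mode shows the idea: map a chord name to the fret positions that should light up on the 4x4 grid. The real project drives its NeoPixels from the ATmega1284P with a Python front end; set_pixel() here is a hypothetical stand-in.

```c
/* Toy "practice" mode: light the finger positions for one ukulele chord. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *name;
    int fret[4];          /* fret (0..4) pressed on strings G C E A; 0 = open */
} chord_t;

static const chord_t chords[] = {
    { "C",  {0, 0, 0, 3} },
    { "G",  {0, 2, 3, 2} },
    { "Am", {2, 0, 0, 0} },
    { "F",  {2, 0, 1, 0} },
};

static void set_pixel(int string, int fret, int on)   /* hypothetical LED driver */
{
    printf("string %d, fret %d -> %s\n", string, fret, on ? "ON" : "off");
}

static void show_chord(const char *name)
{
    for (unsigned i = 0; i < sizeof chords / sizeof chords[0]; i++) {
        if (strcmp(chords[i].name, name) != 0)
            continue;
        for (int s = 0; s < 4; s++)
            if (chords[i].fret[s] > 0)                /* open strings stay dark */
                set_pixel(s, chords[i].fret[s], 1);
        return;
    }
    printf("unknown chord: %s\n", name);
}

int main(void)
{
    show_chord("C");       /* practice mode: light one chord indefinitely */
    return 0;
}
```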


“Our inspiration for this project comes from a variety of sources. Our idea to upgrade a musical instrument using LEDs comes from various Hackaday articles, including this one. We considered using LEDs to augment a couple of other instruments, including a keyboard and a guitar, before settling on a ukulele,” Subramaniam and Tian explain.


Once they have mastered the earlier stages of the string instrument, those wishing to build upon their existing skills can do so thanks to ukule-LED’s extension capabilities. Currently, the system only supports major, minor and 7th chords, which light up in green, red and blue, respectively. The Makers note that the project was built using an open, highly expandable Python script available for download.

See the DIY ukulele in action below!

Interested in fine tuning your strumming skills? Head on over to its official project page here.

Self-learning ‘copter navigates with an ATmega644 MCU



Akshay Dhawan and Sergio Biagioni of Cornell University have designed a self-learning (RC) helicopter powered by an advanced machine learning algorithm paired with Atmel’s ATmega644 microcontroller (MCU).

Aside from Atmel’s ATmega644 MCU, key project components include:

  • Syma S107 Micro Helicopter
  • Custom PC Board (for MCU)
  • RS232 UART connector
  • Max233CP
  • Power Supply
  • Infrared Emitter 365-1056-ND
  • Infrared Receiver 160-1030-ND
  • Wooden platform
  • Balsa wood 24 inch dowel
  • White board (holds phototransistor circuit)

As HackADay’s Will Sweatman reports, the ‘copter is attached to a boom that restricts its movement to a single degree of freedom, meaning the helicopter can only move up and down, rather than side to side or front to back.

“The goal is for the helicopter to teach itself how to get to a specific height in the quickest amount of time. A handful of IR sensors are used to tell the ATmega644 how high the helicopter is,” writes Sweatman.

“The genius of this though, is in the firmware. Akshay and [Sergio] are using an evolutionary algorithm adopted from Floreano et al, a noted author on biologically-inspired artificial intelligences.”

Essentially, the ‘copter generates random “runs” and then checks the data. The runs that come closer to the goal are refined, while the others are eliminated, in a process that emulates evolution via natural selection. In short, the project’s goal is for the ‘copter to start at Point A, go to Point C and hover. The allotted time is 10 seconds per run, with the helicopter expected to teach itself the routine as quickly as possible.

“A neural network is used to determine at what level the throttle should be at to achieve the highest Fitness Value. This network is a part of the Evolutionary Algorithm that runs in the firmware. Basically, it starts off with random values that generate random levels of throttle,” Sweatman explains.

“The values that achieve the highest Fitness Value get ‘mutated’, while the others are discarded. The mutations in the values are done at random and the process repeats. In the end, the firmware learns the best throttle levels to achieve the goal of being at Point C for the longest time in the allotted 10 seconds.”
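For a feel of how such an evolutionary loop works, here is a simplified illustration of our own (not the authors' firmware), in which the "helicopter" is reduced to a one-line stand-in and each candidate is just a throttle level:

```c
/* Toy evolutionary loop: keep the fittest throttle level, mutate copies of it. */
#include <stdio.h>
#include <stdlib.h>

#define POP      8
#define GENS    20
#define TARGET 100.0                      /* desired height, arbitrary units */

static double simulate(double throttle)   /* stand-in for the real helicopter */
{
    return throttle * 1.3;                /* toy plant: height reached        */
}

static double fitness(double throttle)
{
    double err = simulate(throttle) - TARGET;
    return -(err * err);                  /* closer to target = higher fitness */
}

int main(void)
{
    double pop[POP];
    for (int i = 0; i < POP; i++)
        pop[i] = rand() % 128;            /* random initial throttle levels */

    for (int g = 0; g < GENS; g++) {
        /* find the fittest candidate of this generation */
        int best = 0;
        for (int i = 1; i < POP; i++)
            if (fitness(pop[i]) > fitness(pop[best]))
                best = i;

        /* discard the rest and refill with mutated copies of the winner */
        for (int i = 0; i < POP; i++)
            if (i != best)
                pop[i] = pop[best] + ((rand() % 11) - 5);  /* small random mutation */

        printf("gen %2d: best throttle %.1f, height %.1f\n",
               g, pop[best], simulate(pop[best]));
    }
    return 0;
}
```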

Interested in learning more about the self-learning ‘copter? You can check out the project’s official Cornell page here.

Mixed martial arts training with Fight Coach

Mixed martial arts (MMA) is a full contact combat sport that allows the use of both striking and grappling techniques from a variety of other fighting genres.


While an experienced trainer is essential to prepare for an upcoming bout, aspiring fighters may also want to step into the practice ring with Fight Coach.

As HackADay’s Will Sweatman reports, the training platform, created by Cornell University’s Vincent Nguyen and Jooyoung Park, is built around Atmel’s ATmega32U4 microcontroller (MCU), an MPU-6050 6-axis accelerometer and a RN-41 Bluetooth module – all packed into a pair of boxing gloves.

“Fight Coach is a sensor that can be embedded into combat-sport equipment that can allow combat athletes to get a better gauge of their performance. By tracking the athlete’s movement and displaying it in real-time, Fight Coach can help athletes optimize their training,” Nguyen and Park explained on the project’s official page.

“In addition, Fight Coach is small enough to fit inside muay thai shinpads, boxing gloves, or even on your hand wraps. [Plus], Fight Coach records data from the fighter’s gloves so that it can not only be analyzed to improve performance, but also interact with the fighter in real-time.”
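One simple thing a glove like this could do is flag a punch whenever the acceleration magnitude spikes. The sketch below is purely illustrative: read_accel() stands in for the real MPU-6050 driver, and the sample values and threshold are made up.

```c
/* Illustrative punch detection from accelerometer magnitude. */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } accel_t;

static accel_t read_accel(void)              /* stub standing in for I2C reads */
{
    static const accel_t samples[] = {
        {0.1, 0.2, 1.0}, {0.3, 0.1, 1.1}, {4.5, 1.2, 0.8}, {0.2, 0.1, 1.0},
    };
    static unsigned i;
    return samples[i++ % 4];
}

int main(void)
{
    const double threshold_g = 3.0;          /* punches spike well above 1 g */

    for (int n = 0; n < 4; n++) {
        accel_t a = read_accel();
        double g = sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
        if (g > threshold_g)
            printf("sample %d: punch detected (%.1f g)\n", n, g);
        else
            printf("sample %d: idle (%.1f g)\n", n, g);
    }
    return 0;
}
```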

Currently, Fight Coach offers three primary modes of training: defense, damage and free-training, which is likely more than enough to help fighters hold their own in the ring.

Interested in learning more about Fight Coach? You can check out extensive documentation on the project’s official Cornell page here.

ATmega644 MCU powers phased array speaker system



Edward Szoka (ecs227) and Tom Jackson (tcj26) of Cornell University have designed a phased array speaker system capable of “steering” sound around a room.

As HackADay’s Will Sweatman reports, the ATmega644-powered platform samples a standard audio input signal at approximately 44.1 kHz and drives 12 independently controllable speakers – each with a variable delay.

Simply put, the angle of maximum intensity of the output wave can be shifted by adjusting each speaker’s delay at precise intervals.

“Phased arrays are usually associated with EM applications, such as radar. But the same principles can be applied to sound waveforms,” Sweatman explained.

The basic idea behind a phased array? Drive each speaker with a slightly different delay, proportional to its position along the line and the sine of the desired angle, and the beam of maximum intensity swings toward that direction.
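The steering math itself is compact: delaying speaker i by i*d*sin(theta)/c points the beam at angle theta. The back-of-the-envelope sketch below uses an illustrative speaker spacing together with the roughly 44.1 kHz sample rate mentioned above; none of these values come from the project itself.

```c
/* Per-speaker delays for steering a 12-element line array. */
#include <stdio.h>
#include <math.h>

#define N_SPEAKERS   12
#define SPACING_M     0.05        /* 5 cm between speakers (illustrative) */
#define SOUND_MPS   343.0         /* speed of sound in air                */
#define SAMPLE_HZ 44100.0         /* matches the ~44.1 kHz sampling above */

int main(void)
{
    const double PI = 3.14159265358979;
    double theta = 30.0 * PI / 180.0;       /* steer 30 degrees off-axis */

    for (int i = 0; i < N_SPEAKERS; i++) {
        double delay_s = i * SPACING_M * sin(theta) / SOUND_MPS;
        printf("speaker %2d: delay %6.1f us (%3.0f samples)\n",
               i, delay_s * 1e6, round(delay_s * SAMPLE_HZ));
    }
    return 0;
}
```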

“This type of array was built to be able to support various other more advanced design challenges, including longer-range acoustic modem transmission and sonar imaging,” they added.

Interested in learning more? You can check out the project’s official page here and HackADay’s write up here.

ATmega1284P powers this gesture-based security lock

A team of Cornell University students has designed a security lock that opens after verifying a stored gesture pattern.

“The idea is to create a box like assembly, in which the user places his hand, makes a defined gesture and unlocks the system. Basically, there is a mechanism that allows the user to save a gesture pattern,” a team rep wrote on the project’s official page.

“Once that is done, the system goes in lock state. When the user enters his hand in the box, he tries to recreate the same pattern. If he is able to do so, the system unlocks. If unable to, the system remains locked.”

According to the rep, the project was inspired by a popular mobile phone unlock feature where a user draws a pattern on the screen to activate the device.

“We wanted to create a similar system which could be used in any security application, as simple as opening the door of the house based on the gesture,” the rep explained.

 “The attractive feature of the project is that the user makes the pattern in the air and not on any surface. Also, we have given the user the flexibility of changing the pattern whenever he wishes to do so.”

The gesture-based security lock is powered by Atmel’s versatile ATmega1284P microcontroller (MCU), a custom PCB and an IR proximity sensor. Additional key components include a three-pin jumper cable, breadboard, power supply, toggle switch, push button, LEDs, 330ohm resistors, assorted wires and a cardboard frame. 

On the software side, the project employs a series of algorithms for switches/inputs, store mode, pattern matching and four-channel ADC multiplexing.
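The pattern-matching step is only described at a high level, but a minimal version of the idea, comparing a freshly sampled gesture against the stored one sample by sample within a tolerance, might look like this (all values hypothetical):

```c
/* Minimal gesture comparison: unlock only if every sample is within tolerance. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define SAMPLES   8
#define TOLERANCE 60        /* allowed ADC-count deviation per sample */

static bool gesture_matches(const int stored[], const int attempt[])
{
    for (int i = 0; i < SAMPLES; i++)
        if (abs(stored[i] - attempt[i]) > TOLERANCE)
            return false;
    return true;
}

int main(void)
{
    /* proximity profile recorded in "store" mode */
    const int stored[SAMPLES]  = {120, 300, 520, 700, 700, 520, 300, 120};
    /* the user's unlock attempt */
    const int attempt[SAMPLES] = {140, 290, 560, 680, 720, 500, 330, 100};

    printf(gesture_matches(stored, attempt) ? "unlocked\n" : "still locked\n");
    return 0;
}
```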

“Overall, our system performs satisfactorily and can be effectively used to create a gesture-based secure unlock,” the team rep concluded. “Given more time and budget, we could have made the system 3D. Changes in the third dimension could be used to model the system, [thereby] increasing system accuracy and giving the user another dimension for creating the patterns.”

Interested in learning more about the Atmel-powered gesture-based security system? You can check out the project’s official page here and HackADay’s write-up here.

Playing virtual chess with Atmel-powered glove controllers

Students at Cornell University have designed a game of virtual chess using glove controllers powered by Atmel’s stalwart ATmega1284 microcontroller (MCU).

“Each player wears a glove (there are two gloves for two players) and uses hand-motions to play a chess application. The chess board graphical user interface is generated using MATLAB, which receives information from the player’s gloves through a serial connection with the microcontroller, and updates the game appropriately. As such, when a player tilts his hand in a certain direction, the cursor on the computer screen moves accordingly,” the Cornell students explained.

“Since the glove also has contact sensors in the form of copper strips on both the thumb and pointer finger, pressing these two fingers together simulates picking up or dropping a piece at the location of the cursor. The goal of our project was to simulate the physical motions involved in playing chess without the need for a physical chess set.”

Essentially, the contact sensors simulate a button click. As noted above, pressing the pointer finger and thumb together creates an active-low signal that is sent to the ATmega1284 microcontroller.

“The x-data and y-data from the accelerometer are [then] sent to an analog-to-digital converter in the microcontroller. The microcontroller transforms this information received from the gloves to update the cursor coordinates,” the students noted.

“The [MCU] also handles the state of the chess game, maintaining and managing a wide array of state information needed to update the game. The microcontroller continuously creates packets containing the cursor coordinates and other state data and sends them off to a MATLAB application using serial communication. MATLAB requests these packets and parses them to decide how to render and update the graphical user interface (GUI) representing the chess board.”
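Putting those pieces together, a loose sketch of the glove-side data path might read like the following; the packet format and helper functions are made up for illustration rather than taken from the project:

```c
/* Loose sketch: tilt moves the cursor, finger contact selects, and a
 * packet is emitted for a serial listener (e.g. a MATLAB GUI) to parse. */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hardware wrappers. */
static uint16_t adc_read_x(void)    { return 612; }   /* tilted right          */
static uint16_t adc_read_y(void)    { return 500; }   /* roughly level         */
static bool     contact_closed(void){ return true; }  /* thumb + finger pressed */

int main(void)
{
    int cursor_col = 4, cursor_row = 4;              /* start mid-board */

    /* centre of the accelerometer's ADC range is roughly 512 counts */
    int dx = ((int)adc_read_x() - 512) / 100;        /* crude dead-band */
    int dy = ((int)adc_read_y() - 512) / 100;
    cursor_col += dx;
    cursor_row += dy;

    /* packet the GUI could request over the serial link */
    printf("CUR %d %d SEL %d\n", cursor_col, cursor_row,
           contact_closed() ? 1 : 0);
    return 0;
}
```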

In addition to Atmel’s ATmega1284, the virtual chess project was built using the following components: 3-axis accelerometer module, white board, target board, small solder board, header pins, power supply, push buttons, gloves, copper strips, voltage regulator, potentiometer, wires, resistors and capacitors.