
Robobarista can learn how to make your morning latte


The best part of waking up is a robot filling your cup! 


Developed by researchers at Cornell University, the aptly-named Robobarista may look like just an ordinary robot, but it packs the skills of a talented Starbucks barista. Impressively, it is capable of learning how to intuitively operate machines by following the same methods a human would when introduced to a device, like a coffeemaker.


The Robobarista can autonomously make an espresso, as well as carry out other mundane tasks, using instructions provided by Internet users. To do this, the team first had to collect enough crowdsourced information from online volunteers to teach the robot how to manipulate objects it had never seen before. The Robobarista then reads these instructions — such as “hold the cup of espresso below the hot water nozzle and push down the handle to add hot water” — and carries out the command using deep learning algorithms trained on that crowdsourced database.

“In order for robots to interact within household environments, robots should be able to manipulate a large variety of objects and appliances in human environments, such as stoves, coffee dispensers, juice extractors, and so on,” the team writes. “Consider the espresso machine above — even without having seen the machine before, a person can prepare a cup of latte by visually observing the machine and by reading the instruction manual. This is possible because humans have vast prior experience of manipulating differently-shaped objects.”

Robobarista’s functionality is based on a two-step process. Generally speaking, the idea is to get the robot to recognize certain things — including buttons, handles, nozzles and levers — and produce results similar to those of its human counterparts. This way, when it sees a knob, for instance, the robot can scan through its database of known objects and properly identify it. Once it has confirmed that the control is indeed a knob, it can figure out how to physically operate it based on all of the similar gizmos in its database, the device’s online instruction manual, and how it understands a person’s use of the gadget.
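At a high level, that lookup-and-transfer step can be pictured as a nearest-neighbor search over a library of previously seen parts. The sketch below is purely illustrative — the feature vectors, labels and function names are hypothetical stand-ins for the team's actual point-cloud features and learned trajectories:

```python
import numpy as np

# Minimal sketch of part-based lookup: each previously seen part is stored as a
# small feature vector (summarizing its shape) alongside the manipulation
# trajectory that worked on it. A newly observed part is matched to its nearest
# neighbor and the stored trajectory is reused.
# All names and numbers here are hypothetical, not from the Cornell code.

known_parts = [
    {"label": "knob",   "features": np.array([0.9, 0.1, 0.2]), "trajectory": "twist_cw"},
    {"label": "lever",  "features": np.array([0.2, 0.8, 0.1]), "trajectory": "push_down"},
    {"label": "nozzle", "features": np.array([0.1, 0.2, 0.9]), "trajectory": "hold_below"},
]

def transfer_trajectory(observed_features):
    """Return the label and trajectory of the closest known part."""
    distances = [np.linalg.norm(observed_features - p["features"]) for p in known_parts]
    best = known_parts[int(np.argmin(distances))]
    return best["label"], best["trajectory"]

# A never-before-seen control whose shape looks knob-like:
print(transfer_trajectory(np.array([0.85, 0.15, 0.25])))  # -> ('knob', 'twist_cw')
```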

The team notes that their focus was on generalizing manipulation trajectories through part-based transfer using point-clouds, without knowing the objects a priori and without assuming any of the sub-steps like approaching and grasping.

“We formulate the manipulation planning as a structured prediction problem and design a deep learning model that can handle large noise in the manipulation demonstrations and learns features from three different modalities: point-clouds, language and trajectory,” the team explains.
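In spirit, that structured prediction boils down to scoring every (point-cloud, language, trajectory) triple and picking the trajectory with the highest score. The toy sketch below mimics the idea with random projection matrices standing in for the learned deep network; none of the names or numbers come from the Cornell code:

```python
import numpy as np

# Illustrative sketch: embed the point-cloud, the language instruction and each
# candidate trajectory, score every triple, and return the best candidate.
# The projection matrices are random placeholders for the learned model.

rng = np.random.default_rng(0)
W_cloud, W_lang, W_traj = rng.normal(size=(3, 8, 16))  # hypothetical projections

def score(cloud_feat, lang_feat, traj_feat):
    """Joint compatibility score of one candidate trajectory."""
    joint = W_cloud @ cloud_feat + W_lang @ lang_feat
    return float(joint @ (W_traj @ traj_feat))

def best_trajectory(cloud_feat, lang_feat, candidates):
    scores = [score(cloud_feat, lang_feat, t) for t in candidates]
    return int(np.argmax(scores))

cloud = rng.normal(size=16)
lang = rng.normal(size=16)
candidates = [rng.normal(size=16) for _ in range(5)]  # crowdsourced demonstrations
print("chosen candidate:", best_trajectory(cloud, lang, candidates))
```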


To help instruct an action, users select one of the preset steps and then navigate a series of options to control the robot’s movements. Ultimately, every user will complete the task slightly differently, thereby building up the droid’s skillset as it draws on hundreds of these instructions. As this database grows, so does its potential to carry out more chores in and around the house.

For each item, the team captures raw RGB-D images through a Microsoft Kinect camera and laser rangefinder, then stitches them with Kinect Fusion to form a denser point-cloud in order to incorporate different viewpoints of the objects. The crowdsourced instructions are translated into coordinates, which the robot uses to plan the trajectory of its arm to control a new machine.
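As a rough illustration of that last step, a handful of crowdsourced waypoints can be densified into a path the arm controller can follow. The coordinates and helper function below are made up for illustration; a real planner would also handle orientation, collision checking and smoothing:

```python
import numpy as np

# Toy sketch: a crowdsourced demonstration yields a few waypoints (x, y, z
# positions relative to the stitched point-cloud), which are linearly
# interpolated into a denser trajectory for the arm to track.

waypoints = np.array([
    [0.30, 0.10, 0.25],   # approach the handle
    [0.30, 0.10, 0.15],   # move down onto it
    [0.30, 0.10, 0.05],   # push the handle down
])

def interpolate(points, steps_per_segment=10):
    """Linearly interpolate between consecutive waypoints."""
    segments = []
    for a, b in zip(points[:-1], points[1:]):
        t = np.linspace(0.0, 1.0, steps_per_segment, endpoint=False)[:, None]
        segments.append(a + t * (b - a))
    segments.append(points[-1][None, :])
    return np.vstack(segments)

trajectory = interpolate(waypoints)
print(trajectory.shape)  # (21, 3) dense positions for the arm to track
```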

“Instead of trying to figure out how to operate an espresso machine, we figure out how to operate each part of it,” the team adds. “In various tests on various machines so far, the robot has performed with 60% accuracy when operating a device that it has never seen before.”

Don’t drink coffee? No need to fret. Since Robobarista can master directions over the Internet via Amazon’s Mechanical Turk, the friendly bot can do a lot more than just make a mean cup o’ joe. In fact, it can fill up a water bottle or pour a bowl of cereal as well. Talk about the perfect Rosie-like robot for the morning rush!

Up until now, robots have typically been configured to complete the same command repeatedly, like the recently-unveiled gadget capable of whipping up dinner by following a set of preprogrammed recipes. However, Cornell’s latest creation has been built to intuitively account for variables and work around them.

If you’ve come by any of our event booths in the past, you know how much we love coffee. Perhaps, we should call upon Robobarista for our next shows! Interested in learning more? Be sure to read the Cornell team’s paper. The student researchers are still working with crowdsourcing to educate their robot, and you can sign up to assist in their efforts here.

SingLock is a pitch-based DIY security system


You’ve got 99 problems, but a pitch ain’t one.


With data breaches on the rise, the inability of passwords to keep online accounts secure is more apparent now than ever before. Instead, the use of multi-factor authentication can add another layer of security to fend off malicious attackers. While smart cards and tokens have been implemented throughout the years, a pair of Cornell students, Sang Min Han and Alvin Wijaya, recently designed their own 2FA system using PINs combined with a form of voice recognition.


The project, which is aptly named SingLock, isn’t as simple as saying a passphrase either. Based on an ATmega1284P, the system features a pair of password protection stages. Not only does a user need to enter a four-digit PIN via the keypad, but, just as its name implies, the user must also sing the correct pitch into the microphone in order to gain entry. And while one may be worried about an attacker eavesdropping and attempting to sing the key themselves, the team has implemented a couple of mechanisms to defend against those situations.

Created as a final project for Bruce Land’s engineering class, the Makers reveal that SingLock is relatively more secure than the average keypad- and/or keyboard-based system. In addition, the sound-based security system doesn’t leave residues — such as heat signatures on a keypad after a button press — that could make the system vulnerable to penetration by outsiders. The system itself comprises three main components — a keypad, an LCD user interface and a microphone — making it simple to use for a wide range of users.

As the duo notes, the keypad and LCD screen serve as the main user interface of the system. Using the keypad, a user is instructed to follow a set of directions provided on the screen in order to lock and unlock the system correctly. Both the LCD and the set of two LEDs serve as indicators of the system’s lock state. Initially, both the red and green LEDs are lit. However, when the system is locked correctly, only the red LED is lit. Conversely, when the system is successfully unlocked, only the green LED illuminates.


“SingLock is built on a few fundamental concepts in signal processing, namely sampling theory and frequency domain analysis of audio signals. Sampling is carried out so that the system operates at a reasonable range of frequencies. Peak-matching calculations performed at every attempt to unlock the system is carried out using the Fast Fourier Transform (FFT) algorithm. We review these signal processing fundamentals in the next section,” the duo explains.

The built-in microphone is responsible for recording the pitch of the user, while the analog acoustic input signal is amplified and filtered to remove ambient noise. This signal is then sampled into a digital signal by the Analog-to-Digital Converter (ADC) of the ATmega1284P. The team then takes the FFT of the sampled signal and matches its peaks against the stored peaks of the passkey in the frequency domain. If a predefined number of the passkey’s stored peaks are found among the frequency peaks of the microphone input signal, the system unlocks. Otherwise, it remains locked.
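A simplified version of that pitch check can be sketched in a few lines: window and FFT the samples, keep the strongest peaks, and count how many land near the enrolled passkey peaks. The sample rate, peak count and tolerance below are illustrative values, not the ones used on the ATmega1284P:

```python
import numpy as np

# Simplified sketch of the pitch check: take an FFT of the microphone samples,
# keep the strongest frequency peaks, and compare them against the peaks stored
# when the passkey was recorded.

FS = 4000          # sampling rate in Hz (illustrative)
N_PEAKS = 3        # how many dominant peaks to keep
TOLERANCE = 15.0   # allowed frequency mismatch in Hz
REQUIRED_MATCHES = 2

def dominant_peaks(samples, fs=FS, n_peaks=N_PEAKS):
    """Return the frequencies of the strongest FFT bins."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    return freqs[np.argsort(spectrum)[-n_peaks:]]

def unlock(attempt_samples, stored_peaks):
    """Unlock if enough attempt peaks fall close to stored passkey peaks."""
    attempt_peaks = dominant_peaks(attempt_samples)
    matches = sum(np.min(np.abs(stored_peaks - p)) < TOLERANCE for p in attempt_peaks)
    return matches >= REQUIRED_MATCHES

t = np.arange(1024) / FS
passkey = dominant_peaks(np.sin(2 * np.pi * 440 * t))   # enroll an A4 hum
print(unlock(np.sin(2 * np.pi * 442 * t), passkey))      # True: close enough
print(unlock(np.sin(2 * np.pi * 523 * t), passkey))      # False: wrong pitch
```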

“Most security systems we find today are keypad and/or keyboard-based. Speech, rather than button-pressing and/or typing, is however the main means of communications for most people. It is therefore intuitive to have speech as the basis of encryption when considering human usability factors and ease-of-access.”

Interested in learning more about the megaAVR-based system? You can read all about the project, including its components, mathematical theory, as well as how to create one for yourself here. In the meantime, be sure to watch its demo below.

Taste the rainbow one color at a time with this sorting machine

What’s better than a mouthful of Skittles, right? When it comes to various-colored candies, such as Skittles and Starburst, there are always those one or two flavors you secretly hope are heavily favored inside the pack. It would seem that many of us tend to love the red, tolerate the orange, and simply leave behind the yellow. Well, a group of Cornell engineering students recently devised a final project that will surely solve that quandary.


With their ECE4760 class coming to an end, the Maker trio devised an ATmega1284-powered Skittle-sorting miniature factory that actually bags and seals same-colored candies into little pouches of flavor. Problem solved!

How it works is relatively simple. The Skittles are loaded into a plastic funnel at the top, where they are fed through a color-detection module one candy at a time — either automatically or manually. Red, green, blue and white light from an RGB LED is reflected off the Skittle, and the color is deciphered by measuring those reflections with an OPT101 photodiode, all driven by an ATmega1284.


“The LED is directed onto the Skittle with a small light block between it and the photodiode. As light hits the Skittle, certain wavelengths are reflected. The wavelength of the Skittle’s color is reflected most strongly. For example, shining a green light onto the green Skittle will reflect more light than shining a green light onto a red Skittle.”
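One way to picture the decision step: record the photodiode reading under each LED color, then pick the candy color whose calibrated reflectance profile is closest. The calibration numbers below are invented for illustration; a real build would measure its own values against its LED and photodiode:

```python
# Illustrative sketch of the color decision: compare the measured reflectance
# vector (readings under red, green, blue and white light) against calibration
# vectors recorded for each Skittle color, and pick the closest match.

CALIBRATION = {
    "red":    (0.80, 0.25, 0.20, 0.60),   # readings under (R, G, B, white) light
    "green":  (0.30, 0.75, 0.30, 0.55),
    "yellow": (0.70, 0.70, 0.25, 0.65),
    "purple": (0.35, 0.25, 0.60, 0.40),
    "orange": (0.75, 0.45, 0.20, 0.60),
}

def classify(reading):
    """Pick the calibrated color whose reflectance profile is closest."""
    def distance(color):
        return sum((a - b) ** 2 for a, b in zip(reading, CALIBRATION[color]))
    return min(CALIBRATION, key=distance)

print(classify((0.78, 0.27, 0.21, 0.58)))  # -> 'red'
```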

Once a color is detected, a solenoid shoots the Skittle down a cardboard ramp which leads the piece of candy through a hole and into its appropriate bag. The ramp’s position is controlled by a servo and changes depending on the color. Once a bag has reached its preconfigured capacity, the packaging wheel rotates through a heat sealer to seal and cut the pouch.
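Tying those steps together, the sorting loop might look something like the toy sketch below, where the detected color selects a servo angle and a per-bag counter triggers sealing once a pouch is full. The angles, capacity and hardware helper functions are hypothetical stand-ins for the team's firmware:

```python
import time

# Toy sketch of the sorting loop. set_servo_angle, fire_solenoid and seal_bag
# stand in for the real hardware drivers.

SERVO_ANGLE = {"red": 20, "green": 55, "yellow": 90, "purple": 125, "orange": 160}
BAG_CAPACITY = 10
bag_counts = {color: 0 for color in SERVO_ANGLE}

def set_servo_angle(angle): print(f"servo -> {angle} deg")
def fire_solenoid():        print("solenoid fired")
def seal_bag(color):        print(f"sealing {color} bag")

def dispense(color):
    set_servo_angle(SERVO_ANGLE[color])   # aim the ramp at the right bag
    time.sleep(0.2)                       # give the servo time to settle
    fire_solenoid()                       # shoot the Skittle down the ramp
    bag_counts[color] += 1
    if bag_counts[color] >= BAG_CAPACITY:
        seal_bag(color)
        bag_counts[color] = 0

dispense("red")
```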


“We chose this project because we liked the multidisciplinary approach it required. There were challenging elements from both an electrical and manufacturing engineering perspective. We needed accurate color sensing, precise servo control, and repeatable timing to ensure the Skittles would sort correctly. In addition, we had to build a mechanical structure capable of passing a single Skittle within fairly strict tolerances. As an added benefit, we acknowledge that many people have Skittle flavor preferences which our mini-factory caters to,” the team writes.

Watch it in action below!

Candy lovers interested in learning more can hurry over to the team’s official project page here. Meanwhile, you may also enjoy this Atmel | SMART SAM D21 based Skittles sorter that was on display at this year’s Electronica.