Tag Archives: Carnegie Mellon University

Students create a rubber band-flinging drone with AVR


When shooting a single rubber band just won’t do, it’s time to build a UAV to do it for you!


For those who may not know, PennApps is the granddaddy of college hackathons, bringing more than 1,200 hobbyists and tinkerers from all over the globe to the campus of the University of Pennsylvania. Students work in teams of up to four people for thirty-six hours to create a web, mobile, wearable or hardware project, and show it off at the final expo, which is open to the public.

A Carnegie Mellon University team going by the name “Bodyguard,” made up of Makers Kumail Jaffer, Angel Zhou, Kyle Guske and John Lore, recently decided to create a rubber band-flinging drone for their PennApps project last fall. To pull it off, the team attached an Arduino Yún (ATmega32U4) to the servo motor that powers the rubber band cannon, connected the board to the drone’s own Wi-Fi network, and relayed fire signals over that link.
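The team hasn’t published their firmware, but a minimal sketch for the Yún side of such a setup might look something like the following: it listens for a REST-style “fire” request coming in over the board’s Bridge connection and sweeps a release servo to let the rubber band fly. The pin number, servo angles and the command name are illustrative assumptions, not details confirmed by the team.

```cpp
// Hypothetical Arduino Yún sketch for a Wi-Fi-triggered rubber band cannon.
// Assumes the release servo's signal wire is on pin 9 and that commands
// arrive as REST-style requests (e.g. http://<yun-address>/arduino/fire)
// forwarded by the Yun's Linux side over the Bridge.
#include <Bridge.h>
#include <YunServer.h>
#include <YunClient.h>
#include <Servo.h>

const int SERVO_PIN  = 9;    // assumed release-servo pin
const int REST_ANGLE = 10;   // servo position holding the band
const int FIRE_ANGLE = 120;  // servo position releasing the band

Servo trigger;
YunServer server;

void setup() {
  Bridge.begin();               // bring up the link to the Yun's Wi-Fi side
  trigger.attach(SERVO_PIN);
  trigger.write(REST_ANGLE);
  server.listenOnLocalhost();   // only accept requests relayed by the Linux side
  server.begin();
}

void loop() {
  YunClient client = server.accept();
  if (client) {
    String command = client.readStringUntil('\r');
    if (command.indexOf("fire") >= 0) {
      trigger.write(FIRE_ANGLE);   // fling the rubber band
      delay(500);
      trigger.write(REST_ANGLE);   // return to the loaded position
      client.print("fired");
    }
    client.stop();
  }
}
```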

You can watch it in action below!

Video: Wobbl is an Arduino-powered conversation table


This project encourages you to put down that phone and enjoy the presence of someone else.


While we may not yet have flying cars, one thing that Back to the Future II foresaw was the fact that one day, we’d all be consumed with technology at the dinner table. It seems that in our constantly-connected world, we’re in front of some sort of screen 24/7. Admit it, at one point or another, you have been so immersed in your phone that you’ve failed to acknowledge the person sitting across from you at the table, be it a friend, family member or significant other. Well, in an effort to spur engagement between two people, a team of Carnegie Mellon University students has developed what they call a conversation table.


Powered by an Arduino Uno (ATmega328), Wobbl is a conceptual approach to how an environment can respond to your decisions with polite commentary. How it works is relatively simple: users set a specific conversation time, say over dinner, and if someone picks up their smartphone within that timeframe, the other person’s end of the table starts to wobble, and vice versa.


The Conversation Table uses analog infrared distance sensors to recognize when a phone is sitting in its designated slot. Removing a phone triggers the other side of the table to shake via two medium-sized continuous rotation servos. These motors drive a bolt into a nut inside a stationary leg, creating a makeshift linear actuator. The table itself is constructed out of laser-cut and hand-stained plywood.
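The students haven’t shared their code, but the control loop could be sketched roughly like this on an Uno: each side’s IR distance sensor is read on an analog pin, and when a phone leaves its slot, the servo on the opposite side is jittered to shake that end of the table. Pin assignments, the detection threshold and the wobble pattern are assumptions for illustration only.

```cpp
// Illustrative sketch for a two-sided "conversation table" on an Arduino Uno.
// Assumed wiring: analog IR distance sensors on A0/A1 watch each phone slot,
// continuous-rotation servos on pins 9/10 drive the makeshift linear actuators.
#include <Servo.h>

const int SENSOR_A = A0;          // IR sensor over person A's phone slot
const int SENSOR_B = A1;          // IR sensor over person B's phone slot
const int PHONE_PRESENT = 300;    // assumed ADC threshold: above = phone in slot

Servo actuatorA;                  // wobbles person A's end of the table
Servo actuatorB;                  // wobbles person B's end of the table

void wobble(Servo &servo) {
  // Jitter a continuous-rotation servo back and forth briefly.
  for (int i = 0; i < 4; i++) {
    servo.write(180);             // spin one way
    delay(150);
    servo.write(0);               // spin the other way
    delay(150);
  }
  servo.write(90);                // 90 = stop for a continuous-rotation servo
}

void setup() {
  actuatorA.attach(9);
  actuatorB.attach(10);
  actuatorA.write(90);
  actuatorB.write(90);
}

void loop() {
  bool phoneA = analogRead(SENSOR_A) > PHONE_PRESENT;
  bool phoneB = analogRead(SENSOR_B) > PHONE_PRESENT;

  // If A picks up their phone, shake B's side of the table, and vice versa.
  if (!phoneA) wobble(actuatorB);
  if (!phoneB) wobble(actuatorA);

  delay(200);
}
```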


“Sometimes, we want to have a conversation with someone that isn’t about reading facts off of Wikipedia, or checking to see what time movies are playing on a theater’s mobile site. Sometimes we just want to talk to the person across from us about them, rather than hear about the person they were texting or be so distracted from your own phone you miss out on what the person is saying. We also want to know that they’re engaged with us whether we’re talking or listening,” the team writes.

Interested in learning more? Head over to the team’s official project page here.

This smartwatch turns your skin into a touchscreen

Developed by Carnegie Mellon University’s Future Interfaces Group, Skin Buttons are touch-sensitive icons projected onto a user’s skin.


While smartwatches are a promising new interactive platform, their small size makes even basic actions cumbersome. As a result, the Carnegie Mellon team has designed a new way to “expand the interactive envelope around smartwatches, allowing human input to escape the small physical confines of the device.”

Using tiny laser projectors integrated into the smartwatch to render touch-sensitive icons expands the interaction region without increasing device size or, more importantly, sacrificing precious real estate on a wearer’s arm.

“Maybe in 15 or 20 years you’ll have a device that’s as powerful as a smartphone but has no screen at all,” explained Chris Harrison, Head of the Future Interfaces Group. “Instead it’s like a little box of matches that you plunk down on the table in front of you and now all of a sudden that table is interactive. Or a watch that’s screen-less. You could just snap your fingers and your whole arm becomes interactive.”

The proof-of-concept implementation can be used for a range of applications, many of which are typically found on a mobile device, such as accessing music, reading emails and text messages, as well as checking the time or setting an alarm.

The prototype smartwatch contains four fixed-icon laser projectors along with accompanying infrared proximity sensors. These are connected to an ATmega328P-based Femtoduino board, which communicates over USB with a host computer. A 1.5-inch TFT LCD display is driven from that same host computer. While the team used an external computer for prototyping, a commercial model would presumably be self-contained.
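The paper doesn’t publish the firmware, but the Femtoduino’s role, polling the four IR proximity sensors and reporting touches over USB serial to the host, could be sketched roughly as follows. The pin choices, threshold value and one-line serial protocol are assumptions made for illustration.

```cpp
// Rough sketch of the sensing side of a Skin Buttons-style prototype on an
// ATmega328P board (e.g. Femtoduino): four IR proximity sensors, one per
// projected icon, are polled and a touch event is reported over USB serial
// to the host computer that drives the watch display.
const int NUM_BUTTONS = 4;
const int SENSOR_PINS[NUM_BUTTONS] = {A0, A1, A2, A3}; // assumed analog inputs
const int TOUCH_THRESHOLD = 500;                        // assumed ADC threshold
bool touched[NUM_BUTTONS] = {false, false, false, false};

void setup() {
  Serial.begin(115200);   // touch events go to the host over USB serial
}

void loop() {
  for (int i = 0; i < NUM_BUTTONS; i++) {
    bool isTouched = analogRead(SENSOR_PINS[i]) > TOUCH_THRESHOLD;
    if (isTouched && !touched[i]) {
      // Rising edge: a finger just landed on projected icon i.
      Serial.print("BTN");
      Serial.println(i);
    }
    touched[i] = isTouched;
  }
  delay(10);   // roughly 100 Hz polling
}
```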

“If you put a button on your skin, you expect people to be like, ‘What the, this is totally insane!’” Harrison told Wired. “But actually people don’t generally react like that. People think it’s cool but they get over the coolness really fast and just start using it.”

Interested in learning more? You can access the team’s entire paper here, or head over to the Future Interfaces Group’s official website.

Re-imagining the radio interface with wood, fabric and Arduino

Audio broadcasting radios have been around since the 1920s. In fact, today’s control interfaces share many of the same knobs, sliders and switches that our ancestors designed nearly 100 years ago. Now, what if we could re-imagine the entire radio control experience to create a more meaningful relationship between the user and the artifact?


Seeking to do just that, Carnegie Mellon University design student Yaakov Lyubetsky has developed a fully-functional prototype of his latest project, The Experimental Form Radio, using an Arduino Uno (ATmega328).


The project features an ATmega328-based board along with a custom circuit made up of three independent layers of conductive fabric and thread. Pressing two of the conductive fabric layers together completes one of twelve circuits, each of which changes either the radio station or the volume.
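Lyubetsky hasn’t shared his firmware, but the fabric switching described above maps naturally onto ordinary digital inputs: each contact point can be wired to a pin with an internal pull-up, with the second fabric layer tied to ground so that a press reads LOW. The pin mapping and the station/volume split in the sketch below are assumptions for illustration, and the serial prints stand in for whatever tuner hardware the radio actually drives.

```cpp
// Illustrative Arduino Uno sketch for a fabric-touch radio control surface.
// Assumed wiring: twelve conductive-fabric contact points on pins 2-13 with
// internal pull-ups; the second fabric layer is tied to ground, so pressing
// the fabric closes the circuit and the pin reads LOW.
const int NUM_CONTACTS = 12;
const int CONTACT_PINS[NUM_CONTACTS] = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13};

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < NUM_CONTACTS; i++) {
    pinMode(CONTACT_PINS[i], INPUT_PULLUP);
  }
}

void loop() {
  for (int i = 0; i < NUM_CONTACTS; i++) {
    if (digitalRead(CONTACT_PINS[i]) == LOW) {       // fabric layers touching
      if (i < 6) {
        // First six contacts: a light press along the surface changes station.
        Serial.print("STATION ");
        Serial.println(i);
      } else {
        // Remaining contacts: a firmer press into the cavity adjusts volume.
        Serial.print("VOLUME ");
        Serial.println(i - 6);
      }
    }
  }
  delay(50);   // simple debounce / polling interval
}
```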

“When The Experimental Form Radio is laying on a tabletop, it is off. To turn the radio on, you pick it up and slot it onto a wall mount. The radio leverages the elastic qualities of fabric to control stations and volume,” Lyubetsky explains. “To change stations you press lightly and slide your finger along the fabric surface. To change the volume you press firmly into the fabric, and then slide your finger along the deeper cavity in the radio.”

As the Maker points out, the visual and auditory feedback allows the user to have a clear understanding of the system state.

“The soft and stretchy material qualities of fabric create a control system that is inviting and pleasurable for the user. The strength of the user’s push as well as the cast shadow on the fabric creates tangible feedback for the user to have better control of the tuning and volume.”

To explore Lyubetsky’s efforts to re-imagine the way we interact with radios, you can tune-in to his project page here.

Making 3D manipulation of 2D images a reality

Researchers at Carnegie Mellon University are attempting to make a task, once only possible in science fiction movies, an everyday occurrence. Using basic 3D models, the group hopes their new software can lead to full 3D manipulation of traditional 2D photos.


Led by Associate Researcher Yasser Sheikh, the Carnegie Mellon team has created software that takes freely available stock 2D images of everyday objects, such as furniture, cars, clothing and appliances, and turns them into 3D models. These models can then be manipulated in 3D space as the user desires. The software can also adapt the image’s lighting and blending after the selected object is transformed.

The software relies on a massive library of stock 3D renderings and images to function. Once a user selects all or a portion of the 2D image to manipulate, the software draws on the catalog to create a simple 3D rendering that the user can then transform. Once the model is created, the photo can be completely re-imagined.

The researchers believe that as 3D scanning and printing becomes more widespread, more stock models will become available for the software to tap into, filling the gaps in its database. “In the real world, we’re used to handling objects — lifting them, turning them around or knocking them over,” Robotics Institute PhD student Natasha Kholgade told CNET.

Pointing to the rising accessibility of 3D renderings of everyday objects, the authors note: “Public repositories of 3D models are growing rapidly and several Internet companies are currently in the process of generating 3D models for millions of merchandise items, such as toys, shoes, clothing, and household equipment. It is therefore increasingly likely that for most objects in an average user photograph a stock 3D model will soon be available, if it is not already.”

If this new software proves viable, it could become an invaluable resource for the 3D printing community. If you want to read the full report from the research team, the document can be found here.