Tag Archives: robotics

ARbot lets you have virtual tank battles and robot races anywhere


This project lets you partake in virtual tank-like battles of up to 64 players throughout your office, house or pretty much anywhere.


Created by Denis Kurilchik and the Roboboom team, the ARbot project consists of a spherical robot and a mobile app that bridges the gap between real and virtual worlds, allowing users to partake in tank-like battles of up to 64 players throughout their office, at home, at school or pretty much anywhere they want.


Not just an ordinary radio-controlled toy, the ARbot is comprised of two hemispherical wheels that operate in unison, using a shifted center of gravity to overcome any number of obstacles. The multi-directional bot houses an AVR-based system board along with an electric motor, and is charged via micro-USB. The battery itself typically lasts between one and three hours in active mode, which is plenty for some lunch or coffee break fun.

Meanwhile, an accompanying app (available for iOS, Android and Windows) connects with the robot over Bluetooth, enabling some friendly competition. The program currently features two game modes: single-player and multi-player. As its name would suggest, multi-player lets users do battle against other ARbot owners — whether that’s during lunch, in between classes, after work, or downstairs in the basement.
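Roboboom hasn’t published the ARbot’s firmware or its Bluetooth protocol, but as a rough sketch of what such a drive loop could look like on an AVR-class board, consider this hypothetical single-byte command scheme (all pins, commands and the UART choice are invented for illustration):

```cpp
// Hypothetical sketch: not ARbot's actual firmware or protocol.
// Maps single-byte Bluetooth commands to the two hemispherical
// wheel motors of an AVR-based ball robot.
#include <Arduino.h>

const int LEFT_PWM = 5;    // assumed motor driver pins
const int RIGHT_PWM = 6;
const int LEFT_DIR = 7;
const int RIGHT_DIR = 8;

void drive(int left, int right) {
  // Negative values reverse a wheel; unequal speeds shift the
  // center of gravity and steer the sphere.
  digitalWrite(LEFT_DIR, left >= 0 ? HIGH : LOW);
  digitalWrite(RIGHT_DIR, right >= 0 ? HIGH : LOW);
  analogWrite(LEFT_PWM, abs(left));
  analogWrite(RIGHT_PWM, abs(right));
}

void setup() {
  pinMode(LEFT_DIR, OUTPUT);
  pinMode(RIGHT_DIR, OUTPUT);
  Serial1.begin(9600);   // assumes a Bluetooth module on a hardware UART
}

void loop() {
  if (!Serial1.available()) return;
  switch (Serial1.read()) {             // one byte per command
    case 'F': drive(200, 200); break;   // forward
    case 'B': drive(-200, -200); break; // backward
    case 'L': drive(80, 200); break;    // arc left
    case 'R': drive(200, 80); break;    // arc right
    default:  drive(0, 0);              // stop on anything else
  }
}
```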


Regardless of the mode, ARbot strives to blur the lines between a user’s real world and augmented reality. That means the living room or office floor suddenly transforms into a battlefield, with shoes, bags and sofas becoming barriers. Multi-player mode, however, requires at least two robots to be controlled via the ARtank app. These devices are then paired, so that all competitors view the same images on their screens.

So whether it’s for a teenager or simply a child at heart, this project makes for an entertaining and interactive gadget for anyone. Aside from tanks, the ARbot can also be used as a remote-controlled racing vehicle capable of reaching speeds of approximately three feet per second.


Available in white, green, purple, pink, yellow and black, there’s an ARbot to suit every style. Those looking for an even more durable, modern-looking gizmo — and willing to shell out some serious cash — may also want to check out the team’s special carbon edition, equipped with wireless charging. In the future, the team plans to integrate the robot with wearables, such as Google Glass, to provide a more immersive experience.

Sound like something for you? Head over to the ARbot project’s official Indiegogo page, where the team is currently seeking $32,000. Shipment is expected to kick off in January 2016.

Single chip MCU + DSP architecture for automotive = SAM V71


Automotive applications ship in production volumes of millions of units per year, and cost is a crucial factor when deciding on an integrated solution.


It’s all about cost of ownership (CoO) and system-level integration. If you target automotive-related applications, like audio or video processing or system control (motor control, inverters, etc.), you need to integrate a high-performance MCU with a DSP. In fact, if you expect your system to support an Audio Video Bridging (AVB) MAC on top of the targeted application and to earn automotive qualification, the ARM Cortex-M7 processor-based Atmel SAM V70/71 should be your selection: it offers the fastest clock speed of its kind (300 MHz), integrates DSP extensions and a Floating Point Unit (FPU), supports AVB and is qualified for automotive.

Let’s have a closer look at the SAM V71 internal architecture, shall we?

A closer look at the Atmel | SMART ARM Cortex-M7 based SAM V71 internal architecture.

When developing a system around a microcontroller unit, you expect this single chip to support as many peripherals as your application needs in order to minimize the global cost of ownership. That’s why you see the long list of system peripherals (top left of the block diagram). Meanwhile, the Atmel | SMART SAM V71 is dedicated to supporting automotive infotainment applications, hence the dual CAN and Ethernet MAC (bottom right). If we delve deeper into these functions, we can list the supported features, with a rough usage sketch after the list:

  • 10/100 Mbps, IEEE 1588 support
  • MII (144-pin), RMII (64-, 100-, 144-pin)
  • 12 KB SRAM plus DMA
  • AVB support with 802.1Qav and 802.1Qas hardware support for audio traffic
  • 802.3az energy efficiency support
  • Dual CAN-FD
  • Up to 64 SRAM-based mailboxes
  • Wake-up from sleep or wake-up modes on RX/TX
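The mailbox-based CAN controller deserves a closer look. The sketch below outlines how firmware might receive a filtered ID in one SRAM-based mailbox and answer with a CAN-FD frame from another. The helper functions are hypothetical stand-ins for the real MCAN driver, left as declarations rather than Atmel’s actual API:

```cpp
// Hypothetical outline, not the real MCAN driver API: receive a
// filtered ID in one SRAM-based mailbox and answer with a CAN-FD
// frame from another.
#include <cstdint>

struct CanFrame {
  uint32_t id;
  uint8_t  len;          // CAN-FD payloads can reach 64 bytes
  uint8_t  data[64];
};

// Assumed wrapper layer over the MCAN peripheral (declarations only).
void can_init_fd(uint32_t nominal_bps, uint32_t data_bps);
void can_mailbox_set_rx_filter(int mailbox, uint32_t id, uint32_t mask);
bool can_mailbox_read(int mailbox, CanFrame &out);   // true if a frame arrived
void can_mailbox_write(int mailbox, const CanFrame &frame);

int main() {
  can_init_fd(500000, 2000000);  // 500 kbit/s arbitration, 2 Mbit/s data phase
  can_mailbox_set_rx_filter(0, 0x120, 0x7FF);  // mailbox 0: accept ID 0x120 only

  CanFrame rx;
  for (;;) {
    if (can_mailbox_read(0, rx)) {  // polling here; an IRQ would be typical
      rx.id = 0x121;                // reply on a neighboring ID
      can_mailbox_write(1, rx);     // transmit from mailbox 1
    }
  }
}
```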

The automotive-qualified SAM V70 and V71 series also offer high-speed USB with an integrated PHY and MediaLB, which, when combined with the Cortex-M7 DSP extensions, make the family ideal for infotainment connectivity and audio applications. Let’s take a look at this DSP benchmark:


ARM CM7 Performance normalized relative to SHARC (Higher numbers are better).

If you are not limited by budget considerations and can afford integrating a standalone DSP alongside an MCU, you will probably select the SHARC 21489 DSP (from Analog Devices), which offers best-in-class benchmark results for FIR, Biquad and real FFT. However, such performance has a cost, not only monetarily but also in terms of power consumption and board footprint — we can call that “cost of ownership.” Automotive applications ship in volumes of millions of units per year, and cost is absolutely crucial in this market segment, which strongly favors an integrated solution.

To support audio or video infotainment applications, you expect the DSP integrated in the Cortex-M7 to be “good enough,” and these benchmark results show that it is. For Biquad, for example, the ARM CM7 is equal to or better than every other DSP listed (TI C28, Blackfin 50x or 70x) except the SHARC 21489… but much cheaper! Good enough means that the SAM V70 will support automotive audio (Biquad in this case) and keep enough DSP headroom to support the Ethernet MAC (10/100 Mbps, IEEE 1588).
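To make the workload concrete, here is a minimal CMSIS-DSP rendition of the benchmarked Biquad case: a two-stage IIR filter running on the Cortex-M7’s FPU. The coefficient values are placeholders rather than a filter designed for any particular application, and note that CMSIS-DSP expects the feedback coefficients already negated:

```cpp
// Minimal CMSIS-DSP sketch of the benchmarked Biquad workload on the
// Cortex-M7. Placeholder low-pass coefficients, not a designed filter.
#include "arm_math.h"

#define NUM_STAGES 2
#define BLOCK_SIZE 64

// Five coefficients per stage: b0, b1, b2, a1, a2.
// CMSIS-DSP stores the feedback terms a1/a2 already negated.
static float32_t coeffs[5 * NUM_STAGES] = {
  0.2929f, 0.5858f, 0.2929f, 0.0f, -0.1716f,
  0.2929f, 0.5858f, 0.2929f, 0.0f, -0.1716f,
};
static float32_t state[4 * NUM_STAGES];   // four state words per stage
static arm_biquad_casd_df1_inst_f32 filter;

void filter_init(void) {
  arm_biquad_cascade_df1_init_f32(&filter, NUM_STAGES, coeffs, state);
}

// Process one block of samples; on the M7 this compiles to fused
// multiply-accumulates on the FPU, which is what the benchmark measures.
void filter_block(float32_t *in, float32_t *out) {
  arm_biquad_cascade_df1_f32(&filter, in, out, BLOCK_SIZE);
}
```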


Ethernet AVB Architectures (SAM V71)

In the picture above, you can see the logical SAM V71 architectures for Ethernet AVB support and how the DSP capabilities can be used for a Telematics Control Unit (TCU) or an audio amplifier.

Integrating a DSP means that you need to develop the related DSP code. Because the DSP is tightly integrated into the ARM CM7 core, you may use the MCU development tools (rather than dedicated DSP tools) to develop that code. Since February, the ATSAMV71-XULT (the full-featured SAM V71 Xplained Ultra evaluation kit, with a software package providing basic drivers, software services and libraries for Atmel’s SAM V71, V70, E70 and S70 Cortex-M7 based microcontrollers) has been available from Atmel. As this board is built around the feature-rich SAM V71, you can develop your automotive application on exactly the same MCU architecture as the part going into production.

The SAM V71 Xplained Ultra evaluation kit, built around the Atmel | SMART ARM Cortex-M7 based SAM V71.

Interested? More information on this eval/dev board can be found here.


This post has been republished with permission from SemiWiki.com, where Eric Esteve is a principal blogger as well as one of the four founding members of SemiWiki.com. This blog first appeared on SemiWiki on April 29, 2015.

These robots will slide under your car and move it


Sure, there are self-parking cars, but what about autonomous robots that can move your parked car?


A team of European researchers has developed a swarm of small robots and a deployment unit that can autonomously extract and move vehicles weighing up to two tons. Dubbed the Autonomous Multi-Robot System for Vehicle Extraction and Transportation — or AVERT for short — the system was designed for use by law enforcement.


As its name implies, the solution requires very little human interaction and is comprised of three separate subsystems: a deployment unit, a set of bogies and a remote command center. How it works is pretty straightforward: The deployment unit is equipped with a digital camera and a SICK laser scanner, and is tasked with mapping out an area and scouting for potential obstacles in order to plan its safest route. This unit then releases four small bogies which, operating as a swarm, navigate over to the vehicle using on-board sensors to avoid obstacles, detect the tires and dock themselves to the vehicle.

Once the bogies are in position under the car’s footprint, the vehicle is hoisted just an inch or so off the ground and carried away by the robots. Meanwhile, a graphical user interface (GUI) provides users with all of the necessary information and on-demand interaction during the deployment and operation of the system.
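AVERT’s actual route planner is far more sophisticated, but a simple greedy heuristic illustrates the kind of decision the deployment unit makes from a laser scan: pick the heading whose window of beams has the most clearance. Every number below is an assumption for illustration:

```cpp
// Illustrative only; AVERT's planner is far more sophisticated.
// Given a 180-degree laser scan, pick the heading whose window of
// beams has the largest minimum clearance (a common greedy heuristic).
#include <algorithm>
#include <cstddef>

const std::size_t BEAMS = 181;    // one range reading per degree (assumed)
const std::size_t WINDOW = 15;    // assume the unit spans ~15 degrees of scan

// Returns the best heading in degrees (0..180), or -1 if nothing is safe.
int pickHeading(const float range_m[BEAMS], float min_clear_m) {
  int best = -1;
  float bestClear = min_clear_m;
  for (std::size_t start = 0; start + WINDOW <= BEAMS; ++start) {
    // A window is only as clear as its closest beam.
    float clear = *std::min_element(range_m + start, range_m + start + WINDOW);
    if (clear > bestClear) {
      bestClear = clear;
      best = static_cast<int>(start + WINDOW / 2);  // center of the window
    }
  }
  return best;
}
```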


As awesome as AVERT would be to rescue society from bad parallel parking situations, it was specifically developed for use by police officers — especially in scenarios which require the extraction of suspicious vehicles from within buildings, parking garages and other tight places where a tow truck is not accessible, or to transport cars suspected of being rigged with explosives to a safer location.

The team has been developing the technology since 2012 and believes a production model could be ready by next year. Even better, a member of the project has informed us that an Arduino Mega (ATmega2560) can be found at the core of AVERT. They will be showcasing the system at the upcoming International Conference on Robotics and Automation in Seattle.

That’s enough yapping from us; you’ll have to see it to believe it! Watch below!

DORA is an immersive teleoperated robotic platform


DORA is bridging the gap between immersive virtual simulations and real world physical telepresence.


Telepresence robots have been used in a wide range of applications, from remotely attending a meeting and visiting a museum to exploring space and scoping out a battlefield. As the name implies, these machines make it seem as though the user is standing in a distant location by letting them navigate an environment via a robotic surrogate. Yet, despite advancements in technology, the experience is still not exactly like real life. That may soon all change, especially if left in the hands of University of Pennsylvania engineers.


One team of researchers has set out to revolutionize telepresence robotics by building what they call the Dexterous Observational Roving Automaton (DORA), which works with the Oculus Rift VR headset to establish a groundbreaking physical-virtual interface. Nowadays, most commercial devices are merely screens or tablets on moving platforms. However, DORA aspires to make it feel as though a user has actually been transported to another place.

In an effort to offer such an immersive experience, the remote robot is equipped with a pair of cameras that not only stream three-dimensional views of its terrain, but also look up/down, forward/backward and left/right as the VR headset wearer turns their own head. This is accomplished by precisely matching the movements of the wearer’s neck in all six degrees of freedom through an inertial measurement unit (IMU) and infrared beacon tracking. That data is wirelessly transmitted to the robot’s embedded Atmel based Arduino and Intel Edison boards, prompting its camera-equipped head to mimic the motions of the user.
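The team hasn’t released DORA’s source, but the receiving side of such a head-mimicking link can be sketched in a few lines of Arduino code. Assume a simplified two-angle protocol (pan and tilt only, where the real robot matches all six degrees of freedom) and hypothetical servo pins:

```cpp
// Sketch of the receiving side under an assumed two-angle protocol
// ("P<pan>T<tilt>\n", e.g. "P95T60"); a simplified illustration, not
// DORA's actual firmware.
#include <Arduino.h>
#include <Servo.h>

Servo pan, tilt;

void setup() {
  pan.attach(9);          // assumed neck servo pins
  tilt.attach(10);
  Serial.begin(115200);   // angles arriving over the wireless link
}

void loop() {
  if (Serial.available() && Serial.read() == 'P') {
    int p = Serial.parseInt();            // pan angle, 0..180
    if (Serial.read() == 'T') {
      int t = Serial.parseInt();          // tilt angle, 0..180
      pan.write(constrain(p, 0, 180));    // mirror the wearer's head yaw
      tilt.write(constrain(t, 0, 180));   // ...and pitch
    }
  }
}
```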


“DORA is based upon a fundamentally visceral human experience—that of experiencing a place primarily through the stimulation of sight and sound, and that of social interaction with other human beings, particularly with regards to the underlying meaning and subtlety associated with that interaction. At its core, the DORA platform explores the question of what it means to be present in a space, and how one’s presence affects the people and space around him or her,” its creators tell IEEE Spectrum.

Still in its prototyping stage, DORA operates over a radio link with a line-of-sight range of just over four miles. However, the team is looking to improve its responsiveness to a lag of less than 60 ms and to transition to Wi-Fi or 4G connections. This would allow the system to be used in a variety of settings, such as virtual tourism, emergency response, and maybe one day even video chat.

Intrigued? Head over to the team’s official page to explore the project in more detail.

These tiny robots can carry loads 100 times their weight


Inspired by a gecko, one tiny bot can pull objects that are nearly 2,000 times heavier than itself. 


Whoever said big things can’t come in small packages has surely never seen these robots. That’s because Stanford University engineers have built miniature bots capable of hauling things that weigh over 100 times more than themselves.


Impressively, the strongest of the bots — which are aptly named MicroTugs — weighs only 12 grams yet is capable of pulling objects that are nearly 2,000 times its weight, while another, a 9-gram climbing robot, can carry over a kilogram vertically up glass. To put these feats into perspective, co-creator David Christensen says they are the equivalent of a person dragging a blue whale and of climbing up a skyscraper while lugging an elephant, respectively. Even a 20-milligram bot can tote up to 500 milligrams, roughly the weight of a paper clip.

How can this be, you ask? The robots borrow techniques from inchworms and geckos as they traverse their terrain. Inspired by the gecko, the engineers covered the robots’ feet with tiny rubber spikes that bend when pressure is applied and straighten out when the robot picks its foot back up. The team of researchers also adopted the inchworm’s method of locomotion: while one half of its body moves forward, the other stays in place to support the heavy load being pulled. This allows the bot to climb walls without losing its grip, New Scientist explains.
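As a thought experiment, that alternating gait reduces to a two-phase state machine: one adhesive foot always anchors the payload while the other half of the body advances. This toy model is not Stanford’s controller, but it makes the trade-off explicit — progress comes one small stride per cycle, yet the load is never unsupported:

```cpp
// Conceptual two-phase inchworm gait, not Stanford's controller:
// one adhesive foot always anchors the payload while the other half
// of the body advances, so the load is never unsupported.
#include <cstdio>

enum class Phase { FrontAnchored, RearAnchored };

struct Tug {
  Phase phase = Phase::FrontAnchored;
  double position_m = 0.0;

  void step(double stride_m) {
    if (phase == Phase::FrontAnchored) {
      // Front spikes are loaded and gripping: drag the rear half up.
      phase = Phase::RearAnchored;
    } else {
      // Rear half now holds the payload; the front reaches ahead and
      // presses down so its spikes engage before the roles swap again.
      position_m += stride_m;
      phase = Phase::FrontAnchored;
    }
  }
};

int main() {
  Tug tug;
  for (int i = 0; i < 6; ++i) {   // three full gait cycles
    tug.step(0.005);              // 5 mm per half-cycle: slow but steady
    std::printf("phase %d, position %.3f m\n",
                static_cast<int>(tug.phase), tug.position_m);
  }
}
```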

“This work demonstrates a new type of small robot that can apply orders of magnitude more force than it weighs. This is in stark contrast to previous small robots that have become progressively better at moving and sensing, but lacked the ability to change the world through the application of human-scale loads,” the pair of engineers write.


Just think: a robot bringing your coffee across the desk when it’s out of reach, or picking up a pen that was dropped on the floor. That’s not the end game, though. In the future, the team hopes that machines like these could prove useful in factories, on construction sites and even in emergency scenarios. For instance, one might carry a rope ladder up to a person trapped on a high floor of a burning building.


The mighty bots will be presented next month at the International Conference on Robotics and Automation in Seattle. Intrigued? Delve deeper into the Stanford engineers’ research and development here, and be sure to watch them in action below!

Robobarista can learn how to make your morning latte


The best part of waking up is a robot filling your cup! 


Developed by researchers at Cornell University, the aptly named Robobarista may appear to be just an ordinary robot, but it packs the skills of a talented Starbucks barista. Impressively, it is capable of learning how to intuitively operate machines by following the same methods a human would when introduced to a device, like a coffeemaker.


The Robobarista can autonomously make an espresso, as well as carry out other mundane tasks, using instructions provided by Internet users. To do this, the team had to first collect enough crowdsourced information from online volunteers to teach the robot how to manipulate objects it had never seen before. The Robobarista then reads these instructions — such as “hold the cup of espresso below the hot water nozzle and push down the handle to add hot water” — and completes the command using its database of crowdsourced demonstrations and deep learning algorithms.

“In order for robots to interact within household environments, robots should be able to manipulate a large variety of objects and appliances in human environments, such as stoves, coffee dispensers, juice extractors, and so on,” the team writes. “Consider the espresso machine above — even without having seen the machine before, a person can prepare a cup of latte by visually observing the machine and by reading the instruction manual. This is possible because humans have vast prior experience of manipulating differently-shaped objects.”

Robobarista’s functionality is based on a two-step process. Generally speaking, the idea is to get the robot to recognize certain parts — including buttons, handles, nozzles and levers — and produce results similar to those of its human counterparts. This way, when it sees a knob, for instance, the robot can scan through its database of known objects and properly identify it. Once it has confirmed that the control is indeed a knob, it can figure out how to physically operate it based on all of the similar controls in its database, the device’s online instruction manual, and its understanding of how a person uses the gadget.
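A toy version of that recognize-then-transfer idea: match a scanned part against a small library by feature distance, then reuse the stored manipulation steps. The real system learns its representations from point-clouds with a deep network; every name and number below is invented for illustration:

```cpp
// Toy recognize-then-transfer: match a scanned part to the library by
// feature distance and reuse the stored steps. Robobarista learns the
// features with a deep network; these vectors are invented.
#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

struct KnownPart {
  std::string name;                       // e.g. "knob", "lever"
  std::vector<float> feature;             // embedding of its point-cloud
  std::vector<std::string> trajectory;    // stored manipulation steps
};

float distance(const std::vector<float>& a, const std::vector<float>& b) {
  float d = 0;
  for (std::size_t i = 0; i < a.size(); ++i)
    d += (a[i] - b[i]) * (a[i] - b[i]);
  return std::sqrt(d);
}

const KnownPart& closest(const std::vector<KnownPart>& lib,
                         const std::vector<float>& scanned) {
  std::size_t best = 0;
  for (std::size_t i = 1; i < lib.size(); ++i)
    if (distance(lib[i].feature, scanned) <
        distance(lib[best].feature, scanned))
      best = i;
  return lib[best];
}

int main() {
  std::vector<KnownPart> lib = {
    {"knob",  {0.9f, 0.1f}, {"grasp rim", "rotate clockwise"}},
    {"lever", {0.1f, 0.8f}, {"hook end", "pull down"}},
  };
  std::vector<float> scanned = {0.8f, 0.2f};   // this one looks knob-like
  for (const auto& step : closest(lib, scanned).trajectory)
    std::printf("%s\n", step.c_str());
}
```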

The team notes that their focus was on generalizing manipulation trajectories through part-based transfer using point-clouds, without knowing the objects a priori and without assuming any of the sub-steps, like approaching and grasping.

“We formulate the manipulation planning as a structured prediction problem and design a deep learning model that can handle large noise in the manipulation demonstrations and learns features from three different modalities: point-clouds, language and trajectory,” the team explains.


To help instruct an action, users select one of the preset steps and then navigate a series of options to control the robot’s movements. Ultimately, every user will complete the task slightly differently, building up the droid’s skillset as it draws on hundreds of these instructions. As this database grows, so does its potential to carry out more chores in and around the house.

For each item, the team captures raw RGB-D images with a Microsoft Kinect camera and a laser rangefinder, then stitches them together with Kinect Fusion to form a denser point-cloud that incorporates different viewpoints of the object. The crowdsourced instructions are translated into coordinates, which the robot uses to plan the trajectory of its arm when controlling a new machine.

“Instead of trying to figure out how to operate an espresso machine, we figure out how to operate each part of it,” the team adds. “In various tests on various machines so far, the robot has performed with 60% accuracy when operating a device that it has never seen before.”

Don’t drink coffee? No need to fret. Since Robobarista can master directions over the Internet via Amazon’s Mechanical Turk, the friendly bot can do a lot more than just make a mean cup ‘o joe. In fact, it can fill up a water bottle or pour a bowl of cereal as well. Talk about the perfect Rosie-like robot for the morning rush!

Up until now, robots have typically been configured to complete the same command repeatedly, like the recently-unveiled gadget capable of whipping up dinner by following a set of preprogrammed recipes. However, Cornell’s latest creation has been built to intuitively account for variables and work around them.

If you’ve come by any of our event booths in the past, you know how much we love coffee. Perhaps, we should call upon Robobarista for our next shows! Interested in learning more? Be sure to read the Cornell team’s paper. The student researchers are still working with crowdsourcing to educate their robot, and you can sign up to assist in their efforts here.

This Arduino-based robot responds to simple voice commands


After being disappointed with a robotic arm he received for the holidays, one Maker decided to build his own bot instead.


With Maker Faire season in full swing, we just can’t seem to get enough of robotic creations. Recently, we came across a pretty sweet project from John Fin, who devised a voice-controlled, Arduino-based bot of his own.


The idea was initially conceived after the Maker received a robotic arm for Christmas and was displeased with its quality. Three months later, he had a fully-functioning bot navigating through his garage. The robot, which goes by the name of S.P.A.R.C., is powered by an Arduino Uno (ATmega328) with an EasyVR voice control shield connected to an Arduino Duemilanove (ATmega328) with a motor shield.

S.P.A.R.C. works by accepting simple commands, generally formatted by naming the object to be manipulated, then a desired action, followed by a number. For instance, “Forward 1… Turn right 1…. Arm up 7…. Wrist down 6.” In true robotic fashion, it even emits a friendly beep between each segment of the command. As Fin explains, the reason for saying a number is to give the operator incremental control.
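Fin hasn’t published his sketch, but the parsing side of such a segmented command scheme is easy to picture. In this hypothetical fragment, the EasyVR recognizer is abstracted behind an assumed nextWord() helper, and the motion primitives are likewise stand-ins:

```cpp
// Hypothetical parser for S.P.A.R.C.-style segmented commands; the
// recognizer and motion primitives are assumed stand-ins, not Fin's
// actual sketch.
#include <Arduino.h>

String nextWord();              // assumed: blocks until a word is recognized
void beep();                    // assumed: acknowledgment tone per segment
void driveForward(int steps);   // assumed motion primitives
void moveArm(int dir, int steps);

void setup() {}

void loop() {
  String object = nextWord(); beep();       // e.g. "forward" or "arm"
  if (object == "forward") {
    driveForward(nextWord().toInt());       // "forward 1"
    beep();
  } else if (object == "arm") {
    String action = nextWord(); beep();     // "up" or "down"
    moveArm(action == "up" ? +1 : -1, nextWord().toInt());  // "arm up 7"
    beep();
  }
}
```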


So what exactly can S.P.A.R.C. do? For starters, the bot can move its arm up and down, grasp and release items, and follow and stop on a dime, among many other things. Not only can it be controlled by voice, but an operator can use a remote as well. Beyond that, S.P.A.R.C. can respond to inquiries in Siri-like fashion, ranging from a simple “What’s my name?” to the weather forecast — all made possible through an on-board Galaxy S2 smartphone running the Dragon Assistant app. This gives the robot Internet access and “a little extra personality,” too.

Aside from its embedded Arduino boards and smartphone, S.P.A.R.C. is equipped with an extendable mast to bring items up and down, a sonar for measuring distance and detecting obstructions, movable lights, an arm with changeable grippers, a toolbox for whatever, a built-in Bluetooth-enabled speaker, a power inverter, and of course, two wheels for ultimate 360-degree mobility.

Pretty cool, right? Watch S.P.A.R.C. in action below!

PLEN2 is the world’s first printable, open-source robot


Say hello to your new robotic sidekick. 


R2-D2. GERTY 3000. Marvin. K-9. Jinx. These are just a few of the most well-known robotic sidekicks that super geeks like us have come to love over the years. Soon, PLEN2 may join the ranks of these memorable sci-fi characters, with the only difference being actual use in the real world. Whether you’ve ever wanted someone to go to class in your place, to break the ice with an attractive girl at the bar, or to fetch your morning cup ‘o joe, you’re in luck.

Launched on Kickstarter by the Japan-based PLEN Project Committee, the 3D-printable, humanoid robotic kit consists of a control board, servo motors and other electronic accessories that Makers of all levels can put together themselves. What’s more, you don’t need any technical knowledge or special tools to bring your open-source PLEN2 to life.


3D data for the main components of the robot is provided free of charge, and with the help of a 3D printer, users can customize that data as well as make their own original parts. Upon completion, the easy-to-maneuver and highly-agile humanoid stands approximately 7.87” tall, weighs just over 21 ounces and boasts 18 degrees of freedom. Designed to mirror its human counterpart, PLEN2 aspires to revolutionize the relationship between homo and robo sapiens. To help spur this adoption, the project’s creators have made the kit super simple to assemble, personalize, and of course, use.


The robot’s command center is built around an Arduino Micro (ATmega32U4), and by employing some open-source software, it can be programmed to meet any Maker’s wants and needs. PLEN2 is equipped with 24 RC servo motors, 1Mb of on-board EEPROM and an RS-485 communication port in both its control board and head board. The head unit also comes standard with a BLE113 Bluetooth Smart module and a six-axis motion sensor, while PWM drives the LEDs that PLEN2 uses for eyes.
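One detail that’s easy to reproduce at home is the PWM-driven LED eyes. A minimal Arduino Micro sketch for a breathing fade might look like this (the pin choice is an assumption, not PLEN2’s actual wiring):

```cpp
// Minimal breathing-fade sketch for PWM-driven LED eyes on an
// ATmega32U4 (Arduino Micro). The pin is an assumption, not PLEN2's
// actual wiring.
#include <Arduino.h>

const int EYE_PIN = 9;   // any PWM-capable pin

void setup() {
  pinMode(EYE_PIN, OUTPUT);
}

void loop() {
  for (int duty = 0; duty <= 255; ++duty) {   // brighten
    analogWrite(EYE_PIN, duty);
    delay(4);
  }
  for (int duty = 255; duty >= 0; --duty) {   // dim
    analogWrite(EYE_PIN, duty);
    delay(4);
  }
}
```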


Gadget-lovers can take pleasure in knowing that each PLEN2 can be customized not only in color and design, but in the way that it is controlled as well — this includes by iOS or Android smartphone, facial expression, gestures, myoelectrics and brainwaves, among countless other input methods.


Not just for leisure activities, the humanoid can play an integral role in both educational and medical settings. A wide range of use cases includes communicating with others in your place, carrying small items around, throwing together a pickup game of humanoid soccer, and improving medical rehabilitation. What’s more, it can help entice children to pursue STEM disciplines and enable them to experience the joy of making things themselves.

As to whether this project takes off, or if you decide on programming a PLEN2 of your own, one thing is certain: Its theme song will get stuck in your head. Consider yourself warned…

…We told you so. Interested in learning more? Head over to its official Kickstarter page, where its team is currently seeking $40,000. If all goes to plan, you can have a PLEN2 alongside you come November 2015.

Google patents customizable robot personalities


Newly-patented system would allow users to download the personality of a celebrity or a deceased loved one to a robot.


Google has been granted a patent that would allow the company to develop downloadable personalities for robots drawn from the cloud, such as your favorite celebrity or even a deceased loved one.


“The robot personality may also be modifiable within a base personality construct (i.e., a default-persona) to provide states or moods representing transitory conditions of happiness, fear, surprise, perplexion (e.g., the Woody Allen robot), thoughtfulness, derision (e.g., the Rodney Dangerfield robot), and so forth,” the filing reveals.

Just as you would download an app, Google’s patent details how a user could download various actions and personalities. The robot would use information from a person’s mobile devices, such as calendar entries, emails, text messages, call logs, web browsing history and TV viewing schedule, to determine a personality that would suit the user. Beyond that, friends will even be able to clone their robots and exchange aspects of their personalities.
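The patent describes a mechanism more than an implementation, but the gist (score downloadable personas against signals mined from the user’s devices, then adopt the best fit) can be sketched in a few lines. Everything below — names, signals, weights — is invented for illustration:

```cpp
// Speculative sketch of the patent's mechanism: score downloadable
// personas against signals mined from the user's devices and adopt
// the best fit. All names, signals and weights are invented.
#include <cstdio>
#include <map>
#include <string>

using Signals = std::map<std::string, float>;   // e.g. {"rainy": 1.0}

struct Persona {
  std::string name;
  Signals affinity;   // how well this persona suits each signal

  float score(const Signals& user) const {
    float s = 0;
    for (const auto& kv : user) {
      auto it = affinity.find(kv.first);
      if (it != affinity.end()) s += kv.second * it->second;
    }
    return s;
  }
};

int main() {
  Signals user = {{"rainy", 1.0f}, {"pre_coffee", 1.0f}};   // mined signals
  Persona butler   = {"butler",   {{"pre_coffee", 0.9f}}};
  Persona cheerful = {"cheerful", {{"rainy", -0.5f}}};
  const Persona& pick =
      butler.score(user) > cheerful.score(user) ? butler : cheerful;
  std::printf("adopting persona: %s\n", pick.name.c_str());
}
```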

“The personality and state may be shared with other robots so as to clone this robot within another device or devices. In this manner, a user may travel to another city, and download within a robot in that city (another ‘skin’) the personality and state matching the user’s ‘home location’ robot. The robot personality thereby becomes transportable or transferable,” the document continues.

Google also outlines a number of examples where the robot can learn human behavior and adapt accordingly, whether that’s knowing a user is grumpy when it’s raining outside, in need of coffee before heading off to work, or even being unable to consume particular meals due to food allergies.

“For example, the user may be allergic to mangos and may update a user-profile to include such information. Simultaneously, a robot may update the user-profile to include that the user is also allergic to peanuts. When the dining fare is French cooking, the robot may be queued to adopt the persona of Julia Child.”

Based on the information in its user-profile, the robot can even adopt a butler persona and offer up suggestions. Meanwhile, users can interact with the robot and tell it when it has done something wrong, and the robot can also be programmed to provide a desired look.

Robots that mimic humans are still very much in their infancy, and truthfully there’s no telling where this technology can go — especially when backed by giants like Google. And while there’s no guarantee that this patent will ever come to fruition, it may very well be the next step in making the human-robot relationship a reality. Intrigued? You can read the entire patent filing here.

Festo unveils a pair of insect-inspired robots


These robotic ants and butterflies act like the real things.


Well, it looks like Festo’s SmartBird, BionicKangaroo and BionicOpter are getting two new siblings. That’s because the German automation company has introduced the latest addition to its growing family of biomimetic robots: an ant and a butterfly.

For the first time, the cooperative behavior of the real-life creatures is also transferred to the world of technology using complex control algorithms. (Source: Festo)

First, the aptly named BionicANTS are designed to operate cooperatively. In other words, as a whole they can complete complex tasks, such as moving larger objects, heading to a specific location or conducting their own flash mob if they’d really like. Each 5.3-inch BionicANT is comprised of various laser-sintered components finished with visible conductor structures and electrical circuits attached to its exterior.

The artificial ants can solve a complex task together, working as an overall networked system. (Source: Festo)

A majority of the ant’s frame, as well as the electronic circuits located on the outside of its body, is 3D-printed. A radio module on its abdomen enables the robots to communicate with one another, while piezo-ceramic bending transducers handle pushing movements, lifting its legs and activating its gripping jaws. A 3D stereo camera in the ant’s head allows it to see, an infrared optical sensor on its underside records the distance traveled, and a microprocessor distributes all the necessary signals. Beyond that, a pair of on-board Li-Po batteries provides up to 40 minutes of wireless power before the ants need to be recharged in a dock via their feelers.
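Festo’s firmware is not public, but the role of that downward-facing optical sensor is classic dead reckoning: integrate each reported increment of travel along the current heading. A minimal illustration, with the update rule and numbers as assumptions:

```cpp
// Illustrative dead reckoning in the spirit of the BionicANT's
// downward-facing optical sensor; Festo's actual firmware is not
// public, so treat the update rule and numbers as assumptions.
#include <cmath>
#include <cstdio>

struct Pose {
  double x = 0, y = 0, heading_rad = 0;
};

// Called each time the optical sensor reports incremental travel.
void integrate(Pose& p, double distance_m, double turn_rad) {
  p.heading_rad += turn_rad;                    // rotation since last update
  p.x += distance_m * std::cos(p.heading_rad);
  p.y += distance_m * std::sin(p.heading_rad);
}

int main() {
  Pose ant;
  integrate(ant, 0.02, 0.0);       // 2 cm straight ahead
  integrate(ant, 0.02, 1.5708);    // turn ~90 degrees left, 2 cm more
  std::printf("x=%.3f m, y=%.3f m\n", ant.x, ant.y);
}
```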

Each butterfly is autonomous, using independently controllable wings to fly preprogrammed routes. (Source: Festo)

Similarly, the beautiful eMotionButterflies rely on collective behavior through an intelligent networking system. As they soar through the sky, they maneuver along pre-programmed paths inside special areas equipped with 10 high-speed infrared cameras — this keeps them from crashing into each other, the walls or any other object. Each 20-inch butterfly weighs just 32 grams and is equipped with two servo motors, some electronics and two small Li-Po batteries that give it enough juice to fly at 2.5 meters per second for four minutes before needing to be recharged.

If you squint really, really hard… (Source: Festo)

Interested in learning more? Fly on over to Festo’s official page here, and be sure to watch both the ants and butterflies in action below.