ArduEye is a project by Centeye, Inc. to develop open source hardware for a smart machine vision sensor. All software and hardware (chips and PCBs) for this project were developed either from pre-existing open source designs or from Centeye’s own 100% IR&D efforts. In the interview below, Atmel discusses the above-mentioned technology with Maker Geoffrey Barrows, founder of ArduEye and CEO of Centeye.
Tom Vu: What can you do with ArduEye?
Geoffrey Barrows: Here are some things people have actually done with an ArduEye, powered by just the ATmega328, the same processor used in basic Arduinos:
- Eye tracking: A group of students at BCIT made an eye-tracking device for people paralyzed by ALS (“Lou Gehrig’s disease”) that allows them to operate a computer using their eyes.
- Internet-connected traffic counter: I aimed an ArduEye at the street in front of my house and programmed it to count the number of cars driving northbound. Every 5 minutes, it would upload the count to Xively, allowing the whole world to see how traffic levels changed throughout the day. (A rough sketch of this kind of counter appears at the end of this answer.)
- Camera trigger: One company used an ArduEye to make a camera trigger at the base of a water slide at a water park. When someone riding the slide reached the bottom, the camera took a picture of the rider and then sent it to him or her via SMS!
- Control a robotic bee: My colleagues at Harvard University working on the “RoboBee” project mounted one of our camera chips on their 2 cm robotic bee platform. The chip was connected to an Arduino Mega (obviously not on the bee), which ran a program that used optical flow to compute visually how high the bee had climbed. A controller could then cause the bee to climb to a desired height and hold a position. This was a very cool demonstration.
- Control a drone: My colleagues at the U. Penn GRASP Lab (who produced the famous swarming quadcopter video) used two ArduEyes to control one of their nano quadcopters to hover in place using vision.
- FIRST robotics: The New Jersey-based “Landroids” FIRST robotics team uses ArduEyes on their robots to do things like detect objects and other robots.
These are just some examples. You can also do things like count people walking through a doorway, make a line-following robot, detect bright lights in a room, and so forth. I could spend hours dreaming up uses for an ArduEye. Of course, an ArduEye doesn’t do any of those things out of the box; you have to program it.
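To give a flavor of what that programming looks like, here is a minimal sketch of the kind of logic behind the traffic counter above. It is only an illustration: the readFrame() helper, pin choice, and threshold are assumptions rather than the actual ArduEye library interface, and it prints the count over Serial instead of uploading it to a web service.

```cpp
// Hypothetical traffic-counter logic: detect motion by frame differencing,
// count rising edges of "motion", and report the count every 5 minutes.
const int W = 16, H = 16;                 // tiny image, in the brutal-minimalism spirit
unsigned char curr[W * H], prev[W * H];

unsigned int carCount = 0;
unsigned long lastReport = 0;
bool motionActive = false;                // debounce so one car is not counted twice

// Stand-in for reading pixels from the image sensor chip (assumed interface).
void readFrame(unsigned char *buf) {
  for (int i = 0; i < W * H; i++) buf[i] = analogRead(A0) >> 2;  // placeholder readout
}

void setup() {
  Serial.begin(9600);
  readFrame(prev);
}

void loop() {
  readFrame(curr);

  // Sum of absolute pixel differences over the frame (a crude motion measure).
  long diff = 0;
  for (int i = 0; i < W * H; i++) {
    diff += abs((int)curr[i] - (int)prev[i]);
    prev[i] = curr[i];
  }

  // A rising edge of motion counts one vehicle; the threshold is set empirically.
  bool motionNow = (diff > 500);
  if (motionNow && !motionActive) carCount++;
  motionActive = motionNow;

  // Every 5 minutes, report the count (replace with an upload to your web service).
  if (millis() - lastReport >= 300000UL) {
    Serial.print("cars in last 5 minutes: ");
    Serial.println(carCount);
    carCount = 0;
    lastReport = millis();
  }
}
```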
TV: Can you explain your methodology and approach? What is your general rule of thumb when it comes to resolution and design margins?
GB: My design philosophy is a combination of what I call “vertical integration” and “brutal minimalism”. To understand “vertical integration,” imagine a robot using a camera to perform vision-based control. Typically, one company designs the camera lens, another designs the camera body and electronics, and yet another company designs the camera chip itself. Then you have a “software guy/gal” write the image processing algorithms, and another person implement the control algorithms that steer the robot. Each of these specialties is performed by a different group of people, and each group has its own sense of what constitutes “quality.” The camera chip people generally have little experience with image processing and vice versa. The result is a system that may work, but is cumbersome and cobbled together.
Our approach is instead to consider these different layers together, in a holistic fashion. At Centeye, the same group of minds designs both the camera hardware (the camera body as well as the camera chips themselves) and the image processing software. In some cases we even design the lens. This means we can control the interfaces between the different components, rather than being constrained by an industry standard. We can identify the most important features and optimize them. Most importantly, we can identify the unnecessary features and eliminate them.
This latter practice, that of eliminating what is unnecessary, is “brutal minimalism”. This is, in my opinion, what has allowed us to make such tiny image sensors. And the first thing to eliminate is pixels! It is true that if you want to take a beautiful photograph for display, you will need megapixels worth of resolution (and a good lens). But for many other tasks, you need far fewer than that. Consider insects: they live their whole lives using eyes with a resolution ranging from about 700 pixels (for a fruit fly) to maybe 30,000 pixels (for a dragonfly). This is an existence proof that you don’t always need a million pixels, or even a thousand pixels, to do something interesting.
TV: What are some of the interesting projects you have worked on when involving sensors, vision chips, or robotics?
GB: The US Air Force and DARPA have over the years sponsored a number of fascinating programs bringing together biologists and engineers to crack the code of how to make a small, flying robot. These projects were all interesting because they provided me, the engineer, with the chance to observe how Nature has solved these problems. I got to interact with an international group of neuroscientists and biologists making real progress “reverse engineering” the vision systems of flies and bees. Later on I got to implement some of these ideas in actual flying robots.
This gave me insights into vision and robotics that often contradict what is generally pursued in university research efforts: the way a fly perceives the world and controls itself is completely different from how most flying “drones” do the same. Flying insects don’t reconstruct Cartesian models of the world, and they certainly don’t use Kalman filters!
I also participated in the DARPA “Nano Air Vehicle” effort, where I got to put some of these bio-inspired principles into practice. As part of that project, we built a set of camera chips to make insect-inspired “eyes”, and then hacked a small toy helicopter to do things like hold a position visually, avoid obstacles, and so forth, with a vision system weighing just a few grams. What very few people know is that some of the algorithms we used could be traced directly back to the insights obtained by those biologists studying flying insects.

Right now we are also participating in the NSF-funded Harvard University “RoboBee” project, whose goal is to build a 2 cm-scale flying robotic insect. Centeye’s part, of course, is to provide the eyes. My weight budget will be about 20 milligrams. So far we are down to about 50 milligrams, with off-board processing, so we have a way to go.
TV: You mentioned insects. Do you draw inspiration from biology in your designs?
GB: Another aspect of our work, especially our own work with flying drones, is to take inspiration from biology. This includes the arrangement of pixels within an eye, as well as the type of image processing to perform and even how to integrate all this within a flight control system.
There is a lot we can learn from how nature has solved some tough problems, and we can gain a lot by copying these principles. However, in my experience it is best to understand the principles behind why nature’s particular solution to a problem works, and to innovate with that knowledge, rather than to slavishly copy a design you see in nature. Consider a modern airliner and a bird in flight. They do look similar: wings keep them aloft using Bernoulli forces, a tail provides stability by keeping the center of drag behind the center of gravity, and both modify their flight path by changing the shape of their wings and tail. However, an airliner is made from metal alloys, not feathers!

I like to invoke the 80/20 principle here: if you make a list of all the features of a design from nature, probably 80% of the benefit will come from 20% of the features, or even fewer. So focus on finding the most important features, and implement those.
TV: What are the technology devices, components, and connectivity underneath?
GB: Almost all of our vision sensor prototypes, including ArduEyes, have four essential components: a lens, which focuses light from the environment onto the image sensor chip; the image sensor chip itself; a processor board; and an algorithm running on the processor. You can substantially change the nature of a vision sensor by altering just one of these components. We usually use off-the-shelf lenses, but we have made our own in the past. We always use our own image sensor chip. For the processor we have used everything from an 8-bit microcontroller to an advanced DSP chip. And finally, we generally use our own algorithms, though we have tinkered with open source libraries like OpenCV.
It can take a bit of a mentality shift to be able to design across all these different layers. Most of the tools and platforms out there do not allow this type of flexibility. However, with a little bit of practice it can be quite powerful. Obviously, the greatest amount of flexibility comes from modifying the vision algorithms.
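To make that concrete, here is a tiny, hedged sketch of the same layered structure; the acquireImage() readout and the two example algorithms are illustrative assumptions, not Centeye’s actual code. The point is that the lens and chip feed pixels to the processor, and the algorithm layer is just a function you can swap, so changing that one component turns the same hardware into a different sensor.

```cpp
// Four layers in miniature: lens + image sensor chip (behind acquireImage),
// processor board (the Arduino), and a swappable algorithm layer.
const int N = 64;                         // an 8x8 image is enough for many tasks
unsigned char img[N];

// Stand-in for the lens + image sensor chip readout (assumed interface).
void acquireImage(unsigned char *buf) {
  for (int i = 0; i < N; i++) buf[i] = analogRead(A0) >> 2;
}

// Two interchangeable "algorithm layers".
int brightestPixel(const unsigned char *buf) {
  int best = 0;
  for (int i = 1; i < N; i++) if (buf[i] > buf[best]) best = i;
  return best;                            // index of the brightest spot
}

int averageBrightness(const unsigned char *buf) {
  long sum = 0;
  for (int i = 0; i < N; i++) sum += buf[i];
  return sum / N;                         // overall light level
}

// Swap this pointer and the same hardware becomes a different sensor.
int (*algorithm)(const unsigned char *) = brightestPixel;

void setup() { Serial.begin(9600); }

void loop() {
  acquireImage(img);
  Serial.println(algorithm(img));
}
```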
TV: Does nature have a smart embedded designer? If so, what would Nature’s tagline or teaser be for its creations? What’s the methodology or shape, if you can sum it up in a few words?
GB: Perhaps one lesson from Nature’s “embedded designer” would be “Not too much, not too little.” To understand this, consider evolution: If you are a living creature, then your parents lived long enough to reproduce and pass their genes to you. This is true of your parents, grandparents, and so on. Every single one of your ancestors, going all the way back to the origins of life on Earth, lived long enough to reproduce, and your genetic makeup is a product of that perfect 100% success rate. It is mind-blowing to think about.

Now, for a creature to live long enough to reproduce, it has to have enough of the right features to survive. But it must also not have too many features, and it must also not have the wrong features. Most animals get barely enough energy (e.g. food) to survive. If a particular animal has too many “unnecessary features,” then it will need more food to survive and thus is less likely to pass its genes on.
Another lesson would be that a design’s value is measured relative to the application. Each animal species evolved for a particular role in a particular environment; this is why penguins are different from flamingos, and why fruit flies are different from eagles. Applied to human-engineered devices, this means that any “specification” or figure of merit considered in a vacuum is meaningless. You have to consider the application, or the environment, before deciding on specifications. This is why choosing a camera based only on the number of “megapixels” it has is dangerous.
TV: What is your rule of thumb when it comes to prototyping, testing, improving, and then rolling out a fuller design?
GB: I’m going to be more philosophical here. Rule #1: A crappy implementation of the right thing is superior, both technically and morally, to a brilliant implementation of the wrong thing. Wrong is wrong, no matter how well done. Rule #2: A crappy implementation of the wrong thing is superior to a brilliant implementation of the wrong thing. Doing the wrong thing brilliantly generally consumes more resources than doing it crappily, and the fact that you invested more into it makes you less likely to abandon it once you realize it is wrong.
Of course, the ideal is to do a brilliant implementation of the right thing. However, when you are prototyping a new device, or trying to bring a new technology to market, it is very difficult to know what the right and wrong things to do are. So the first thing you must do is not worry about being crappy, and instead focus on identifying the right thing to do. Quickly prototype a device, experiment with it, get it into the hands of customers if it is a product, gather feedback, and make improvements. Repeat this cycle until you know you are doing the right thing. Only then put in the effort to do a brilliant implementation.
Those who are familiar with the “Lean Startup” business development model will recognize the above philosophy. I am a big fan of Lean Startup. I would give away everything I own if I could send a few relevant books on the topic back in time to my younger self 15 years ago, with a sticky note saying “READ ME YOU FOOL!”
Now of course we have to take the word “crappy” with a grain of salt. I don’t mean to produce and deliver rubbish; that helps no one. What I mean is that the first implementations you put out there are “brutally minimalist” and include the bare essence of what you are trying to produce. It may be minimal, but it still has to deliver something of real value. This is often called a “minimum viable product” in the Lean Startup community.
The same applies to when we are conducting research to develop a new type of technology. The prototypes are ugly, and often use code that makes spaghetti look like orderly Roman columns. But their purpose is to quickly test out and refine an idea before making it “pretty”.
TV: What is the significance of the ATmega328 in your embedded design?
GB: We chose the ATmega328 because this is the standard processor for basic Arduino designs. We wanted to maintain the Arduino experience as faithfully as possible to keep the product easy to hack.
TV: How important is it for you to rapidly build, test, and develop the evolution of your product from Arduino?
GB: Funny you should ask. We use Arduinos and ArduEyes all the time to prototype new devices or even perform basic experiments. When I get a new chip back from the foundry, the first thing I do is hook it up to an Arduino. I can verify basic functionality in just a few hours, sometimes even in ten minutes.
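As a rough illustration of that kind of bring-up test (the pin assignment and the selectPixel() scheme here are assumptions, not the real chip interface), a sketch like the one below steps through the pixel array and prints raw analog values, so you can watch the numbers change as you wave a hand over the lens.

```cpp
// Bring-up sketch: walk the pixel array of a new chip and dump raw values
// over Serial to confirm the imager responds to light at all.
const int ROWS = 16, COLS = 16;
const int PIXEL_OUT = A0;                 // assumed analog output of the chip

// Stand-in: in practice this would toggle the chip's control lines so that
// its analog output reflects pixel (r, c).
void selectPixel(int r, int c) {
  (void)r;
  (void)c;
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  for (int r = 0; r < ROWS; r++) {
    for (int c = 0; c < COLS; c++) {
      selectPixel(r, c);
      Serial.print(analogRead(PIXEL_OUT));
      Serial.print(c < COLS - 1 ? ',' : '\n');
    }
  }
  Serial.println();                       // blank line between frames
  delay(500);
}
```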
TV: What is the difference between Centeye and ArduEye? Technology differentiators?
GB: ArduEye is essentially a project that was developed by Centeye and is supported by Centeye. The main differentiator is that ArduEye was developed in isolation from our other projects, in particular those associated with defense work. We essentially developed a separate set of hardware, including chips, and software, and did so at no small expense. This is partly why it took so long for this project to become a reality.
TV: How do you see ArduEye and vision chips in the future for many smart connected things?
GB: I think the best uses for adding vision to an IoT application will come not from me, but from tinkerers, hackers, and other entrepreneurs who have identified a particular problem or pain that can be solved using our sensors as a platform. But in order for them to innovate, vision must be tamed to the level that these users can quickly iterate through different possibilities. I see ArduEye as a good platform to make that happen, to let such innovation occur in a frictionless manner.
TV: What are some of the IoT implications of building brilliant sensor-eye devices into products?
GB: At one level, there is a rich amount of information you can obtain with vision. Think about it: you can drive a car with nothing but visual information. However, vision has a tendency to generate a LOT of data. This is true even for a very modest image sensor of several thousand pixels. It is true that bandwidth is getting cheaper, but I don’t think the Siri model of pushing all the data to “the cloud” for processing is a viable one. You will have to find ways to process vision information at the sensor, or at some nearby node, before that information is sent up to the cloud.
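Here is a minimal sketch of that idea, again with a hypothetical readFrame() standing in for the real pixel readout: a 16x16 frame (256 bytes) is reduced on the board to four quadrant averages (4 bytes), and only those few bytes would ever travel toward the cloud.

```cpp
// Process at the sensor: reduce each frame to a handful of numbers locally
// instead of streaming raw pixels upstream.
const int W = 16, H = 16;
unsigned char img[W * H];

// Stand-in for the real pixel readout (assumed interface).
void readFrame(unsigned char *buf) {
  for (int i = 0; i < W * H; i++) buf[i] = analogRead(A0) >> 2;
}

void setup() { Serial.begin(9600); }

void loop() {
  readFrame(img);

  // Average brightness of each image quadrant.
  long quad[4] = {0, 0, 0, 0};
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      int q = (y < H / 2 ? 0 : 2) + (x < W / 2 ? 0 : 1);
      quad[q] += img[y * W + x];
    }
  }

  // Four bytes per frame instead of 256: this summary is all that would
  // need to leave the board.
  for (int q = 0; q < 4; q++) {
    Serial.print((unsigned char)(quad[q] / (W * H / 4)));
    Serial.print(q < 3 ? ' ' : '\n');
  }
  delay(200);
}
```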
TV: How can sensors like ArduEye be compounded with richer use-cases especially when integrating the Big Data and Cloud initiatives of modern trending IT innovations?
GB: Over the next decade we will see newly minted billionaires who have figured this out.
TV: How can ArduEye evolve? As a visionary, how do you see ArduEye being integrated further to accelerate efficiency?
GB: Good question! Well, first of all, this will depend on how others are using ArduEye and the feedback I get from them. For ArduEye to be successful, it has to be valuable to other people. So I would really like to hear feedback from anyone who uses these products, so that we can make them better; I am happy to speak with anyone who does. Tell me, do you know any other image sensor companies that let you speak with the people who design the chips? That said, some obvious improvements would be to incorporate more advanced Arduinos, such as the Due, which uses an ARM processor.
TV: Are there security or privacy concerns for this technology to evolve? What are the caveats for designers and business makers?
GB: Security and privacy will be a big issue for the Internet of Things, and will lead to many entrepreneurial opportunities. However, this is not our focus. But if you think about it, one benefit of using ArduEyes to monitor a room instead of a full-resolution camera is that the images are too coarse to recognize anyone’s face! You can say, half jokingly, that privacy is built in!
TV: How are vision chips and open source ArduEye helping people live better or smarter lives? Where do you see this going in 5-10 years?
GB: The ArduEye is a fairly new project and is one that takes an uncommon, though technically sound approach to machine vision. So right now all of the use cases are experimental. This is very often the case for a new emerging technology. It will take time for the best applications to be found. But I expect that as our community of users grows, and as we learn to better service this community, we could see a diverse set of applications. Right now I can only speculate.
TV: Where do you see sensors, vision, and so on playing a more pivotal role in the grander Internet of Things, Internet of Everything, and Industrial Internet?
GB: In order for the Internet of Things to reach its full potential, it will need sensors to acquire all the information that is needed. Already the number of devices connected to the Internet is in the billions. It is only a matter of time before this reaches the trillions. And we all know that vision is a powerful sensory modality. Some of those vision sensors will be higher-resolution imagers of the type you see in cameras. However, in the same way that there are many more insects than large mammals on planet Earth, it makes sense that there is room for many more cameras of ArduEye capability than for full-resolution image sensors. This is where I see Centeye playing in the future. More than that, this is why I originally founded Centeye in 2000: the company name was meant to be a triple pun, with the prefix “cent-” suggesting many, tiny, and inexpensive. Many eyes, tiny eyes, cheap eyes. I was just too soon in 2000…
