Tag Archives: microcontrollers

Arduino’s Yún (ATmega32u4) controls this SmartBoiler

The Arduino Yún – designed in collaboration with Dog Hunter – is based on Atmel’s popular ATmega32u4 microcontroller (MCU) and also features the Atheros AR9331, an SoC running Linino, a customized version of OpenWrt. The Yún stands out in the Arduino lineup, as it pairs a lightweight Linux distribution with the traditional MCU interface.

Although the Atmel-powered Yún hit the streets just a few short months ago, the board has already been used in a wide variety of Maker projects that we’ve recently covered on Bits & Pieces, including an electricity monitor, mesh extender platform, Foursquare soap bubble machine and the Gmail (alert) lamp. And today we’ll be taking a closer look at how George Koulouris used the Atmel-powered Yún to regulate his water heater.

“I have two small problems in my house. An ever-increasing electricity consumption bill and a girlfriend [who] likes to take hot baths at unpredictable times during the day,” Koulouris wrote in a recent blog post re-published on the official Arduino site. “Until recently, we left our water heater switched on, 24/7. But then we took a look at our electricity counter readings. Needless to say, we switched it off immediately! An old water heater can indeed make the electricity counter wheel spin fast, very fast.”

As such, says Koulouris, he started switching it on and off whenever the two needed to take a bath. However, the duo weren’t always at home and the water took almost an hour to heat. Enter the SmartBoiler, a device housed in a small box and placed on top of the main electricity board.

“A mechanical arm extends out of the box. Its bottom end is clipped to the heater’s switch whereas its top end is attached to a motor in the SmartBoiler,” Koulouris explained. “The box contains a motor and an Arduino Yún. The latter checks, at regular time intervals, a .txt file on a web-server to see whether me (or my girlfriend!) have turned on the heater. If yes, it launches the motor and the switch is turned on.”
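The check-and-act loop Koulouris describes – poll a text file on a web server, then drive the motor – boils down to very little logic. Here it is modeled in Python rather than as the Arduino Yún sketch the device actually runs; the URL and the accepted on/off values are invented for illustration:

```python
# Model of the SmartBoiler's check-act loop. On the real device this logic
# lives in an Arduino Yun sketch; the URL below is a made-up placeholder.
import urllib.request

STATE_URL = "http://example.com/boiler.txt"  # hypothetical web-server file

def should_switch_on(file_text: str) -> bool:
    """Interpret the .txt file's contents as an on/off command."""
    return file_text.strip().lower() in ("1", "on", "true")

def poll_once(fetch=lambda: urllib.request.urlopen(STATE_URL).read().decode()):
    """Fetch the state file; True means the motor should flip the switch on."""
    return should_switch_on(fetch())

if __name__ == "__main__":
    # The real device repeats this at regular intervals and drives the motor.
    print("switch on" if poll_once(fetch=lambda: "on") else "switch off")
```

On the Yún itself, the same decision would sit inside `loop()`, with the motor call taking the place of the print.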

Although Koulouris originally created the SmartBoiler to regulate his water heater, he does note that the project can be used as a basis to control any mechanical switch.

“Simply dimension the box correctly and you can control everything via the Internet. Your lights, your main electricity switch… The possibilities are limitless!”

Interested in learning more? You can download the laser cutter files here, find the code on GitHub and the dimensions of the mechanical parts on Thingiverse, while the user interface (UI) can be viewed here.

Taking the IoT to the next level

Over three-quarters of companies are now actively exploring or using the Internet of Things (IoT), with the vast majority of business leaders believing it will have a meaningful impact on how their companies conduct business. Clearly, the IoT is reaching a tipping point.


Although the concept of an Internet of Things has been around for at least a decade, the IoT is beginning to become an important action point for the global business community. As Clint Witchalls notes in a recent report sponsored by ARM, there is no doubt that IoT-related technology is already having a broad impact across the world. Although the precise effect is likely to vary by country and by company, it is hard to imagine any sector being left untouched by the rapidly evolving Internet of Things.

Kevin Ashton, who coined the term “Internet of Things” (IoT) in 1999 while working at Procter & Gamble, points out that the recent “trickle” of IoT product releases is all part of a larger plan to test market appetite.

“We are trying to understand before we get in too deep, because once you are financially invested and committed you cease to become agile. Then you really have to start building on the thing you’ve already invested in,” Ashton explains. “In the early stages of technology deployment it’s a charitable act really to explore a new technology because the return on investment isn’t there, it’s too expensive and it’s too unknown. That’s where government has a role.”

Looking ahead, investment in the IoT should continue to increase as more and more senior executives move up the IoT learning curve. According to Witchalls, the costs associated with the IoT will continue to fall concurrently – just like any nascent technology. Indeed, a number of early adopters believe that the technology is already mature enough and cheap enough to make IoT products and services viable without the need for a big upfront investment, at least for initial trials.

“You don’t need a lot of R&D, it’s more about integration,” says Honbo Zhou, a director of China’s Haier. “Everyone can build it [into their products]. It’s just a matter of finding a business model that works.”

Meanwhile, Elgar Fleisch, the deputy dean of ETH Zürich, a science and technology university, says he believes IoT adoption will be quite different from what he dubs the “Internet of people revolution.”

During the first phase of the Internet, he maintains, anyone with a good idea and a computer could start an organization with global reach. However, Fleisch sees the initial advantage in the “IoT revolution” going mainly to bricks-and-mortar organizations, especially large firms with many assets to track and monitor. Meaning, we are unlikely to see another Facebook, Yahoo or eBay.

“There will be winners and losers, but we are unlikely to see entirely new big players entering the market,” Fleisch opines.

Notwithstanding the significant involvement of the physical world of assets and products, the IoT is still expected to be a less visible revolution than the traditional Internet.

“PayPal, Groupon and YouTube are well-known Internet companies, yet few people are probably aware that the smart meter in their cellar means that their home is a part of the IoT,” writes Witchalls. “As organizations move towards the ‘productization’ of the IoT, there are signs that business leaders recognize that this need not be a major hindrance: undeveloped consumer awareness is not seen as one of the top obstacles to organizations using the IoT. After all, consumers will always want products and services that are better, cheaper, greener and more convenient.”

As Ashton notes, “Consumers are not going to demand the Internet of Things. Nobody is going to demand the underlying infrastructure.”

Rather, says Ashton, consumers will demand some value and benefit.

“They’re going to demand a security system that they can control from their smartphone. You don’t go to the end user and talk about the Internet of Things. You go to the end user to talk about benefits,” he adds.

Want to learn more about how the IoT revolution is gathering pace and reaching a tipping point? Part one is available here, part two here, part three here and part four here.

Boot Linux in a second

When I worked at EDN Magazine I wrote up a story about MontaVista Software, which had gotten a real-time Linux to boot in under a second. This was for an automotive dashboard: Linux was displaying a gauge, so it had to start working as soon as you turned the key. Since I had just fired up two Atmel MPU (microprocessor unit) demo boards that could support Linux, I thought it would be cool to bring the article to the attention of our MPU group.

It turns out that Atmel third-party partner Timesys was way ahead of me. Frederic in our MCU group pointed me to a video where you can see our Atmel SAMA5D33 eval board booting in a couple of seconds (mp4). Note that this eval board is not just a passive display like an instrument cluster. It also has a full user interface that takes touch, mouse and keyboard inputs. Frederic noted: “An application without a UI will certainly boot in less than a second.”


Timesys can get a real-time Linux to boot in less than 3 seconds. It would be even faster if you don’t need a user interface like touch, keyboard, or mouse.

Speaking of big-iron MPUs with external memory, be sure to check out ARM TechCon this week in Silicon Valley. Atmel will be there, and I see MontaVista is an exhibitor as well. I will be at the Atmel booth on and off, as well as checking out some of the conference.

1:1 interview with Geoffrey Barrows of ArduEye

ArduEye is a project by Centeye, Inc. to develop open source hardware for a smart machine vision sensor. All software and hardware (chips and PCBs) for this project were developed either from pre-existing open source designs or from Centeye’s own 100% IR&D efforts. In the interview below, Atmel discusses the above-mentioned technology with Maker Geoffrey Barrows, founder of ArduEye and CEO of Centeye.

Tom Vu: What can you do with ArduEye?

Geoffrey Barrows:  Here are some things people have actually done with an ArduEye, powered by just the ATmega328 type processor used in basic Arduinos:

  • Eye tracking – A group of students at BCIT made an eye-tracking device for people paralyzed with ALS (“Lou Gehrig’s disease”) that allows them to operate a computer using their eyes.
  • Internet-connected traffic counter – I aimed an ArduEye at the street in front of my house and programmed it to count the number of cars driving northbound. Every 5 minutes, it would upload the count to Xively, allowing the whole world to see the change in traffic levels throughout the day.
  • Camera trigger – One company used an ArduEye to make a camera trigger at the base of a water slide at a water park. When someone riding the slide reached the bottom, the camera took a picture of the rider and then sent it to him or her via SMS!
  • Control a robotic bee – My colleagues at Harvard University working on the “RoboBee” project mounted one of our camera chips on their 2 cm robotic bee platform. The chip was connected to an Arduino Mega (obviously not on the bee), which ran a program that used optical flow to compute visually how high the bee had climbed. A controller could then make the bee climb to a desired height and hold position. This was a very cool demonstration.
  • Control a drone – My colleagues at the U. Penn GRASP Lab (who produced the famous swarming quadcopter video) used two ArduEyes to enable one of their nano quadcopters to hover in place using vision.
  • The New Jersey-based “Landroids” FIRST Robotics team uses ArduEyes on their robots to do things like detect objects and other robots.

These are just some examples. You can also do things like count people walking through a doorway, make a line-following robot, detect bright lights in a room, and so forth. I could spend hours dreaming up uses for an ArduEye. Of course, an ArduEye doesn’t do any of those things out of the box – you have to program it.
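As a rough illustration of the kind of program involved, here is the counting idea behind the traffic and doorway examples, modeled in Python with invented pixel values and threshold (the device itself would run this as an Arduino sketch): a frame is compared against a quiet background, and a rising edge of “something is there” counts as one pass.

```python
# Toy counter in the spirit of the ArduEye traffic example. Each "frame" is a
# short list of brightness readings; values and threshold are made up.

def count_passes(frames, background, threshold=50):
    """Count objects passing through the scene.

    An object is "present" while the mean absolute difference from the quiet
    background exceeds the threshold; only the rising edge increments the count,
    so one car (or person) is counted once no matter how many frames it spans.
    """
    events, present = 0, False
    for frame in frames:
        diff = sum(abs(a - b) for a, b in zip(frame, background)) / len(frame)
        if diff > threshold and not present:
            events += 1        # leading edge of a new pass
            present = True
        elif diff <= threshold:
            present = False    # scene is quiet again
    return events
```

A real sensor would also debounce against lighting changes, but the rising-edge structure is the core of it.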

TV:  Can you explain your methodology and approach? What is your general rule of thumb when it comes to resolution and design margins?

GB:  My design philosophy is a combination of what I call “vertical integration” and “brutal minimalism”. To understand “vertical integration,” imagine a robot using a camera to perform vision-based control. Typically, one company designs the camera lens, another designs the camera body and electronics, and another company designs the camera chip itself. Then you have a “software guy/gal” write image processing algorithms, and then another person to implement the control algorithms to control the robot. Each of these specialties is performed by a different group of people, with each group having their own sense of what constitutes “quality.” The camera chip people generally have little experience with image processing and vice versa. The result is a system that may work, but is cumbersome and cobbled together.

Our approach is instead to consider these different layers together and in a holistic fashion. At Centeye, the same groups of minds design both the camera hardware (the camera body as well as the camera chips themselves) and the image processing software. In some cases we even design the lens. What this means is that we can control the interface between the different components, rather than being constrained by an industrial standard. We can identify the most important features and optimize them. Most important, we can then identify the unnecessary features and eliminate them.

This latter practice, that of eliminating what is unnecessary, is “brutal minimalism.” This is, in my opinion, what has allowed us to make such tiny image sensors. And the first thing to eliminate is pixels! It is true that if you want to take a beautiful photograph for display, you will need megapixels worth of resolution (and a good lens). But for many other tasks, you need far fewer than that. Consider insects – they live their whole lives using eyes with a resolution of anywhere from about 700 pixels (for a fruit fly) to maybe 30,000 pixels (for a dragonfly). This is an existence proof that you don’t always need a million pixels, or even a thousand pixels, to do something interesting.
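The “detect bright lights in a room” task mentioned earlier makes the point concrete: with a single line of brightness readings (the values here are hypothetical), the direction of the brightest light is just the brightness-weighted centroid of pixel positions – no megapixels required.

```python
# Brutal minimalism in practice: locating a bright light with one row of
# pixels. Readings are illustrative, not from a real Centeye chip.

def light_direction(pixels):
    """Return the brightness centroid: 0.0 (far left) .. len(pixels)-1 (far right)."""
    total = sum(pixels)
    if total == 0:
        return None  # no light detected at all
    # Weight each pixel index by its brightness and normalize.
    return sum(i * p for i, p in enumerate(pixels)) / total
```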

TV:  What are some of the interesting projects you have worked on when involving sensors, vision chips, or robotics?

GB:  Over the years, the US Air Force and DARPA have sponsored a number of fascinating programs bringing together biologists and engineers to crack the code of how to make a small, flying robot. These projects were all interesting because they provided me, the engineer, with the chance to observe how Nature has solved these problems. I got to interact with an international group of neuroscientists and biologists making real progress “reverse engineering” the vision systems of flies and bees. Later on, I got to implement some of these ideas in actual flying robots.

This gave me insights into vision and robotics that are often contradictory to what is generally pursued in university research efforts – the way a fly perceives the world and controls itself is completely different from how most flying “drones” do the same. Flying insects don’t reconstruct Cartesian models of the world, and they certainly don’t use Kalman filters!

I also participated in the DARPA “Nano Air Vehicle” effort, where I got to put some of these bio-inspired principles into practice. As part of that project, we built a set of camera chips to make insect-inspired “eyes,” and then hacked a small toy helicopter to do things like hold a position visually, avoid obstacles, and so forth, with a vision system weighing just a few grams. What very few people know is that some of the algorithms we used could be traced back to insights obtained directly by those biologists studying flying insects.


Right now we are also participating in the NSF-funded Harvard University “RoboBee” project, whose goal is to build a 2 cm scale flying robotic insect. Centeye’s part, of course, is to provide the eyes. My weight budget will be about 20 milligrams. So far we are down to about 50 milligrams, with off-board processing, so we have a way to go.

TV: You mentioned insects. Do you draw inspiration from biology in your designs?

GB: Another aspect of our work, especially our own work with flying drones, is to take inspiration from biology. This includes the arrangement of pixels within an eye, as well as the type of image processing to perform and even how to integrate all this within a flight control system.

There is a lot we can learn from how nature has solved some tough problems, and we can gain a lot by copying these principles. However, in my experience it is best to understand the principles behind why nature’s particular solution to a problem works and to innovate with that knowledge, rather than to slavishly copy a design you see in nature. Consider a modern airliner and a bird in flight. They do look similar – wings keep them aloft using Bernoulli forces, a tail provides stability by keeping the center of drag behind the center of gravity, and both modify their flight path by changing the shape of their wings and tail. However, an airliner is made from metal alloys, not feathers!


I like to invoke the 80/20 principle here – if you make a list of all the features of a design from nature, probably 80% of the benefit will come from 20% of the features, or even less. So focus on finding the most important features, and implement those.

TV:  What are the technology devices, components, and connectivity underneath?

GB:  For almost all of our vision sensor prototypes, including ArduEyes, there are four essential components: a lens, which focuses light from the environment onto the image sensor chip; the image sensor chip itself; a processor board; and an algorithm running on the processor. You can substantially change the nature of a vision sensor by altering just one of these components. We usually use off-the-shelf lenses, but we have made our own in the past. We always use our own image sensor chips. For the processor we have used everything from an 8-bit microcontroller to an advanced DSP chip. And finally, we generally use our own algorithms, though we have tinkered with open source libraries like OpenCV.
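To make the algorithm layer less abstract, here is the basic idea behind block-matching optical flow – the kind of measurement the RoboBee setup used to judge how high the bee had climbed – reduced to one dimension in Python with synthetic frames (a sketch of the technique, not Centeye's actual code):

```python
# Estimate how far a 1-D "scene" shifted between two frames by trying every
# candidate displacement and keeping the one with the smallest pixel error.
# This is the core idea of block-matching optical flow, on toy data.

def estimate_shift(prev, curr, max_shift=3):
    """Return the displacement (in pixels) that best aligns curr with prev."""
    best_shift, best_err = 0, float("inf")
    n = len(prev)
    for s in range(-max_shift, max_shift + 1):
        # Compare only the region where the two frames overlap at offset s.
        pairs = [(prev[i], curr[i + s]) for i in range(n) if 0 <= i + s < n]
        err = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift
```

With a known frame rate and lens geometry, successive shift estimates like this turn into an angular-velocity signal a controller can act on.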

It can take a bit of a mentality shift to be able to design across all these different layers. Most of the tools and platforms out there do not allow this type of flexibility. However, with a little bit of practice it can be quite powerful. Obviously, the greatest amount of flexibility comes from modifying the vision algorithms.

TV:  Does nature have a smart embedded designer? If so, what would Nature’s tagline or teaser be for its creations? What’s the methodology or shape, if you can sum it up in a few words?

GB: Perhaps one lesson from Nature’s “embedded designer” would be “Not too much, not too little.” To understand this, consider evolution: If you are a living creature, then your parents lived long enough to reproduce and pass their genes to you. This is true of your parents, grandparents, and so on. Every single one of your ancestors, going all the way back to the origins of life on Earth, lived long enough to reproduce, and your genetic makeup is a product of that perfect 100% success rate. It is mind blowing to think about it.


Now, for a creature to live long enough to reproduce, it has to have enough of the right features to survive. But it must also not have too many features, and it must also not have the wrong features. Most animals get barely enough energy (e.g. food) to survive. If a particular animal has too many “unnecessary features,” then it will need more food to survive and thus is less likely to pass its genes on.

Another lesson would be that a design’s value is measured relative to the application. Each animal species evolved for a particular role in a particular environment- this is why penguins are different from flamingos, and why fruit flies are different from eagles. Applied to human engineered devices, this means that any “specification” or figure of merit considered in a vacuum is meaningless. You have to consider the application, or the environment, first before deciding on specifications. This is why choosing a camera based only on the number of “megapixels” it has is dangerous.

TV:  What is your rule of thumb when it comes to prototyping, testing, improving, and then rolling out a fuller design?

GB:  I’m going to be more philosophical here. Rule #1: A crappy implementation of the right thing is superior, both technically and morally, to a brilliant implementation of the wrong thing. Wrong is wrong, no matter how well done. Rule #2: A crappy implementation of the wrong thing is superior to a brilliant implementation of the wrong thing. Doing the wrong thing brilliantly generally consumes more resources than doing it crappily, plus the fact that you invested more into it makes you less likely to abandon it once you realize it is wrong.

Of course, the ideal is to do a brilliant implementation of the right thing. However when you are prototyping a new device, or trying to bring a new technology to market, it is very difficult to know what are the right and the wrong things to do. So the first thing you must do is to not worry about being crappy, and instead focus on identifying the right thing to do. Quickly, one ought to prototype a device, experiment with it, get it in the hands of customers if it is a product, get feedback, and make improvements. Repeat this cycle until you know you are doing the right thing. And only then put in the effort to do a brilliant implementation.

Those who are familiar with the “Lean Startup” business development model will recognize the above philosophy. I am a big fan of Lean Startup. I would give away everything I own if I could send a few relevant books on the topic back in time to my younger self 15 years ago, with a sticky note saying “READ ME YOU FOOL!”

Now of course we have to take the word “crappy” with a grain of salt. I don’t mean to produce and deliver rubbish. That helps no one. Instead, what I mean is that the first implementations you put out there are “brutally minimalist” and include the bare essence of what you are trying to produce. It may be minimal, but it still has to deliver something of real value. This is often called a “minimum viable product” in the Lean Startup community.

The same applies to when we are conducting research to develop a new type of technology. The prototypes are ugly, and often use code that makes spaghetti look like orderly Roman columns. But their purpose is to quickly test out and refine an idea before making it “pretty”.

TV:  What is the significance of the ATmega328 in your embedded design?

GB:  We chose the ATmega328 because this is the standard processor for basic Arduino designs. We wanted to maintain the Arduino experience as faithfully as possible to keep the product easy to hack.

TV:  How important is it for you to rapidly build, test, and evolve your product with Arduino?

GB:  Funny you should ask. We use Arduinos and ArduEyes all the time to prototype new devices or even perform basic experiments. When I get a new chip back from the foundry, the first thing I do is hook it up to an Arduino. I can verify basic functionality in just a few hours, sometimes even in ten minutes.

TV:  What is the difference between Centeye and ArduEye? Technology differentiators?

GB:  ArduEye is essentially a project developed and supported by Centeye. The main differentiator is that ArduEye was developed in isolation from our other projects, in particular the ones associated with defense. We essentially developed a separate set of hardware, including chips, and software, and did so at no small expense. This is partially why it took so long for this project to become reality.

TV:  How do you see ArduEye and vision chips in the future for many smart connected things?

GB:  I think the best uses for adding vision to an IoT application will come not from me, but from tinkerers, hackers and other entrepreneurs who have identified a particular problem or pain that can be solved using our sensors as a platform. But in order for them to innovate, vision must be tamed to the level that these users can quickly iterate through different possibilities. I see ArduEye as a good platform to make that happen – to let such innovation occur in a frictionless manner.

TV:  What are some of the IoT implications of using brilliant sensor-eye devices in products?

GB:  At one level there is a rich amount of information you can obtain with vision. Think about it- you can drive a car if you only have visual information. However vision has a tendency to generate a LOT of data. This is true even for a very modest image sensor of several thousand pixels. And it is true that bandwidth is getting cheaper, but I don’t think the Siri model of pushing all the data to “the cloud” for processing is a viable one. You will have to find ways to process vision information up at the sensor, or at some nearby node, before that information can be sent up to the cloud.
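One hypothetical way to “process vision information up at the sensor” is to collapse a frame of a few thousand pixels into a handful of numbers before anything is transmitted. The summary fields below are illustrative, not a real protocol:

```python
# Edge preprocessing sketch: reduce a raw frame to a compact message so only
# a few bytes, not thousands of pixels, go up to the cloud. The fields and
# threshold are invented for illustration.

def summarize_frame(pixels, motion_threshold=40):
    """Boil a raw frame down to a compact summary worth transmitting."""
    mean = sum(pixels) / len(pixels)
    peak = max(pixels)
    # Count pixels that deviate strongly from the frame mean ("activity").
    moving = sum(1 for p in pixels if abs(p - mean) > motion_threshold)
    return {"mean": round(mean, 1), "peak": peak, "active_pixels": moving}
```

A cloud service then sees a short record per frame instead of a video stream, which is the bandwidth trade-off Barrows is describing.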

TV:  How can sensors like ArduEye be compounded with richer use-cases especially when integrating the Big Data and Cloud initiatives of modern trending IT innovations?

GB:  Over the next decade we will see newly minted billionaires who have figured this out.

TV:  How can ArduEye evolve? How do you envision ArduEye being integrated further to accelerate efficiency?

GB:  Good question! Well, first of all, this will depend on how others are using ArduEye and the feedback I get from them. For ArduEye to be successful, it has to be valuable to other people. So I would really like to hear feedback from anyone who uses these products, so that we can make them better. I’ve been willing to speak with anyone who uses these products. Tell me – do you know any other image sensor companies that allow you to speak with the people who design the chips? That said, some obvious improvements would be to incorporate more advanced Arduinos, such as the Due, which uses an ARM processor.

TV:  Are there security or privacy concerns for this technology to evolve? What are the caveats for designers and business makers?

GB:  Security and privacy will be a big issue for the Internet of Things, and will lead to many entrepreneurial opportunities. However, this is not our focus. But if you think about it – a benefit to using ArduEyes to monitor a room instead of a full-resolution camera is that you won’t be able to recognize people’s faces! You can say, half jokingly, that privacy is built in!

TV:  How are vision chips and open source ArduEye helping people live better or smarter lives? Where do you see this going in 5-10 years?

GB:  The ArduEye is a fairly new project and is one that takes an uncommon, though technically sound approach to machine vision. So right now all of the use cases are experimental. This is very often the case for a new emerging technology. It will take time for the best applications to be found. But I expect that as our community of users grows, and as we learn to better service this community, we could see a diverse set of applications. Right now I can only speculate.

TV:  Where do you see sensors, vision, etc. playing a more pivotal role in the grander Internet of Things, Internet of Everything and Industrial Internet?

GB:  In order for the Internet of Things to reach its full potential, it will need sensors to acquire all the information that is needed. Already the number of devices connected to the Internet is in the billions; it is only a matter of time before this reaches the trillions. And we all know that vision is a powerful sensory modality. Some of the vision sensors will be higher-resolution imagers of the type you see in cameras. However, in the same way that there are many more insects than large mammals on planet Earth, it makes sense that there is room for many more cameras of ArduEye capability than for full image sensors. This is where I see Centeye playing in the future. More than that, this is why I originally founded Centeye in 2000 – the company name was meant to be a triple pun, with the prefix “cent-” suggesting many, tiny, and cheap. Many eyes, tiny eyes, cheap eyes. I was just too soon in 2000…


Wearable device revenue – $6 billion by 2018

Analysts at ABI Research have determined that wearable wireless device revenues will grow to exceed $6 billion in 2018. Of the four segments tracked, sports, fitness and wellness is the largest, never dropping below a 50% share of all device shipments over the forecast period.

“Fitness activity trackers are quickly gaining popularity in the market,” explained ABI Research senior analyst Adarsh Krishnan. “Different from other more single-use or event-centric devices, activity trackers monitor multiple characteristics of the human body including movement, calories burned, body temperature and sleep tracking.”

More specifically, says Krishnan, activity trackers are expected to grow at a 40% CAGR and overtake the 2013 shipment leader, heart rate monitors, in 2017. Meanwhile, the second largest market – home monitoring devices (primarily for the elderly) – is also slated to witness strong growth over the next five years, with overall device revenue growing at a CAGR exceeding 39%.

“This segment is also anticipated to see the development of cross-over devices such as personal emergency response devices supplemented with activity tracker features,” Krishnan added.

As previously discussed on Bits & Pieces, Atmel is smack in the middle of the rapidly evolving wearable tech revolution. First off, Atmel’s SAM4S and tinyAVR MCUs are inside the Agent smartwatch, which recently hit Kickstarter, while the Amulyte pendant is powered by Atmel’s SAM4L, the very same MCU used to regulate smart (wearable) glucose meters.

Meanwhile, Atmel’s versatile SAMA5D3 eMPU lineup is more than capable of powering fitness and outdoor portable electronic equipment for measuring performance (or providing navigation) of various outdoor activities, including running, cycling, hiking and golf.

Atmel MCUs have also tipped up in a number of Maker projects for wearable tech, as our microcontrollers power Adafruit’s Flora, Gemma and Trinket platforms.

And why not? Simply put, Atmel offers a wide range of wearable computing platforms designed for ultra-low power consumption – both in active and standby modes. Indeed, Atmel’s EventSystem with SleepWalking allows peripherals to automatically connect with each other even in ultra low power modes, thereby simplifying sensor interfacing and further optimizing power consumption. Meanwhile, “Wakeup” times are minimized, facilitating the use of low-power modes without missing communications data or sensor events.

In addition, Atmel devices integrate numerous features to save circuit board space, such as USB transceivers and embedded termination resistors. Many devices are offered in very small form factor packages, a critical characteristic for engineers and Makers designing wearable tech.

On the software side, the Atmel Software Framework (ASF) includes communications libraries to support external Wi-Fi and Bluetooth radios, mesh and point-to-point networking on Atmel’s 802.15.4/Zigbee AT86RF radios as well as a full range of USB drivers. The ASF also contains libraries and driver functions for many popular third-party sensors such as accelerometers, gyroscopes and magnetometers.

In addition, stand-alone Atmel controllers support off-the-shelf capacitive buttons, sliders and wheels (BSW) implementations. Plus, all our microcontrollers can directly manage capacitive buttons via provided software libraries, while the maXTouch series of capacitive touchscreen controllers are capable of managing optically clear touch sensors overlaid on LCD displays.

And last but certainly not least, Atmel’s touch platforms may be tuned to function when moisture is present – which is often a key requirement for wearable applications. Interested in learning more? Check out Atmel’s white paper on wearable tech here.


1:1 interview with Manu Sharma of Ardulab and Infinity Aerospace

The Ardulab, a highly capable experimentation platform ready for space right out of the box, is built around Atmel’s versatile ATmega2560 microcontroller (MCU). The low-cost, open-source, NASA-approved experiment container can be programmed just like an Arduino. Although the most recent mission is headed to the International Space Station (ISS) aboard Cygnus, the Ardulab is more than capable of operating on a number of suborbital launch vehicles and parabolic aircraft.

Below is Atmel’s interview with Manu Sharma, creator of Ardulab and co-founder of Infinity Aerospace.

Tom Vu:  What’s your vision for Ardulab’s roadmap?

Manu Sharma: Ardulab is a powerful platform that is transforming the way commercial customers, including NASA and others, conduct research and experiments aboard the International Space Station (ISS) and suborbital vehicles. Ardulabs are used by everyone from high schools and universities to NASA’s Jet Propulsion Laboratory. Within nine months, we have been successful in selling the product to a broad range of customers, as well as launching an Ardulab to the International Space Station aboard the Cygnus spacecraft with the help of our partners, NanoRacks.

manu-sharma-ardulab-avr

What Ardulab has triggered is phenomenal. I consider Ardulab a centerpiece product of our company. The Ardulab alone places tremendous power in the hands of users who are conducting research aboard the International Space Station. The customer unleashes the full potential of what Infinity Aerospace can offer to them by using the supporting services around Ardulab. For instance, we have started offering suborbital launch slots in the same package as an Ardulab(s). On top of that, we provide complete payload integration and handling services. In a nutshell, all of these services combine with Ardulab to form an innovative offering we call “Space Programs.” These are comprehensive packages that include everything from nuts and bolts to the Ardulabs and launch slots.

ArduLab-Your-Space-Experiment-Goes-Here-NanoRacks-Compliant-3-1024x539

So, you see, already, Ardulab is changing its form from just a product to a package that serves customer needs from beginning to end. We want to make Ardulab the de facto product for microgravity research. We are seeing growth in the microsatellite market too, and we may do something similar there as well.

Tom Vu: For founders, the signatures are often the same at the root of a startup. There are multiple forms of currency: ideation, innovation, success, design, capital, humanity, design feats, passion, etc. What is your currency to make this ArduLab project evolve?

Manu Sharma: All those currencies have been important for us at the root. However, if I had to choose, it would be passion and design. At Infinity Aerospace we strive to design the absolute best product we can, and hearing our customers tell us Ardulab is a well-designed product is so rewarding. We really think that designing better products makes all the difference, and in some cases it can be instrumental in pioneering a revolution in the industry. From the choice of material and machining process to the design of mounting holes so that customers can simply plug and play equipment, we have spent a lot of time focusing on each feature. Then we let a bit of evolution and iteration take place: early adopters buy the product, we listen to their feedback, and we start adding or removing features appropriately.

Of course all of this requires patience. That’s where passion comes in. We are space nuts. Literally. We love building products in space, after all.

ardulab-infinity-aerospace-team

Tom Vu: Explain the methodology and the approach? What is your general rule of thumb for designs in space? Form factor? Efficiency? Cost? Open source?

Manu Sharma: We are still learning the best way to create breakthrough products for space. What has worked for us so far is first going out and finding a REAL problem that’s worth something to potential customers. If you look at the space industry, the problems are either huge or small and very limited in potential. But all of this is changing. We generally question every single aspect of current designs and processes.

Once we have a good grasp of the fundamental problem, we don our artist and engineer hats and start designing a product. We take everything into account, from cost and efficiency to form factor. We chose to open-source ArduLab’s operating system and software for a purpose: we want people to leverage the vast knowledge about Arduino that already exists on the Internet to build things in space. A general rule of thumb for us when designing things for space is to keep everything very simple.

Tom Vu: Any “best practice” advice for those innovators, makers, or engineers out there looking to build for space?

Manu Sharma: Try to build upon or leverage existing standards. Use commercial off-the-shelf (COTS) hardware as much as possible to keep costs low. Look for products in different markets that may inspire or help you to build a space product. Chances are you will find a lot of them.

Tom Vu: We are seeing lots of democratization of space. What does that really mean for consumers? What does it mean for designers and engineers?

Manu Sharma: Yes, within the last two years we have seen successful crowdfunding campaigns for quite a few space products. We just saw a successful Kickstarter campaign by Planetary Resources (raising well over a million dollars), with people contributing as little as $25 to have their photos taken in space. As these new tools and technologies continue to emerge and scale with Moore’s law, we will see more and more democratization of space. It’s a fabulous thing on both ends.

For designers and engineers, the bells are ringing to get to work. As space democratizes, more and more products and business models become viable.

Tom Vu: Describe more of the intrinsic design characteristics and methodology. Why Arduino?

Manu Sharma: When we looked at how people did research and other commercial activities aboard the International Space Station, we found that everyone was developing their own custom microcontroller boards, casing and electronics. Everything was custom. It takes a lot of resources and time. And why spend time reinventing the wheel if someone else has already figured it out?

We chose the Arduino because everyone is using it to create experiments, including us. You can find example code and libraries for pretty much any kind of sensor or actuator you’re using. Tapping the Arduino community means reusability, efficiency and an open environment where we can all help each other. It is really hard to change user behavior, so instead of giving customers a new architecture and software development environment, we chose the widely adopted Arduino IDE.

Not to mention the fact that we were flabbergasted with the prices of similar solutions in the market. We wanted an affordable, simple, and powerful solution. Open sourcing space is the right thing to do. We want everyone to do interesting things in space.

Tom Vu: Can you give us a fly through of your design from origination, development, hardware, and application? Why AVR microcontrollers?

Manu Sharma: From the beginning we wanted to maintain the basic architecture of a well-adopted microcontroller platform. We looked at various options, such as the BeagleBone, Raspberry Pi and others, and realized that the large majority of Ardulab use cases do not require extensive processing power on board or a complex architecture. We wanted something simple and highly robust that’s industry proven. So we decided to go with the Atmel ATmega2560. It is perfect for the Ardulab.

Next, we developed a microcontroller board that extends the capabilities of the Arduino Mega with additional features that solve the toughest challenges everyone faces when designing experiments for operation on the International Space Station. We solved them, standardized the protocols, and streamlined the process of remotely retrieving data from Ardulabs aboard the ISS while on Earth.

Tom Vu: How important is it for designs to have “beyond the core” solutions? What will Ardulab’s ecosystem eventually grow into at maturity?

Manu Sharma: One has to always keep the trajectory of product development and evolution in mind. We certainly do with ArduLab. We think that with Ardulab we can offer a great educational package to high schools and universities. Schools and universities can literally start a private space program using ArduLabs, our launch slots and payload services. Customers can simply FedEx their experiments to us, and we take care of integrating them with launch vehicles. If an experiment is ISS-bound, we work with NanoRacks directly. Once the experiments are installed inside the ISS, you are able to access the data in near real time on your iPad.

Eventually, we want ArduLab in every high school and university science program. Ardulab is itself a space lab, and students will be able to collaborate and develop science experiments. Recently, we have considered something similar for the microsatellite market. We aren’t talking about it too much at the moment, but you should be hearing from us about this soon.

Tom Vu: Semiconductor companies face challenges with form factor, scale, and large fab production costs and cycles. What are your challenges and expertise at Infinity Aerospace?

Manu Sharma: I don’t know if it is right to say that we have challenges with scale. We are growing organically at the moment. Yes, we are currently producing small quantities of Ardulabs, and the cost of fabrication is higher because of that. However, we are moving toward a hybrid model where we can sell the hardware for less, allowing more customers to use our powerful software and services.

Tom Vu: What word comes to mind when you think of Ardulab, or for that matter Infinity Aerospace? What would you like readers to understand about the significance of this project?

Manu Sharma: “Remote Automation and Cloud Management in Space.” We are building products that offer turnkey solutions in low earth orbit and suborbital flights.

Tom Vu: How does Ardulab play a role in inspiring the minds of today, especially in education or engineering academia?

Manu Sharma: We created ArduLab with students and researchers in mind. Working with CASIS, we’re piloting an educational curriculum around ArduLab in three schools in Houston, Texas. Each school will develop a space experiment inside an Ardulab, with kids from 6th through 12th grade working in teams to develop the science experiments. They will be launching their Ardulab experiments to the International Space Station sometime in the middle of next year. This is so exciting for us; it really keeps us in high spirits to share the wonders of space and inspire students. If we are successful here, we will be able to scale Ardulab and space programs to schools and universities nationwide.

Tom Vu: Explain the processing requirements, or minimalist approach, for Ardulab and the designs of future products. What are the raw challenges posed by the harshness of launch, zero gravity, or space? Any design constraints or design methodologies?

Manu Sharma: There are many design constraints. Since we designed Ardulab to comply with the International Space Station, we have to mind NASA’s requirements when choosing electronic components. On top of that, we have to make sure we are consistent with the NanoRacks systems onboard the ISS. There isn’t any design constraint regarding radiation: radiation levels inside the ISS are low enough for commercial off-the-shelf electronics to work properly. Through much iteration, the Ardulab structure has been designed to survive the extreme vibrations encountered during launch.

Tom Vu: How important is it for you to rapidly build, test, and develop the evolution of your product on the Arduino or AVR chip?

Manu Sharma: It is very important; we don’t have a choice. When we deliver Ardulabs to our customers, we are also eagerly watching how they use our product and what changes we can make to improve the user experience. We have gone through more than seven design iterations since last October. As I talk to you now, a new version of the Ardulab microcontroller board is being assembled, which includes new features and technologies.

Tom Vu: Metaphorically, it is fun. What sound barrier or escape velocity (in technology, solutions, and consumer products) are we about to break through if Ardulab becomes wildly successful?

Manu Sharma: Ardulab is intended to make it as easy as possible for people all over the world to gain access to space. If Ardulab is wildly successful, then a lot of people will be active in the space industry. We plan to serve this market by continuing to provide more infrastructure technologies. Communication, data transfer, and management of systems are a big opportunity there.

Tom Vu:  What are some of the challenges you can imagine resulting from Infinity Aerospace’s Ardulab?

Manu Sharma: I think our challenge is similar to that of any aerospace company. The cost of getting into low Earth orbit or to the International Space Station is still on the order of $50k for a one-kilogram Ardulab. We are optimistic that in the near future these prices will drop an order of magnitude. Currently, we are offering space programs starting at $4,995 that include a launch slot on a suborbital vehicle (the XCOR Aerospace Lynx). Customers get about four minutes of high-quality microgravity in a suborbital vehicle, which is great for experimentation and learning about the microgravity environment. We are first to market with this product at this price point, and we will surely soon see real competition. We need to continue to innovate and lead this market to stay competitive.

Tom Vu: How can Ardulab evolve technologically? As a visionary, do you see demand for Ardulab to be integrated more into education, design, solutions, and products, perhaps accelerating efficiencies or connectivity?

Manu Sharma: There are many ways we plan to grow Ardulab technologically. First, we are planning to experiment with higher-end microprocessors that will not only serve some users in the education market but also provide robust, high-performance capabilities to commercial customers developing low Earth orbit solutions. If you look at Ardulab as a platform for developing automated tools in space, you will see multiple possibilities for Ardulab to morph into different products.

Tom Vu: What are you seeing come into your pipeline of requests to get into space? What does Ardulab have in its roadmap for customers over the next 5-10 years?

Manu Sharma: We have some great surprises for everyone. We have customers developing experiments in domains such as biotech, fluids, chemistry, physics, robotics and materials science. Ardulab is also creating offshoot products: we have recently been working with NanoRacks to develop an automated fluid-mixing lab, reusing the Ardulab board to power this totally new product. We are working on software that will enable teams to collaborate on designing experiments in Ardulab and also provide near real-time access to them once they are installed on the ISS. We will be growing our services in the coming years to provide more options for launch slots on suborbital vehicles, as well as in low Earth orbit, with different providers. Eventually customers will want to go beyond the International Space Station; we are fully aware of that and are strategically planning our development process with it in mind.

Tom Vu: Why space? Is this important as the next step for computing and embedded design engineering?

Manu Sharma: During a late-night discussion over drinks with astronaut Don Pettit, he said one thing that has stuck in my mind ever since: space is a frontier, and a frontier is enriched with new discoveries and phenomena. Space is a totally new arena for developing products. There is a growing market, and there are economic reasons to go to space, whether for developing better communication and imagery systems or for mining resources and settling human bases on different moons and planets.

A phone today has all the basic functionality to become the system bus for a microsatellite. We saw that in the PhoneSat project at NASA Ames; the group later went on to create a company building disposable microsatellites. Imagine that in the near future we can get real-time imagery of the whole planet. What will people use this data for? What kind of apps could be created? We could possibly track deforestation, wildfires or other natural calamities on our iPads. I think advances in computing and integrated circuits will make this kind of future in space happen.

Tom Vu: What is the vision for Infinity Aerospace as it pertains to people benefiting from its milestones?

Manu Sharma: Our vision is to provide a very powerful platform and software tools for people to develop space-based hardware, tools, experiments and applications. I think that with our upcoming products, and as Ardulab and other services evolve, people and organizations will be able to create automated systems in space with a simplicity never before seen in the entire history of the low Earth orbit market.

Tom Vu: What is SMART design to your standards?

Manu Sharma: A SMART design, to us, is a design that solves people’s real problems with minimal use of resources. We strongly believe in making sure that we not only solve the problems but also provide a great experience when people interact with our product.

ardulab-manu-sharma-infinity-aerospace-atmel

Kilobots, small vibrating robots, use the ATmega328

Thanks to our pals at Evil Mad Scientist, I learned about these small self-powered autonomous robots called Kilobots. Brought to you by Harvard University, the little gizmos are run by an Atmel ATmega328.

kilobots-stacked

The robots move on thin, rigid wire legs. There are two vibrating motors, like those in a pager, arranged in “quadrature,” so to speak: one will rotate the robot clockwise, and the other will rotate it counterclockwise. If you run both motors, the robot will move forward.

kilobot_callouts

The robots communicate with each other via an IR (infrared) transceiver, which allows them to exhibit swarm behavior like insects. Check out this video of the Kilobots doing their thing.

Harvard is doing this to study complex self-organizing behavior. This may help psychologists and economists understand complex human behavior that emerges spontaneously, like the open-source movement, the Dabbawala lunch-delivery system in India, and how day laborers outside Home Depot settle on rates and seniority.

The hi-zoot Harvard Kilobots were preceded by the Maker community’s Vibrobot. Evil Mad Scientist did a great vamp with their BristleBot, which uses the head of a toothbrush.

BristleBot

While the Kilobot was created for research, to their credit, Harvard has made it an open-source project that is just perfect to be picked up by the Maker Movement. World Maker Faire New York 2013 starts Saturday; the Atmel team is setting up, and the Evil Mad Science people will be at our booth to show off their cool Atmel-powered kits.

Reza Kazerounian talks MCUs, China and the IoT

Dr. Reza Kazerounian, SVP and GM of Atmel’s Microcontroller Business Unit, recently sat down with Yorbe Zhang of EE Times-China to discuss the company’s activities in Asia, with a specific emphasis on the Internet of Things (IoT).

Essentially, the Internet of Things (IoT) refers to a future world where all types of electronic devices link to each other via the Internet. Today, it’s estimated that there are nearly 10 billion devices in the world connected to the Internet, a figure expected to triple to nearly 30 billion by 2020.

As we’ve previously discussed on Bits & Pieces, the IoT may very well represent the greatest potential growth market for semiconductors over the next several years. Indeed, consumers want WiFi capability along with very low power consumption, as most connected mobile devices these days run off batteries. Atmel is certainly well positioned for the IoT, as our portfolio includes ultra-low power WiFi capability and an extensive lineup of microcontrollers (MCUs).

The full video of Dr. Kazerounian’s interview with EE Times-China can be viewed here. Please note that although Yorbe Zhang provides opening and closing remarks in Mandarin, the exchange between Zhang and Dr. Kazerounian is in English.

Digitizing sculptures with MakerBot

In August, MakerBot began accepting pre-orders for its new Digitizer 3D scanner which is expected to ship in October. The Digitizer is currently priced at $1,400, plus an optional $150 for MakerCare, a comprehensive service and support program.

As previously discussed on Bits & Pieces, MakerBot’s Digitizer allows users to quickly “transform” (scan) objects and items into 3D models that can be easily modified, shared and printed on 3D printers like the company’s Atmel-powered MakerBot Replicator 2.

Although the Digitizer has yet to hit the streets, the MakerBot crew has already fashioned a number of new creations using the device, including figures based on famous sculptures, such as those found along the Pont Neuf in Paris on a series of historic lampposts designed by Victor Baltard in 1854.

“Robert Steiner, our Chief Product Officer here at MakerBot, wanted to incorporate elements of these lampposts into a design for some furniture of his own. He sent pictures (above) off to a sculptor in the Philippines. A few months later these sculpts (below, left) arrived in the mail, but they were not great objects for casting into molds, as Robert had planned. He put them in a box and nearly forgot about them until we launched the Digitizer. Sensing an opportunity, he brought them into the office and the dolphin scanned beautifully,” MakerBot’s Bre Pettis wrote in a recent blog post.

“Plaster, due to its pale and textured surface, is a great material for scanning. The Digitizer software had no problem filling in the occlusion behind the lips. Plaster originals at left, Digitized and Replicated versions at right. Robert asked the sculptor to give Neptune an open mouth, in hopes of turning it into a fountain spout. The Neptune face didn’t scan well laying flat, so I attached some clay to the base to help it stand up straight. This gave his beard a trim, but now the printed version has a flat base to stand on.”

Meanwhile, MakerBot’s Kate Hannum noted that Thingiverse super user Dutch Mogul (aka Arian Croft) artfully remixed the company’s official MakerBot Gnome into a steampunk model dubbed Sir Occulum Tanberry.

“This little guy is ideal for gaming, as he retains his detail even at the 28mm gaming scale. You can easily print Sir Occulum Tanberry in halves or as one piece with supports. As is noted in the description, he looks especially at home next to the MakerBot Crystals,” said Hannum.

“3D scanning gives folks who aren’t expert 3D modelers an easy way to modify, improve, share, and 3D print. For people who are expert modelers like Arian, scanning provides a jumpstart to creating seriously awesome things. We can’t wait until Thingiverse is flush with exciting new remixes of scans from community members – beginners and experts alike!”

Indeed, the MakerBot Digitizer outputs standard 3D file formats, so Makers can improve, shape, mold, twist, animate and transform objects in a third-party 3D modeling program. There is no patching, stitching, or repairing required, so Makers are able to skip straight to the creative process. Adding one 3D model to another is easy, like putting a hat on top of a gnome. Plus, Makers can either scan a second object, or search for it on Thingiverse.com, scaling down and multiplying targeted objects to create charms or game pieces.

Additional information about MakerBot’s 3D printer lineup and Digitizer is available here.

Atmel to host analyst panel @ World Maker Faire

The 2013 World Maker Faire opens its doors on September 21st at the New York Hall of Science (NYSCI). We’ll be there at the Atmel booth in the Arduino pavilion, showcasing a number of exciting new companies that have developed innovative applications using Arduino boards powered by Atmel AVR and ARM microcontrollers.

Atmel is slated to host a public media/industry analyst panel on Friday, September 20th, on the Maker community and education. Members of the panel include Atmel’s Reza Kazerounian, Arduino co-founder Massimo Banzi, Atmel Maker and Hexbug guru Bob Martin, university engineering professor AnnMarie Thomas, EDN Executive Editor Suzanne Deffree, 12-year-old CEO and Maker Quin (Qtechknow), and MAKE Books Senior Editor Brian Jepson. The panel will be moderated by Windell H. Oskay of Evil Mad Scientist Laboratories.

Tune into our live Twitter feed of the panel starting at 11:30 am ET on September 20th under #Atmelmakes or visit our recently launched microsite for more details. For those of you attending the Faire, Atmel’s booth will be taking center stage at the show with a number of uber-cool exhibits and demos including:

  • Hexbug/hovercraft hacking: Watch Atmel employees hack traditional Hexbugs and hovercrafts using Arduino boards.
  • MakerBot: We’ll be showcasing the wildly popular AVR-powered 3D printer and providing 3D samples over the weekend.
  • Pensa: This company uses Arduino boards to make their flagship DIWire, a rapid prototyping machine that bends metal wire to produce 2D and 3D shapes.
  • Infinity Aerospace: The ArduLab – powered by Atmel’s versatile ATmega2560 microcontroller – is a highly capable experimentation platform ready for space right out of the box. Sensor mounting is straightforward, with unique functionality addressing the technical challenges of operating in space.

Additional exhibitors at the Atmel World Maker Faire booth include Fuzzbot (robots), Evil Mad Scientist and Colorado Micro Devices. We’re looking forward to seeing you at the Atmel booth, so don’t forget to follow us at @makerfaire, @atmel and @arduino!

Interested in attending Atmel’s panel? Be sure to email us at pr@atmel.com. Also, be sure to join us when Bob Martin presents Prototyping is as Easy as Uno, Due, Tres.

MakerFaireRibbon

The Arduino Uno is an excellent lab tool for technicians and hardware engineers who have a specific design in mind. In this presentation, we will show how Atmel’s MCU apps lab regularly uses the Uno in test harnesses for LED lighting stress testing, SBC reset response and power supply stress testing on its weather station prototype.

When: Sunday, September 22, 2013, 12:30PM – 1:00PM ET
Where: Make: Electronics Stage