Category Archives: Application Highlights

The Internet of Things and energy conservation

Humans are creative and adaptive. We’ve done it all our lives, and all our existence. We needed more food, so we created agriculture. We needed to live together, so we created architecture. We needed to communicate, so we created hundreds of ways to do just that: the Internet, mobile telephone networks, computers. We are so fond of computers that we have them everywhere, often without noticing them. Yes, you might have a bulky desktop computer at home, or maybe even a flashy new laptop, but those are not your only computers. Your mobile telephone is a computer, and technically, so is your microwave, your car, your television set, and even your washing machine.

Our lives have changed greatly. We’ve all seen pictures and even films of medieval castles, and we know how we used to live. Today, our lives are made more comfortable by scores of machines; when was the last time you washed your clothes by hand? The clothes go in the washing machine, then in the dryer, and then in the cupboard. This all comes at a cost: financially, of course, but also in terms of energy.

Energy. The art of creating electrical power and delivering it to our homes and cities. For most people, this is as simple as having overhead power lines here and there, and paying a bill at the end of the month. Unfortunately, it is much more complicated than that. Power stations require scores of people to operate, and something surprising: data. In France, we have “too many” power stations, and most run at low capacity. When it gets hot, those who have air conditioning like to turn it on, consuming electricity. Multiply that by a few thousand, and you get an idea of how much energy the power station needs to produce. When it gets cold, people like to heat their homes and businesses, and since everyone has radiators, electrical consumption soars. Imagine the number of radiators an entire city can contain, and imagine even 50% of them turned on at the same time. Imagine.

Data is needed from other sources, not just from the weather. Imagine the amount of power required to let all the football fans watch the world cup. Our problem is that we can generate electricity, but we cannot store it (at least, not on this kind of scale). When everything gets turned on, the power station must be able to respond. If it can’t, bad things happen; the lights dim, or sometimes everything goes dark. We now know we cannot live without electricity.

SMART Energy Flow

We all know that we need to reduce our energy dependence, even if some of us don’t want to. To make more people aware, some cities turn off all the lights for an hour. It’s called Earth Hour. For one hour, people are encouraged to use as little electricity as possible; turning off the lights, for example. This does have an impact, but it is a double-edged sword. For one hour, the electricity usage drops considerably, while everyone thinks about the planet, and what we will leave behind for our children. At the end of the hour, everything goes back on, and this is where things get tricky. When electrical devices are first turned on, some can generate what is called an energy spike: a large surge in consumption at first, before settling into something more stable. It is visible just after Earth Hour, but it actually happens every day.

Building Appliances and Home Systems using Energy at Optimum Times

Peak hours. In my house, my electric water heater is connected to a peak-hour detection system. At 11:30 PM, my electricity provider starts “off-peak” hours, a time when electricity costs less. The lower price is an incentive to run power-hungry devices at a time when demand is low; at this time of night, most businesses are closed. It is all about normalizing energy requirements and flattening peaks during the day. At 7:30 AM, peak hours start, the water heater turns off, businesses start up, and my kettle turns on; the day is about to begin.


Energy is available; that isn’t the problem. Our problem is our use of energy. If only we had a way of using energy when it was available. Imagine a certain amount of energy available. When I need light, I want my light to be usable immediately; I need a start time of now. However, when I put my clothes in the washing machine, I generally need them to be ready for the next day. I need an “end” time; I need the device to get the work done before a certain moment. When will the washing machine start? Well, I don’t actually mind when it starts, and this is where I need help. This is where the IoT can help us, because we really need help.

The IoT will give us millions of connected sensors. This will also supply us with data, lots and lots of it. Why wouldn’t a small device in my house have direct control over my washing machine, or even better, actually be inside my washing machine? It could be programmed to start at a specific time, talking to other devices on the energy grid, or even in my home; it could tell the water heater to wait until it has finished, and then the water heater gets its chance. The possibilities are endless.
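To make the idea concrete, here is a minimal scheduling sketch in Python. The off-peak window (11:30 PM to 7:30 AM) comes from the tariff described above; the prices, the half-hour step, and the brute-force search are illustrative assumptions, not a real smart-grid protocol.

```python
# Hypothetical sketch: pick the cheapest start time for an appliance
# given its run time and a "ready by" deadline. Hour 0 is midnight;
# prices are invented for illustration.

OFF_PEAK_START = 23.5   # 11:30 PM, in hours
OFF_PEAK_END = 7.5      # 7:30 AM
PEAK_PRICE, OFF_PEAK_PRICE = 0.18, 0.13  # per kWh, illustrative

def price_at(hour):
    """Tariff for a given hour of day (wraps at midnight)."""
    h = hour % 24
    return OFF_PEAK_PRICE if (h >= OFF_PEAK_START or h < OFF_PEAK_END) else PEAK_PRICE

def cheapest_start(duration_h, deadline_h, step=0.5):
    """Scan candidate start times from hour 0 up to deadline - duration
    and return the start with the lowest total tariff cost."""
    best_start, best_cost = None, float("inf")
    t = 0.0
    while t + duration_h <= deadline_h:
        cost = sum(price_at(t + k * step) for k in range(int(duration_h / step))) * step
        if cost < best_cost:
            best_start, best_cost = t, cost
        t += step
    return best_start

# Washing machine: 2-hour cycle, clothes needed 30 hours from now.
start = cheapest_start(2.0, 30.0)
```

A smart washing machine doing this on its own, or in negotiation with the water heater, is exactly the "I don't mind when it starts" help described above.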


IoT will give us an incredible amount of data, data that can be used to help us control, and maybe even overcome, our need for energy. But wait a minute, doesn’t the IoT itself need energy? It does, but the amount of energy it will save outweighs the amount it uses, and by a large factor. Take, for example, Atmel’s SAM D21 microcontroller. It uses less than 70 µA per MHz, and that is when it is running at full speed. These devices also have advanced power management, and with careful coding, they can last for months on coin cell batteries. Low power does not mean no power; the SAM D21 has enough flex to get the job done, and more. With built-in USB, ADCs, DACs and enough RAM and flash for the most complex programs, it gets the job done. It also has the Atmel Event System, a powerful mechanism that lets the microcontroller react to external events without constantly polling its inputs.
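The months-on-a-coin-cell claim is easy to sanity-check with back-of-the-envelope arithmetic. The 70 µA/MHz active figure comes from the text; the sleep current, duty cycle, and battery capacity below are illustrative assumptions, not datasheet guarantees.

```python
# Back-of-the-envelope battery-life estimate for a duty-cycled MCU.
# Only the 70 uA/MHz figure comes from the text; everything else is
# an assumed, illustrative number.

ACTIVE_UA_PER_MHZ = 70
CLOCK_MHZ = 48
SLEEP_UA = 4            # assumed deep-sleep current
DUTY_ACTIVE = 0.001     # awake 0.1% of the time
BATTERY_MAH = 220       # typical CR2032 coin cell capacity

active_ua = ACTIVE_UA_PER_MHZ * CLOCK_MHZ                    # current while running
avg_ua = active_ua * DUTY_ACTIVE + SLEEP_UA * (1 - DUTY_ACTIVE)
hours = BATTERY_MAH * 1000 / avg_ua
months = hours / (24 * 30)
```

Even with these rough numbers, the average draw lands in the single-digit microamp range, which is what turns a small coin cell into many months of operation.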

(Source CES 2014 - Samsung's Vision of the Now and Future of Connected Appliances)

We need a little help in our lives to make simple decisions; when should I turn the heating on? When is the best time to turn on the air conditioner? We think we know, but we don’t. IoT will allow us to know exactly when the cold weather is coming. IoT will know when to turn the lights off. In short, IoT will generate enough data that it will know better than us what to do, and when. What we have seen so far is only the beginning.

On the road from Makers to consumers

It’s time to break with conventional thinking. For decades, the measure of success for semiconductors has been OEM design wins. Most consumers haven’t known, or cared, about what is inside their electronic gadgets, as long as they work. That may be about to change, because a new intermediary is finding its voice – and being heard in high places.

Intel and Apple, in different ways, began challenging the norm by pursuing consumer branding and developing pull-through demand for their parts as drivers of the overall experience. Coupling what people “feel” about their devices with the technology powering them creates an almost unbreakable bond, akin to a religious response. Reaching billions of people has required billions of dollars and high profile advertising campaigns – out of the question for most embedded semiconductor companies.

A new road is being carved across the landscape, paved not with gigantic chips packing billions of transistors delivering a cascade of social chatter and streaming entertainment content. This road is built with ideas carried on small boards and open source software, and a sense of wonder about how the world works, and what we can do to shape it.

Somewhere on that road right now is a big truck, captured in pixels at a stop in June 2014 that may go down as a turning point in the annals of semiconductor evolution.

Overstated? The truck tour is a tried-and-true mechanism for reaching industrial OEMs, taking hands-on demonstrations to cities far from the sources of silicon and software innovation. If we were only talking about embedded design and the industrial IoT, it’d be business as usual, and this would be just another truck with a fancy paint job and a couple of FAEs inside.

But, it’s not. The industrial IoT is wonderful and welcome, but in and of itself it won’t generate the billions of units needed to drive a recovery and restart growth in semiconductors and the economy at large. That will only come from reaching and capturing consumers with IoT technology, in a big way.

And that, so far, has proven difficult. After all, even industry experts are feverishly debating the name IoT, questioning what applications really fall under the moniker, or what exactly it means. Much like “smart grid” and “mHealth” before it, the term IoT means something in the developer community, but not so much to consumers who don’t yet see a connection between the Internet and how they use everyday things.

A recent SOASTA survey suggests 73% of the US has never heard of the IoT, at least until an interviewer explains it to them. (I’m curious why that number always seems to be 73% no matter the topic, but let’s just say 3 out of 4 – I believe it.) When hearing oral arguments in the Aereo case earlier this year, several US Supreme Court justices issued queries indicating a limited grasp of technology. (Cut to Keyrock: “I’m just a caveman … your modern ways frighten and confuse me.”)

This isn’t a lack of intelligence on their part; it’s a lack of generating the needed visibility on our part. These are the people we all must reach if we have a hope to succeed. Who is going to reach them? Makers, armed with our tools and their ideas. Atmel and other tech firms reaching Washington and the first-ever White House Maker Faire, side by side with people like the star of Sylvia’s Super-Awesome Maker Show, was a milestone in delivering the message to the masses. This goes way beyond the T and E in STEM; remember, the social transformation was driven by youth, and young makers are going to drive the uptake of the consumer IoT.

Why? Well, frankly speaking, they don’t think like engineers – they think like actual, real-life users. I made the comment recently that we need to be careful: the people we are trying to reach can drive smartphones, not (name of other popular maker module redacted … sorry, Arduino didn’t rhyme). Don’t be distracted by a 17-foot-tall mechatronic giraffe with lava lamps for ears and a penchant for partying, or by the Obama crack about how we don’t spell “fair” with an ‘e’ in this country. These are people designing things they, and people like them, want to use. More importantly, they will provide the translation of what the new technology can do, renarrating the story from the language of semiconductor companies to the wants of the average consumer.

Makers are the people we need to win with. That idea isn’t lost on Chrysler, who has co-opted the maker movement as their idea in 2014 commercials. Makers care about what is inside, and they are choosing Atmel in droves – in part because Atmel has redirected technological and social media energy into nurturing them, away from just talking to the button-down, risk-averse, safety-is-job-one industrial community. Intel and other chip suppliers are feverishly trying to catch the wave with makers, moving away from the “e2e” stance that only takes us so far in this next phase.

It’s not for the faint of heart, or the impatient. The industrial IoT is safe, somewhat predictable ground for experienced firms, whereas the consumer IoT still borders on bubble in many minds. The maker movement is now to semiconductor firms what university programs once were, taken to the next level and reaching an even wider audience. Design wins with makers now likely won’t show up in the volume shipments column right away – but, they will show up as consumers get the IoT over time.

This post has been republished with permission from SemiWiki.com, where Don Dingee is a featured blogger. It first appeared there on June 19, 2014.

For I have seen the shadow of the curved touchscreen

Last year’s CES was the modern technology equivalent of the voyage of Ferdinand Magellan, proving beyond any shadow of a doubt that displays can no longer be thought of as only flat. While the massive curved 105-inch TVs shown by LG and Samsung drew many gawkers, the implications of curved touch displays are even wider.

At DAC 50 there were more than a few chuckles and some mystified looks when Samsung’s Dr. Stephen Woo spent a lot of his keynote address highlighting flexible displays as one of the challenges for smarter mobile devices (spin to the 27:41 mark of the video for his forward-looking comments). I think if we had polled that room at that second, there would have been two reactions: 1) yeah, right, a flexible phone, or 2) hmmmm, there must be something else going on. His comments should have provided the clue the flat display theory was about to dissolve:

Is there any major revolution coming to us? My answer to that is yes. I’m afraid that we as EDA, as well as the semiconductor industry, are not fully appreciating the magnitude of the revolution.

Woo showed the brief clip from CES 2013 introducing the first Samsung flexible display prototype, hinting that while exciting, it is still a ways from practicality. Why? He went on to explore the rigid structure of the current high volume smartphone – flat display, flat and hard board with flat and hard chips, and a hard case. I have some unpleasant recollections of trying chips on flex harnesses in the defense industry, and the problems become non-trivial with bigger parts and shock forces coming into play, not to mention manufacturing costs.

We might be getting thrown off by the limiting context of a phone as we know it. A gently curved but still fixed display poses fewer problems in fabrication using current technology. Corning has announced 3D-shaped Gorilla Glass, and Apple, LG, and Samsung are all chasing curved display fabrication and gently curved phone concepts today.

The real possibilities for smaller curved displays jump out in the context of wearables and the Internet of Things. The missing piece from this discussion: the touch interface. Flexible displays present a challenge well beyond the simplistic knobs-and-sliders, or even the science of multi-touch that allows swiping and other gestures. Abandoning the relative ease of planar coordinates implies not only smarter touch sensors, but algorithms behind them that can handle the challenges of projecting capacitance into curved space.

Illustrating the potential for curved displays with touch interfaces in automotive design, AvantCar debuted at CES 2014. Courtesy Atmel.


Atmel fully appreciates the magnitude of this revolution, and through a combination of serendipity and good planning is in the right place at the right time to make curved touchscreens for wearables and the IoT happen. With CES becoming an almost-auto show, it was the logical place to showcase the AvantCar proof of concept, illustrating just what curves can do for touch-enabled displays in consumer design. (Old web design axiom, holds true for industrial design too: men tend to like straight lines and precise grids, women tend to like curves and swooshes – combine both in a design for the win.)

The metal mesh technology in XSense – “fine line metal” or FLM – means the touch sensor is fabricated on a flexible PET film, able to conform to flat or reasonably curved displays up to 12 inches. XSense uses mutual capacitance, with electrodes in an orthogonal matrix – really an array of small touchscreens within a larger display. This removes ambiguity in the reported multiple touch coordinates by reporting points independently, and coincidentally enables better handling of polar coordinates following the curve of a display using Atmel’s maXTouch microcontrollers.
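To see why curved displays complicate touch, consider a toy version of the geometry: a sensor film wrapped around a cylinder reports touch positions as distances along the film, which the host must map back into display space. This is purely a geometric sketch under an assumed cylindrical curve, not the actual maXTouch algorithm.

```python
# Toy illustration of the coordinate problem on a curved display:
# map (arc length along the film, height) to 3D-ish display space.
# The cylindrical model and all dimensions are assumptions for the sketch.
import math

def film_to_cylinder(s_mm, y_mm, radius_mm):
    """Map (arc length s, height y) on the wrapped film to (x, z, y):
    x across the face of the display, z the depth relative to a flat panel."""
    theta = s_mm / radius_mm               # arc length -> angle in radians
    x = radius_mm * math.sin(theta)
    z = radius_mm * (1 - math.cos(theta))  # how far the surface curves away
    return x, z, y_mm

# A touch 40 mm along a film wrapped on a 100 mm radius curve:
x, z, y = film_to_cylinder(40.0, 10.0, 100.0)
```

Even this simple model shows the issue: equal steps along the sensor are no longer equal steps across the viewer-facing plane, so the controller and its algorithms must account for the curve.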

Utilizing fine line metal - copper etch on PET film - Atmel's XSense touch sensor is able to conform to gently curved displays.


Now visualize this idea outside of the car environment, extended to a myriad of IoT and wearable devices. Gone are the clunky elastomeric buttons of the typical appliance, replaced by a shaped display with configurable interfaces depending on context. Free of the need for flat surfaces and mechanical switches in designs, touch displays can be integrated into many more wearable and everyday consumer devices.

Dr. Woo’s vision of flexible displays may be a bit early, but the idea of curved displays looks to be ready for prime time. The same revolution created by projected capacitance for touch in smartphones and tablets can now impact all kinds of smaller devices, a boon for user experience designers looking for more attractive and creative ways to present interfaces.

For more on the curved automotive console proof of concept, check out Atmel’s blog on AvantCar.

What do you think of the emergence of curved displays and the coming revolution in device design? How do you see curved touchscreens changing the way industrial designers think of the user interface on devices? Looking out further, what other technological improvements are needed?

This post has been republished with permission from SemiWiki.com, where Don Dingee is a featured blogger. It first appeared there on January 10, 2014.

Does your smartphone’s touchscreen support moisture touch?

Recently, I met an Atmel maXTouch customer whose smartphone brand is well recognized by consumers in West and East Africa, competing against smartphones made by global brands like Samsung and Nokia. When the customer selected our touchscreen controller for their smartphone product, they needed two features that are very important for African consumers: robust moisture performance and strong noise immunity. This is hardly a surprise, as many African countries have unreliable power supplies, making surge protection important for electronic devices; additionally, the warm climate in most African countries makes robust moisture performance a basic requirement for touchscreen controllers, which have to handle sweaty fingers, palms and faces. When a touchscreen controller has trouble combating charger noise or moisture on the touchscreen, a symptom called “ghost touch” occurs – the touchscreen registers a false touch without the presence of a finger at that specific location.


With Adaptive Sensing technology, Atmel’s maXTouch T-series scans the touchscreen of a smartphone using both mutual-capacitance and self-capacitance sensing.


Mutual capacitance enables true multi-finger touch operations, such as the multi-finger gestures and rotations used in gaming apps. However, self-capacitance sensing is much less sensitive to the presence of moisture or water droplets than mutual capacitance. Atmel’s Adaptive Sensing technology combines the analog signals of both self-capacitance and mutual-capacitance sensing, allowing the embedded maXTouch microcontroller to intelligently determine moisture presence through the obvious differences between the two measurement deltas at corresponding touch locations. In the example below, a maXTouch device combines both sets of signals to eliminate the false touches (a.k.a. ghost touches) typically associated with the presence of moisture on a touchscreen.
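As a rough illustration of this kind of decision (not Atmel’s actual Adaptive Sensing algorithm), imagine comparing the two measurement deltas at a candidate touch location. The thresholds and the classification rule below are invented for the sketch.

```python
# Illustrative-only classifier: a real finger lifts both the self-cap
# and mutual-cap deltas, while a water film can lift the mutual-cap
# delta without a matching self-cap response. Thresholds are made up.

SELF_THRESHOLD = 50      # counts, illustrative
MUTUAL_THRESHOLD = 40    # counts, illustrative

def classify(self_delta, mutual_delta):
    """Decide what a candidate touch location most likely is."""
    if self_delta > SELF_THRESHOLD and mutual_delta > MUTUAL_THRESHOLD:
        return "finger"
    if mutual_delta > MUTUAL_THRESHOLD:
        return "moisture"    # suppress as a would-be ghost touch
    return "no touch"
```

The point of the sketch is simply that having two independent measurements per location gives the controller something to compare, which a single sensing method cannot offer.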

I should point out that a smartphone with an excellent water-resistance rating does NOT necessarily mean that it has robust moisture performance for its touchscreen. Here is a tidbit of consumer feedback on a premium smartphone with an IP58 rating:


In comparison, the OEM customer designs smartphones for African consumers that offer excellent touch performance in the presence of moisture, thanks to our maXTouch T-series. The maXTouch mXT640T series of touchscreen controllers dynamically switches into a self-capacitance-based single-touch mode when touches are detected in the presence of significant water. This means the normal touch functionality of a mXT640T touchscreen is maintained for as long as possible before eventually switching to single-touch operation to maintain reliable operation and prevent false touch conditions. The picture below illustrates how we set the bar for superior water/moisture performance in the market:

Ghost touches reported in the presence of mist and water on the touchscreen
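The mode switch can be pictured as a tiny state machine with hysteresis, so the controller does not flap between modes as water drains away. The state names and thresholds below are invented for illustration; the real mXT640T logic is certainly more involved.

```python
# Illustrative-only mode state machine for a touch controller that
# falls back to self-cap single-touch under significant water and
# returns to full multi-touch once the screen is dry again.

MULTI, SINGLE = "multi-touch (mutual)", "single-touch (self)"
WATER_ENTER, WATER_EXIT = 70, 30   # hysteresis thresholds, made up

def next_mode(mode, water_level):
    """Advance the mode given an estimated water level (arbitrary units)."""
    if mode == MULTI and water_level > WATER_ENTER:
        return SINGLE          # significant water: drop to single touch
    if mode == SINGLE and water_level < WATER_EXIT:
        return MULTI           # dry enough: restore multi-touch
    return mode                # otherwise stay put (hysteresis band)
```

The gap between the two thresholds is the design choice that keeps the controller from oscillating while the water level hovers near a single cutoff.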

All in all, a touchscreen powered by Atmel’s maXTouch T-series controllers can support true multi-finger operations in the presence of moisture. Even in the rain, with water falling onto your smartphone, the system dynamically maintains reliable touch operations and prevents false touches, so that when you press a speed-dial for Uber in the rain, your phone will not innocently call your ex-girlfriend instead.


Accelerate your evaluation of Atmel 802.15.4 wireless solutions from your desktop

You have probably come across this scenario before: Management or the marketing department approaches you asking you to add wireless functionality to an existing product, or to develop a new product that needs to be able to support a wireless link. Today, there are many wireless technologies and options to consider.

It is also quite possible that marketing has already made part of that decision for you.

The marketing requirement may stipulate that you use Wi-Fi, ZigBee, 6LoWPAN or Bluetooth Low Energy (BLE). Or, maybe marketing has no idea what is required, and just tells you to implement a wireless link!

So, after a number of meetings and conference calls, you decide to use a solution based upon 802.15.4. This could include ZigBee, 6LoWPAN, WirelessHART, ISA100.11a, OpenWSN or LwMesh, among many other wireless stack solutions that all require an 802.15.4-compliant transceiver.

At this point you need to decide whether your solution, or the protocol you’ve selected, will operate in the 2.4 GHz band or in a sub-GHz band. There are times when you will need to do some experimentation or RF performance evaluation to determine which RF band to use in your particular situation.

When evaluating Atmel 802.15.4 wireless solutions, the first tool you should turn to is Wireless Composer, a free extension to Atmel Studio 6.x. To keep things simple, each of the current Atmel 802.15.4 evaluation kits/platforms comes with a Performance Analyzer firmware application pre-programmed into the kit. Running on the evaluation kit, this Performance Analyzer firmware is designed to communicate with Atmel Studio and the Wireless Composer extension.

Some of the capabilities of Wireless Composer include:

  • PER (Packet Error Rate) testing: Transmit and receive thousands of frames at a specific TX power level and RF channel, then review the results for errors (dropped bits/frames) while also evaluating throughput metrics.
  • CW test modes: Place a device in a continuous-wave test mode to monitor emissions with a spectrum analyzer or other RF test equipment.
  • Antenna evaluation: Provide a large digital display for testing antenna radiation patterns at distances of up to around 3 meters from the device connected to the laptop PC.
  • Range testing: Gather and log range data generated from a wireless link set up between two nodes — this data includes RSSI (ED signal strength) and LQI (signal quality) from both sides of the RF link.

Here are a few additional example screen captures, available from within Wireless Composer.

Energy Detection Scan Mode:


Screenshot of Wireless Composer, an extension of Atmel Studio 6.x – Energy Detection Scan

Have you ever set up some RF tests and wanted to know if there were other transmissions already taking place on the channel you intended to test on? Maybe your colleagues are performing tests in another section of the lab or building, or maybe at home you have Wi-Fi, Bluetooth or home automation devices operating in close proximity to where you want to run some experiments. The ED scan mode, as shown here, allows you to get a quick glimpse of what RF activity is happening around you. You can do a one-time scan, or you can configure the test to continuously scan one or all channels and repeat this until you stop the test.
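Conceptually, an ED scan just samples the energy on each channel and reports what it finds. The sketch below stubs out the hardware measurement with deterministic stand-in values so it is runnable; a real scan would read the transceiver's ED register. Channels 11–26 are the 2.4 GHz 802.15.4 channels.

```python
# Conceptual ED scan: sample each 2.4 GHz 802.15.4 channel and report
# the quietest one. read_ed() is a stub standing in for a hardware
# ED register read; its values are invented for illustration.
import random

def read_ed(channel):
    """Stub for a hardware ED measurement, in dBm (illustrative)."""
    random.seed(channel)          # deterministic stand-in per channel
    return -95 + random.randint(0, 40)

def ed_scan(channels=range(11, 27)):
    """Scan every channel once; return all readings and the quietest channel."""
    readings = {ch: read_ed(ch) for ch in channels}
    quietest = min(readings, key=readings.get)
    return readings, quietest

readings, quietest = ed_scan()
```

A continuous scan, as the tool offers, would simply repeat this loop until stopped, accumulating min/max/average energy per channel.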

PER Test:

A common RF test to perform on a packet based wireless communication system is a PER (Packet Error Rate) test.

This test mode allows you to configure operation on a particular channel, at a specific TX power level, using a selected antenna option. You are then able to set the number of bytes to send in a transmitted frame, and how many frames to send during the test. All of these parameters are configured in the left-hand Transceiver Properties pane, as shown in the capture below. Once the test is performed, the right-hand window provides data regarding the results of the test.

This can be useful for confirming RX sensitivity parameters, and data throughput characteristics under different conditions. Here is an example of sending 1000 frames and achieving zero errors using a frame length of 20 bytes.
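The arithmetic behind a PER test is simple enough to write down. The 1000-frame, zero-error, 20-byte numbers mirror the example above; the throughput helper and the two-second test duration are illustrative.

```python
# PER and throughput arithmetic behind the test described above.
# Frame counts mirror the example in the text; the test duration
# used for the throughput figure is an assumed value.

def packet_error_rate(frames_sent, frames_received_ok):
    """Fraction of transmitted frames that were lost or corrupted."""
    errors = frames_sent - frames_received_ok
    return errors / frames_sent

def throughput_bps(frame_len_bytes, frames_received_ok, test_seconds):
    """Payload throughput in bits per second over the test duration."""
    return frame_len_bytes * 8 * frames_received_ok / test_seconds

# 1000 frames of 20 bytes, zero errors, as in the example:
per = packet_error_rate(1000, 1000)
```

Running the same test at decreasing TX power levels is how you walk the link down toward the receiver's sensitivity floor.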


Screenshot of Wireless Composer, an extension to Atmel Studio 6.2 – Packet Error Rate test mode


Continuous Transmission Test Mode:

If you have attempted to develop a wireless RF product before, you know that a considerable amount of time will be spent performing regulatory pre-scan certification testing. This typically involves configuring your device to transmit a continuous-wave RF emission on a particular RF channel using a specified amount of transmit power, while the RF emissions are monitored using a spectrum analyzer or other RF test equipment. To help save time, Wireless Composer provides a Continuous Transmission tab that allows selection of a few different tests of this type.

In the example shown below, an unmodulated CW test transmission has been started on channel 16 using a TX power level of +4 dBm. There are no results reported here, because all measurement results would come from observing the RF test equipment that monitors the RF emissions.


Screenshot of Wireless Composer, an extension to Atmel Studio 6.2 – Continuous Wave test mode


Antenna Evaluation Range Test Numerical Display:

For any wireless product, the antenna is one of the most important sections of the design. A great radio with a poor antenna results in poor product performance, while a mediocre radio with a great antenna can deliver very good performance. So, one of the tasks for any wireless product developer is to understand the characteristics and performance of the antenna design. This may be some type of on-board antenna, like a ceramic chip antenna or a PCB trace antenna, or it may just be an external antenna connected to an RF connector mounted on the product’s PCB. Many on-board antenna designs are shortened quite a bit to reduce the footprint or space required by the antenna. This usually affects the performance of the antenna in a negative way, or at a minimum adds directivity to the antenna’s radiation pattern. A nice capability of Wireless Composer is that it lets you place the device connected to the PC, running Wireless Composer, on a table or tripod at a specific height above the floor in an open indoor or outdoor area. Then, in the range test tab within Wireless Composer, select “Numerical” as the display mode. This will display a screen as shown below.

You would then take a battery-operated mobile node about three meters away from the PC display and watch the displayed ED/RSSI and LQI values change as you rotate or change the orientation of the antenna with respect to the unit at the other end of the link. This display shows the LQI and ED/RSSI values at both ends of the link and can be used to examine any changes in antenna pattern as the device orientation is changed. Knowing which orientation provides the best signal levels will later help you understand how to position the unit when mounting it at its final location. You will also acquire information on how to set up additional range tests where you could be up to one mile away, and all you have is a blinking LED to indicate whether or not you still have communications with the unit under test.


Screenshot of Wireless Composer, an extension to Atmel Studio 6.2 – Range Test Numerical Display


Range Test Log With Multiple Markers (Push Button Marker Recording):

Wireless Composer also has a range test mode for logging signal level and quality to a PC display or to an Excel file, as shown in the screen capture below.

When two paired devices are configured in this range test mode, the unit connected to the PC will periodically (about every two seconds, to conserve battery life) send a beacon-type frame to the mobile unit, at which point the mobile unit sends back a reply to the logging device. This activity can also be seen in the screen capture below.

The LQI and ED (average RSSI) levels for each side of the wireless link are recorded with a time stamp to an Excel file.
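The logging side can be pictured as a small loop that writes one timestamped row per beacon exchange to a CSV file Excel can open. The column names and sample readings below are invented for illustration; real values come from the radios.

```python
# Sketch of the range-test logging loop: one timestamped CSV row per
# beacon exchange, with LQI and ED/RSSI from both ends of the link.
# Field names and the sample data are illustrative, not the tool's format.
import csv
import io
from datetime import datetime

def log_rows(samples, out):
    """Write (local_lqi, local_ed, remote_lqi, remote_ed) tuples as CSV rows."""
    writer = csv.writer(out)
    writer.writerow(["timestamp", "local_lqi", "local_ed", "remote_lqi", "remote_ed"])
    for local_lqi, local_ed, remote_lqi, remote_ed in samples:
        writer.writerow([datetime.now().isoformat(timespec="seconds"),
                         local_lqi, local_ed, remote_lqi, remote_ed])

# Two example beacon exchanges, written to an in-memory buffer:
buf = io.StringIO()
log_rows([(255, -48, 250, -51), (230, -60, 228, -62)], buf)
```

In a real session the samples arrive one at a time, every couple of seconds, and the file would be opened on disk rather than in memory.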

Have you ever tried to do an RF range test by yourself? If you have, then you know that it sometimes can be difficult to set up a test, such that you can leave one node at a fixed location and take the other battery operated mobile unit to various locations where you want to gather signal level and link quality information.

This is especially true when your simple wireless device lacks any type of user interface or display, as in the case of a wireless sensor or a simple evaluation board. This becomes even more difficult if you are doing LOS (line-of-sight) measurements outdoors. The Performance Analyzer app only assumes you have access to two I/O pins — one is typically an input for a push button or jumper, while the other is an output for an LED.

Outdoor LOS measurements may allow you to achieve distances of hundreds of meters, as well as one or more miles in the sub-GHz RF bands.

To make this measurement task a lot easier, the Performance Analyzer app lets you press a button on the battery-operated portable unit in your hand, which makes the device send an RF marker frame back to the unit connected to the logging PC. That marker frame is recorded into the log, allowing you to place marker indicators for time and place in the log file, so you can determine where you had been when you return to review the log data.

For instance, you could press the button once at a specific location in room A, and then press it twice at a location in room B. Or, if you are outdoors, you could press the button and insert markers at various distances as you move away from the logging unit. Then, all you would have to write on your notepad while doing the test would be the name of your location (or the distance you were from the logging unit) and the number of times you pressed the button at that location.

Upon your return to examine the recorded log, you’ll have all of the necessary information to understand the recorded results, including where in space and time the measurements were made.
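Back at your desk, turning the marker frames plus your notepad into labeled data is a simple parsing job. The log format below is invented for the sketch; the marker values stand for the number of button presses you noted at each location.

```python
# Sketch of post-processing a range-test log with push-button markers:
# marker frames split the stream of signal samples into segments that
# can be matched against notepad entries. The log format is invented.

def split_by_markers(log):
    """log is a list of ("sample", value) and ("marker", press_count)
    tuples; return samples grouped under the marker that closed them."""
    segments, current = {}, []
    for kind, value in log:
        if kind == "sample":
            current.append(value)
        else:                       # marker: close off the current segment
            segments[value] = current
            current = []
    if current:                     # trailing samples with no marker
        segments["unmarked"] = current
    return segments

log = [("sample", -50), ("sample", -52), ("marker", 1),   # room A: 1 press
       ("sample", -64), ("sample", -66), ("marker", 2)]   # room B: 2 presses
segments = split_by_markers(log)
```

Matching each segment's marker count against the notepad then tells you which readings belong to which location or distance.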

See the example below:


Screenshot of Wireless Composer, an extension to Atmel Studio 6.2 – Recorded Logs

 

For each of the supported wireless platforms, Atmel Studio contains complete example projects, with source files, for the performance analyzer application. When you are finished making measurements on an Atmel evaluation board used to help make device or RF band selection decisions, you can reuse this same application on your own final hardware design, with possibly some minor modifications to the IO assignments for the push button or LED. The performance analyzer application, along with Wireless Composer, has proven very useful when testing first prototype boards, and even for FCC or other governmental regulatory pre-scan testing.

Interested in learning more? You can access Wireless Composer here and Atmel Studio here.

 

 

International_Space_Station_National-Design-Challenge-Ardulab-Atmel-AVR-sm

Making space available to everyone

I’m Brian, one of the founders of Infinity Aerospace. In 2012, our company developed and marketed an Arduino-powered platform for easily conducting custom experiments autonomously on board the International Space Station. We called it Ardulab, and it was well received in the space industry. In essence, the Ardulab is a small microcontroller board with an Atmel chip as the brain, enclosed in a space-ready aluminum chassis. It is an Atmel-powered machine that has won the faith of organizations like NASA and Stanford because of its advanced capabilities in a small form factor, and its reliability.

Brian Rieger

Brian Rieger, Co-Founder of Ardulab (Source: Infinity Aerospace)

The microcontroller is heavily modified from a basic Arduino to be compatible with the Space Station computers, and the chassis adheres to a compliant form factor (a 10 cm cube). The microcontroller uses only about 10% of the internal volume of the chassis, leaving the rest free for an experiment to be installed.

ardulab-closeup

Powering up your Ardulab for the first time and getting to know its features and functions. (Source: Ardulab.com)

Fast forward to the present day: Ardulab users include prominent space organizations like NASA-JPL, NanoRacks, and Stanford University. In addition, CASIS, the overseer organization of the International Space Station’s National Lab, created a program called the National Design Challenge that funds K-12 schools to use Ardulabs in their science classrooms to build an experiment and then launch it to the Space Station. We couldn’t be more proud that the Ardulab product has catalyzed so many positive activities within the space community.

AL-Chassis-Ardulab-

The Ardulab Chassis. (Source: Ardulab.com)

Up until today, the Ardulab had a minimum purchase price of $2,000 and was sold directly by us. This allowed us to recoup the cost of designing and developing the Ardulab, as well as the incremental manufacturing cost of each unit. Unfortunately, it also limited who could use the Ardulab and gain access to its features – features that make it very easy to conduct experiments autonomously on the Space Station. We realized this was a departure from the fundamental philosophy behind Ardulab: to give as many people as possible the tools and information they need to be successful in space.

CASIS, the overseer organization of the International Space Station’s National Lab, created a program called the National Design Challenge that funds K-12 schools to use Ardulabs in their science classrooms to build an experiment and then launch it to the Space Station. (Source: Wikipedia)

We are so excited to share that the Ardulab is now completely open source. To support this, we’ve launched a brand new website (www.ardulab.com) where anyone can learn about Ardulab, download the plans with the click of a button, and follow the provided guidance that will take anyone from idea to space experiment. A middle school class in Houston, Texas used the Ardulab to create a space-ready experiment in six months; I can only imagine what the space community at large will create with full access to the Ardulab technology.

Interested? You can explore Ardulab in more depth on its official website.

 

IoT set for takeoff…

Nantes, France. I’m here to pick up a friend from the airport. There is a great view of the runway, and I’ve seen his plane land, a beautiful Airbus A320 flying Air France colors. This is a domestic flight, and ten minutes later, he is off the plane and has his luggage.

We talk about his business trip, and how it went. He’s a technical recruiter, and has been working on a project in the south of France. He tells me just some of the details. We clear the terminal and walk towards the parking lot. On the other side of a fence, an A320 is being looked over by a crew of technicians. After a quick refuel, it will be ready to take off and fly to another destination.

– You know, they keep on talking about IoT, but I can’t see any solid examples yet.

I smile. He stops dead in his tracks.

– You have an example?

I do. You just flew it.

He has a blank expression on his face.

Look, it is right over there.

I point to the A320.

airbus320-IoT-parameters-transmitted-MCU

Source: Aviation Photos – Airbus A320

– What do you mean IoT? The airplane is IoT?

Well, not exactly. IoT is the Internet of Things, devices that communicate. This plane has an onboard system called ACARS, and it communicates with the ground throughout the flight. Hundreds of parameters are monitored and sent to the ground crews.

Global ACARS Infrastructure

Source: Rockwell Collins – Global ACARS Infrastructure

ACARS-IoT

Source: Aviation Knowledge Wiki – ACARS

– But why?

Modern aircraft are highly reliable, comfortable, and quiet. All this comes at a price, and a modern aircraft can cost a small fortune. Even worse, an airplane only makes money when it is flying; if it stays on the ground, the company doesn’t make any money at all. In order to maximize revenue, companies need to keep their fleets flying, but not at the cost of safety. Onboard systems monitor the flight and inform ground crews of any problems. They watch the critical systems, but they also monitor other systems; if the in-flight coffee machine stops working, they alert the ground. If there is a malfunction with the toilet, again, the ground will be alerted.

– Why?

Imagine an international flight. Halfway over the Atlantic, one of the ovens stops working. Of course, the flight crew will have a problem getting all the food ready for the passengers, but it can still be done. It is a nuisance, but it doesn’t force the airplane to make an emergency landing. Imagine arriving in Paris and telling the ground crew that there is a problem. They only have an hour to find a replacement and get it installed. That probably won’t happen, so the plane will take off with a defective oven, which will be replaced at a later date. Now, imagine that the airline’s operations center is notified as soon as there is a problem. The flight is scheduled to land in six hours, so the airline notifies the ground crew at the destination that there is a defective component. They have a few hours to find replacement parts, and when the airplane touches down, they will already be there, waiting, prepared to replace everything necessary.

– That seems like a lot of effort to change an oven.

Maybe. The oven isn’t the best example, I’ll grant you that. Think about this, then. The engines. Aircraft engines are an incredible feat of engineering, and are some of the most reliable mechanical systems ever built, but they are still mechanical, and things can go wrong. Engines do fail from time to time, even if it is extremely rare. Luckily, an A320 can perform very well on a single engine, but it still requires action: an emergency landing at another airport, taking the engine off the wing, inspecting it, finding the fault, and replacing the components before putting the engine back on. This can take a very long time, and can be horrendously expensive. What if the engine itself could communicate with the ground team?

– They can do that?

Some of them can, yes. Engines are monitored, and hundreds of parameters are analyzed. The engine in your car doesn’t fail without a reason, and simply taking your car to the garage from time to time saves costly repairs. Jet engines are even more advanced. Failures rarely “just happen”; they can often be predicted by looking at variables: oil pressure, temperature, vibration, and so on. Instead of waiting for a failure to occur, failures can be prevented with close monitoring, changing elements as required. It saves cost by replacing small parts before big parts fail. It saves cost by replacing elements quickly, putting the aircraft back into service as soon as possible. That is one of the reasons for IoT: cost saving. Being aware of all the parameters means the best choice can be made. Airlines know when to change components, thermostats know when to turn the heat on, greenhouses know when to open the windows.
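The “predict rather than react” idea boils down to continuously checking live telemetry against known-good operating bands. A minimal sketch in Python, with parameter names and limits invented purely for illustration (they are not real engine figures):

```python
# Hypothetical operating bands, invented purely for illustration;
# these are NOT real jet engine figures.
LIMITS = {
    "oil_pressure_psi": (40, 90),   # (minimum, maximum)
    "egt_celsius":      (0, 950),   # exhaust gas temperature
    "vibration_mm_s":   (0, 7.0),   # vibration velocity
}

def maintenance_alerts(telemetry, limits=LIMITS):
    """Return the names of parameters outside their allowed band."""
    alerts = []
    for name, value in telemetry.items():
        lo, hi = limits[name]
        if not lo <= value <= hi:
            alerts.append(name)  # flag this parameter for the ground crew
    return alerts
```

A real monitoring system looks at trends over time, not just instantaneous limits, but the principle is the same: flag a drifting parameter before it becomes a failed part.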

– I never knew that planes could do that.

One of the things that makes IoT so good is that it isn’t visible. There is no point in adding a screen to a thermostat to display “Calculating ideal temperature” or “Contacting server”. We have come to expect messages like that from the programs on our computers, but that is about to end. People want simple devices that just work, and IoT is all about that. Just walking through the airport, you probably didn’t notice the wireless equipment used to broadcast Wi-Fi and to power the wireless telephones used by the airport staff.

Imagine walking through a beautiful garden, completely unaware that there are hundreds of sensors monitoring soil humidity, temperature, plant growth, and other parameters, setting off the sprinkler system only when needed. The world has limited resources, we are painfully aware of that, and this is the technology that could save us. It will make calculations far better than we could, and produce data far more precise than we can imagine. All of this can be powered by a solar panel, making it even more eco-friendly.

He remains silent as we walk to the parking lot. Behind us, passengers are getting ready to board their plane, unaware that their trip is made easier and cheaper with IoT. The plane will soon be ready to depart, a trip monitored by processors and microcontrollers like Atmel’s SAM D21.

Linduino is a USB-isolated Arduino

My pals over at Linear Technology have developed the Linduino board to drive their ADCs (analog-to-digital converters) and DACs (digital-to-analog converters), as well as temperature sensors and other devices. The board is not a clone of an Arduino; that would be pointless for them. Linear Tech sells analog chips, not Maker boards.


The Linear Technology Linduino board uses the same Atmel chip as an Arduino Uno, but has isolated USB and more DC power.

So the first and most essential difference is that, in addition to the normal shield headers of an Arduino, there is a header that Linear Tech has used for years to drive its demo boards. This computer interface function used to be done with their DC590 interface board. Indeed, the firmware that ships with the Linduino emulates that board, so you can run the original Linear Tech interface program on your PC, and it can’t tell whether it’s talking to the old board or a Linduino.


The Linduino board will accept all the Shield mezzanine boards for Arduino, but has this extra header to control Linear Tech demo boards as well.

But wait, there is more. So much more. Linear Tech also used one of their USB isolators on the Linduino board. This means that the board, and whatever you plug into it, are galvanically isolated from the computer the USB is plugged into. You can measure things on a car or an audio system without worrying about ground loops polluting the measurement. It’s as handy as a hand-held DVM (digital voltmeter). My former employer Analog Devices also makes bidirectional USB isolators, and there may be others on the market. You could make your own isolator, but the great thing about the Linduino is that all the system engineering is done for you and the firmware works.


The Linduino has an LTM2884 USB isolator module on it, so your PC is not electrically connected to the Linduino or its Shields or Linear Tech demo boards.

Since Linear Tech is also a power-supply chip company, they beefed up the power supply on the board, using a switching regulator to replace the linear regulator found on the Arduino. This means you can get 750 mA out of the power system. Since a USB port can’t supply that much power, you have to feed the board from an external wall wart. Now you have the power to drive actuators or other heavy loads.
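The arithmetic behind the wall-wart requirement is straightforward: a USB 2.0 port is specified for at most 500 mA at 5 V, while the Linduino’s regulator can deliver up to 750 mA. A quick sanity check, sketched in Python (the helper name is my own):

```python
USB2_MAX_MA = 500      # USB 2.0 spec: 500 mA maximum per port
LINDUINO_MAX_MA = 750  # Linduino's switching regulator output rating

def needs_external_supply(load_ma):
    """True if the load exceeds what a USB 2.0 port can source."""
    if load_ma > LINDUINO_MAX_MA:
        raise ValueError("load exceeds the board's regulator rating")
    return load_ma > USB2_MAX_MA
```

So a 300 mA sensor shield runs fine off USB, but a 700 mA actuator load needs the wall wart, even though it is still within the board’s regulator rating.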


Linear Tech also beefed up the power system with a 750 mA switching regulator that will not get hot even at full load while dropping from a high input voltage.

Dan Eddelman worked on the Linduino, as did Mark Thoren, my pal from Linear Tech. Tomorrow I will plug in the beast and show how to get it working. I did have a few glitches the first time.


Mark Thoren, shown here giving his daughter some STEM instruction at the Silicon Valley eFlea, helped develop the Linduino.

Just like Atmel’s demo boards, the Linduino is sold by Linear Tech pretty much at cost. This gives you a great foundation for building an isolated data acquisition and control system on the cheap. And don’t forget, all the Arduino shields plug into the board and work with the existing libraries, firmware, and available open source code. Linear Tech used the same Atmel chip as the Arduino, so this is not just “shield compatible,” it is truly compatible with an Arduino.

The Microcosm of IoT and connected cars in Formula 1 (Part 2)

…Continued from The Microcosm of IoT in Formula 1 (Part 1)

The typical F1 racing car embodies sophisticated engineering, designed to win and nothing but win. The racing platform itself (team, driver, and car) vets every decision against one pillar: performance.

Here’s the quantified car and driver: 1.5 gigabytes of data are wirelessly transmitted per connected car during a race, with the ECU (electronic control unit) generating 2-4 megabytes of data per second from the F1 car’s 120+ sensors, which include the driver’s heartbeat and vitals. Now add the upgraded network fiber deployed at each race of the year to ensure that every turn and tunnel can stream and broadcast this telemetry.

Source: ESPN Formula 1 News

Computers, Software, and BI [Visualization and Data]

These embedded systems comprise technology limited to neither automotive nor Formula 1; embedded systems are used in the aerospace, marine, medical, emergency, and industrial sectors, and in the larger home entertainment industry. Little by little, advanced technology finds its way into the devices we use every day. Many useful products in industry first surfaced as applications in F1 racing [the proven, moving lab].

The electronic devices used in F1 [all built on embedded systems] can be broadly grouped as follows:

Steering Wheel Display, Interface Unit, Create a Message, Electronic Control, Telemetry, Speed, Interface Unit, EV, Regenerative Power, Ignition Coil, Management System, Access to Pitstop, Power Source, Gyro Stabilizer, Humidity, Triggering Device, Acceleration, Rain Lights, Air Resistance, Linear Movement, Angular Positions, Lambda Probe, Liquid Pressure, Tire Pressure, Temperature, Torque, Signaling, Server, Computer, Display Data (BI), Software

Figure 4: Steering Wheel of a Sauber F1 car

Source – nph / Dieter Mathis/picture-alliance/dpa/AP Images

Here is an example Formula 1 steering wheel. It’s the whole embedded electronic enchilada, serving information [derived from actuators and sensors] to the driver [on a need-to-know basis]. The driver adapts his race style and plan [tire management, performance plan, passing maneuvers, aggressive tactics] to every bit of data presented on the formatted display. It is all literally at his fingertips.

What are some of the F1 connected car implications?

Drivers in Formula 1 have access to functionality through their race platforms that helps improve speed and increase passing opportunities. The DRS (Drag Reduction System) controls and manages the moveable rear wing. In conjunction with Pirelli tires and KERS, it has proven successful in its pursuit of increased overtaking, which is all good for the fan base and the competitive sport. The DRS moves an aerodynamic wing on a Formula 1 race car. When activated via the driver’s steering wheel, the DRS alters the wing’s profile shape and direction, greatly reducing drag by minimizing downforce [flattening the wing reduces drag by 23%]. Coupled with the reduction in drag, this enables faster acceleration and a higher top speed, while also varying the driving characteristics and style for overtaking. These are called driver- and protocol-adjustable bodywork.

How does it work? Like all movable components of an F1 purebred, the system relies on hydraulic lines tied to embedded control units and actuators to control the flap. The flap is managed by a cluster of servo valves manufactured by Moog, interfaced via an electronic unit that receives a secure signal from the cockpit. Of course, this all happens only under certain circumstances. When two or more cars pass over timing loops in the surface of the track, if a following car is measured at less than one second behind a leading car, it is sent a secure signal [encrypted, then transmitted via RF] that allows its driver to deploy the car’s active rear wing. Since the timing loops are sited after corners, drivers can only deploy the active rear wing as the car goes down specific straights. In essence, the modern-day Formula 1 car is a connected platform dynamically enabled to produce a stronger driver, appealing to both driver performance and fan engagement.
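The activation rule described above (a following car measured less than one second behind at a timing loop, inside a designated zone) reduces to a simple predicate. Here is a simplified sketch in Python; it deliberately ignores the encrypted RF signalling and the many race-control conditions of the real system:

```python
def drs_allowed(gap_seconds, in_drs_zone, enabled_by_race_control=True):
    """Simplified DRS eligibility check.

    gap_seconds: measured gap to the car ahead at the timing loop.
    in_drs_zone: whether the car is inside a designated activation zone.
    enabled_by_race_control: race control can disable DRS entirely.
    """
    return (
        enabled_by_race_control
        and in_drs_zone
        and gap_seconds < 1.0  # the "within one second" rule
    )
```

A car 0.8 s behind in a DRS zone may open the wing; a car 1.2 s behind, or one outside a zone, may not.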

Moveable aerodynamic components are nothing new. But for an Airbus A320, or even a modern UAV or fighter jet, there is a huge amount of space to work in; on a grand prix car, it’s quite different. This is all achieved in the hyper-fast, mobile, and logistically drained environment of Formula 1, where performance, equipment, and configuration are demanded at all times. Next, we’ll summarize how this relates to the broader connected car concept…

F1 showcases the finer elements of connected cars, making them possible

As just discussed, cars are going to become, quite literally, the largest mobile devices. They will be connected to all sorts of use cases and applications. Most importantly, we are the drivers, and we will become connected drivers. The bond between driver and connected car will become ever more seamless.

In the next phase, smart mobility is going to change how we behave before and after we reach our destination. In Wired Magazine’s column Forget the Internet of Things: Here Comes the ‘Internet of Cars’, Thilo Koslowski discusses the improvements and why connected cars are inevitably near. Thilo, a leading expert on the evolution of the automotive industry and the connected vehicle, says, “‘Connected vehicles’ are cars that access, consume, create, enrich, direct, and share digital information between businesses, people, organizations, infrastructures, and things. Those ‘things’ include other vehicles, which is where the Internet of Things becomes the Internet of Cars.”

Yes, for the connected car there still exist a number of technology challenges and legislative issues on the way to a successful broader impact. As with Formula 1, we can attribute much of its technology to surfacing in mainstream markets [previously discussed in Part 1]. This next automotive revolution stems from current and related industry trends such as the convergence of digital lifestyles, the emergence of new mobility solutions, demographic shifts, and the rise of smartphones and the mobile internet. Thilo further claims, “As these vehicles become increasingly connected, they become self-aware, contextual, and eventually, autonomous. Those of you reading this will probably experience self-driving cars in your lifetime — though maybe not all three of its evolutionary phases: from automated to autonomous to unmanned.”

connected-sensors-microcontrollers-atmel-iot-new-services

A consumer shift is indeed happening. Consumers now expect to access relevant information ranging from geolocation, integration of social data, waypoints, destinations, sites of interest, and recommendations, to one’s digital footprint integrated into the “connected car” experience. The driver will become connected with all the various other touch points in his or her digital life, and this will happen wherever they go, including in the automobile. Thilo even goes as far as claiming, “At the same time, these technologies are making new mobility solutions – such as peer-to-peer car sharing – more widespread and attractive. This is especially important since vehicle ownership in urban areas is expensive and consumers, especially younger ones, don’t show the same desire for vehicle ownership as older generations do.

To be successful, connected vehicles will draw on the leading technologies in sensors, displays, on-board and off-board computing, in-vehicle operating systems, wireless and in-vehicle data communication, machine learning, analytics, speech recognition, and content management. (That’s just to name a few.) “

All together, the build out of the connected car, [aspects proven in F1], contributes considerable business benefits and opportunities:

  • Lowered emissions and extended utility of EVs — remote battery swap stations, cars as a service, peer-to-peer car sharing, cars with payment capabilities, energy subscriptions, vehicles acting as power plants feeding back to the grid, KERS, and other alternative-fuel savings as combustion is displaced by electric motors, plus an emerging consumer conscience accountable to clean energy
  • New entertainment options — countless integration opportunities with the mobile (M2M and IoT) ecosystem of value-added connected apps and mobile services (e.g., Uber disrupted an old traditional market)
  • New marketing and commerce experiences — countless use cases for increasing engagement and point-of-arrival offerings
  • Reduced accident rates — whether through crash avoidance systems, location-based services, driver monitoring, emergency response automation, early warning automation, telemetry to lower insurance costs, or advanced assisted driving
  • Increased productivity — gains achieved via efficiencies and time management toward more sustainable commutes
  • Improved traffic flow — efficient systems merging various datasets to advance navigation, balancing capacity or re-routing traffic

Sensors-connected-IoT-Car

Personalization-connected-driver

Like all technology, old ideas will progress and evolve onto newer platforms, bringing new functionality that adapts to the latest popular ecosystem [simply being mobile and connected]. Connected cars will expand automotive business models, augmenting new services and products across many industries — retail, financial services, media, IT, and consumer electronics. The traditional automotive business model can be significantly transformed for the betterment of the consumer experience. Today, emphasis is placed almost purely on the output, sale, and maintenance of vehicles. Later on, once connected cars reach market maturity with wide adoption, companies will focus on the sum of business opportunities [a value-add chain ecosystem] leveraged from connected vehicles and the connected driver.

Are you a product maestro, or someone with domain expertise at your company seeking to improve processes or develop value-added services to build IoT-enabled products? Perhaps you are in a vertical intent on accelerating business and customer satisfaction? With all this business creation stirring, it’s quite clear the connected car platform will open up new connected customer services and enhanced product offerings.

That all being said, with Formula 1 we are already living in this moment of the future. Connected cars will eventually come. It’s just a matter of time…

(Interested in reading more? Don’t forget to check out Part 1.)

Home, smart home

By Taylor Alexander, Co-Founder of Flutter Wireless

As founder of Flutter Wireless, a company that is building new hardware for the internet of things and connected devices movement, I spend a lot of time thinking about how this new technology will affect our lives. Right now computers are all relatively separate workstations, with tasks isolated to one individual machine. We may check email on our phone and on our desktop, but only recently have companies begun making it fluid to switch between the two. As our software advances and connectivity becomes more widespread and robust, we will begin to see programs that run across multiple machines simultaneously. I’d love to open an app on my phone and stream music to every device with speakers in my house, for example, rather than needing to buy a “home speaker system”. Ultimately, I see our home networks evolving into a single computing entity with many access points. A common home or cloud access point could provide services across multiple devices simultaneously. I could send one stream simultaneously to my living room TV and my kitchen tablet, for example, so I can catch up on a TV show while preparing dinner. As our homes become more connected, we will have increasing freedom with how we use computing to improve our lives, and entirely new possibilities will come out of these new use cases. Below is a story I wrote imagining a time maybe a decade from now, when the connected home is perhaps as commonplace as self-driving cars.

I hope you enjoy it, and that it prompts you to dream of what else a connected home can do for you.

I live in a connected home. Every electronic thing in my house is controlled by the home system. Not toasters or blenders or the fridge — not things that only sensibly need physical access. Those things have their own local user interface, though some may report back to the home. The microwave, for example, communicates photos of food to the server for analysis, but you can only turn it on from its front panel. The interface panel is just a touch-oled with images for its interface controlled by the home. In default mode it just has 3 buttons, and they change based on what I put in. Put in my favorite mug with a clear liquid and you just get a big “hot water” button. The house interface on my phone shows graphs that prove that the cook time it chose is optimal based on my use of this cup in this microwave every morning since I started my new job, but honestly… I never look at it, since it never fails. Usually when I’m using the house interface on my phone, it’s to control the music or change the channel.

I took a YouTube class in the living room last month, and the inductive charging in my new coffee table meant I could leave the interface open for the whole hour of class without draining its battery. The home has a local content stream it can serve to any audio or video device with a speaker box or cheap HDMI streamer. The audio channel lets me do things like play music, talk with my friends, or control the lights and temperature.

I also have interface pads in the rooms. Interface pads are like the interface on the microwave – they have a touch-oled and an audio system for voice interaction. Four microphones mean it can pick up quiet conversation even with the fan on, and it blocks out other sounds, like the TV, as if they aren’t there. This makes it feel like the system is in my head. I’ll mutter to myself “I wonder if I turned off the coffee pot”, and the system sometimes butts in and tells me. Usually I have to address the house to get it to listen, but I’m running some software that lets me play back my ramblings when I am deep in thought, so right now it’s live all the time. This lets it answer questions without my having to repeat myself. If I think out loud, sometimes the house is a pretty good assistant.

We call ours Hiro, and while he can’t tell me everything without a manual query at a terminal, he’s pretty good at answering basic questions about the world. Anything with a clear answer like… how deep is the English Channel, how much money did I spend last month, or who won the gaming competition last week… those questions Hiro answers well. Of course he’s also great for taking notes for me and reading them back so I can edit them. He’ll read anything I want. He’s been reading me Steinbeck and Plato lately, and in the mornings I’ll usually have him read the news. Last night I streamed live ocean sounds from a beach in Madagascar as I slept.

In the mornings I read my emails on the terminal in the kitchen while I stir my coffee. I keep work emails out of the morning routine, but read what my mom is up to over a bagel and eggs. I fill my foodbox once a week and it serves up a hot bagel and fresh eggs every morning. It only fits a few types of meals, but it’s enough for all my breakfasts and lunches for a week, and using it beats rummaging through cold storage for all the pieces. It will slice bagels and fruit, even core an apple, and it has refrigerated dispensers for eggs, cream cheese, peanut butter and jelly, even mustard and mayonnaise. It has a small compartment for fresh meat and cheese, so I make sandwiches for lunch. The machine prepares the bread and washes itself, just like it does with my morning bagel, egg, and yogurt. It tracks the age of each perishable, and the deliveryman brings by fresh food weekly for things like meat and eggs. It even breaks the eggs and cooks them, and stores the shells in an oxygen-free environment with the apple cores, until I empty the canister.

I charge a tablet on the kitchen table, and use it to watch last night’s news footage. I use the house interface app, which shows me stuff I probably want to watch. Anything I don’t want to watch on the tablet screen I can throw to any TV, too. I am studying to be a paramedic, so I’ll usually stream class to both displays at night when I’m cooking and cleaning. There are so many times when I need my hands for one thing but can use my mind and voice for another. The tablet was pretty good for that before, but with Hiro I don’t need to bring anything with me. I can wander to the other room mid voice chat without ever losing my train of thought. When I talk to friends, it’s like they’re in the same room and follow me around. With Hiro’s chat interface I can log into voice chat rooms with friends. It’s like we’re sitting in a room together, either quietly working, having a meeting, or just watching the news together. I feel like I always have my friends with me.

A computer block and a storage block that I keep in the office control the whole system. All my home computers store data on the storage block, and the computer block runs Hiro’s software. We have phone and tablet apps along with interface panels, and cheap HDMI dongles on the TV. Voice is usually handled by the interface panel most rooms have. But there is a voice-only interface panel that is the cheapest. It skips the touch display on the large interface for a four-button fob and voice control. You can plug headphones and speakers into that one for a custom speaker setup, but by default the internal speaker is pretty good. It still has four microphones so we usually don’t use an external for that, just output.

It cost about two grand for the whole system, but that includes the lights, computer, audio tactile pucks, and the 4 TB storage brick. I saved up over one summer when I was in college and got this system. It’s been around for a few years, so the CPU takes longer to recognize my food scans from the microwave than the new models do, but it’s a few milliseconds’ difference – 250 maybe – and I don’t worry about stuff like that.

All in all, my connected home system was the best purchase I made since switching to a self-driving car.