
IT cloud vs. IoT cloud


Kaivan Karimi, Atmel VP and GM of Wireless Solutions, shares the top 10 factors to consider when transitioning from IT cloud to IoT cloud.


In mid-2013, the buzz phrase “Internet of Things,” also known as the “IoT,” set the technology world on fire. As a result of this craze, a lot of products that were developed for completely different end applications changed all their marketing collateral overnight to become IoT products. We saw companies add the acronym “IoT” to the title of every executive, and gadgets become part of an IoT enablement ecosystem. New tradeshows claimed their authoritative position on IoT, and angel investors and venture capitalists started IoT funds feeding incredible ideas — some of which reminded me of the late-1990s bubble, when Lemonade.com was funded. New standards bodies were formed around provisioning IoT devices, and all of a sudden, overnight, most of us in the technology community became IoT experts.


Cloud companies are no exception. While the physical infrastructure of the cloud didn’t change, the platform and software services that were developed for enterprise IT management and mobility app support became IoT PaaS and SaaS platforms with claims of “IoT compliance.” By late 2013, at an IoT event in Barcelona, every keynote not only covered the “metaphorical pyramid” of Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), but also talked about “Everything as a Service (EaaS),” thanks to IoT.

With so much hype and noise, it is hard to separate fact from fiction — unless you dig deep, really deep. This fuzziness is caused by the breadth of IoT and the many vertical markets it encompasses, covering all aspects of life as we know it. Each vertical has its own unique “things,” so one size doesn’t fit all from a device perspective, requiring different types of standards and transport layers, with silicon and software infrastructure to support this vast frontier. What has further muddied the water is that many large industry players look at IoT as an inflection point at which they can transform themselves into something else and get into other businesses. Because of this, these players are looking at their current assets and defining the infrastructure required for IoT differently than what logically and technically makes sense. Companies that have no play in hardware or software for the data centers publicly promote that the majority of the data processing should be done in other parts of the network (“closer to the source”), while others promote just the opposite, and a third group advocates that much of the processing should be done directly by a hierarchy of smart gateway boxes on the customer premises, along with everything in between. The same goes for the choice of RF communications protocols, gateways, definitions of things, provisioning schemes, etc.

A great example is what gets heavily promoted by one of the biggest industry players: calling IoT an “always ON revolution” in which sensor data collected at the edge/sensing nodes (thing side) is ALWAYS sent to the cloud. This method requires a lot of bandwidth and storage capacity to collect data in the cloud, and conveniently promotes their passive big data analytics capabilities for processing this volume of data in the cloud. Clearly they sell hammers here, and see everything in the world as a nail. In reality, IoT is a “mostly OFF revolution,” with significantly less data created than portrayed, and little of that data will ever make it to the cloud. For instance:

  1. A door or key lock is mostly sleeping, until a sensor triggers a wake-up command during an opening or proximity event, in which case it communicates a few bytes of data to a gateway and then goes back to sleep.
  2. The temperature sensors on a bridge wake up every so often to report temperature fluctuations to the gateway on the side of the road, reporting whether the bridge is frozen so that the department of transportation can send the sand trucks out to avoid accidents.
  3. The seismic sensors on the A/C unit of an office building in Texas monitor the sound of the motor every two hours. If the motor sounds as if it will break down in a couple of weeks, the sensors inform the building manager to call a technician to fix what is going bad, so that the occupants are not stuck without air conditioning in the middle of July.
  4. The ethylene gas sensors (ethylene being the ripening phytohormone of fruits and plants) on fruit containers in the back of an eighteen-wheeler wake up every 30 minutes and send their data to the gateway in the cabin of the truck. These signals predict the decay rate of the fruit and allow the driver to divert to a nearby city if needed, giving the fruit some additional shelf life, or to send the fruit straight to the jam factory, avoiding the fuel waste of carrying a spoiled cargo.

In each of the aforementioned cases, and in other examples like them, the things (fruit container, A/C unit, bridge, home door, etc.) spend the majority of their time sleeping and only wake up based on an event trigger or a predetermined wake-up time set by programmed policy. This is the only way these devices can operate on batteries for years. How many bytes (not Mbps or even kbps) of data are really required to report those events? Would all of these events be worth sending to the cloud? In fact, the local event processing and analytics engine running on the local gateway determines what goes to the cloud, and only the exception events (door is open, fruit is going bad, motor is going to break down, bridge is frozen, etc.) go to the cloud right away. As long as everything is normal (events within policy range), activity gets registered at predetermined intervals (e.g. once every 24 hours) and the metadata gets uploaded to the cloud. Even if video capture were involved, no more than 2 Mbps of bandwidth is needed.
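To make the “mostly OFF” pattern concrete, here is a minimal sketch of such a sleepy node, assuming an AVR-class MCU programmed with the Arduino toolchain; the sendReport() radio hook is a hypothetical placeholder for whatever transport (ZigBee, BLE, sub-GHz) the design actually uses:

```c++
#include <avr/sleep.h>

const uint8_t SENSOR_PIN = 2;        // external-interrupt-capable pin
volatile bool eventPending = false;

void onDoorEvent() {                 // ISR: set a flag and get out fast
  eventPending = true;
}

void sendReport(uint8_t eventCode) {
  // Hypothetical hook: hand a few bytes to the radio, then let it sleep too.
}

void setup() {
  pinMode(SENSOR_PIN, INPUT_PULLUP);
  // LOW-level trigger: on most AVRs only a level interrupt (or a pin-change
  // interrupt) can wake the chip from its deepest power-down mode.
  attachInterrupt(digitalPinToInterrupt(SENSOR_PIN), onDoorEvent, LOW);
}

void loop() {
  if (eventPending) {
    eventPending = false;
    sendReport(0x01);                // the whole "message" is a few bytes
  }
  set_sleep_mode(SLEEP_MODE_PWR_DOWN);
  sleep_mode();                      // sleep until the door wakes us again
}
```

Everything interesting is in what the node does not do: no polling, no persistent connection, and no payload beyond a couple of bytes.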

Based on my experience analyzing multiple large enterprise campuses with many buildings, without video, an aggregate of at most 15 Mbps of bandwidth is required to fully support this type of IoT communication to the cloud for provisioning services. So one should question the folks who promote the fallacy that all types of applications and things will always be ON and will need lots of bandwidth. What’s in it for them to portray IoT in this manner? Of course, if you are considering an enterprise campus full of smart devices with people moving massive amounts of data through “chatty and persistent communication agents,” then you will need a lot more than 15 Mbps of connectivity to the cloud, for sure. Could it be that these folks are confusing an IT infrastructure with an IoT infrastructure?
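A quick back-of-envelope calculation (with illustrative numbers of my own, not measurements) shows why the aggregate stays so small:

```c++
#include <cstdio>

int main() {
  const double nodes      = 1000.0;        // sensors across a campus
  const double bytesPerTx = 64.0;          // one status report's payload
  const double txPerSec   = 1.0 / 1800.0;  // one report per 30 minutes
  const double bitsPerSec = nodes * bytesPerTx * 8.0 * txPerSec;
  std::printf("aggregate: %.0f bits/s\n", bitsPerSec);  // ~284 bits/s
  return 0;
}
```

Even allowing three orders of magnitude of headroom for retries, protocol overhead and exception bursts, sleepy nodes barely dent a 15 Mbps budget.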

For a comprehensive IoT implementation, a system-level approach is required, covering the tiniest edge/sensing nodes (things), various types of gateways, and everything up to the cloud and data centers, applications and service providers. That includes data analytics engines embedded both on premises and in the cloud, a variety of SDKs and communication agents, and data caching and bandwidth management at different layers and levels of hierarchy. There aren’t many companies in the world that cover all of these items (the count is in the single digits), and even those that do still require partnerships with the gadget/thing-side companies. Therefore, when someone claims to be a one-stop shop, either they support an existing infrastructure of things to a cloud and add a new twist to it (a subset of most IoT verticals), or their system is not as comprehensive as they claim, or some combination of both.

Not to mention that, at this moment, we are exclusively dealing with siloed clouds and siloed IoT systems. While an ecosystem of clouds (a cloud of clouds) is in a nascent stage at some companies, it is far from the true IoT cloud ecosystem it will become in the near future.

The IT cloud ecosystem (as opposed to the IoT cloud ecosystem) has been on a journey of its own these past few years. This ecosystem has shown signs of success as originally predicted, with the technology distributed to provide a virtually seamless and infinite environment for communications, storage, computing, Web and mobile services, analytics, and other business uses. The cloud benefit model has come to fruition, with many examples of upfront CAPEX largely minimized or eliminated. Benefits include increased flexibility and control to scale users, and the ability for organizations to add functionality on demand, with the added pay-as-you-go advantage. Cloud providers have taken over responsibility for the IT requirements of many organizations, and have become vital business and channel partners.


That said, the fundamental question still remains: Is the traditional IT cloud and its ecosystem the same as an IoT cloud and its ecosystem?

The answer: While 60-70 percent is the same, a 30-40 percent difference can kill your IoT roll-out and make a seemingly IoT-ready cloud almost useless for your applications.

The differences are present throughout the full end-to-end system, from the “thing” side all the way to the data centers on the cloud side. The traditional IT, web or mobility applications cloud was built around much bigger devices with far more resources. Over the last couple of years, a “thing” for the traditional cloud system was a computer, a vending machine, a car, a gateway on the customer premises, or a smart device (laptop, tablet, smartphone, etc.). These devices are typically connected to the cloud via direct cellular links, cellular (WAN) + Wi-Fi (LAN), or fiber (WAN) + Wi-Fi (LAN). Among the new generation of IoT “things,” you find much more resource-constrained devices, such as small battery-operated sensors on doorways that keep track of people entering through the back gate of a house, battery-operated seismic sensors on roadway infrastructure (bridges, etc.), or any of the earlier examples. Instead of 20 smart devices in an office that are plugged into the wall outlet or regularly recharging a large battery, you will be dealing with 500 different types of sensors and things covering that office. With multiple offices, that means thousands of things at the same time, most of which are powered by batteries for years (4-5 years of battery life in consumer IoT, and 8-12 years in industrial IoT). Some of these things have a small 8-bit MCU as their brain, with very little memory and few other resources, and may be hiding behind layers of gateways, relays, switches, or even other things, in sleepy networks. The communication link, when available (remember that these devices are mostly in an off state), may have very little bandwidth, and communication may go through multiple hops in mesh networks. A “chatty” communication system that pings the things on a regular basis defeats the purpose here.

The important thing to remember is that a system needs to be fully extendable and scalable not just on the cloud side, but also on the link side from the cloud to the things, and finally on the thing side. You also need scalable data capture and aggregation to go along with a secure communication system. If you are targeting a consumer application, then a solid mobile application development platform that works with the popular smartphone operating systems is a basic requirement, meaning you need to rewrite your middleware to become more agile and scalable and to manage many more things simultaneously. You also need to rethink the communication topologies of the past. Lastly, pay more attention to your analytics engines and application development environment; depending on your IoT application, it may require completely different visualization tools and business models.

Here are some factors that an IT cloud provider transitioning to an IoT cloud provider needs to consider:

  • Understand the verticals you target; become a one-stop shop for a given vertical. In IoT, one size does not fit all. Understanding a vertical includes understanding its evolution and the future business models that need to be considered. For example, if you are targeting the tracking of people in a hospital and their location at any given time, in the future that group will require wearables with biometric sensors, and their vital statistics will also need to be monitored. The expectation will be that your service can also cover the tracking of biometric sensors, which are usually battery-operated, constrained devices with minimal bandwidth. Working with one PaaS or SaaS supplier to manage one set of assets on a given premises and another cloud provider for a separate set of assets is not an option. The issues to consider include the protocols, networks, bandwidth management and transport technologies your IoT cloud framework would need to support.
  • A scalable data analytics and event processing engine is a must-have, as the majority of IoT value creation comes from data analytics, and “data capital” is where differentiation will come from. Do you have the right analytics engine on the cloud side as well as on the premises/gateways? New in-memory streaming technologies, which change the rate at which we can act on data, will be required for some IoT applications. Hence, traditional extraction, transformation and loading (ETL) will give way to just-in-time (JIT) methodologies (real-time vs. batch-oriented); a short sketch of this streaming pattern appears a few paragraphs below. Can you manage fast/streaming data analytics for applications where extremely fast processing of (near) real-time data is required? Consider tele-health and elderly monitoring, where passive data analytics in the cloud is not adequate and fast local data analytics running on the smart gateway is required to report a heart attack, or a fire in home automation. Also, it is imperative that you find a service provider for a given vertical (if you are not a service provider, partner with one) so that your event processing and data analytics engines are tuned for specific use cases and business logic. If your analytics engine only provides insight into the visibility or availability of a limited set of parameters in the network, work with a partner that brings the rest.
  • Know the specific type of data you need to monitor and gather, and the insight required by your customers. That means developing a diverse set of device data models for specific functionalities. Don’t try to be the Swiss Army knife of IoT cloud providers: while a Swiss Army knife can perform many functions, it does none of them particularly well. Understanding the verticals you need to support (item number 1) will also help you here. For certain applications, before the data sets get processed by analytics and visualization tools, they get combined with external algorithmic classification and enrichment tools. This increases productivity and ease of use dramatically (e.g. the user will know where the water tables are before drilling a well, or what the maps of other distribution centers are before redirecting a cargo).
  • Develop a fully modularized end-to-end system, as most large OEMs may already have their own branded cloud and may only want to use part of the functionality you offer. Arm yourself with well-defined APIs and a firewall-friendly, adaptive connectivity architecture, and become comfortable working with your customers’ infrastructure, analytics engines, applications, visualization tools, things, etc. They may only be interested in your communication system, or may ask for a mix of capabilities. The more flexible your approach, the better you can customize your offerings to their needs. On the cloud side, the formation of the cloud ecosystem (a cloud of clouds, with server-to-server communication) is right around the corner. A robust ecosystem is at the heart of IoT cloud management.

A modularized system as described above may mean a different, tiered pricing approach in your business model. Flexibility needs to extend beyond your technology offerings, so be open to new business models.
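To make the streaming-analytics point from item 2 concrete, here is a toy just-in-time event processor: it keeps an O(1) running mean of a sensor stream and flags an exception event the instant a reading deviates, rather than batch-processing raw data in the cloud later. All thresholds and sample values are invented for illustration:

```c++
#include <cstdio>
#include <cmath>

class StreamMonitor {
 public:
  explicit StreamMonitor(double threshold) : threshold_(threshold) {}

  // Returns true when the new sample is an "exception event" worth uplinking.
  bool onSample(double value) {
    count_++;
    mean_ += (value - mean_) / count_;   // incremental mean, O(1) memory
    return count_ > 10 && std::fabs(value - mean_) > threshold_;
  }

 private:
  double mean_ = 0.0;
  double threshold_;
  long count_ = 0;
};

int main() {
  StreamMonitor vibration(5.0);
  const double samples[] = {20, 21, 19, 20, 22, 20, 21, 19, 20, 21, 20, 45};
  for (double s : samples) {
    if (vibration.onSample(s))
      std::printf("exception event: %.1f -> send to cloud now\n", s);
    // Normal samples stay local; only metadata is uploaded on a schedule.
  }
  return 0;
}
```

In a real gateway the normal samples would stay local, and only the flagged exceptions (plus periodic metadata) would be uplinked.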

  • Follow the new service delivery frameworks backed by large ecosystems, such as the Open Interconnect Consortium (OIC). Standardization will eventually dominate both the consumer and industrial IoT space. While the alphabet soup of protocols keeps expanding (e.g. MQTT, XMPP, DDS, AMQP, CoAP, RESTful HTTP, etc.), standardization is also happening and providing more clarity. Standards are being developed so that there are “horses for courses.” Get used to the idea that your proprietary system of today will require an upgrade to a standard system tomorrow, or your ecosystem will leave you behind. How would you change your system today with that knowledge in hand?
  • Develop RF communication specialization (cellular, Wi-Fi, BLE, 802.15.4/ZigBee, 6LoWPAN, sub-GHz, SIGFOX, etc.), or partner with someone who has that expertise. A lot of IT cloud companies today have a big gap here and need to find a partner to optimize their IT cloud for such complex RF communication protocols. They also need to optimize their systems based on the types of RF links and the bandwidth limitations they will be working with. This also affects the application development side, as such customization is essential for IoT: what works for cellular might not work for Wi-Fi, BLE or ZigBee. This is especially important for target vertical markets, as different verticals might need different RF communication protocols, or even multiple ones simultaneously, with all the coexistence issues one may encounter. A semiconductor partner who understands your IoT cloud requirements can help you optimize your system from an RF communications and bandwidth management perspective.
  • Whether you use an SDK or an agent-based mechanism, implement a lightweight communication system. Typical SDKs make the development and management of mobile apps easy, but remember that your smartphone has a lot more resources than a tiny resource-constrained sensor feeding data into an IoT system. A lightweight SDK or agent-based system is a lot more predictable and simpler to integrate into low-memory or battery-operated devices. Lightweight agents reduce device complexity and cost, and can incrementally add capabilities depending on where they reside in the system. Obviously, the more bells and whistles you add on the thing side (the number of statistics to track or alarm states), the larger the footprint of your SDK or agent. As you move up to the gateway levels of the hierarchy, with more types of mechanisms, functionalities, sensors, communications and alarms to monitor, the size of your agent or SDK will grow. One size will not fit all, but be frugal with your application and data management. So far, working with various IoT cloud ecosystem partners, I have seen SDK and agent memory footprints varying from 3 KB to 150 KB. The IoT cloud journey has already started, and I have no doubt the higher end of that spectrum (and some of the intermediate steps) will shrink in the near future, while caching mechanisms become more robust. (A minimal sketch of such a frugal agent follows the next paragraph.)

Also, deploy a context-centric bandwidth management system that won’t hog the entire communication link with your management-plane activities; as a rule of thumb, intermediate proxy and caching functionality should not occupy more than 15% of the link.
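As a rough illustration of what such a frugal agent looks like in practice, here is a minimal sketch assuming an ESP8266-class Wi-Fi board and the Arduino PubSubClient MQTT library; the broker address, credentials and topic names are placeholders:

```c++
#include <ESP8266WiFi.h>
#include <PubSubClient.h>

WiFiClient net;
PubSubClient mqtt(net);

void setup() {
  WiFi.begin("ssid", "password");              // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(100);
  mqtt.setServer("broker.example.com", 1883);  // placeholder broker
  mqtt.connect("door-sensor-42");
}

void loop() {
  mqtt.loop();                                 // keep-alive housekeeping
  static uint32_t last = 0;
  if (millis() - last > 24UL * 3600UL * 1000UL) {  // daily heartbeat
    last = millis();
    uint8_t payload[3] = {0x01, 0x00, 0x7F};   // a few bytes of state, not kB
    mqtt.publish("site1/door/42/state", payload, sizeof(payload));
  }
}
```

The agent logic itself is a few hundred bytes of state machine; almost all of the footprint in real deployments comes from the TLS, caching and retry machinery layered on top.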

  • Pay attention to “things,” with a focus on ease of use. That means making device provisioning so easy that even a novice thing developer can follow the steps on their own, regardless of the transport technology or resources available. If it takes too long, is error-prone, or requires an army of your developers to port and customize/optimize your agent for a particular architecture, you will reduce your target market to only the very large OEMs. If you assume you will do this work for service fees, it won’t scale, and you will again only be targeting the large OEMs. If you partner with software services houses, you will scale better and gain additional bandwidth, at a cost; this will still reduce your market footprint to companies that can afford to pay for provisioning services. Why not make it easy right up front for maximum customer coverage? From the syntax of your APIs for things/sensors, to local gateways, cloud gateways, agent logic programming, and communications and service APIs, focus on simplicity, ease of use, and the out-of-the-box experience for your customers/developers.
  • Pay attention to visualization tools and the user experience in all parts of the system. “Thing virtualization and visualization” (including elegant, robust applications that turn device data models into comprehensible information in the cloud) are great value propositions. If you are focusing on consumer IoT verticals where smartphones will have a prominent role, include a robust mobile apps development environment. The IT cloud and the IoT cloud have different consumers of data, and elegant visualization features can set you apart from your competitors.
  • Last but not least, do you have a robust, hardened security and authentication mechanism that works with advanced encryption algorithms? Do you support both ECC and AES-128/256? How about a PUF-based key generation mechanism? In IoT the stakes are very high, and you need to pay more attention to the security of the system, from the tiniest resource-constrained thing all the way to the cloud. Please note that the security knowledge base among thing developers is limited at the moment, so the cloud partner needs to bring some of the needed competence as well as enforce best practices. Basic elements that need to be protected on the thing side include secure boot, thing authentication, message encryption and integrity, and a trusted key management and storage scheme. A semiconductor partner who understands your IoT cloud requirements can help you optimize your system from a “thing” security perspective.

The transition from the IT cloud to the IoT cloud has already started, and as the IT cloud was a journey, the transformation to support IoT applications will also be a journey. What’s the best way to go about this change? Make this a comprehensive approach that will make your IoT cloud sustainable as the market transitions forward.

Which Arduino board is right for you?


Picking an Arduino is as easy as Uno, Due, Tre! 


Thinking about starting a project? See which Arduino board is right for the job.

Arduino Uno

This popular board — based on the ATmega328 MCU — features 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz ceramic resonator, USB connection, power jack, an ICSP header and a reset button.
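For the Uno, the canonical first sketch exercises exactly those peripherals, one of the six analog inputs and one of the six PWM-capable pins:

```c++
const int potPin = A0;   // analog input, e.g. a potentiometer wiper
const int ledPin = 9;    // PWM-capable pin on the Uno

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int raw = analogRead(potPin);   // 0..1023 from the 10-bit ADC
  analogWrite(ledPin, raw / 4);   // scale to the 8-bit PWM range 0..255
}
```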


The Uno does not use the FTDI USB-to-serial driver chip. Instead, it features the ATmega16U2 (ATmega8U2 up to version R2) programmed as a USB-to-serial converter.

In addition, Revision 3 of the Uno offers the following new features:

  • 1.0 pinout: added SDA and SCL pins near the AREF pin, plus two other new pins placed near the RESET pin: the IOREF, which allows shields to adapt to the voltage provided by the board, and a second pin that is left unconnected for future purposes.
  • Stronger RESET circuit.
  • ATmega16U2 replaces the 8U2.

Arduino Leonardo

The Arduino Leonardo is built around the versatile ATmega32U4. This board offers 20 digital input/output pins (of which 7 can be used as PWM outputs and 12 as analog inputs), a 16 MHz crystal oscillator, microUSB connection, power jack, an ICSP header and a reset button.


The Leonardo contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started. Plus, the ATmega32U4 offers built-in USB communication, eliminating the need for a secondary processor. This allows it to appear as a mouse and keyboard, in addition to being recognized as a virtual (CDC) serial/COM port.
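That native USB support means a few lines of stock Arduino code are enough to turn the Leonardo into a working keyboard; the button wiring here is an assumption:

```c++
#include <Keyboard.h>

void setup() {
  pinMode(2, INPUT_PULLUP);   // a pushbutton between pin 2 and GND
  Keyboard.begin();
}

void loop() {
  if (digitalRead(2) == LOW) {   // button pressed
    Keyboard.println("hello from the Leonardo");
    delay(500);                  // crude debounce
  }
}
```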

Arduino Due

The Arduino Due is an MCU board based on the Atmel | SMART SAM3X8E ARM Cortex-M3 CPU.


As the first Arduino built on a 32-bit ARM core microcontroller, the Due boasts 54 digital input/output pins (of which 12 can be used as PWM outputs), 12 analog inputs, 4 UARTs (hardware serial ports), an 84 MHz clock, a USB OTG-capable connection, 2 DACs (digital-to-analog), 2 TWI, a power jack, an SPI header, a JTAG header, a reset button and an erase button.

Unlike other Arduino boards, the Due runs at 3.3V. The maximum voltage that the I/O pins can tolerate is 3.3V. Providing higher voltages, like 5V to an I/O pin, could damage the board.
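One thing the Due adds that no 8-bit Arduino has is a pair of true DACs, driven through the familiar analogWrite() call at up to 12-bit resolution:

```c++
void setup() {
  analogWriteResolution(12);   // Due supports up to 12-bit analog writes
}

void loop() {
  for (int v = 0; v < 4096; v++) {
    analogWrite(DAC0, v);      // sawtooth waveform on the DAC0 pin
  }
}
```

The 3.3V rule above applies here too: the DAC output swings within the 3.3V rail.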

Arduino Yún

The Arduino Yún features an ATmega32U4, along with an Atheros AR9331 that supports a Linux distribution based on OpenWRT known as Linino.


The Yún has built-in Ethernet and Wi-Fi support, a USB-A port, a microSD card slot, 20 digital input/output pins (of which 7 can be used as PWM outputs and 12 as analog inputs), a 16 MHz crystal oscillator, microUSB connection, an ICSP header and 3 reset buttons. The Yún is also capable of communicating with the Linux distribution onboard, offering a powerful networked computer with the ease of Arduino.

In addition to Linux commands like cURL, Makers and engineers can write their own shell and Python scripts for robust interactions. The Yún is similar to the Leonardo in that the ATmega32U4 offers USB communication, eliminating the need for a secondary processor. This enables the Yún to appear as a mouse and keyboard, in addition to being recognized as a virtual (CDC) serial/COM port.
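For example, the stock Bridge library lets a sketch run a Linux command like curl on the AR9331 and read back the result; the URL here is a placeholder:

```c++
#include <Bridge.h>
#include <Process.h>

void setup() {
  Bridge.begin();                         // start the MCU <-> Linux bridge
  Serial.begin(9600);
  while (!Serial) ;                       // wait for the USB serial port

  Process p;                              // a process on the Linux side
  p.begin("curl");
  p.addParameter("http://example.com/");  // placeholder URL
  p.run();                                // blocks until curl finishes

  while (p.available() > 0) {
    Serial.print((char)p.read());         // relay the response to the console
  }
}

void loop() {}
```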

Arduino Micro

Developed in conjunction with Adafruit, the Arduino Micro is powered by the ATmega32U4.

The board is equipped with 20 digital input/output pins (of which 7 can be used as PWM outputs and 12 as analog inputs), a 16 MHz crystal oscillator, a microUSB connection, an ICSP header and a reset button. The Micro includes everything needed to support the microcontroller; simply connect it to a computer with a microUSB cable to get started. The Micro even has a form factor that lets the device be easily placed on a breadboard.

Arduino Robot

The Arduino Robot is the very first official Arduino on wheels. The robot is equipped with two processors — one for each of its two boards.


The motor board drives the motors, while the control board is tasked with reading sensors and determining how to operate. Each of the ATmega32U4-based units is fully programmable using the Arduino IDE. More specifically, configuring the robot is similar to the process with the Arduino Leonardo, as both MCUs offer built-in USB communication, effectively eliminating the need for a secondary processor. This enables the Robot to appear to a connected computer as a virtual (CDC) serial/COM port.

Arduino Esplora

The Arduino Esplora is an ATmega32U4-powered microcontroller board derived from the Arduino Leonardo. It’s designed for Makers and DIY hobbyists who want to get up and running with Arduino without having to learn about the electronics first.

The Esplora features onboard sound and light outputs, along with several input sensors, including a joystick, slider, temperature sensor, accelerometer, microphone and a light sensor. It also has the potential to expand its capabilities with two Tinkerkit input and output connectors, along with a socket for a color TFT LCD screen.

Arduino Mega (2560)

The Arduino Mega features an ATmega2560 at its heart.

It is packed with 54 digital input/output pins (of which 15 can be used as PWM outputs), 16 analog inputs, 4 UARTs (hardware serial ports), a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header and a reset button. Simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started. The Mega is compatible with most shields designed for the Arduino Duemilanove or Diecimila.

Arduino Mini

Originally based on the ATmega168, and now equipped with the ATmega328, the Arduino Mini is intended for use on breadboards and projects where space is at a premium.


The board is loaded with 14 digital input/output pins (of which 6 can be used as PWM outputs), 8 analog inputs and a 16 MHz crystal oscillator. It can be programmed with the USB serial adapter or another USB or RS232-to-TTL serial adapter.

Arduino LilyPad

The LilyPad Arduino is designed specifically for wearables and e-textiles. It can be sewn to fabric and connected to similarly mounted power supplies, sensors and actuators with conductive thread.

The board is based on the ATmega168V (the low-power version of the ATmega168) or the ATmega328V. The LilyPad Arduino was designed and developed by Leah Buechley and SparkFun Electronics. Readers may also want to check out the LilyPad Simple, LilyPad USB and the LilyPad SimpleSnap.

Arduino Nano

The Arduino Nano is a tiny, complete and breadboard-friendly board based on the ATmega328 (Arduino Nano 3.x) or ATmega168 (Arduino Nano 2.x).

The Nano has more or less the same functionality of the Arduino Duemilanove, but in a different package. It lacks only a DC power jack and works with a Mini-B USB cable instead of a standard one. The board is designed and produced by Gravitech.

Arduino Pro Mini

Powered by an ATmega328, the Arduino Pro Mini is equipped with 14 digital input/output pins (of which 6 can be used as PWM outputs), 8 analog inputs, an on-board resonator, a reset button and some holes for mounting pin headers.


A 6-pin header can be connected to an FTDI cable or Sparkfun breakout board to provide USB power and communication to the board. Note: See also Arduino Pro.

Arduino Fio

The Arduino Fio (V3) is a microcontroller board based on Atmel’s ATmega32U4. It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 8 analog inputs, an on-board resonator, a reset button and holes for mounting pin headers. It also offers connections for a lithium polymer battery and includes a charge circuit over USB. An XBee socket is available on the bottom of the board.

The Arduino Fio is intended for wireless applications. The user can upload sketches with an FTDI cable or a Sparkfun breakout board. Additionally, by using a modified USB-to-XBee adaptor such as the XBee Explorer USB, the user can upload sketches wirelessly. The board comes without pre-mounted headers, facilitating the use of various types of connectors or direct soldering of wires. The Arduino Fio was designed by Shigeru Kobayashi and SparkFun Electronics.

Arduino Zero

Last year, the tandem of Atmel and Arduino debuted the Zero development board – a simple, elegant and powerful 32-bit extension of the platform. The Arduino Zero board packs an Atmel | SMART SAM D21 MCU, which features an ARM Cortex-M0+ core. Additional key hardware specs include 256KB of Flash, 32KB of SRAM in a TQFP package, and compatibility with 3.3V shields that conform to the Arduino R3 layout.


The Arduino Zero boasts flexible peripherals along with Atmel’s Embedded Debugger (EDBG), facilitating a full debug interface on the SAM D21 without the need for supplemental hardware. Beyond that, EDBG supports a virtual COM port that can be used for device programming and traditional Arduino bootloader functionality. This highly-anticipated board will be available for purchase from the Arduino Store in the U.S. on Monday, June 15th.
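In practice that virtual COM port behaves like any other Arduino serial port. A minimal sketch, assuming a recent Arduino core in which the EDBG/programming port is the one mapped to Serial (check your board’s documentation if it differs):

```c++
void setup() {
  Serial.begin(115200);   // EDBG virtual COM port (assumed mapping)
  while (!Serial) ;       // wait for the host to open the port
}

void loop() {
  Serial.println("debug trace over EDBG, no extra hardware");
  delay(1000);
}
```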

Arduino AtHeart

The Arduino AtHeart program was specifically launched for Makers and companies with products based on the open-source board that would like to be clearly identified as supporters of the versatile platform. The program is available for any device that includes a processor currently supported by the Arduino IDE, a list that spans a range of Atmel MCUs.

Participants in the program include startups like:

EarthMake – ArLCD

The touchscreen ArLCD combines the ezLCD SmartLCD GPU with the Arduino Uno.


Bare Conductive Touch Board

The ATmega32U4-based Touch Board can turn nearly any material or surface into a sensor by connecting it to one of the board’s 12 electrodes, using conductive paint or anything else conductive.


Blend Micro

The RedBearLab integrated dev platform “blends” the power of Arduino with Bluetooth 4.0 Low Energy on a single board. It is targeted at Makers looking to develop low-power IoT projects in a quick, easy and efficient manner. The board is driven by an ATmega32U4 MCU and a Nordic nRF8001 BLE chip.


littleBits Arduino Module

The fan-favorite Arduino module, which happens to also be based on an ATmega32U4, lets users easily write programs in the Arduino IDE to read sensors and control lights and motors within the littleBits system.


Smart Citizen Kit

An Arduino-compatible motherboard with sensors that measure air composition (CO and NO2), temperature, light intensity, sound levels, and humidity. Once configured, the Smart Citizen Kit is capable of streaming data collected by the sensors over Wi-Fi.


What’s ahead this year in digital insecurity?


Here’s a closer look at the top 10 cyber security predictions for 2015.


In 2014, worries about security went from a simple “meh” to “WTF!” Not only did high-profile attacks get sensational media coverage, but those incidents led to a pivotal judicial ruling that corporations can be sued for data breaches. And as hard as it is to believe, 2015 will only get worse, because attack surfaces are expanding as mobile BYOD policies overtake enterprises, cloud services spread, and a growing number of IoT networks get rolled out. Add m-commerce, e-banking and mobile payments to the questionable tradition of lax credit card security infrastructure in the U.S., and you get a perfect storm for cybercrime.

In fact, 92% of attacks across the range of segments come from nine basic sources, according to Verizon. More numerous and sophisticated cyber crimes are anticipated for this year and beyond.


1. More companies to get “Sony’d”

2014 saw the release of highly-evolved threats from criminals that in the past only came from governments, electronic armies and defense firms. A wide range of targets included organizations in retail, entertainment, finance, healthcare, industrial and military sectors, among countless others. As a repeat offender, Sony is now the cyber-victim poster child, and “Sony’d” has become a verb meaning digital security incompetence. Perhaps Sony’s motto should be changed from “make.believe.” to “make.believe.security.” Just saying!

Prior to 2014, companies tended on a wholesale basis to simply deny cyber vulnerabilities. However, a string of high-profile data breaches and vulnerabilities — such as Sony, Heartbleed, POODLE, Shellshock, the Russian CyberVor ring, Home Depot, Target, P.F. Chang’s, eBay, etc. — has changed all of that. Denial is dead, but confusion about what to do is rampant.

2. Embedded insecurity rising

Computing naturally segregates into embedded systems and humans sitting in front of screens. Embedded systems are processor-based subsystems that are “embedded” into other machines or bigger systems. Examples are routers, industrial controls, avionics, automotive engine and in-cabin systems, medical diagnostics, white goods, consumer electronics, smart weapons, and countless others. Embedded security was not a big deal until the IoT emerged, which will lead to billions of smart, communicating nodes. With 15 to more than 20 billion IoT nodes forecast by 2020, a gigantic attack platform is being created, making security paramount.


A recent study by HP revealed that 70% of interconnected (IoT) devices have serious vulnerabilities to attacks. The devices they investigated consisted of “things” like cloud-connected TVs, smart thermostats and electronic door locks.

“The current state of Internet of Things security seems to take all the vulnerabilities from existing spaces — network security, application security, mobile security and Internet-connected devices — and combine them into a new, even more insecure space, which is troubling,” HP’s Daniel Miessler stated.

Issues HP identified ranged from weak passwords, to lack of encryption, to poor interfaces, to troubling firmware, to unencrypted updating protocols. Other notable findings included:

  • 60% of devices were subject to weak credentials
  • 90% collected personal data
  • 80% did not use passwords or used very weak passwords
  • 70% of cloud connected mobile devices allowed access to user accounts
  • 70% of devices were unencrypted

Investigators at the Black Hat Conference demonstrated serious security flaws in home automation systems. At DEFCON, investigators hacked NFC-based payment systems, showing that passwords and account data were vulnerable. They also revealed that the doors of a Tesla car could be hacked to open while in motion. Nice! Other attacks were exploited on smart TVs, Boxee TV devices, smartphone biometric systems, routers, IP cameras, smart meters, healthcare devices, SCADA (supervisory, control and data acquisition) devices, engine control units, and some wearables. Even simple USB firmware was proven to be highly vulnerable (“BadUSB”).

These are just the tip of the embedded insecurity iceberg; under the surface lies the entire Dark Net, which adds even more treacherousness. Security companies like Symantec have identified home automation as a likely early IoT attack point. That is not surprising, because home automation will be an early adopter of IoT technologies, after all. In-house appliances also represent an attractive attack surface as more firmware appears in smart TVs, set-top boxes, white goods, and routers that also communicate. Node-to-node connectivity security extends to industrial settings as well.

Tools like Shodan, the Google of embedded systems, make it very easy for hackers to get at the things in the IoT. CNN recently called Shodan the scariest search engine on the Internet. You can see why, since everything that is connected is now accessible. Clearly, strong security, including hardware-based crypto elements, is paramount.

3. More storms from the cloud


It became clear in 2014 that cloud services such as iCloud, Google Drive, Dropbox and others are rather large targets because they are replete with sensitive data (just ask Jennifer Lawrence). The cloud is starting to look like a technological Typhoid Mary that can spread viruses, malware, ransomware, rootkits, and other bad things around the world. As we know by now, the key to security is how well cryptographic keys are stored. Heartbleed taught us that, so utilizing new technologies and more secure approaches to maintain and control cryptographic keys will accelerate in 2015 to address endemic cloud exposure. Look for more use of hardware-based key storage.

4. Cyber warfare breaks out

eBay, P.F. Chang’s, Home Depot, Sony, JP Morgan, and Target are well-known names on the cybercrime blotter, and things will only get worse as cyber armies go on the attack. North Korea’s special cyber units, the Syrian Electronic Army, the Iranian Cyber Army (ICA), and Unit 61398 of the People’s Liberation Army of China are high-profile examples of cyber armies that are hostile to Western interests. Every country now seems to have cyber-army units to conduct asymmetric warfare. (These groups are even adopting logos, with eagles appearing to be a very popular motif.)


Cyber warfare is attractive because government-built malware is cheap, accessible, and covert, and thus highly efficient. Researchers have estimated that 87% of cyber-attacks on companies are state-affiliated, 11% come from organized crime, 1% from competitors, and another 1% from former employees. Long story short: cyber war is real, and it has already been waged against non-state commercial actors such as Sony. It won’t stop there.

5. Cybercrime mobilizes

According to security researchers, mobile will become an increasingly attractive target for hackers. Fifteen million mobile devices are already infected with malware, according to a report by Alcatel-Lucent’s Kindsight Security Labs. Malvertising is rampant on untrusted app stores, and ransomware is being attached to virtual currencies. Easily acquired malware generation kits and source code make it extremely easy to target mobile devices. Malicious apps take advantage of the WebKit plugin and gain control over application data, handing credentials, bank account, and email details over to hackers. What’s more, online banking malware is also spreading: 2014 brought ZeuS, which stole data, and VAWTRAK, which hit online banking customers in Japan.

Even the two-factor authentication measures that banks employ have recently been breached by schemes such as Operation Emmental. Emmental is the real name of Swiss cheese, which of course is full of holes, just like the banking systems’ security mechanisms. Emmental uses fake mobile apps and Domain Name System (DNS) changers to launch mobile phishing attacks, get at online banking accounts, and steal identities. Some researchers believe that cybercriminals will increasingly use such sophisticated attacks to run illegal equity front-running and short-selling scams.


6. Growing electronic payments tantalize attackers

Apple Pay could be a land mine just waiting to explode due to NFC’s susceptibility to hacking. Google Wallet is an example of what can happen when a malicious app is granted NFC privileges, making it capable of stealing account information and money. M-commerce schemes like WeChat could be another big potential target.


E-payments are growing, and with them so will attacks on mobile devices, using schemes ranging from FakeID to Master Key. Master Key is an exploit kit, similar to the Blackhole exploit kit, that specifically targets mobile, while FakeID allows malicious apps to impersonate legitimate apps and access sensitive data without triggering suspicion.

7. Health records represent a cyber-crime gold mine

Electronic Health Records (EHRs) are now mandatory in the U.S., and a vast amount of personal data is being collected and stored as never before. Because information is money, thieves will go where the information is (to paraphrase Willie Sutton). Health records are considered higher value in the hacking underground than stolen credit card data. Criminals throughout both the U.S. and the UK are now specializing in health record hacking. In fact, the U.S. Identity Theft Resource Center reported 720 major data breaches during 2014, 42% of which involved health records.

8. Targeted attacks increase

Targeted attacks, also known as Advanced Persistent Threats (APTs), are very frightening due to their stealthy nature. The main differences between APTs and traditional cyber-attacks are target selection, silence, and duration of attack. According to APTnotes, the number of reported attacks went from 3 in 2010 to 14 in 2012 to 53 in 2014. APT targets are carefully selected, in contrast to traditional attacks that use any available corporate target. The goal is to get in quietly and stay unnoticed for long periods of time, as seen in the famous APT attack that victimized the networking company Nortel: Chinese spyware was present on Nortel’s systems for almost ten years without being detected, draining the company of valuable intellectual property and other information. Now that’s persistent!

9. Laws and regulations try to play catch up

A number of cyber security laws are being considered in the U.S., including the National Cybersecurity Protection Act of 2014, which advocates sharing cybersecurity information with the private sector and providing technical assistance and incident response to companies and federal agencies. Another one to note is the Federal Information Security Modernization Act of 2014, designed to better protect federal agencies from cyber-attacks. A third is the Border Patrol Agent Pay Reform Act of 2013, intended to help recruit and retain cyber professionals who are in high demand. Additionally, there is the Cybersecurity Workforce Assessment Act, which aims to enhance the readiness, capacity, training, recruitment, and retention of the cybersecurity workforce. President Obama has also stated that he wants a 30-day deadline for breach notices and a revised “Consumer Privacy Bill of Rights.”

One of the more interesting and intelligent recommendations came from the FDA, which issued guidelines for wireless medical device security to ensure hackers cannot interfere with things such as implanted pacemakers and defibrillators. This notion was in part stimulated by worry about Dick Cheney’s pacemaker being hacked; in fact, countermeasures were installed on the device by Cheney’s surgeon. More regulation of health data and equipment is expected in 2015.

“Security — or the lack of it — will largely determine the success or failure of widespread adoption of internet-connected devices,” the FTC Commissioner recently shared in an article. The FTC also released a report entitled, “Privacy & Security in a Connected World.”

10. Hardware-based security may change the game

According to respected market researcher Gartner, all roads to the digital future lead through security. At this point, who can really argue with that statement? Manufacturers and service providers are recognizing the seriousness of cyber-danger and are starting to integrate security at every connectivity level. Crypto element integrated circuits with hardware-based key storage are starting to be employed for that purpose. Furthermore, these crypto elements are something of a silver bullet, given that they easily and instantly add the strongest type of security possible (i.e. protected hardware-based key storage) to IoT endpoints and embedded systems. This is a powerful concept whose fundamental value is only starting to be recognized.


Crypto elements contain cryptographic engines that efficiently handle crypto functions such as hashing, sign/verify (ECDSA), key agreement (e.g. ECDH), authentication (symmetric or asymmetric), encryption/decryption, and message authentication coding (MAC), running crypto algorithms (e.g. elliptic curve cryptography, AES, SHA) among many other functions.

The combination of hardware key storage and a crypto engine in a single device makes it simple, ultra-secure, tiny, and inexpensive to add robust security. Recent crypto element products offer ECDH for key agreement and ECDSA for authentication. Adding a device with both of these powerful capabilities to any system with a microprocessor that can run encryption algorithms (such as AES) brings all three pillars of security (confidentiality, data integrity and authentication) into play.
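As a hedged sketch of what this looks like from the host MCU’s side, the flow below uses function names in the style of Atmel’s CryptoAuthLib (atcab_*); treat the exact signatures and the configuration structure as assumptions to verify against the library headers for your specific device:

```c++
#include <string.h>
#include "cryptoauthlib.h"   // assumed: Atmel/Microchip CryptoAuthLib

int authenticate_node(void) {
    uint8_t digest[32];      // SHA-256 digest of the message to sign
    uint8_t signature[64];   // ECDSA P-256 signature (r||s)
    uint8_t pubkey[64];      // X||Y public key exported from the device
    bool ok = false;

    if (atcab_init(&cfg_ateccx08a_i2c_default) != ATCA_SUCCESS) return -1;

    memset(digest, 0xAB, sizeof(digest));   // stand-in digest for illustration
    atcab_genkey(0, pubkey);                // key pair born in slot 0; the
                                            // private key never leaves the chip
    atcab_sign(0, digest, signature);       // ECDSA sign inside the device
    atcab_verify_extern(digest, signature, pubkey, &ok);
    return ok ? 0 : -1;
}
```

The point of the pattern is in the comments: the private key is generated inside the element and is never readable, so even a fully compromised host cannot leak it.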


With security rising in significance as attack platforms increase in size and threats become more sophisticated, it is good to know that solutions are already available to ensure that digital systems are not only smart and connected, but robustly secured by hardware key storage. This could be the one of the biggest stories in security going forward.

5 IoT challenges for connected car dev

Growth in the adoption of connected cars has exploded as of late and is showing no signs of slowing down, especially in the vehicle-to-infrastructure and vehicle-to-retail segments. As adoption grows exponentially, challenges in how we develop these apps emerge as well.

One of the biggest challenges to consider will be connectivity, and how we connect and network the millions of connected cars on the road. How can we ensure that data gets from Point A to Point B reliably? How can we ensure that data transfer is secure? And how do we deal with power, battery, and bandwidth constraints?


1. Signaling

At the core of a connected car solution is bidirectional data streaming between connected cars, servers, and client applications. Connected car revolves around keeping low-powered, low-cost sockets open to send and receive data. This data can include navigation, traffic, tracking, vehicle health and state (presence); pretty much anything you want to do with a connected car.

Signaling is easy in the lab, but challenging in the wild. There is an infinite number of speed bumps (pun intended) for connected cars, from tunnels to bad network connectivity, so reliable connectivity is paramount. Data needs to be cached, replicated, and, most importantly, sent in realtime between connected cars, servers, and clients.

2. Security

Then there’s security, and we all know the importance of that when it comes to connected car (and the Internet of Things in general). Data encryption (AES and SSL), authentication, and data channel access control are the major IoT data security components.


In looking at data channel access control, having fine-grained publish and subscribe permissions, down to the individual channel or user, is a powerful tool for IoT security. It enables developers to create, restrict, and close open channels between client apps, connected cars, and servers. With these controls, IoT developers can build point-to-point applications where data streams bidirectionally between devices. Having the ability to grant and revoke access to user connections is just another security layer on top of AES and SSL encryption.

3. Power and Battery Consumption

How will we balance maintaining open sockets and ensuring high performance while minimizing power and battery consumption? As with other mobile applications, power and battery consumption considerations are essential for the connected car.

M2M publish/subscribe messaging protocols like MQTT are built for just this: ensuring delivery in low-bandwidth, high-latency, and unreliable environments. MQTT specializes in messaging for always-on, low-powered devices, a perfect fit for connected car developers.
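A minimal sketch of that pattern, assuming the Arduino PubSubClient MQTT library and an ESP32-class Wi-Fi link standing in for the car’s cellular modem (broker, IDs and topics are placeholders):

```c++
#include <WiFi.h>
#include <PubSubClient.h>

WiFiClient net;
PubSubClient mqtt(net);

void onMessage(char* topic, byte* payload, unsigned int len) {
  // Commands pushed down to the vehicle arrive here (bidirectional channel).
}

void connectMqtt() {
  while (!mqtt.connect("car-0042")) delay(1000);  // retry through dead zones
  mqtt.subscribe("fleet/car-0042/cmd", 1);        // QoS 1: at-least-once
}

void setup() {
  WiFi.begin("ssid", "password");                 // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(100);
  mqtt.setServer("broker.example.com", 1883);     // placeholder broker
  mqtt.setCallback(onMessage);
}

void loop() {
  if (!mqtt.connected()) connectMqtt();           // tunnels happen
  mqtt.loop();                                    // keep-alive pings
  static uint32_t last = 0;
  if (millis() - last > 1000) {                   // 1 Hz telemetry
    last = millis();
    mqtt.publish("fleet/car-0042/telemetry", "{\"spd\":54}");
  }
}
```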

4. Presence

Connected devices are expensive, so we need a way to keep tabs on our connected cars, whether for fleet and freight management, taxi dispatch, or geolocation. ‘Presence’ functionality is a way to monitor individual IoT devices or groups of them in realtime, and it has found adoption across the connected car space. Developers can build custom vehicle states and monitor them in realtime as vehicles go online/offline, change state, etc.


Take fleet management, for example. When delivery trucks are out on a route, their capacity status is reflected in realtime with a presence system. For taxi dispatch, the dispatch system knows when a taxi is available and when it’s currently full. And with geolocation, location data is updated by the millisecond, which can also be applied to taxi dispatch and freight management.
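PubNub exposes presence as a built-in feature; over plain MQTT, the same semantics are commonly built from a last-will message plus retained state, which is roughly what a presence system does under the hood. A hedged sketch, under the same PubSubClient assumptions as above:

```c++
#include <PubSubClient.h>

// connect(id, willTopic, willQoS, willRetain, willMessage): if the truck
// drops off the network, the broker itself publishes the retained "offline".
void announcePresence(PubSubClient& mqtt) {
  mqtt.connect("truck-17", "fleet/truck-17/presence", 1, true, "offline");
  mqtt.publish("fleet/truck-17/presence", "online", true);   // retained
}

// Capacity state ("full", "half", "empty") kept as a retained message, so a
// dispatcher subscribing later still sees the truck's latest known state.
void setCapacity(PubSubClient& mqtt, const char* state) {
  mqtt.publish("fleet/truck-17/capacity", state, true);
}
```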

5. Bandwidth Consumption

Just like power and battery, bandwidth consumption is the fifth connected car challenge we face today. For bidirectional communication, we need open socket connections, but we can’t have them consuming massive amounts of bandwidth. Leveraging M2M messaging protocols like the aforementioned MQTT lets us do just that.

By building the connected car on a data messaging system with low overhead, we can keep socket connections open with limited bandwidth consumption. Rather than hitting the servers multiple times per second, keeping an open socket allows data to stream bidirectionally without per-message requests to the server.

Solution Kit for Connected Cars

The PubNub Connected Car Solution Kit makes it easy to reliably send and receive data streams from your connected car, facilitating dispatch, fleet management applications and personalized auto management apps. PubNub provides the realtime data stream infrastructure that can bring connected car projects from prototype to production without scalability issues.

Building XMEGA-based energy harvesting RF sensor nodes

An energy harvesting RF sensor node is a device powered by various environmental means including solar, thermal (heat/cold) and even vibration. RF sensor nodes are typically used to monitor environmental changes such as temperature, pressure and ambient light – with data transmitted via RF to a host for remote sensing and control.


Energy harvesting RF sensor nodes are routinely deployed by manufacturers of building automation, climate control, access control and other self-powered sensor networks. Key design considerations include ultra-low power and low operating voltage, the (potential) expansion of such technology into a broader range of applications and high precision analog peripherals.

The following Atmel components can be used to design an energy harvesting RF sensor node that meets the above-mentioned industry requirements: Atmel’s ATxmega D or E series, AT86RF231/232/233 RF transceiver and AT30TSE Serial EEPROM with temperature sensor.

“Atmel’s AVR XMEGA D/E series and 86RF23x series offer low power consumption and true 1.62V operation, addressing the key design requirements for energy harvesting RF sensor nodes,” an Atmel engineering rep told Bits & Pieces. “Atmel’s XMEGA D/E series also boasts true 1.62V-3.6V operation, 5 sleep modes with fast wake up time, < 1uA in Power Save mode (RTC), 190uA/MHz at 1.8V in active mode, along with an Event system and Peripheral DMA Controller to further offload CPU activity.”

Atmel’s 86RF23x series is also capable of maintaining a sleep current of < 20nA, along with current consumption as low as 6.0mA in RX and 13.8mA in TX. As expected, the 86RF23x series is supported by Atmel’s complete line of IEEE 802.15.4-compliant protocols for low-power applications: IPv6/6LoWPAN, ZigBee, 802.15.4 MAC and a lightweight mesh network stack.
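The firmware shape of such a node is short. A hedged sketch for an XMEGA using avr-libc, with the RTC clock-source and synchronization setup omitted (it is device-specific; see the XMEGA D/E manual) and register names to be checked against your exact part’s header:

```c++
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>

ISR(RTC_OVF_vect) {
    // Wake-up only; the work happens in the main loop.
}

int main(void) {
    RTC.PER = 1024;                      // overflow period (in RTC ticks)
    RTC.INTCTRL = RTC_OVFINTLVL_LO_gc;   // enable the RTC overflow interrupt
    PMIC.CTRL |= PMIC_LOLVLEN_bm;        // XMEGA interrupt controller
    sei();

    set_sleep_mode(SLEEP_MODE_PWR_SAVE); // RTC keeps running in power-save
    for (;;) {
        sleep_mode();                    // < 1 uA until the RTC fires
        // ...sample the sensor, hand a few bytes to the AT86RF23x, repeat...
    }
}
```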

On the software and development side, engineers designing XMEGA-based energy harvesting RF sensor nodes can take full advantage of Studio 6 and Atmel Software Framework (ASF), ASF high-level drivers for sensors and wireless interfaces, as well as Atmel’s comprehensive portfolio of Xplained kits.

Interested in learning more about building XMEGA-based energy harvesting RF sensor nodes? Be sure to check out some of the links below: