Kaivan Karimi, Atmel VP and GM of Wireless Solutions, shares the top 10 factors to consider when transitioning from IT cloud to IoT cloud.
In mid-2013, the buzz phrase “Internet of Things” (IoT) set the technology world on fire. As a result of this craze, many products that were developed for completely different end applications changed all their marketing collateral overnight to become IoT products. We saw companies add the acronym “IoT” to the title of every executive, and gadgets become part of an IoT enablement ecosystem. New tradeshows claimed authoritative positions on IoT, and angel investors and venture capitalists started IoT funds feeding incredible ideas, some of which reminded me of the late-1990s bubble when Lemonade.com was funded. New standards bodies were formed around provisioning IoT devices, and all of a sudden, overnight, most of us in the technology community became IoT experts.
Cloud companies are no exception. While the physical infrastructure of the cloud didn’t change, the platform and software services that were developed for enterprise IT management and mobility-app support became IoT PaaS and SaaS platforms with claims of “IoT compliance.” By late 2013, at an IoT event in Barcelona, nearly every keynote covered not only the “metaphorical pyramid” of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), but also “Everything as a Service (EaaS),” thanks to IoT.
With so much hype and noise, it is hard to separate fact from fiction unless you dig deep, really deep. This fuzziness is caused by the breadth of IoT and the many vertical markets it encompasses, covering all aspects of life as we know it. Each vertical has its own unique “things,” so one size doesn’t fit all from a device perspective, requiring different types of standards and transport layers, with silicon and software infrastructure to support this vast frontier. What has further muddied the water is that many large industry players look at IoT as an inflection point at which they can transform themselves into something else and get into other businesses. Because of this, these players are looking at their current assets and defining the infrastructure required for IoT differently from what logically and technically makes sense. Companies with no play in data-center hardware or software publicly promote that the majority of data processing should be done in other parts of the network (“closer to the source”), while others promote just the opposite, and a third group advocates that much of the processing should be done directly by a hierarchy of smart gateway boxes on the customer premises, along with everything in between. The same goes for the choice of RF communication protocols, gateways, definitions of things, provisioning schemes, and so on.
A great example of what gets heavily promoted by one of the biggest industry players is calling IoT an “always ON revolution” and having sensor data collected at the edge/sensing nodes (the thing side) ALWAYS sent to the cloud. This method requires a lot of bandwidth and storage capacity to collect data in the cloud, and conveniently promotes their passive big-data analytics capabilities to process this volume of data in the cloud. Clearly they sell hammers here, and see everything in the world as a nail. In reality, IoT is a “mostly OFF revolution,” with significantly less data created than portrayed, and little of that data will make it to the cloud. For instance:
- A door or key lock is mostly sleeping until a sensor triggers a wake-up command during an opening or proximity event, in which case it communicates a few bytes of data to a gateway and then goes back to sleep.
- The temperature sensors on a bridge wake up every so often to report temperature fluctuations to the gateway on the side of the road, and report if the bridge is frozen, telling the department of transportation to send sand trucks to avoid accidents.
- The seismic sensors on the A/C unit of an office building in Texas monitor the sound of the motor every two hours. If the motor sounds as though it will break down in a couple of weeks, the sensors inform the building manager to call a technician to fix what is going bad, so that tenants are not stuck without air conditioning in the middle of July.
- The ethylene gas sensors (ethylene is the ripening phytohormone of fruits and plants) on fruit containers in the back of an eighteen-wheeler wake up every 30 minutes and send data to the gateway in the cabin of the truck. These signals predict the decay rate of the fruit and allow the driver to change the destination to a nearby city if needed, giving the fruit some additional shelf life, or to send the fruit straight to the jam factory, avoiding the fuel waste of carrying a spoiled cargo.
In each of the aforementioned cases, and in other similar examples, the things (fruit container, A/C unit, bridge, home door, etc.) spend the majority of their time sleeping and only wake up based on an event trigger or a predetermined wake-up time set by programmed policy. This is the only way these devices can operate on batteries for years of usage. How many bytes (not Mbps or even Kbps) of data are really required to report those events? Would all of these events be worth sending to the cloud? In fact, the local event-processing and analytics engine running on the local gateway determines what goes to the cloud, and only the exception events (door is open, fruit is going bad, motor is going to break down, bridge is frozen, etc.) go to the cloud right away. As long as everything is normal (within-policy events), readings get registered at predetermined intervals (e.g., once every 24 hours) and the metadata gets uploaded to the cloud. Even if video capture were involved, no more than 2 Mbps of bandwidth is needed.
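The gateway-side policy described above can be sketched in a few lines. This is a hypothetical, minimal example (the event names, thresholds, and class names are illustrative, not taken from any specific product): exception events are forwarded immediately, while within-policy readings are held locally and summarized once per reporting interval.

```python
import time

# Hypothetical policy: exception events go to the cloud immediately;
# normal readings are summarized and uploaded once per reporting interval.
EXCEPTION_EVENTS = {"door_open", "motor_failing", "bridge_frozen", "fruit_decaying"}
REPORT_INTERVAL_S = 24 * 60 * 60  # e.g. once every 24 hours

class GatewayPolicy:
    def __init__(self):
        self.buffer = []                 # normal readings awaiting the summary
        self.last_upload = time.time()

    def on_sensor_event(self, event_type, payload):
        """Decide locally what reaches the cloud."""
        if event_type in EXCEPTION_EVENTS:
            return ("upload_now", payload)       # a few bytes, sent right away
        self.buffer.append(payload)              # within-policy reading: hold it
        if time.time() - self.last_upload >= REPORT_INTERVAL_S:
            summary = {"count": len(self.buffer), "events": "normal"}
            self.buffer.clear()
            self.last_upload = time.time()
            return ("upload_summary", summary)   # metadata only
        return ("hold", None)
```

The point of the sketch is the asymmetry: the cloud sees a handful of bytes per exception and a daily metadata summary, not a continuous stream.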
Based on my experience analyzing multiple large enterprise campuses with many buildings, without video, an aggregate of at most 15 Mbps of bandwidth is required to fully support this type of IoT communication to the cloud for provisioning services. So one should question the folks who promote the fallacy that, for all types of applications, things will always be ON and lots of bandwidth will be needed. What’s in it for them to portray IoT in this manner? Of course, if you are considering an enterprise campus full of smart devices with people moving massive amounts of data through “chatty and persistent communication agents,” then you will certainly need a lot more than 15 Mbps of connectivity to the cloud. Could it be that these folks are confusing an IT infrastructure with an IoT infrastructure?
For a comprehensive IoT implementation, a system-level approach is required, covering everything from the tiniest edge/sensing nodes (things), through various types of gateways, all the way to the cloud and data centers, applications, and service providers. This includes data analytics engines embedded both on premises and in the cloud, with a variety of SDKs and communication agents, and data caching and bandwidth management at different layers and levels of the hierarchy. There aren’t many companies in the world that cover all of these (single-digit) items, and even those that do still require partnerships with the gadget/thing-side companies. Therefore, when someone claims to be a one-stop shop, either they support an existing infrastructure of things to a cloud and add a new twist to it (a subset of most IoT verticals), or their system is not as comprehensive as they claim, or some combination of both.
Not to mention that, at this moment, we are dealing exclusively with siloed clouds and siloed IoT systems. While an ecosystem of clouds (a cloud of clouds) is in a nascent stage at some companies, it is far from the true IoT cloud ecosystem it will become in the near future.
The IT cloud ecosystem (versus the IoT cloud ecosystem) has had a journey of its own over the past few years. This ecosystem has shown the signs of success originally predicted, with technology distributed to provide a virtually seamless and infinite environment for communications, storage, computing, web and mobile services, analytics, and other business uses. The cloud benefit model has come to fruition, with many examples of upfront CAPEX largely minimized or eliminated, increased flexibility and control to scale users, and the ability for organizations to add functionality on demand with the added pay-as-you-go benefit. Cloud providers have taken over responsibility for the IT requirements of many organizations and have become vital business and channel partners.
That said, the fundamental question still remains: Is the traditional IT cloud and its ecosystem the same as an IoT cloud and its ecosystem?
The answer: While 60-70 percent is the same, the 30-40 percent difference can kill your IoT roll-out and make a seemingly IoT-ready cloud almost useless for your applications.
The differences are present throughout the full end-to-end system, from the “thing” side all the way to the data centers on the cloud side. The traditional IT, web, or mobility-applications cloud was built around much bigger devices with more resources on the device side. Over the last couple of years, a “thing” for the traditional cloud system consisted of a computer, a vending machine, a car, a gateway on a customer premise, or a smart device (laptop, tablet, smartphone, etc.). These devices are typically connected to the cloud via a direct cellular link, cellular (WAN) + Wi-Fi (LAN), or fiber (WAN) + Wi-Fi (LAN). With the new generation of IoT “things,” you find much more resource-constrained devices, such as small battery-operated sensors on doorways that keep track of people entering through the back gate of a house, battery-operated seismic sensors on roadway infrastructure (bridges, etc.), or any of the earlier examples. Instead of 20 smart devices in an office that are plugged into a wall outlet or recharge a large battery on a regular basis, you will be dealing with 500 different types of sensors and things covering that office; with multiple offices, thousands of things at the same time, most of which are powered by batteries for years (4-5 years of battery life in consumer IoT, and 8-12 years in industrial IoT). Some of these things have a small 8-bit MCU as their brain, with very little memory and few other resources, and may be hiding behind layers of gateways, relays, switches, or even other things, in sleepy networks. The communication link, when available (remember that these devices are mostly in an off state), may have very little bandwidth, and communication may go through multiple hops in mesh networks. A “chatty” communication system that pings the things on a regular basis defeats the purpose here.
The important thing to remember is that a system needs to be fully extendable and scalable not just on the cloud side, but also on the link side from the cloud to the things, and finally on the thing side. You also need scalable data capture and aggregation to go along with a secure communication system. If you are targeting a consumer application, then a solid mobile application development platform that works with the popular smartphone operating systems is a basic requirement, meaning you need to rewrite your middleware to become more agile and scalable and able to manage many more things simultaneously. You also need to rethink the communication topologies of the past. Lastly, pay more attention to your analytics engines and application development environment; depending on your IoT application, it may require completely different visualization tools and business models.
Here are some factors that an IT cloud provider transitioning to an IoT cloud provider needs to consider:
- Understand the verticals you target; become a one-stop shop for a given vertical. In IoT, one size does not fit all. Understanding a vertical includes understanding its evolution and the future business models that need to be considered. For example, if you are targeting the tracking of people in a hospital and their location at any given time, in the future that group would require wearables with biometric sensors, and their vital statistics would also need to be monitored. The expectation would be that your service can also cover the tracking of biometric sensors, which are usually battery-operated, constrained devices with minimal bandwidth. Working with one PaaS or SaaS supplier to manage one set of assets on a premise and another cloud provider for a separate set of assets is not an option. The issues to consider include the protocols, networks, bandwidth management, and transport technologies your IoT cloud framework would need to support.
- A scalable data analytics and event-processing engine is a must-have, as the majority of IoT value creation comes from data analytics, and “data capital” is where the differentiation will come from. Do you have the right analytics engine on the cloud side as well as on the premises/gateways? The new in-memory streaming technologies, which change the rate at which we can act on data, will be required for some IoT applications. Hence, traditional extraction, transformation, and loading (ETL) will give way to just-in-time (JIT) methodologies (real-time vs. batch-oriented). Can you manage fast/streaming data analytics for applications where extremely fast processing of (near) real-time data is required? For instance, in tele-health and elderly monitoring, passive data analytics in the cloud is not adequate, and local fast data analytics running on a local smart gateway is required to report a heart attack, or a fire in home automation, etc. It is also imperative that you find a service provider for a given vertical (if you are not a service provider, partner with one), so that your event-processing and data analytics engines are tuned for specific use cases and business logic. If your analytics engine only provides insight into the visibility or availability of a limited set of parameters in the network, work with a partner that brings the rest.
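The JIT, streaming style of analytics contrasted with batch ETL above can be illustrated with a toy sliding-window monitor. This is a sketch only; the window size, threshold, and class name are invented for illustration and are not tied to any real tele-health product:

```python
from collections import deque

# Illustrative local "fast data" analytics: each reading is processed as it
# arrives (JIT) over an in-memory window, rather than extracted, transformed,
# and loaded into the cloud for later batch processing.
class StreamingMonitor:
    def __init__(self, window=10, high=120):
        self.readings = deque(maxlen=window)  # bounded in-memory window
        self.high = high                      # illustrative alert threshold

    def ingest(self, value):
        """Act on data at arrival time; raise an alert without waiting for a batch."""
        self.readings.append(value)
        avg = sum(self.readings) / len(self.readings)
        return "alert" if avg > self.high else "ok"
```

Running on a local smart gateway, a monitor like this can flag an out-of-range vital sign in milliseconds, which is the difference between passive cloud analytics and the local fast-data path the article calls for.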
- Know the specific type of data you need to monitor and gather, and the insight your customers require. That means developing a diverse set of device data models for specific functionalities. Don’t try to be the Swiss Army knife of IoT cloud providers. Remember, while a Swiss Army knife can perform many functions, it does none of them particularly well. Understanding the verticals you need to support (item number 1) will also help you with this. For certain applications, before the data sets are processed by analytics and visualization tools, they are combined with external algorithmic classification and enrichment tools. This increases productivity and ease of use dramatically (e.g., the user will know where the water tables are before drilling a well, or what the maps of other distribution centers are before redirecting a cargo).
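What per-vertical device data models and an enrichment step might look like can be sketched briefly. All names, fields, and the decay threshold below are hypothetical, chosen only to echo the bridge and fruit-container examples from earlier in the article:

```python
from dataclasses import dataclass

# Hypothetical per-vertical device data models: each vertical gets its own
# schema rather than one generic "sensor reading" type.
@dataclass
class BridgeTempReading:
    sensor_id: str
    celsius: float
    frozen: bool

@dataclass
class FruitContainerReading:
    container_id: str
    ethylene_ppm: float

def classify_fruit(reading: FruitContainerReading, decay_threshold_ppm: float = 5.0) -> str:
    """Enrichment/classification applied before the analytics and visualization
    tools see the data (the threshold is illustrative)."""
    return "reroute" if reading.ethylene_ppm > decay_threshold_ppm else "continue"
```

Keeping the models narrow per vertical is the opposite of the Swiss Army knife approach: each schema carries only the fields that vertical’s analytics actually consume.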
- Develop a fully modularized end-to-end system, as most large OEMs may already have their own branded cloud and may only want to use part of the functionality you offer. Arm yourself with well-defined APIs and a firewall-friendly, adaptive connectivity architecture, and become comfortable working with your customers’ infrastructure, analytics engines, applications, visualization tools, things, etc. They may only be interested in your communication system, or they may ask for a mix of capabilities. The more flexible your approach, the better you can customize your offerings to their needs. On the cloud side, the formation of the cloud ecosystem (a cloud of clouds, with server-to-server communication) is right around the corner. A robust ecosystem is at the heart of IoT cloud management.
A modularized system as described above may mean a different tiered pricing approach to your business model. Flexibility needs to extend beyond your technology offerings, so be open to new business models.
- Follow the new service-delivery frameworks with large ecosystems, such as the Open Interconnect Consortium (OIC). Standardization will eventually dominate both the consumer and industrial IoT space. While the alphabet soup of protocols may be expanding (e.g., MQTT, XMPP, DDS, AMQP, CoAP, RESTful HTTP, etc.), standardization is also happening and providing more clarity. Standards are being developed so that there are “horses for courses.” Get used to the idea that your proprietary system of today will require an upgrade to a standard system tomorrow, or your ecosystem will leave you behind. How would you change your system today with that knowledge in hand?
- Develop RF communication specialization (cellular, Wi-Fi, BLE, 802.15.4/ZigBee, 6LoWPAN, sub-GHz, SigFox, etc.), or partner with someone who has that expertise. A lot of IT cloud companies today have a big gap here and need to find a partner to optimize their IT cloud to use such complex RF communication protocols. They also need to optimize their systems based on the type of RF links and the bandwidth limitations they will face. This also affects the application development side, as such customization is essential for IoT: what normally works for cellular might not work for Wi-Fi, BLE, or ZigBee, etc. This is especially important when it comes to target vertical markets, as different verticals might need different RF communication protocols, or even multiple ones simultaneously, with all the coexistence issues one may encounter. A semiconductor partner who understands your IoT cloud requirements can help you optimize your system from an RF communications and bandwidth-management perspective.
- Whether you use an SDK or an agent-based mechanism, implement a lightweight communication system. Typical SDKs make the development and management of mobile apps easy, but remember that your smartphone has a lot more resources than a tiny, resource-constrained sensor feeding data into an IoT system. A lightweight SDK or agent-based system is far more predictable and simpler to integrate into low-memory or battery-operated devices. Lightweight agents reduce device complexity and cost, and capabilities can be added incrementally depending on where the agents reside in the system. Obviously, the more bells and whistles you add on the thing side (the number of statistics to track or alarm states), the larger the footprint of your SDK or agent. As you move up to the gateway levels of the hierarchy and have more types of mechanisms, functionalities, sensors, communications, and alarms to monitor, the size of your agent or SDK will grow. One size will not fit all, but be frugal with your application and data management. Working with various IoT cloud ecosystem partners so far, I have seen SDK and agent sizes varying from 3 KB to 150 KB of memory footprint. The IoT cloud journey has already started, and I have no doubt the higher end of that spectrum (and some of the intermediate steps) will shrink in the near future, while the caching mechanisms become more robust.
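To make the “few bytes per event” framing concrete, here is a sketch of a compact binary wire format of the kind a lightweight agent might use instead of a verbose JSON document. The field layout (2-byte sensor id, 1-byte event code, 2-byte scaled value) is invented for illustration and is not any real protocol:

```python
import struct

# Hypothetical 5-byte sensor report: little-endian unsigned short (sensor id),
# unsigned byte (event code), signed short (reading scaled by 100).
REPORT_FMT = "<HBh"

def pack_report(sensor_id: int, event_code: int, value_x100: int) -> bytes:
    """Pack one report into 5 bytes for transmission over a constrained link."""
    return struct.pack(REPORT_FMT, sensor_id, event_code, value_x100)

def unpack_report(data: bytes) -> tuple:
    """Recover (sensor_id, event_code, value_x100) on the gateway side."""
    return struct.unpack(REPORT_FMT, data)
```

A temperature report of 21.50 °C from sensor 42 fits in five bytes, versus dozens for an equivalent JSON payload; over a multi-hop mesh link with Kbps-class bandwidth, that difference is the battery life of the device.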
Also deploy a context-centric bandwidth-management system that won’t hog the entire bandwidth for your management-plane activities. A good rule of thumb is not to occupy more than 15% of the communication link with intermediate proxy and caching functionality.
- Pay attention to “things,” with a focus on ease of use. That means a way of provisioning a device so easy that even a novice thing developer can follow the steps and do it on their own, regardless of the transport technology or resources available. If it takes too long, is error-prone, or requires an army of your developers to port and customize/optimize your agent for a particular architecture, you will reduce your target market to only the very large OEMs. If you assume you will do it for service fees, it won’t scale, and again you will only be targeting the large OEMs. If you partner with software services houses, you will scale better and gain additional bandwidth, at a cost; and this will still reduce your market footprint to companies that can afford to pay for provisioning services. Why not make it easy right up front for maximum customer coverage? From the syntax of your APIs for things/sensors, to local gateways, cloud gateways, programming your agent logic, and your communications and service APIs, focus on simplicity, ease of use, and the out-of-the-box experience for your customers and developers.
- Pay attention to visualization tools and user experience in all parts of the system. “Thing virtualization and visualization” (including elegant and robust applications that turn device data models into comprehensible information in the cloud) is a great value proposition. If you are focusing on consumer IoT verticals where smartphones will have a prominent role, include a robust mobile apps development environment. The IT cloud and the IoT cloud have different consumers of data, and elegant visualization features can set you apart from your competitors.
- Last but not least, do you have a robust and hardened security and authentication mechanism that works with advanced encryption algorithms? Do you support both ECC and AES-128/256? How about a PUF-based key generation mechanism? In IoT, the stakes are very high, and you need to pay more attention to the security of the system, from the tiniest resource-constrained thing all the way to the cloud. Please note that the security knowledge base among thing developers is low at the moment, and the cloud partner needs to bring some of the needed competence as well as enforce best practices. Some basic elements on the thing side that need protection include secure boot, thing authentication, message encryption and integrity, and a trusted key management and storage scheme. A semiconductor partner who understands your IoT cloud requirements can help you optimize your system from a “thing” security perspective.
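Of the thing-side elements listed above, message integrity is the easiest to sketch. The example below is illustrative only: it shows an HMAC tag appended to a payload so the gateway can detect tampering. A real deployment would pair this with the AES/ECC encryption and hardware key storage the article calls for, and the key here is a placeholder (never hard-code keys in production):

```python
import hmac
import hashlib

# Illustrative message-integrity scheme: append a 32-byte HMAC-SHA256 tag.
def sign(key: bytes, payload: bytes) -> bytes:
    """Thing side: transmit payload followed by its authentication tag."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify(key: bytes, message: bytes) -> bool:
    """Gateway/cloud side: recompute the tag and compare in constant time."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

The constant-time comparison (`hmac.compare_digest`) matters even on tiny devices: a naive byte-by-byte comparison leaks timing information an attacker can exploit.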
The transition from the IT cloud to the IoT cloud has already started, and as the IT cloud was a journey, the transformation to support IoT applications will also be a journey. What’s the best way to go about this change? Make this a comprehensive approach that will make your IoT cloud sustainable as the market transitions forward.