Tag Archives: SHA256

Atmel’s SAM L21 MCU for IoT tops low power benchmark


SAM L21 MCUs consume less than 940nA with full 40kB SRAM retention, real-time clock and calendar, and 200nA in the deepest sleep mode.


The Internet of Things (IoT) juggernaut has unleashed a flurry of low-power microcontrollers, and in that array of energy-efficient MCUs, one product has earned the crown of lowest-power Cortex-M-based solution, with power consumption down to 35µA/MHz in active mode and 200nA in sleep mode.

How do we know whether Atmel’s SAM L21 microcontroller can actually claim leadership in the ultra-low-power processing movement? The answer lies in the EEMBC ULPBench power benchmark introduced last year. It ensures a level playing field by having the MCU perform 20,000 clock cycles of active work once per second and sleep for the remainder of the second.

 


ULPBench shows the SAM L21 is lower power than any of its competitors’ M0+-class chips.

Atmel has released the ultra-low-power SAM L21 MCU it demonstrated at Electronica in Munich, Germany back in November 2014. Architectural innovations in the SAM L21 MCU family enable low-power peripherals — including timers, serial communications and capacitive touch sensing — to remain powered and running while the rest of the system is in a reduced power mode. That further reduces power consumption for always-on applications such as fire alarms, healthcare and medical devices, and connected wearables.

In addition, the 32-bit ARM-based MCU portfolio combines ultra-low power consumption with Flash and SRAM large enough to run both the application and the wireless stacks. Together with the autonomous low-power peripherals, these features make up the basic recipe for battery-powered mobile and IoT devices, extending battery life from years to decades and reducing the number of times batteries need to be changed in a plethora of IoT applications.

Low Power Leap of Faith

Atmel’s SAM L21 microcontrollers have achieved a staggering ULPBench score of 185.8, well ahead of the runner-up, TI’s SimpleLink C26xx microcontroller family, which scored 143.6. The SAM L21 microcontrollers consume less than 940nA with full 40kB SRAM retention, real-time clock and calendar, and 200nA in the deepest sleep mode. According to an Atmel spokesperson, that comes down to one-third the power of competing solutions.

Markus Levy, President and Founder of EEMBC, credits Atmel’s low-power feat to its proprietary picoPower technology and the company’s low-power expertise in utilizing DC-DC conversion for voltage monitoring. Atmel’s picoPower technology employs flexible clocking options and short wake-up time with multiple wake-up sources from even the deepest sleep modes.


ULPBench aims to provide developers with a reliable methodology to test MCUs.

In other words, Atmel has taken the low-power game beyond architectural improvements to the CPU, optimizing nearly every peripheral to operate in standalone mode with a minimum number of transistors to complete a given task. Most low-power ARM chips simply disable the clock to various parts of the device. The SAM L21 microcontroller, on the other hand, turns off power to those parts of the chip, so there is no leakage current through the thousands of transistors in that region.

Here is a brief highlight of Atmel’s low-power development efforts that now encompass almost every peripheral in an MCU device:

Sleep Modes

Sleep modes not only gate away the clock signal to stop switching consumption, but also remove the power from sub-domains to fully eliminate leakage. Atmel also employs SRAM back-biasing to reduce leakage in sleep modes.

Consider a simple application where the temperature in a room is monitored using a temperature sensor with the analog-to-digital converter (ADC). In order to reduce the power consumption, the CPU would be put to sleep and wake up periodically on interrupts from a real-time counter (RTC). The measured sensor data is checked against a predefined threshold to decide on further action. If the data does not exceed the threshold, the CPU will be put back to sleep waiting for the next RTC interrupt.
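The firmware loop for this approach is straightforward. The sketch below is purely conceptual: the function names (rtc_sleep_until_interrupt, adc_read_temperature, trigger_alarm) and the threshold value are placeholders rather than a real vendor API.

```c
/*
 * Conceptual sketch of the polled approach described above. The function
 * names and the threshold value are placeholders, not a specific vendor API.
 */
#include <stdint.h>

#define TEMP_THRESHOLD 300u              /* example threshold in raw ADC counts */

extern void     rtc_sleep_until_interrupt(void);  /* enter sleep, wake on the RTC interrupt */
extern uint16_t adc_read_temperature(void);       /* run one ADC conversion and return it   */
extern void     trigger_alarm(void);              /* application-specific action            */

int main(void)
{
    for (;;) {
        /* The CPU sleeps here and is woken periodically by the RTC. */
        rtc_sleep_until_interrupt();

        /* Take one measurement and compare it against the threshold. */
        if (adc_read_temperature() > TEMP_THRESHOLD) {
            trigger_alarm();
        }
        /* Otherwise fall straight back to sleep until the next RTC interrupt. */
    }
}
```

Even in this CPU-driven version the device spends almost all of its time asleep; the SleepWalking and Event System features described next remove the periodic CPU wake-ups altogether.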

SleepWalking

SleepWalking is a technology that enables peripherals to request a clock when needed, so they can wake up from sleep modes and perform tasks without having to power up the CPU, Flash and other support systems. For instance, Atmel’s ultra-low-power capacitive touch-sensing peripheral can run in all operating modes and supports wake-up on a touch.

For the temperature monitoring application mentioned above, this means that the ADC’s peripheral clock will only be running while the ADC is converting. When the ADC receives the overflow event from the RTC, it requests its generic clock from the generic clock controller, and the peripheral clock stops as soon as the ADC conversion is completed.

Event System

The Event System allows peripherals to communicate directly without involving the CPU and thus enables peripherals to work together to solve complex tasks using minimal gates. It allows system developers to chain events in software and use an event to trigger a peripheral without CPU involvement.

Again, taking the temperature monitor as a use case: the RTC must be set to generate an overflow event, which is routed to the ADC by configuring the Event System, and the ADC must be configured to start a conversion when it receives an event. By using the Event System, an RTC overflow can trigger an ADC conversion without waking up the CPU. Moreover, the ADC can be configured to generate an interrupt if the threshold is exceeded, and that interrupt will wake the CPU.
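A rough configuration sketch for this event-driven variant is shown below. None of the function or constant names belong to the actual Atmel Software Framework; they are illustrative placeholders for the SleepWalking and Event System setup described above.

```c
#include <stdint.h>

#define TEMP_THRESHOLD          300u  /* example ADC window threshold (raw counts) */
#define EVENT_GEN_RTC_OVERFLOW  1     /* placeholder event-generator ID            */
#define EVENT_USER_ADC_START    2     /* placeholder event-user ID                 */

extern void rtc_enable_overflow_event(void);
extern void evsys_route(int generator_id, int user_id);
extern void adc_enable_event_triggered_start(void);
extern void adc_enable_window_monitor(uint16_t upper_threshold);
extern void adc_enable_window_interrupt(void);
extern void cpu_enter_standby(void);

void configure_event_driven_sampling(void)
{
    rtc_enable_overflow_event();                               /* RTC emits an event on every overflow   */
    evsys_route(EVENT_GEN_RTC_OVERFLOW, EVENT_USER_ADC_START); /* Event System routes it to the ADC      */
    adc_enable_event_triggered_start();                        /* the event starts a conversion, no CPU  */
    adc_enable_window_monitor(TEMP_THRESHOLD);                 /* the result is compared in hardware     */
    adc_enable_window_interrupt();                             /* interrupt (CPU wake-up) only on breach */

    /* With SleepWalking, the ADC requests its generic clock only while a
     * conversion is running; the CPU stays in standby the whole time. */
    cpu_enter_standby();
}
```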


Low Power MCU Use Case

Paul Rako described a sensor monitor in a recent post on Atmel’s Bits & Pieces blog. In the post, titled “The SAM L21 pushes the boundaries of low power MCUs,” Rako writes about a sensor monitor that is asleep 99.99 percent of the time, waking up once a day to take a measurement and send it wirelessly to a host. Such tasks can be conveniently handled by an 8-bit device.

However, IoT applications that run protocol stacks involve number crunching, and that requires a faster, ARM-class 32-bit chip. So, for battery-powered IoT applications, Rako makes the case for a 32-bit ARM-based chip that can wake up, do its thing, and go back to sleep. If a higher-current chip wakes up 10 times faster but uses twice the power, it will still use less energy and less charge than the slower chip.

Next, Rako presents a sensor fusion hub as a case study, in which the device saves power by not having the radio chip send the data from each sensor separately; instead, the ARM-based microcontroller does the math and pre-processing to combine the raw data from all the sensors and then assembles the result into a simple chunk of data.

Atmel has scored an important design victory in the ongoing low-power game that is now prevalent in the rapidly expanding IoT market. Atmel already boasts credentials in the connectivity and security domains — the other two key IoT building blocks. Its connectivity solutions cover multiple wireless arenas — Bluetooth, Wi-Fi, ZigBee and 6LoWPAN — to enable IoT communications.

Likewise, Atmel’s CryptoAuthentication devices come with protected hardware key storage and are available with SHA256, AES128 or ECC256/283 cryptography. The IoT triumvirate of low power consumption, a broad connectivity portfolio and crypto engineering puts Atmel in a strong position in the promising new IoT market, which increasingly demands a low-power MCU portfolio matched with high performance.


Majeed Ahmad is the author of the books Smartphone: Mobile Revolution at the Crossroads of Communications, Computing and Consumer Electronics and The Next Web of 50 Billion Devices: Mobile Internet’s Past, Present and Future.

Symmetric or asymmetric encryption, that is the question!


With the emergence of breaches and vulnerabilities, the need for hardware security has never been so paramount.


Confidentiality — one of the three foundational pillars of security, along with data integrity and authenticity — is created in a digital system via encryption and decryption. Encryption, of course, is scrambling a message in such a way that only the intended party can descramble (i.e. decrypt) it and read it.


Throughout time, there have been a number of ways to encrypt and decrypt messages. Encryption was, in fact, used extensively by Julius Caesar, which led to the classic type of encryption aptly named the Caesar Cipher. The ancient Greeks beat Caesar to the punch, however. They used a device called a “Scytale,” a ribbon of leather or parchment wrapped around a rod of a diameter known only to the sender and receiver. The message was written on the wrapped ribbon, then unfurled and sent to the receiver, who wrapped it around a rod of the same diameter in order to read it.


 

Modern Encryption

Modern encryption is based on published and vetted digital algorithms, such as the Advanced Encryption Standard (AES), the Secure Hash Algorithms (SHA) and Elliptic Curve Cryptography (ECC), among many others. Given that these algorithms are public and known to everyone, the security must come from something else — that thing is a secret cryptographic “key.” This fundamental principle was articulated in the 19th century by Auguste Kerckhoffs, a Dutch linguist, cryptographer and professor.

Kerckhoffs’ principle states that a cryptosystem should be secure even if everything about the system, except the key, is public knowledge. In other words: “The key to encryption is the key.” Note that Kerckhoffs advocated what is now commonly referred to as an “open-source” approach to the algorithm. The point being, this open-source method is more secure than trying to keep the algorithm itself obscured (sometimes called security by obscurity). Because the algorithms are known, managing the secret keys becomes the most important task of a cryptographer. Now, let’s look at that.


Symmetric and Asymmetric

Managing the key during the encryption-decryption process can be done in two basic ways: symmetric and asymmetric. Symmetric encryption uses the identical key to both encrypt and decrypt the data. Symmetric key algorithms are computationally much faster than asymmetric algorithms because the encryption process is less complicated and involves less processing.

The length of the key directly determines the strength of the security. The longer the key, the more computation it will take to crack the code for a given algorithm. The table below highlights the NIST guidelines for key lengths of different algorithms with equivalent security levels. You can see that Elliptic Curve Cryptography (ECC) is a very compact algorithm: it has a small software footprint, low hardware implementation costs, low bandwidth requirements, and high device performance. That is one of the main reasons that ECC-based asymmetric cryptographic processes, such as ECDSA and ECDH, are now being widely adopted. The strength of the sophisticated mathematics behind ECC is a great ally of all three pillars of security, especially encryption.

Table: NIST guidelines for key lengths of different algorithms at equivalent security levels.

Symmetric encryption is not only faster and simpler; it can also use a shorter key length, since the keys are never made public as they are with asymmetric (i.e. Public Key Infrastructure) encryption. The challenge with symmetric encryption, of course, is that the keys must be kept secret on both the sender and receiver sides, so distributing a shared key to both sides is a major security risk. Mechanisms that maintain the secrecy of the shared key are paramount. One method for doing this is called symmetric session key exchange.

Asymmetric encryption is different in that it uses two mathematically related keys (a public and private key pair) for data encryption and decryption. That takes away the security risk of key sharing, but asymmetric encryption requires much more processing power. Unlike the public key, the private key is never exposed. A message that is encrypted using a public key can only be decrypted by applying the same algorithm and using the matching private key.

Likewise, a message that is encrypted using the private key can only be decrypted using the matching public key. This is sort of like mathematical magic. Some of the trade-offs of symmetric and asymmetric encryption are summarized below.

Symmetric

  • Keys must be distributed in secret
  • If a key is compromised, the attacker can decrypt any message and/or impersonate one of the parties
  • A network requires a large number of keys

Asymmetric

  • Around 1000 times slower than symmetric
  • Vulnerable to a “man-in-the-middle” attack, where the public key is intercepted and altered

Because of the processing time associated with asymmetric encryption, many real-world systems use a combination of the two: the secret key used for the symmetric encryption is itself encrypted with asymmetric encryption and sent over an insecure channel. Then, the rest of the data is encrypted using symmetric encryption and sent over the insecure channel in encrypted form. The receiver gets the asymmetrically encrypted key and decrypts it with his private key. Once the receiver has the symmetric key, it can be used to decrypt the symmetrically encrypted message. This is a type of key exchange.
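The flow can be summarized in a short sketch. Every function below (random_bytes, rsa_encrypt, rsa_decrypt, aes_encrypt, aes_decrypt, transmit) is a stand-in for whatever crypto library a real design would use; the names, signatures and buffer sizes are illustrative assumptions, not an actual API.

```c
#include <stddef.h>
#include <stdint.h>

#define AES_KEY_LEN    16   /* 128-bit session key             */
#define RSA_BLOCK_LEN 256   /* e.g. the output of 2048-bit RSA */
#define MAX_MSG      1024   /* arbitrary message buffer size   */

extern void   random_bytes(uint8_t *buf, size_t len);
extern size_t rsa_encrypt(const uint8_t *pub_key,  const uint8_t *in, size_t len, uint8_t *out);
extern size_t rsa_decrypt(const uint8_t *priv_key, const uint8_t *in, size_t len, uint8_t *out);
extern size_t aes_encrypt(const uint8_t *key, const uint8_t *in, size_t len, uint8_t *out);
extern size_t aes_decrypt(const uint8_t *key, const uint8_t *in, size_t len, uint8_t *out);
extern void   transmit(const uint8_t *wrapped_key, size_t key_len, const uint8_t *ct, size_t ct_len);

/* Sender: wrap a fresh symmetric key asymmetrically, encrypt the bulk data
 * symmetrically, and send both over the insecure channel. */
void hybrid_send(const uint8_t *receiver_pub_key, const uint8_t *msg, size_t msg_len)
{
    uint8_t session_key[AES_KEY_LEN];
    random_bytes(session_key, sizeof session_key);

    uint8_t wrapped_key[RSA_BLOCK_LEN];
    size_t key_len = rsa_encrypt(receiver_pub_key, session_key, sizeof session_key, wrapped_key);

    uint8_t ciphertext[MAX_MSG];
    size_t ct_len = aes_encrypt(session_key, msg, msg_len, ciphertext);

    transmit(wrapped_key, key_len, ciphertext, ct_len);
}

/* Receiver: recover the session key with the private key, then use it to
 * decrypt the symmetrically encrypted message. */
size_t hybrid_receive(const uint8_t *receiver_priv_key,
                      const uint8_t *wrapped_key, size_t key_len,
                      const uint8_t *ciphertext, size_t ct_len,
                      uint8_t *plaintext_out)
{
    uint8_t session_key[AES_KEY_LEN];
    rsa_decrypt(receiver_priv_key, wrapped_key, key_len, session_key);
    return aes_decrypt(session_key, ciphertext, ct_len, plaintext_out);
}
```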

Note that the man-in-the-middle vulnerability can easily be addressed by employing the other pillar of security, namely authentication. Crypto engine devices with hardware key storage, most notably Atmel’s CryptoAuthentication, have been designed specifically to address all three pillars of security in an easy-to-design and cost-effective manner. Ready to secure your next design? Get started here.

Why Should You Consider Hardware Security on the Host Side?

By: Rocendo Bracamontes

Over the last year, I’ve come across many different applications and systems that require security. The majority of them can be categorized as follows: accessory authentication, consumables, system anti-cloning and session key exchange.

Since the ATSHA204, the latest Atmel CryptoAuthentication™ device, uses a symmetric algorithm, the system in which the security is implemented requires the same key at both the host and the client.

To provide the best security, designers are advised, with few exceptions, to include a “host” ATSHA204 chip that holds the system’s symmetric keys.

The following example illustrates a critical application where the usage of hardware security on the transmitter (host) is crucial to perform a receiver (client) authentication over a network. For example, this applies to smart meters, industrial lighting and sensitive sensor networks.

Without it, the transmitter would have to store the secret keys in Flash and perform the cryptographic functions in software, making the system vulnerable to malicious hacks and impacting overall system performance. To learn more about why hardware security is recommended over software security, check out our previous blog post on this topic.


Hardware Security on Host Side

4 Different Authentication Models—Which One is Right for You?

By: Rocendo Bracamontes

Atmel’s ATSHA204 CryptoAuthentication™ device allows four different ways to perform symmetric cryptographic authentication on a system:

  • Fixed Challenge Authentication
    • Fixed Challenge Authentication is an easy way to add security to a product without the expense of added hardware to the host, interactive testing, or extensive software development. With Fixed Challenge Authentication, the client requires an ATSHA204 device programmed with secret keys. The host is able to use any number of pre-calculated challenge/response pairs to validate the presence of a valid ATSHA204 on the client side.
  • Random Challenge Authentication
    • Random Challenge Authentication improves on the Fixed Challenge method by adding a random, changing challenge to each request. This feature enables the system to defend against replay-style attacks (a conceptual sketch of the flow follows this list).
    • By adding an ATSHA204 device to the host, the system can generate a random challenge for the client on the fly. In addition, because the challenge is generated internally by the host’s ATSHA204 device, the response is not known to the system in advance, allowing the use of an unsecured processor without the threat that an attacker will be able to learn system secrets. This dramatically limits the ability of an unauthorized device to produce the correct response.
  • Unique Challenge Authentication
    • Unique Challenge Authentication improves on the Fixed Challenge by adding a Unique Challenge to each request. This authentication feature enables the system to defend against replay-style attacks.
    • By adding an ATSHA204 device to the host, the system can generate a challenge for the client on the fly. This allows a unique challenge to be sent for every validation request.
  • Diversified Key Authentication
    • This method includes the unique serial number of each ATSHA204 as part of the Cryptographic Authentication calculation. Diversified Key Authentication enables the host to identify the specific accessory that is trying to authenticate with it. This approach also enables the use of access lists (black lists) by the system.
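To make the challenge/response idea concrete, here is a rough sketch of the Random Challenge flow. It is not the ATSHA204 command set; the helper names are assumptions, standing in for a host-side device that generates the random challenge and computes the expected digest with its stored key, and a client-side device that answers with a SHA-256 digest over its secret key and the challenge.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define CHALLENGE_LEN 32
#define DIGEST_LEN    32   /* SHA-256 output */

extern void host_generate_random(uint8_t *challenge, size_t len);       /* host-side device RNG (placeholder)            */
extern void host_expected_mac(const uint8_t *challenge, uint8_t *out);  /* digest over the host's stored key + challenge */
extern void client_respond(const uint8_t *challenge, uint8_t *out);     /* client's digest over its key + challenge      */

bool authenticate_client(void)
{
    uint8_t challenge[CHALLENGE_LEN];
    uint8_t expected[DIGEST_LEN];
    uint8_t response[DIGEST_LEN];

    /* 1. Generate a fresh, unpredictable challenge; a replayed old response
     *    will not match it. */
    host_generate_random(challenge, sizeof challenge);

    /* 2. Send the challenge to the client and collect its response. */
    client_respond(challenge, response);

    /* 3. Compute the expected digest on the host side (the key itself never
     *    leaves the host's crypto device) and compare. For Diversified Key
     *    Authentication, the client's serial number would also be folded
     *    into the digest. */
    host_expected_mac(challenge, expected);
    return memcmp(expected, response, DIGEST_LEN) == 0;
}
```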

With so many different authentication models to choose from, you can select the approach that best fits your design’s requirements, keeping your valuable intellectual property (IP) safe from malicious attacks or cloning. To learn more about designing with the ATSHA204, including some design tips and tricks, check out this white paper. Also, stay tuned for further deep dives into each of these models in the weeks to come.