Category Archives: Hardware

Enhance Raspberry Pi security with ZymKey


In this blog, Zymbit’s Scott Miller addresses some of the missing parts in the Raspberry Pi security equation. 


Raspberry Pi is an awesome platform that offers people access to a full-fledged portable computing and Linux development environment. The board was originally designed for education, but has since been embedded into countless ‘real world’ applications that require remote access and a higher standard of security. One of the most notable omissions, if not the most notable, is the lack of a robust hardware-based security solution.


A number of people would stop here and say, “Scott, you can do security on RPi in software just fine with OpenSSL/SSH and libgcrypt. And especially with the Model 2, there are tons of CPU cycles left over.” But performance is not the primary concern when we think about security; the highest priority is to address the issue of “hackability,” particularly through remote access.

What do you mean by “hackability?”

Hackability is a term that refers to the ease by which an attacker can:

  • take over a system;
  • insert misleading or false data in a data stream;
  • decrypt and view confidential data.

Perhaps the easiest way to accomplish any or all of the aforementioned goals is for the attacker to locate material relating to security keys. In other words, if an attacker can gain access to your secret keys, they can do all of the above.

Which security features are lacking from Raspberry Pi?

Aside from not having hardware-based security engines to do the heavy lifting, there’s no way to secure shared keys for symmetric cryptography or private keys for asymmetric cryptography.

Because all of your code and data live on a single SD card, you are exposed: someone can simply remove the SD card, pop it into a PC and have possession of the keys and other sensitive material. This is particularly true when the device is remote and outside of your physical control. Even if you somehow try to obfuscate the keys, you are still not completely safe. Someone with enough motivation could reverse engineer or work around your scheme.

The best solution for protecting crypto keys is to ensure the secret key material can only be read by standalone crypto engines that run independently from the core application CPU. This basic feature is lacking in the Raspberry Pi.

Securing Raspberry Pi with silicon and software

With this in mind, Zymbit has decided to extract some of the core security features from the Zymbit.Orange and combine them into a tiny device that embeds onto the Raspberry Pi, providing seamless integration with Zymbit’s remote device management console. Meet the ZymKey!

ZymKey for secure remote device management

ZymKey brings together silicon, firmware drivers and software services into a coherent package that’s compatible with Zymbit’s secure IoT platform. This enables a Raspberry Pi to be accessed and managed remotely, firmware to be upgraded and access rights to be administered.


Secure software services

Zymbit’s Connect libraries enhance the security and utility of Raspberry Pi in the following ways:

  • Add message authentication to egress messages to the Zymbit cloud by attaching a digital signature, which proves that the data originated from a specific Raspberry Pi/ZymKey combination (meaning that it was not forged or substituted along the way).
  • Assist in providing security certificates to the Zymbit cloud.
  • Authenticate security certificates from the Zymbit cloud.
  • Optionally help to encrypt/decrypt the content of messages to/from the Zymbit cloud.

Data that is encrypted/authenticated through ZymKey will be stored in this encrypted/authenticated form, thereby preserving the privacy and integrity of the data.
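The sign-and-verify flow described above can be sketched with HMAC-SHA256 as a simplified stand-in for the digital signature (the real ZymKey keeps the secret inside the ATECC508A crypto chip; here the key sits in ordinary memory, which is exactly the exposure the hardware avoids):

```python
import hashlib
import hmac

def sign_message(key: bytes, payload: bytes) -> bytes:
    """Attach an HMAC-SHA256 tag so the receiver can verify the origin."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_message(key: bytes, payload: bytes, tag: bytes) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(sign_message(key, payload), tag)

key = b"device-secret"      # on ZymKey this never leaves the crypto chip
msg = b'{"temp": 21.5}'
tag = sign_message(key, msg)

assert verify_message(key, msg, tag)                  # genuine message passes
assert not verify_message(key, b'{"temp": 99}', tag)  # altered payload fails
```

A message substituted or forged in transit fails verification, which is the property the Connect libraries rely on.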


In addition to its standard attributes, developers can access lower level features through secure software services, including general cryptography (SHA-256 MAC and HMAC with secure keys, public key encryption/decryption), password validation, and ‘fingerprint’ services that bind together specific hardware configurations.
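The ‘fingerprint’ idea, binding a specific hardware configuration into one verifiable value, can be illustrated with a hash over a set of hypothetical device identifiers (the identifiers and scheme here are illustrative, not Zymbit’s actual format):

```python
import hashlib

def device_fingerprint(ids):
    """Bind several hardware identifiers into one digest; change any
    component and the fingerprint no longer matches."""
    h = hashlib.sha256()
    for part in sorted(ids):      # sort so the result is order-independent
        h.update(part.encode())
        h.update(b"\x00")         # unambiguous separator between fields
    return h.hexdigest()

fp = device_fingerprint(["cpu-serial-0001", "sd-card-abcd", "key-72bit-id"])
swapped = device_fingerprint(["cpu-serial-0001", "sd-card-ffff", "key-72bit-id"])
assert fp != swapped   # swapping the SD card breaks the binding
```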

Stealth hardware

ZymKey’s low-profile hardware plugs directly into the Pi’s expansion header while still allowing Pi-Plates to be added on top. Lightweight firmware drivers run on the RPi core and interface with software services through zymbit.connect. It should also be noted that a USB device is in the works for other Linux boards.


At the heart of the ZymKey is the newly released ATECC508A CryptoAuthentication IC. Among some of its notable specs are:

  • ECC asymmetric encryption engine
  • SHA digest engine
  • Random number generator
  • Unique 72-bit ID
  • Tamper prevention
  • Secure memory for storing:
    • Sensitive key material – an important thing to point out is that private keys are unreadable by the outside world and, as stated above, are only readable by the crypto engine.
    • X.509 security certificates.
    • Temporary items: nonces, random numbers, ephemeral keys
  • Optional encryption of transmitted data across the I2C bus for times when sensitive material must be exchanged between the Raspberry Pi and the ATECC508A

Life without ZymKey

Raspberry Pi can be used with the Zymbit Connect service without the ZymKey; however, the addition of ZymKey ensures that communications with Zymbit services are secured to a higher standard. Private keys are unreadable by the outside world and usable only by the ATECC508A, thus making it difficult (if not practically impossible) to compromise.

Each ZymKey has a unique set of keys, so if, on the off chance, a key is compromised, only that key is affected. Simply stated, if you have several Raspberry Pi/ZymKey pairs deployed and one is compromised, the others will still be secure.

Once again, it is certainly possible to achieve the above goals purely through software (OpenSSL/libgcrypt/libcrypto). However, especially regarding encryption paths, without ZymKey’s secure storage, key material must be stored on the Raspberry Pi’s SD card, exposing private keys for anyone to exploit.

Stay tuned! The ZymKey will be making its debut on Kickstarter in the coming days.

$60 hack can trick LIDAR systems used by most self-driving cars


A security researcher has created a $60 system with Arduino and a laser pointer that can spoof the LIDAR sensors used by most autonomous vehicles. 


Many self-driving cars use LIDAR sensors to detect obstacles and build 3D images to help them navigate. However, one security researcher has developed a $60 device with “off-the-shelf parts” that can trick the systems into seeing objects which don’t actually exist, thereby forcing the autonomous vehicles to take unnecessary actions, like slowing down or stopping to avoid a collision with the phantom object. Ultimately, this further highlights the need for stringent security measures for automobiles that would otherwise be vulnerable to cyber criminals armed with nothing more than a low-power laser and pulse generator.


“It’s kind of a laser pointer, really. And you don’t need the pulse generator when you do the attack. You can easily do it with a Raspberry Pi or an Arduino,” explains researcher Jonathan Petit, principal scientist at Security Innovation.

According to IEEE Spectrum, Petit began by simply recording pulses from a commercial IBEO Lux LIDAR unit. The pulses were not encoded or encrypted, which allowed him to replay them at a later point. He was then able to create the illusion of a fake car, wall, cyclist or pedestrian anywhere from 65 to 1,100 feet from the LIDAR system, and make multiple copies of the simulated obstacles. In tests, the attack worked at all angles — from behind, the side and in front without alerting the passengers — and didn’t always require a precise hit of the device for it to achieve its goal.
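The replay works because a LIDAR converts pulse time-of-flight directly into distance; replaying a recorded pulse with a shifted delay fabricates an obstacle at whatever range the attacker chooses. A minimal sketch of the arithmetic (the 400 ns shift is an illustrative number, not from Petit’s paper):

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_distance(time_of_flight_s: float) -> float:
    """Round trip: the pulse travels to the object and back, so halve it."""
    return C * time_of_flight_s / 2

# Genuine echo from a wall 100 m away:
t_real = 2 * 100 / C
assert round(lidar_distance(t_real)) == 100

# An attacker replays a recorded pulse 400 ns earlier than the true echo,
# shrinking the apparent flight time so a phantom obstacle appears closer:
t_spoof = t_real - 400e-9
assert round(lidar_distance(t_spoof)) == 40
```

Because the pulses carry no encoding or authentication, the receiver has no way to tell the replayed pulse from a real echo.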

“I can spoof thousands of objects and basically carry out a denial of service attack on the tracking system so it’s not able to track real objects,” Petit adds.

As IEEE Spectrum notes, sensor attacks are not limited to self-driving cars, either. The same homebrew laser pointer can be employed to carry out an equally devastating denial of service attack on a human motorist by simply dazzling them, and without the need for sophisticated laser pulse recording, generation or synchronization equipment.


While the DIY system won’t necessarily affect everyone, it does make the case that security should be at the forefront of auto design. Petit concludes: “There are ways to solve it. A strong system that does misbehavior detection could cross-check with other data and filter out those that aren’t plausible. But I don’t think carmakers have done it yet. This might be a good wake-up call for them.”

The researcher described his proof-of-concept hack in a paper entitled “Potential Cyberattacks on Automated Vehicles,” which will be presented at Black Hat Europe in November.

[Images: Jeff Kowalsky/IEEE Spectrum, TechHive]

How to prevent execution surprises for Cortex-M7 MCU


We know the heavy weight of software development, which accounts for 60% to 70% of the overall project cost.


The ARM Cortex-A series processor cores (A57, A53) are well known in high-performance market segments, like application processing for smartphones, set-top boxes and networking. If you look at the electronics market, you realize that many applications are cost sensitive and don’t need such a high-performance processor core. We may call it the embedded market, even if this definition is vague. The ARM Cortex-M family has been developed to address these numerous market segments, starting with the Cortex-M0 for lowest cost, the Cortex-M3 for the best power/performance balance, and the Cortex-M4 for applications requiring digital signal processing (DSP) capabilities.

For the audio, voice control, object recognition and complex sensor fusion of automotive and higher-end Internet of Things sensing, where complex algorithms are needed for rich audio and visual capabilities, the Cortex-M7 is required. ARM offers the processor core as well as the Tightly Coupled Memory (TCM) architecture, but ARM licensees like Atmel have to implement the memories in such a way that the user can take full benefit of the M7 core to meet system performance and latency goals.

Figure 1. The TCM interface provides a single 64-bit instruction port and two 32-bit data ports.


In a 65nm embedded Flash process device, the Cortex-M7 can achieve a 1500 CoreMark score while running at 300 MHz, offering top-class DSP performance: a double-precision floating-point unit and a dual-issue instruction pipeline. But algorithms like FIR, FFT or biquad need to run as deterministically as possible for real-time response or seamless audio and video performance. How do you best select and implement the memories needed to support such performance? If you choose Flash, caching will be required (as Flash is too slow), introducing the risk of cache misses. SRAM is a better choice, since it can be easily embedded on-chip and permits random access at the speed of the processor.
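Why determinism matters is easy to see in the inner loop of a biquad filter: each output sample costs a fixed five multiply-accumulates, so any cache miss shows up directly as jitter in the sample period. A minimal Direct Form I sketch (reference model, not DSP-optimized code):

```python
def biquad(x, b0, b1, b2, a1, a2):
    """Direct Form I biquad:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    Exactly five multiplies and four adds per sample, every sample."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for s in x:
        y = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, s          # shift the input delay line
        y2, y1 = y1, y          # shift the output delay line
        out.append(y)
    return out

# Identity filter (b0=1, all other coefficients 0) passes the signal through:
sig = [1.0, 0.5, -0.25]
assert biquad(sig, 1, 0, 0, 0, 0) == sig
```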

Peripheral data buffers implemented in general-purpose system SRAM are typically loaded by DMA transfers from system peripherals. The ability to load from a number of possible sources, however, raises the possibility of unnecessary delays and conflicts when multiple DMAs try to access the memory at the same time. In a typical example, we might have three different entities vying for DMA access to the SRAM: the processor (64-bit access, requesting 128 bits for this example) and two separate peripheral DMA requests (DMA0 and DMA1, 32-bit access each). Atmel gets around this issue by organizing the SRAM into several banks, as described in this picture:

Figure 2. By organizing the SRAM into banks, multiple DMA bursts can occur simultaneously with minimal latency.

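The benefit of banking can be sketched with a toy arbitration model (illustrative only; the real memory matrix is more sophisticated): when all masters hit the same bank, their accesses serialize, while spreading them across banks lets them complete in one cycle.

```python
def cycles_needed(requests, num_banks):
    """Each cycle, every bank can serve one access; accesses that map to
    the same bank must queue, so the busiest bank sets the total time."""
    per_bank = [0] * num_banks
    for addr in requests:
        per_bank[addr % num_banks] += 1
    return max(per_bank)

# CPU plus two DMA masters issuing accesses in the same cycle window:
assert cycles_needed([0, 1, 2], num_banks=1) == 3  # one bank: serialized
assert cycles_needed([0, 1, 2], num_banks=4) == 1  # banked: in parallel
```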

For a chip maker designing microcontrollers, licensing an ARM Cortex-M processor core provides numerous advantages. The first is the ubiquity of the ARM architecture, adopted across multiple market segments to support a variety of applications. If the chip maker wants to design-in a new customer, the probability that this OEM has already used an ARM-based MCU is very high, and it’s very important for the OEM to be able to reuse existing code (we know the heavy weight of software development, which accounts for 60% to 70% of the overall project cost). But this ubiquity creates a challenge: how do you differentiate from the competition when competitors can license exactly the same processor core?

Selecting a more aggressive technology node and providing better performance at lower cost is one option, but this advantage can disappear as soon as the competition also moves to that node. Integrating a larger amount of Flash is another option, which is very efficient if the product is designed on a technology that keeps the pricing low enough.

If the chip maker has designed on an aggressive technology node for higher performance and offers a larger amount of Flash than the competition, that may be enough differentiation. Complementing this with a smarter memory architecture, unencumbered by cache misses, interrupts, context swaps and other execution surprises that work against deterministic timing, brings even stronger differentiation.


If you want to understand more completely how Atmel has designed this SMART memory architecture for the Cortex-M7, I encourage you to read the white paper from Jacko Wilbrink and Lionel Perdigon entitled “Run Blazingly Fast Algorithms with Cortex-M7 Tightly Coupled Memories.” (You will have to register.) The paper describes MCUs integrating SRAM organized into four banks that can be used as general SRAM and for TCM, as implemented in the Atmel | SMART SAM S70, SAM E70 and SAM V70/V71 families.


This post has been republished with permission from SemiWiki.com, where Eric Esteve is a principal blogger, as well as one of the four founding members of the site. This blog was originally shared on August 6, 2015.

“It’s not a feature, it’s a bug”


Embedded systems no longer need to be a ‘black box’ that leaves engineers guessing what may be happening, Percepio AB CEO Dr. Johan Kraft explains in his latest guest blog post.


Anyone involved with software development will have most likely heard (and perhaps even said) the phrase “it’s not a bug, it’s a feature” at some point, and while its origins remain a mystery, its sentiment is clear — it’s a bug that we haven’t seen before.


Intermittent ‘features’ in an embedded system can originate in either the software or hardware domain, and are often only evident when certain conditions collide in both. In the hardware domain, the timings involved may be fractions of a nanosecond, but where the logic is accessible, such as on an address line or data bus, there exist instruments that can operate at high sample rates, allowing engineers to visualize and verify such ‘glitches.’ In the software domain, this becomes much more challenging.

Sequential Processing

While parallel processing is being rapidly adopted across all applications, single-processor systems remain common in embedded systems, thanks partly to the continued increases in the performance of microcontroller cores. Embedded MCUs are now capable of executing a range of increasingly sophisticated Real-Time Operating Systems (RTOS), often including the ability to run various communication protocols for both wired and wireless interfaces.

Whether in a single- or multi-processing system, combining these tasks with the embedded system’s main application, written by the engineering team, can make embedded software builds large, complex and difficult to fault-find, particularly when visibility into the code’s execution is limited. It can also lead to the dreaded intermittent fault which, if part of the system’s operation is ‘hidden’, can be even more challenging to solve.

A typical example may be an unexplained delay in a scheduled task. Of course, an RTOS is intended to guarantee specific tasks happen at specific times but this can be dependent on the task’s priority and what else may be happening at any time. In one real-world example, where a sensor needed to be sampled every 5ms, it was found that occasionally the delay between samples reached 6.5ms, with no simple explanation as to the cause. In another example, a customer reported that their system exhibited random resets; the suspected cause was that the watchdog was expiring before it was serviced, but how could this be checked? In yet another example, a system running a TCP/IP stack showed slower response times to network requests after minor changes in the code, for no obvious reason.

These are typical examples of how embedded systems running complex software can behave in unforeseen ways, leaving engineering teams speculating on the causes and attempting to solve the problems with only empirical results from which to assess their efforts. In the case of intermittent faults or system performance fluctuations, this is clearly an inefficient and unreliable development method.

Trace Tools

The use of logging software embedded in a build in order to record certain actions isn’t new, of course, and it can offer a significantly improved level of visibility into a system. However, while the data generated by such trace software is undoubtedly valuable, exploiting that value isn’t always simple.

Analyzing trace data and visually rendering it in various ways is the key function of Percepio’s Tracealyzer tools. It offers visualization at many levels, ranging from an event list to high-level dependency graphs and advanced statistics.

Over 20 different graphical views are provided, showing different aspects of the software’s execution that are unavailable with debuggers alone, and as such it complements existing software debug tools in a way that is becoming essential in today’s complex embedded systems. It supports an increasing range of target operating systems.

Figure 1(a): It appears that the ControlTask may be disabling interrupts.


The main view in Tracealyzer, as shown in Figures 1(a) and 1(b), is a vertical timeline visualizing the execution of tasks/threads and interrupts. Other logged events, such as system calls, are displayed as annotations in this timeline, using horizontal colour-coded text labels. Several other timeline views are provided using horizontal orientation, and all horizontal views can be combined on a common horizontal timeline. While much important data is created by the operating system’s kernel, developers can also extend the tracing with User Events, which allow any event or data in a user’s application to be logged. Logging one is similar to calling the classic ‘printf’ C library function, but much faster, as the actual formatting is handled in the host-side application; User Events can therefore also be used in time-critical code such as interrupt handlers. And, of course, they can be correlated with other kernel-based events.
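The deferred-formatting trick behind User Events can be sketched in a few lines: the target-side call just stores the format string and raw arguments, and rendering happens later on the host (a conceptual model, not Percepio’s actual API):

```python
import time

class TraceBuffer:
    """Log (timestamp, format string, args) tuples; formatting is deferred
    to the host side, so the target-side cost is just a tuple append."""
    def __init__(self):
        self.events = []

    def log(self, fmt, *args):
        """Cheap enough to call from an interrupt handler."""
        self.events.append((time.monotonic(), fmt, args))

    def render(self):
        """Host-side step: apply the format strings after the fact."""
        return [fmt % args for _, fmt, args in self.events]

trace = TraceBuffer()
trace.log("watchdog margin %d ms", 12)
trace.log("queue depth %d/%d", 7, 8)
assert trace.render() == ["watchdog margin 12 ms", "queue depth 7/8"]
```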

Figure 1(b): By changing the way ControlTask protects a critical section, SamplerTask is able to run as intended.


Tracealyzer understands the general meaning of many kernel calls, for instance locking a Mutex or writing to a message queue. This allows Tracealyzer to perform deep analysis to connect related events and visualize dependencies, e.g., which tasks communicate (see the communication flow graph, shown in Figure 3). This allows developers to quickly understand what’s really going on inside their system.

Insights

Returning to the first example, where a scheduled task was being inexplicably delayed intermittently, Tracealyzer was used to graphically show the task in question, time-correlated with other tasks. By invoking an exploded view of the task of interest, it was found that a lower priority task was incorrectly blocking the primary task from executing. It was discovered that the second task was disabling interrupts to protect a critical section unrelated to the primary task, which blocked the operating system scheduling. After changing the second task to using a Mutex instead, the primary task was able to meet its timing requirements. Figure 1(a) shows the SamplerTask being delayed by the (lower priority) ControlTask before the bug fix; Figure 1(b) confirms that SamplerTask is now occurring every 5ms as intended.

In the second example, User Events were used not only to record when the Watchdog was reset or when it expired, but also to log the remaining Watchdog timer value, thereby showing the time left in the Watchdog timer when it is reset. By inspecting the logged system calls it was found that the task in question did not only reset the Watchdog timer; it also posted a message to another task using a (fixed-size) message queue. The Watchdog resets seemed to occur while the Watchdog task was blocked by this message posting. Once realised, the question then became ‘why’. By visually exploring the operations on this message queue using the Kernel Object History view, it became clear that the message queue sometimes becomes full, as suspected. By correlating a view of the CPU load against how the Watchdog timer margin varied over time, as shown in Figure 2, it was found that Fixed Priority Scheduling was allowing a medium-priority task (ServerTask) to use so much CPU time that the message queue wasn’t always being read. Instead, it became full, leading to a Watchdog reset. The solution in this case was to modify the task priorities.
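The watchdog-margin technique from this example can be modeled simply: record the time left on the counter at each kick, and a margin trending toward zero flags the impending reset (the timeout and kick times below are hypothetical):

```python
def watchdog_margins(timeout_ms, kick_times_ms):
    """Return the margin (time left on the watchdog) at each kick, and the
    time of the kick that arrived too late, if any."""
    margins, last = [], 0
    for t in kick_times_ms:
        elapsed = t - last
        if elapsed > timeout_ms:
            return margins, t       # the watchdog fired before this kick
        margins.append(timeout_ms - elapsed)
        last = t
    return margins, None

margins, reset_at = watchdog_margins(100, [60, 130, 225, 340])
assert margins == [40, 30, 5]   # margin shrinking toward zero...
assert reset_at == 340          # ...until one kick arrives too late
```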

Figure 2: The CPU Load graph, correlated to the Watchdog Timer User Event, gives valuable insights.


In the last example, where a software modification caused increased response time to network requests, the Communication Flow view (Figure 3) revealed that one particular task — Logger — was receiving frequent but single messages with diagnostics data to be written to a device file system, each causing a context switch. After the task priorities were modified, the messages were instead buffered until the network request had finished and thereafter handled in a batch. This way, the number of context switches during the handling of network requests was drastically reduced, thereby improving overall system responsiveness.

Figure 3: The Communication Flow reveals 5 tasks sending messages to Logger.


Conclusion

The complexity of embedded software is increasing rapidly, creating demand for improved development tools. While runtime data can be recorded in various ways, understanding its meaning isn’t a simple process, but through the use of innovative data visualization tools such as Tracealyzer it can be.

Many companies have already benefited from the many ways of using the tool to really discover what’s going on in the runtime system. Some Tracealyzer users even include it in production code, allowing them to gather invaluable data about real systems running in the field.

Embedded systems need no longer be a ‘black box,’ leaving engineers to suppose what may be happening; powerful visualization tools now turn that black box into an open box.

4 designs tips for AVB in-car infotainment


AVB is clearly the choice of several automotive OEMs, says Gordon Bechtel, CTO, Media Systems, Harman Connected Services.


Audio Video Bridging (AVB) is a well-established standard for in-car infotainment, and there is a significant amount of activity for specifying and developing AVB solutions in automobiles. The primary use case for AVB is interconnecting all devices in a vehicle’s infotainment system. That includes the head unit, rear-seat entertainment systems, telematics unit, amplifier, central audio processor, as well as rear-, side- and front-view cameras.

The fact that these units are all interconnected with a common, standards-based technology that is certified by an independent market group — AVnu — is a brand new step for the automotive OEMs. The AVnu Alliance facilitates a certified networking ecosystem for AVB products built into the Ethernet networking standard.

Figure 1: AVB is an established technology for in-car infotainment

According to Gordon Bechtel, CTO, Media Systems, Harman Connected Services, AVB is clearly the choice of several automotive OEMs. His group at Harman develops core AVB stacks that can be ported into car infotainment products. Bechtel says that AVB is a big area of focus for Harman.

AVB Design Considerations

Harman Connected Services uses Atmel’s SAM V71 microcontrollers as communications co-processors, working on the same circuit board with larger Linux-based application processors. The software firm writes code for the customized reference platforms that automotive OEMs need to go beyond the common reference platforms.

Based on his experience with automotive infotainment systems, Bechtel has outlined the following AVB design dos and don’ts for automotive products:

1. Sub-microsecond accuracy: Every AVB element on the network is hooked to the same accurate clock. The Ethernet hardware should feature a timestamp unit to ensure packets arrive in the right order. Here, Bechtel mentioned the Atmel | SMART SAM V71 MCU, which boasts screening registers to ensure advanced hardware filtering of inbound packets for routing to the correct receive-end queues.

2. Low latency: There is a lot of data involved in AVB, both in terms of bit rate and packet rate. AVB achieves low latency through traffic reservations, which in turn facilitate faster packet transfer for higher-priority data. Design engineers should carefully shape the data to avoid packet bottlenecks as well as data overflow.

Figure 2: Gordon Bechtel

Bechtel once more pointed to Atmel’s SAM V71 microcontrollers that provide two priority queues with credit-based shaper (CBS) support that allows the hardware-based traffic shaping compliant with 802.1Qav (FQTSS) specifications for AVB.
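The 802.1Qav credit-based shaper rule can be sketched with a toy model: credit accumulates at idleSlope while a frame waits, drains at sendSlope while it transmits, and a frame may start only when credit is non-negative (the slopes and times below are arbitrary illustration, not the V71’s hardware behavior):

```python
def cbs_can_send(credit: float) -> bool:
    """802.1Qav rule: a queued frame may start only when credit >= 0."""
    return credit >= 0

def run_cbs(frames, idle_slope, send_slope, frame_time):
    """Toy credit-based shaper: credit rises by idle_slope per tick while
    waiting and falls by send_slope per tick while transmitting."""
    credit, t, starts = 0.0, 0, []
    for _ in range(frames):
        while not cbs_can_send(credit):
            credit += idle_slope        # wait and earn credit back
            t += 1
        starts.append(t)
        credit += send_slope * frame_time   # send_slope is negative
        t += frame_time
    return starts

# After the first frame drains the credit, later frames are spaced out,
# leaving guaranteed gaps for lower-priority traffic:
assert run_cbs(3, idle_slope=1.0, send_slope=-3.0, frame_time=2) == [0, 8, 16]
```

This spacing is what keeps a burst of Class A traffic from starving everything else on the link.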

3. 1588 timestamp unit: An IEEE 1588 timestamp unit is needed for correct and accurate 802.1AS (gPTP) support, as required by AVB for precision clock synchronization. IEEE 802.1AS carries out time synchronization and is synonymous with generalized Precision Time Protocol, or gPTP.

A timestamp compare unit and a large number of precision timer counters are key to the synchronization needed in AVB for listener presentation times and talker transmission rates, as well as for media clock recovery.
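The synchronization gPTP performs rests on the classic four-timestamp PTP exchange; assuming a symmetric path, the slave’s clock offset and the link delay fall out of simple arithmetic (timestamps below are hypothetical, in microseconds):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Classic PTP exchange: master sends at t1, slave receives at t2,
    slave replies at t3, master receives at t4. Assuming a symmetric
    path, offset and one-way delay are recovered directly."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Slave clock runs 5 us ahead of the master; true one-way delay is 2 us:
t1, true_delay, off = 1000.0, 2.0, 5.0
t2 = t1 + true_delay + off   # 1007: slave timestamp includes its offset
t3 = t2 + 10                 # 1017: slave replies some time later
t4 = t3 - off + true_delay   # 1014: back on the master's clock

offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
assert offset == 5.0 and delay == 2.0
```

Once the offset is known, the slave corrects its clock, which is what keeps every AVB node on the network agreeing to sub-microsecond accuracy.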

4. Tightly coupled memory (TCM): This is a configurable high-performance memory access system that allows zero-wait CPU access to data and instruction memory blocks. Careful use of TCM enables much more efficient data transfer, which is especially important for AVB Class A streams.

It’s worth noting that MCUs based on ARM Cortex-M7 architecture have added the TCM capability for fast and deterministic code execution. TCM is a key enabler in running audio and video streams in a controlled and timely manner.

AVB and Cortex-M7 MCUs

The Cortex-M7 is a high-performance core with almost double the power efficiency of the older Cortex-M4. It features a six-stage superscalar pipeline with branch prediction — while the M4 has a three-stage pipeline.  Bechtel of Harman acknowledged that M7 features equate to more highly optimized code execution, which is important for Class A audio implementations with lower power consumption.

Again, Bechtel referred to the SAM V71 MCUs — which are based on the Cortex-M7 architecture — as particularly well suited for the smaller ECUs. “Rear-view cameras and power amplifiers are good examples where the V71 microcontroller would be a good fit,” he said. “Moreover, the V71 MCUs can meet the quick startup requirements needed by automotive OEMs.”

Figure 3: Atmel's V71 is an M7 chip for Ethernet AVB networking and audio processing

The infotainment connectivity is based on Ethernet, and most of the time the main processor does not integrate Ethernet AVB, so M7 microcontrollers like the V71 bring this feature to the main processor. In the head unit, the MCU drives the face plate; in the telematics control unit, which contains the modem used to make calls, echo cancellation is a must, and that requires DSP capability.

Take the audio amplifier, for instance, which receives a specific audio format that has to be converted, filtered and modulated to match the requirement for each specific speaker in the car. This means infotainment system designers will need both Ethernet and DSP capability at the same time, which Cortex-M7 based chips like V71 provide at low power and low cost.

Medical applications are leading advancement in 3D printing


Does healthcare hold the future for 3D printing? 


According to Gartner’s latest Hype Cycle for 3D Printing, medical applications are leading to some of the most significant deployments of the next-gen technology. The research firm’s report reveals that 3D printing of medical devices has reached the “Peak of Inflated Expectations,” but certain specialist applications are already becoming the norm in medical care.

“In the healthcare industry, 3DP is already in mainstream use to produce medical items that need to be tailored to individuals, such as hearing aids and dental devices,” explained Pete Basiliere, Gartner research director.

One notable example is hearing aids, as manufacturers are now offering personalized devices that fit to the exact shape of a customer’s ear.

“This is evidence that using 3DP for mass customization of consumer goods is now viable, especially given that the transition from traditional manufacturing in this market took less than two years. Routine use of 3DP for dental implants is also not far from this level of market maturity,” Basiliere added.

Some medical 3DP technologies are further from mainstream use, but are equally, if not more, exciting. These include hip and knee replacements, which are a $15 billion industry and one of the most common surgical procedures. Early trials using personalized 3D-printed replacements suggest improved healing times and function of the implant, as well as an increased success rate in more complex operations. Given the size of the market, Gartner predicts that 3D-printed hip and knee replacements, in addition to other recurrent internal and external medical devices, will be in mainstream use within two to five years.

Looking further out, at least five to 10 years to mainstream adoption, there is bioprinting. 3D bioprinting, which has been featured in a number of news stories as of late, is found in two categories on the Hype Cycle: one focused on producing living tissues for human transplant, the other for life sciences’ research and development.

Gartner goes on to note that 3D printers have already proven to be capable of creating cells, proteins, DNA and drugs, but are currently being held back by a couple of “significant barriers.”

There is still rapid advancement outside of medical fields as well. While 3D prototyping has for many years been the only mainstream use, it will likely be joined by many technologies that will spur much wider utilization of printers outside of specialist fields.

“Advancements outside of the actual printers themselves may prove to be the catalyst that brings about widespread adoption,” Basiliere said. “Technologies such as 3D scanning, 3D print creation software and 3D printing service bureaus are all maturing quickly, and all — in their own way — have the potential to make high quality 3DP more accessible and affordable.”

3D printing software, for example, has in the past been limited to commercial 3D CAD programs that were not simple to use. Consumer-oriented design libraries and modelling tools are becoming established, providing a far simpler method for producing printable designs. Moreover, 3D scanners are also advancing in adoption and dropping in price, enabling users to create complex printable models of real-world items without any CAD skills.

Though still several years away, the 3D printing of consumable products has been added to the Hype Cycle. This should come as no surprise, given the recent debuts of food, chocolate and even drug printers. Also listed in the “Innovation Trigger” stage are intellectual property protection, macro 3D printing and classroom 3D printing.

Beyond that, the emergence of 3DP service bureaus continues to accelerate. These enable enthusiasts and organizations to test and experiment with the capabilities of advanced 3DP systems in situations where an investment in purchasing a 3D printer would be hard to justify. As this ecosystem matures around the printers, market demand and competition will keep increasing, and more use cases will become commonplace.

Interested? You can check out the entire report from Gartner here.

[Image: Gartner]

BitCloud ZigBee PRO SDK achieves Golden Unit status


Compatible with the Atmel | SMART SAM R21 and ATmega256RFR2, the BitCloud ZigBee PRO Software Development Kit has achieved Golden Unit status.


Atmel has announced that the BitCloud ZigBee PRO Software Development Kit (SDK) has achieved the prestigious Golden Unit status for the ZigBee PRO R21 standard. As an approved Golden Unit, the Atmel BitCloud solution will be used by ZigBee test houses to verify standards compliance for all future ZigBee 3.0 products. This guarantees superior interoperability for customers designing the latest connected lighting, security and comfort-control products for smart home applications.


With improved security, interoperability and ease of use, the Atmel BitCloud SDK provides a comprehensive set of tools to quickly design and develop wireless products compliant with the ZigBee LightLink and ZigBee Home Automation profiles, as well as the upcoming ZigBee 3.0 standard. The BitCloud SDK includes full-featured reference applications, ZigBee PRO stack libraries and APIs, and user documentation. It implements a reliable, scalable and secure wireless solution that supports large mesh networks of hundreds of devices, and is optimized for ultra-low power consumption with up to 15 years of battery life.

The BitCloud ZigBee PRO SDK fully supports Atmel | SMART SAM R21 devices, a single-chip solution integrating an Atmel | SMART ARM Cortex-M0+-based MCU and a high-performance IEEE 802.15.4 RF transceiver, available as a standalone component or as production-ready certified modules. BitCloud is also compatible with the AVR ATmega256RFR2 wireless MCU, an ideal hardware platform delivering the industry’s lowest power consumption at 12.5mA in active receive mode, combined with a receiver sensitivity of -101dBm.
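To put the receiver-sensitivity figure in context, a quick link-budget calculation shows what it buys in idealized free-space range. This is only a back-of-the-envelope sketch: the +3.5dBm transmit power and 2.44GHz carrier are illustrative assumptions (the article quotes only the sensitivity, taken here as -101dBm), and real indoor range is far shorter than free-space math suggests.

```python
import math

# Assumed values; only the receiver sensitivity comes from the article.
TX_DBM = 3.5          # hypothetical transmit power
RX_SENS_DBM = -101.0  # quoted receiver sensitivity
FREQ_HZ = 2.44e9      # middle of the 2.4 GHz ISM band
C = 3.0e8             # speed of light, m/s

# Total path loss the link can tolerate before the signal drops
# below the receiver's sensitivity floor.
link_budget_db = TX_DBM - RX_SENS_DBM

# Invert the free-space path-loss formula:
#   FSPL(dB) = 20*log10(4*pi*d*f / c)  =>  d = c/(4*pi*f) * 10^(FSPL/20)
max_range_m = C / (4 * math.pi * FREQ_HZ) * 10 ** (link_budget_db / 20)

print(f"link budget: {link_budget_db:.1f} dB")       # 104.5 dB
print(f"free-space range: {max_range_m:.0f} m")      # roughly 1.6 km
```

Every extra 6dB of link budget doubles the idealized range, which is why a few dB of sensitivity matters so much in low-power mesh networks.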


“Intelligence, wireless connectivity and security are key elements to enable the anticipated growth of the Internet of Things market,” said Pierre Roux, Atmel director of wireless solutions. “Achieving the prestigious Golden Unit status for our BitCloud SDK assures designers that our wireless solutions are world class and will cater to next-generation solutions for this smart, connected world. We are excited to achieve this certification again.”

The Sensel Morph is a next-gen, multi-touch input device


This pressure-sensitive, multi-touch input device will enable users to interact with the digital world like never before.


Despite all the advancements in technology, the keyboard and mouse have collectively withstood the test of time, remaining relatively unchanged for decades — until now. That’s because Sensel, a Mountain View, California-based startup, is hoping to usher in a new generation of multi-touch interaction with an input device it calls the Morph.


Powered by the company’s patented Pressure Grid technology, the Morph lets users interact with computers and programs in a whole new way. While on the surface it may look like an ordinary trackpad, it is far from that. Inside lie approximately 20,000 sensors (or “sensels”) that can detect and measure the force of even the slightest touch. And because it is not a capacitive touch device, it doesn’t require a human finger to press on its outer force-sensing material; any object, from a paintbrush to a drumstick, will do the trick.


“Unlike other touch technologies, which can only sense conductive objects, each of the sensor elements in our device senses pressure with a high dynamic range. These sensors allow us to capture a high-resolution image of the pressure applied to the device. Highly tuned algorithms on the device take these pressure images and turn them into a list of touch locations, each with their own force and shape information,” its creators write.
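The step the team describes, turning a thresholded pressure image into a list of discrete touches with their own locations and forces, can be sketched as a simple connected-component pass. This is a toy illustration only, not Sensel's actual algorithm; the grid values, threshold and 4-connectivity are invented for the example.

```python
# Toy sketch of blob extraction over a 2D "pressure image":
# group pressed sensels into touches, each with a centroid and force.
def extract_touches(pressure, threshold=10):
    """Return [(row_centroid, col_centroid, total_force), ...]."""
    rows, cols = len(pressure), len(pressure[0])
    seen = [[False] * cols for _ in range(rows)]
    touches = []
    for r in range(rows):
        for c in range(cols):
            if pressure[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected blob of pressed sensels.
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and not seen[ny][nx]
                                and pressure[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                force = sum(pressure[y][x] for y, x in blob)
                # Force-weighted centroid gives a sub-sensel location.
                cy = sum(y * pressure[y][x] for y, x in blob) / force
                cx = sum(x * pressure[y][x] for y, x in blob) / force
                touches.append((cy, cx, force))
    return touches

grid = [
    [0,  0,  0, 0,  0],
    [0, 40, 20, 0,  0],
    [0, 20,  0, 0, 30],
    [0,  0,  0, 0, 30],
]
print(extract_touches(grid))  # two touches, one per pressed region
```

The force-weighted centroid is what makes positions finer than the sensor pitch; the real device runs far more heavily tuned algorithms, including the shape estimation the quote mentions, across its roughly 20,000 sensels.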

What’s nice is that the Morph works right out of the box with an assortment of applications, and is even hackable for the tech-savvy bunch. Simply connect it to your computer via USB, to your iPad over Bluetooth, or to your Atmel-powered Arduino with developer cables, and you’re good to go.


As its name would imply, the unit can literally “morph” depending upon your activities throughout the day. This is achieved with the help of magnetic, fully customizable overlays (each shipment will come with three) that are placed over the gadget and instantly provide a visual “map” for each mode’s unique functionality. Backers can choose from a QWERTY keyboard, a music production controller, a piano, a drum pad, a game console, an art overlay, as well as one more to be decided by the Kickstarter community.

What’s more, Sensel has introduced an “innovator’s” overlay, which gives the Maker crowd the ability to design, print and use their own custom interfaces. And as if that wasn’t enough, you can actually combine multiple devices to amplify the awesomeness. For example, you can put four Morphs together to make an instrument with 96 keys.


“Imagine having your art tablet, music production controller, QWERTY keyboard, piano, video game controller (and anything else your mind can fathom) all in one device. If you can imagine something so limitless without your brain imploding, you’ve imagined the Sensel Morph,” the team explains.

With the Morph, you will also be able to create new, custom interfaces. The Sensel crew is developing a web-based drag-and-drop interface that will go live when the first batch of devices ships. With this interface, you will be able to devise your own overlay without having to do any coding. As for the developers out there, Sensel’s open source API will enable you to integrate the proprietary technology into your own applications. The Morph is compatible with Windows, Mac, Linux, iOS and Arduino.

“Our mission from the start was to address the mismatch between the expressive capabilities of our hands and the restrictive interfaces of today’s devices,” the folks at Sensel add. “We want to enable new ways of interaction with digital devices and allow Morph users to unleash new possibilities in the worlds of music, art, gaming (cue Buzz Lightyear), and beyond!”


Housed inside the iPad-sized device’s aluminum casing, beneath its super-thin force-sensing material, lie a patented electrical drive scheme and supporting circuitry, including a microprocessor, an accelerometer, LEDs, Bluetooth LE support, a rechargeable battery and a microUSB port.

Ready to interact with your digital world like never before? Head over to the Morph’s Kickstarter campaign, where Sensel is currently seeking $60,000. Delivery is slated for next summer.

Autonomous vehicles and IoT are the most-hyped technologies of 2015


Gartner’s latest Hype Cycle reveals intelligent robots and smart home products are now closer to mainstream. 


Another year, another Gartner Hype Cycle for Emerging Technologies. New to the report in 2015 is the emergence of technologies that support what the firm defines as “digital humanism” — the notion that people are the central focus in the manifestation of digital businesses and digital workplaces.


“The Hype Cycle for Emerging Technologies is the broadest aggregate Gartner Hype Cycle, featuring technologies that are the focus of attention because of particularly high levels of interest, and those that Gartner believes have the potential for significant impact,” said Betsy Burton, vice president and distinguished analyst at Gartner. “This year, we encourage CIOs and other IT leaders to dedicate time and energy focused on innovation, rather than just incremental business advancement, while also gaining inspiration by scanning beyond the bounds of their industry.”

Major changes in the 2015 Hype Cycle include the placement of autonomous vehicles, which have shifted from pre-peak to peak. While autonomous vehicles are still embryonic, this movement represents a significant advancement, with all major automotive companies putting autonomous vehicles on their near-term roadmaps. Similarly, the growing momentum (from post-trigger to pre-peak) of the smart home has introduced entirely new solutions and platforms enabled by new technology providers and existing manufacturers.

One of the newcomers to this year’s list is smart dust. Categorized in the “Innovation Trigger” section, smart dust refers to a collection of tiny, dust-like sensors or devices that can be used to detect factors such as light or sound.

Gartner has defined a set of six “business era models” that enterprises can aspire to in the future. However, since the Hype Cycle is purposely focused on more emerging technologies, it mostly supports the last three of these stages. These include: digital marketing (stage 4), digital business (stage 5) and autonomous (stage 6).

The digital marketing stage sees the emergence of the Nexus of Forces (mobile, social, cloud and information). Enterprises in this stage focus on new and more sophisticated ways to reach consumers, who are more willing to participate in marketing efforts to gain greater social connection, or product and service value. According to the analysts, enterprises that are seeking to achieve this should consider gesture control, hybrid cloud computing, Internet of Things, machine learning, people-literate technology, and speech-to-speech translation.

Moreover, digital business is the first “post-nexus” stage on the roadmap and focuses on the convergence of people, business and things, with the IoT and the blurring of the physical and virtual worlds playing prominent roles. Physical assets become digitalized, joining the business value chain as equal actors alongside already-digital entities such as systems and apps.

Gartner notes that enterprises seeking to go past the Nexus of Forces technologies to become a digital business should look to 3D bioprinting, human augmentation, affective computing, augmented reality, bioacoustics sensing, biochips, brain-computer interface, citizen data science, connected home, cryptocurrencies, digital dexterity, digital security, enterprise 3D printing, intelligent robots, smart advisors, gesture control, micro data centers, quantum computing, software-defined security, virtual reality, and wearables.

Lastly, autonomous represents the final “post-nexus stage.” This stage is defined by an enterprise’s ability to leverage technologies that provide human-like or human-replacing capabilities, such as using autonomous vehicles to move people or products and employing cognitive systems to recommend a potential structure for an answer to an email, write texts or answer customer questions. Enterprises seeking to reach this stage to gain competitiveness should consider self-driving cars, smart dust, virtual personal assistants, and volumetric and holographic displays.

“Although we have categorized each of the technologies on the Hype Cycle into one of the digital business stages, enterprises should not limit themselves to these technology groupings,” added Burton. “Many early adopters have embraced quite advanced technologies, for example, autonomous vehicles or smart advisors, while they continue to improve nexus-related areas, such as mobile apps.”

Interested in learning more? You can check out Gartner’s entire report here.

[Image: Gartner]

The SAM L22 is a Cortex-M0+ MCU with a segment LCD controller


The Atmel | SMART SAM L22 delivers down to 39uA/MHz running CoreMark and features a segment LCD controller, peripheral touch controller and tamper detection. 


Atmel has expanded its popular lineup of secure, ARM Cortex-M0+-based MCUs with the new SAM L22 series. The Atmel | SMART SAM L family is the highest-scoring product family in the EEMBC ULPBench, and the SAM L22 pairs ultra-low-power capacitive touch with a segment LCD controller that can drive up to 320 segments, making the devices ideal for low-power applications such as thermostats, electric/gas/water meters, home control, medical and access systems.


The Internet of Things is driving connectivity into a wide range of battery-powered devices, making security and ultra-low power consumption critical features. With this in mind, the SAM L22 series boasts 256-bit AES encryption, a cyclic redundancy check (CRC) engine, a true random number generator, Flash protection and tamper detection to ensure information is securely stored, delivered and accessed. To reach the lowest possible power consumption, the devices use Atmel’s proprietary picoPower technologies and smart low-power peripherals that work independently of the CPU in sleep modes. The latest MCU runs down to 39µA/MHz in active mode, and consumes only 490nA in backup mode with the RTC running.
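Those two figures make rough battery-life math straightforward: average current is a duty-cycle-weighted mix of active and backup draw. In the sketch below, only the 39µA/MHz and 490nA numbers come from the article; the 4MHz clock, 1% duty cycle and CR2032 capacity are hypothetical application assumptions.

```python
# Back-of-the-envelope battery-life estimate for a duty-cycled MCU.
ACTIVE_UA_PER_MHZ = 39.0   # active-mode current, uA per MHz (quoted)
BACKUP_UA = 0.49           # backup mode with RTC, 490 nA (quoted)
CLOCK_MHZ = 4.0            # assumed core clock
DUTY = 0.01                # assumed fraction of time awake (1%)
BATTERY_UAH = 225_000.0    # CR2032 coin cell, ~225 mAh

active_ua = ACTIVE_UA_PER_MHZ * CLOCK_MHZ          # 156 uA while awake
avg_ua = DUTY * active_ua + (1 - DUTY) * BACKUP_UA # weighted average
years = BATTERY_UAH / avg_ua / (24 * 365.25)       # capacity / draw

print(f"average draw: {avg_ua:.2f} uA")     # about 2.05 uA
print(f"estimated life: {years:.1f} years") # about 12.5 years
```

At a 1% duty cycle the backup-mode current contributes nearly a quarter of the average draw, which is why the 490nA figure matters as much as the active-mode efficiency.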

“As more devices in the consumer, industrial and home automation segments are becoming smarter and connected, these devices require a number of unique features including ultra-low power, security, touch capability with an LCD — all features that are currently provided in the SAM L22,” explained Oyvind Strom, Atmel Senior Director of MCUs. “Atmel is already engaged with a number of alpha customers developing metering, thermostat and industrial automation solutions based on the new Atmel | SMART SAM L22 series.”

In addition to a segment LCD controller supporting up to eight common lines, capacitive touch sensing and built-in security measures, the SAM L22 includes up to 256KB of Flash and 32KB of SRAM, a crystal-less USB device, programmable Serial Communication modules (SERCOM), and Atmel’s patented Event System and SleepWalking technologies.

Those wishing to accelerate their designs will be happy to learn that the new SAM L22 Atmel Xplained Pro is now available. This professional evaluation board with an on-board debugger and standardized extension connectors is also fully supported by Atmel Studio. While the Atmel SAM L22 series is currently sampling, production release is slated for December 2015.