Category Archives: Resources

Pico Cassettes are like NES cartridges for your smartphone


One startup has introduced a small cartridge that plugs directly into your phone’s headphone jack to unlock games.


Who could forget the days of whipping out their Nintendo system, blowing into the base of their Super Mario Bros. cartridge and then slipping it into the console for some 8-bit gaming goodness? Well, one Tokyo startup has come up with a similar platform for the smartphone era called Pico Cassettes. 

The team at Beatrobo has developed a tiny game cassette that plugs into a phone’s audio jack, acting as a physical key for unlocking old-school games. These cartridges are actually an extension of the startup’s PlugAir technology, which has been used to sell physical music and video content throughout Japan. (In fact, it was on display inside the Atmel CES booth back in 2014.) While the dongles themselves don’t actually store any software, each one serves as an authentication key for unlocking content by sending out an inaudible sound to the device.

What’s nice is that, since each Pico Cassette has a unique identifier and can securely communicate with Beatrobo’s servers, players will have the ability to save their games and play each one on a number of gadgets.

“We’ve been thinking about gaming since we designed PlugAir,” Beatrobo founder and CEO Hiroshi Asaeda recently told Tech in Asia. “We thought it could be cool to plug a character-themed dongle into your phone and then unlock a special character in a game like Puzzles and Dragons. We had the idea before Nintendo announced its Amiibo figures, and we still think there’s an opportunity to make that kind of device for mobile.”

For now, Beatrobo’s first goal is to relaunch some classic NES titles like Super Mario Bros. and Pac-Man. In the process, the team is hoping that Pico Cassette can one day do the same for vintage gaming as Spotify has done for the music industry. The project is still merely at the demo stage, but you can follow along with it on Beatrobo’s website here.

[Image: The Verge]

Your touchscreen can now seamlessly transition between hover, finger and glove touch


The new maXTouch mXT641T family is the industry’s first auto-qualified self- and mutual-capacitance controller meeting the AEC-Q100 standards for high reliability in harsh environments.


Atmel has expanded its robust portfolio of automotive-qualified maXTouch controllers with the all-new mXT641T family, optimized for capacitive touchpads and touchscreens from five to 10 inches. These devices are the industry’s first auto-qualified self- and mutual-capacitance controllers meeting the AEC-Q100 standards for high reliability in harsh environments.

The maXTouch mXT641T family incorporates Atmel’s Adaptive Sensing technology to enable dynamic touch classification, a feature that automatically and intelligently switches between self- and mutual-capacitance sensing to provide users a seamless transition between a finger touch, hover or glove touch. As a result, this eliminates the need for users to manually enable ‘glove mode’ in the operating system to differentiate between hover and glove modes. Adaptive Sensing is also resistant to water and moisture and ensures superior touch performance even in these harsh conditions.
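As a rough illustration of the idea (not Atmel’s actual algorithm), dynamic touch classification can be sketched as a decision over the two sensing methods: a strong mutual-capacitance signal indicates a bare-finger touch, while a weaker coupling that still registers on self-capacitance suggests a hover or gloved touch. The function and thresholds below are hypothetical:

```python
def classify_touch(self_cap_delta, mutual_cap_delta,
                   finger_threshold=50, weak_threshold=10):
    """Classify a touch event from capacitance deltas (arbitrary counts).

    Hypothetical sketch: mutual-capacitance sensing resolves direct
    finger contact well, while self-capacitance sensing still picks up
    weaker couplings such as a hovering finger or a gloved touch.
    """
    if mutual_cap_delta >= finger_threshold:
        return "finger"            # strong mutual signal: bare-finger touch
    if self_cap_delta >= weak_threshold:
        # weak mutual but measurable self signal: hover or glove
        return "hover_or_glove"
    return "none"                  # below both thresholds: no touch
```

With a classification like this available on every scan, the controller can switch sensing modes on its own, which is what removes the need for a manual ‘glove mode’ setting.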

The latest family of devices supports stringent automotive requirements, including hover and glove support in moist and cold environments, thick lenses for better impact resistance, and single-layer shieldless sensor designs in automotive center consoles, navigation systems, radio interfaces and rear-seat entertainment systems. The single-layer shieldless sensor design eliminates additional screen layers, delivering better light transparency, which results in lower power consumption along with an overall lower system cost for the manufacturer.

“More consumers are demanding high-performance touchscreens in their vehicles with capacitive touch technology,” said Rob Valiton, Senior Vice President and General Manager, Automotive, Memory and Secure Products Business Units. “Atmel is continuing to drive more innovative, next-generation touch technologies to the automotive market and our new family of automotive-qualified maXTouch T controllers is further testament to our leadership in this space. Atmel is the only automotive-qualified touch supplier with over two decades of experience in designing, developing, and manufacturing semiconductor solutions that meet the stringent quality and reliability standards for our automotive customers.”

Interested? Production quantities of the mXT641T are now available. Meanwhile, you can learn all about the entire maXTouch lineup here.

Introducing the all-new Atmel | SMART SAMA5D2 series


The latest Atmel | SMART ARM Cortex-A5-based MPU is pushing the boundaries of performance and power for industrial IoT and wearable applications.


Exciting news — a new family of Atmel | SMART ARM Cortex-A5-based microprocessors has arrived! These MPUs deliver sub-200µA in retention mode with context preserved, 30µs ultra-fast wake-up and a new backup mode with DDR in self-refresh at only 50µA. The Atmel | SMART SAMA5D2 series provides great system integration with the addition of a complete audio subsystem, a lower pin count and an ultra-small package for space-constrained applications, and built-in PCI-level security targeting industrial Internet of Things, wearables and point of sale applications.

Expanding the Atmel SAMA5 family, the SAMA5D2 offers just the right price-to-performance ratio for applications requiring an entry-level MPU and extended industrial temperature range (-40 to 105°C ambient temperature). These MPUs are also a great migration path for designers using ARM926-based MPUs looking for higher performance and additional features including low power, higher security, DDR3 support, smaller footprint, audio, USB HSIC and Atmel’s patented SleepWalking technology.

“As a leader in ultra-low power MCU and MPU IoT solutions, we are excited to launch the new Atmel | SMART SAMA5D2 series for designers requiring a general, entry-level MPU,” explained Jacko Wilbrink, Atmel Senior Director of MPUs. “Designers for industrial IoT, wearables and POS applications are demanding more performance, lower power, smaller form factors and additional security for their next-generation applications. The Atmel SAMA5D2 is well positioned for these demanding requirements, delivering the world’s lowest power MPU, along with low-system cost and PCI level security.”

Featuring an ARM NEON engine, the new SAMA5D2 boasts core and bus clock speeds of 500MHz and 166MHz, respectively. The memory system includes a configurable 16- or 32-bit DDR interface controller, a 16-bit external bus interface (EBI), a QSPI Flash interface, ROM with a secure and non-secure boot solution, and 128kB of SRAM plus 128kB of L2 cache configurable as an SRAM extension. The user interface system for the SAMA5D2 comprises a 24-bit TFT LCD controller, an audio subsystem with fractional PLL, multiple I2S and SSC/TDM channels, a stereo class D amplifier, as well as digital microphone support.

The robust security system in the new SAMA5D2 is even equipped with the ARM TrustZone technology, along with secure boot, hardware cryptography, RSA/ECC, on-the-fly encryption/decryption on DDR and QSPI memories, tamper resistance, memory scrambling, independent watchdog, temperature, voltage and frequency monitoring and a unique ID in each device.

To support the SAMA5D2 MPUs, a free Linux distribution has been developed and published in the mainline kernel. For non-operating system users, Atmel delivers more than 40 peripheral drivers in C. Moreover, the company also collaborates with a global network of partners, including IAR, ARM, Free Electrons, Active-Semi, Micron, ISSI, Winbond, Segger, Lauterbach, FreeRTOS, Express Logic, NuttX and Sequitur Labs, that provide development tools, PMIC, memories and software solutions.

Interested? The SAMA5 Xplained Ultra kit is currently available for just $79. The board packs an embedded debugger and programmer and a wide range of compatible extension boards. Standalone programmer and debugger solutions supporting the SAMA5 family are available, too. Early samples of the SAMA5D2 are now ready, while those wishing for an ATSAMA5D2-XULT Xplained Ultra board will have to wait until October. First production quantities of the SAMA5D2 series will ship in December 2015.

SteadXP is a plug and play video stabilization device


SteadXP allows you to capture action shots without the bulk or hassle of a Steadicam or gimbal.


Unless you’re going for that “The Blair Witch Project” shaky cam look, keeping a camera steady has always been a chore for professional and leisure videographers alike. And while numerous ways to stabilize video have been introduced, they’re often too inaccessible for independent projects or the hobbyist. This is a problem that one French startup is hoping to solve with a drastically new approach.

Introducing SteadXP, a three-axis stabilization system housed in a small, affordable box. Not only does it offer a lightweight, easy-to-use package, the add-on is compatible with nearly every digital camera on the market, including your GoPro and DSLR.

By combining custom hardware with a unique software algorithm, SteadXP allows you to capture action shots without a Steadicam, gimbal or shoulder rig. Instead, the device’s built-in accelerometer and gyroscope accurately record the camera’s movements as you shoot. Once you’re done shooting, SteadXP connects to your PC, where its software stabilizes the footage and reduces all of the unwanted jitters, movements and noise.
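The post-processing step can be sketched as follows; this is a single-axis simplification with made-up numbers, not SteadXP’s actual algorithm. The recorded gyro rates are integrated into an orientation track for each frame, and the inverse rotation is then applied to the footage:

```python
def stabilize_angles(gyro_rates_dps, dt):
    """Integrate per-frame gyro readings (degrees/second, sampled every
    dt seconds) into a camera orientation track, then return the
    counter-rotation to apply to each frame so the footage appears
    steady. Single-axis toy model for illustration only."""
    angle = 0.0
    corrections = []
    for rate in gyro_rates_dps:
        angle += rate * dt          # dead-reckon the camera's rotation
        corrections.append(-angle)  # rotate the frame back by this much
    return corrections
```

A real pipeline would do this on three axes, smooth the recovered trajectory rather than lock it flat, and fuse the accelerometer data to correct gyro drift, but the principle of recording motion at capture time and compensating in software is the same.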

SteadXP will also let you choose between different trajectories optimized for your shot, letting you view the results from various angles. Beyond that, those seeking a particular rendering effect can take total control of framing with a complete set of semi-automatic features as well.

For its Kickstarter launch, SteadXP is available in two versions: one made specifically for GoPros, the other designed to fit practically any other video camera. The former weighs just 34 grams, plugs directly into the expansion port of your GoPro camera and is powered by the host battery. The latter is a bit heavier (60 grams) and requires an accessible flash mount, a stereo microphone unit and a clean video output (AV out or HDMI). Nevertheless, both models share many of the same key components, including a powerful 32-bit ARM MCU, a three-axis gyroscope and accelerometer, a microSD slot and a USB port.

Looking ahead, the team hopes to release a mobile app that will enable users to complete their workflow with a quick preview solution that validates a shot on the spot, even if at a lower resolution. What’s more, SteadXP wants to become the first gadget to automatically keep the horizon stable when filming immersive virtual reality footage. Adding this to its native rolling shutter correction technology means you’ll never get sick again watching VR videos!

Intrigued? Head over to its Kickstarter page, where the SteadXP team is currently seeking $167,715. Delivery is expected to begin in March 2016.

Enhance Raspberry Pi security with ZymKey


In this blog, Zymbit’s Scott Miller addresses some of the missing parts in the Raspberry Pi security equation. 


Raspberry Pi is an awesome platform that offers people access to a full-fledged portable computing and Linux development environment. The board was originally designed for education, but has since been embedded into countless ‘real world’ applications that require remote access and a higher standard of security. One of the most notable omissions, if not the most notable, is the lack of a robust hardware-based security solution.

At this point, a number of people would stop here and say, “Scott, you can do security on RPi in software just fine with OpenSSL/SSH and libgcrypt. And especially with the Model 2, there are tons of CPU cycles left over.” Performance is not the primary concern when we think about security; the highest priority is to address the issue of “hackability,” particularly through remote access.

What do you mean by “hackability?”

Hackability is a term that refers to the ease by which an attacker can:

  • take over a system;
  • insert misleading or false data in a data stream;
  • decrypt and view confidential data.

Perhaps the easiest way to accomplish any or all of the aforementioned goals is for the attacker to locate material relating to security keys. In other words, if an attacker can gain access to your secret keys, they can do all of the above.

Which security features are lacking from Raspberry Pi?

Aside from not having hardware-based security engines to do the heavy lifting, there’s no way to secure shared keys for symmetric cryptography or private keys for asymmetric cryptography.

Because all of your code and data live on a single SD card, you are exposed. Meaning, someone can simply remove the SD card, pop it into a PC and have possession of the keys and other sensitive material. This is particularly true when the device is remote and outside of your physical control. Even if you somehow try to obfuscate the keys, you are still not completely safe. Someone with enough motivation could reverse engineer or work around your scheme.

The best solution for protecting crypto keys is to ensure the secret key material can only be read by standalone crypto engines that run independently from the core application CPU. This basic feature is lacking in the Raspberry Pi.

Securing Raspberry Pi with silicon and software

With this in mind, Zymbit has decided to extract some of the core security features from the Zymbit.Orange and combine them into a tiny device that embeds onto the Raspberry Pi, providing seamless integration with Zymbit’s remote device management console. Meet the ZymKey!

ZymKey for secure remote device management

ZymKey brings together silicon, firmware drivers and software services into a coherent package that’s compatible with Zymbit’s secure IoT platform. This enables a Raspberry Pi to be accessed and managed remotely, firmware to be upgraded and access rights to be administered.

Secure software services

Zymbit’s Connect libraries enhance the security and utility of Raspberry Pi in the following ways:

  • Add message authentication to egress messages to the Zymbit cloud by attaching a digital signature, which proves that the data originated from a specific Raspberry Pi/Key combination (meaning that it was not forged or substituted along the way).
  • Assist in providing security certificates to the Zymbit cloud.
  • Authenticate security certificates from the Zymbit cloud.
  • Optionally help to encrypt/decrypt the content of messages to/from the Zymbit cloud.
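As a rough sketch of the first point, here is how a per-device key can bind a message to a specific device. This uses HMAC-SHA256 from Python’s standard library and is purely illustrative, not Zymbit’s actual wire format; on the real ZymKey the key would stay inside the crypto element rather than sitting in a variable:

```python
import hashlib
import hmac

# Hypothetical per-device secret: on real hardware this never leaves
# the crypto chip; here it is a plain variable for illustration.
DEVICE_KEY = b"per-device-secret"
DEVICE_ID = "rpi-0001"

def sign_message(payload: bytes) -> dict:
    """Device side: attach a signature binding payload to this device."""
    tag = hmac.new(DEVICE_KEY, DEVICE_ID.encode() + payload,
                   hashlib.sha256).hexdigest()
    return {"device": DEVICE_ID, "payload": payload, "sig": tag}

def verify_message(msg: dict, key: bytes) -> bool:
    """Server side: recompute the tag and compare in constant time."""
    expected = hmac.new(key, msg["device"].encode() + msg["payload"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["sig"])
```

If an attacker alters the payload or substitutes another device’s ID, the recomputed tag no longer matches and verification fails, which is exactly the forgery-and-substitution protection described above.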

Data that is encrypted/authenticated through ZymKey will be stored in this encrypted/authenticated form, thereby preserving the privacy and integrity of the data.

In addition to its standard attributes, developers can access lower level features through secure software services, including general cryptography (SHA-256 MAC and HMAC with secure keys, public key encryption/decryption), password validation, and ‘fingerprint’ services that bind together specific hardware configurations.

Stealth hardware

ZymKey’s low-profile hardware plugs directly into the Pi’s expansion header while still allowing Pi-Plates to be added on top. Lightweight firmware drivers run on the RPi core and interface with software services through zymbit.connect. It should also be noted that a USB device is in the works for other Linux boards.

At the heart of the ZymKey is the newly released ATECC508A CryptoAuthentication IC. Among some of its notable specs are:

  • ECC asymmetric encryption engine
  • SHA digest engine
  • Random number generator
  • Unique 72-bit ID
  • Tamper prevention
  • Secure memory for storing:
    • Sensitive key material – an important thing to point out is that private keys are unreadable by the outside world and, as stated above, are only readable by the crypto engine.
    • X.509 security certificates.
    • Temporary items: nonces, random numbers, ephemeral keys
  • Optional encryption of transmitted data across the I2C bus for times when sensitive material must be exchanged between the Raspberry Pi and the ATECC508A

Life without ZymKey

Raspberry Pi can be used with the Zymbit Connect service without the ZymKey; however, the addition of ZymKey ensures that communications with Zymbit services are secured to a higher standard. Private keys are unreadable by the outside world and usable only by the ATECC508A, thus making it difficult (if not practically impossible) to compromise.

Each ZymKey has a unique set of keys. So if, on the off chance, a key is compromised, only that key is affected. Simply stated, if you have several Raspberry Pi/ZymKey pairs deployed and one is compromised, the others will still be secure.

Once again, it is certainly possible to achieve the above goals purely through software (OpenSSL/libgcrypt/libcrypto). However, especially regarding encryption paths, without ZymKey’s secure storage, key material must be stored on the Raspberry Pi’s SD card, exposing private keys for anyone to exploit.

Stay tuned! The ZymKey will be making its debut on Kickstarter in the coming days.

Parse for IoT launches four new SDKs


Parse for IoT has expanded its SDK lineup with four new kits built with Atmel and other industry leaders.


The Internet of Things is one of the most exciting new platforms for app development, especially as more and more people interact with connected devices every day. But it also poses a host of challenges for developers, as they must wrestle with the complex task of maintaining a backend with a whole new set of constraints. Many IoT devices also need to be personalized and paired with a mobile companion app. Cognizant of this, the Parse team is striving to make it simpler.

At F8 this year, Parse for IoT was announced — an official new line of SDKs for connected devices, starting with an SDK targeted for the Arduino Yún (ATmega32U4). Now, Parse has shared that they are expanding their lineup with four new SDKs built with Atmel, Broadcom, Intel and TI. This will make it easier than ever to use Parse with more types of hardware and a broader range of connected devices. For example, you can build an app for the Atmel | SMART SAM D21 and WINC1500 — and connect it to the Parse cloud in minutes, with nothing more than a few lines of code.

“We’ve been excited to see the creative and innovative things our developer community has built since we first launched Parse for IoT at F8. Already, hundreds of apps for connected devices have been created with the new SDKs,” explains Parse software engineer Damian Kowalewski. “Our tools have been used to build exciting and diverse products like a farm-to-table growing system that lets farmers remotely control their equipment with an app (Freight Farms); a smart wireless HiFi system that syncs music, lighting and more (Musaic); and even a smart BBQ smoker that can sense when meat is perfectly done (Trignis). Here at Parse, we had fun building a connected car and a one-click order button. And we’ve heard that our SDKs are even being used as teaching tools in several college courses.”

As to what’s ahead, this lies in the hands and minds of Makers. From a garage hacker’s weekend project to a production-ready connected product, manufactured at scale — Parse can power them all. Ready to get started? You can download the new SDKs and access QuickStart guides here.

How to prevent execution surprises for Cortex-M7 MCU


Software development carries heavy weight, accounting for 60% to 70% of the overall project cost.


The ARM Cortex-A series processor cores (A57, A53) are well known in high-performance market segments, like application processing for smartphones, set-top boxes and networking. If you look at the electronics market, you realize that multiple applications are cost sensitive and don’t need such a high-performance processor core. We may call it the embedded market, even if this definition is vague. The ARM Cortex-M family has been developed to address these numerous market segments, starting with the Cortex-M0 for the lowest cost, the Cortex-M3 for the best power/performance balance, and the Cortex-M4 for applications requiring digital signal processing (DSP) capabilities.

For the audio, voice control, object recognition and complex sensor fusion of automotive and higher-end Internet of Things sensing, where complex algorithms are needed for rich audio and visual capabilities, the Cortex-M7 is required. ARM offers the processor core as well as the Tightly Coupled Memory (TCM) architecture, but ARM licensees like Atmel have to implement the memories in such a way that the user can take full benefit of the M7 core to meet system performance and latency goals.

Figure 1. The TCM interface provides a single 64-bit instruction port and two 32-bit data ports.


In a 65nm embedded Flash process device, the Cortex-M7 can achieve a 1500 CoreMark score while running at 300 MHz, offering top-class DSP performance: a double-precision floating-point unit and a dual-issue instruction pipeline. But algorithms like FIR, FFT or biquad filters need to run as deterministically as possible for real-time response or seamless audio and video performance. How do you best select and implement the memories needed to support such performance? If you choose Flash, caching will be required (as Flash is too slow), introducing the risk of cache misses. SRAM technology is a better choice, since it can easily be embedded on-chip and permits random access at the speed of the processor.

Peripheral data buffers implemented in general-purpose system SRAM are typically loaded by DMA transfers from system peripherals. The ability to load from a number of possible sources, however, raises the possibility of unnecessary delays and conflicts when multiple DMAs try to access the memory at the same time. In a typical example, we might have three different entities vying for access to the SRAM: the processor (64-bit access, requesting 128 bits in this example) and two separate peripheral DMA requests (DMA0 and DMA1, 32-bit access each). Atmel gets around this issue by organizing the SRAM into several banks, as described in this picture:

Figure 2. By organizing the SRAM into banks, multiple DMA bursts can occur simultaneously with minimal latency.

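A toy model (not the actual SAM bus-matrix arbitration) helps show why banking works: if addresses are interleaved across banks word by word, simultaneous requests from different masters usually land in different banks and can all be granted in the same cycle. Only requests that collide on the same bank must wait:

```python
def schedule_cycle(requests, num_banks=4, word_bytes=4):
    """Grant at most one access per SRAM bank per cycle.

    `requests` is a list of (master, address) pairs in priority order.
    Addresses are mapped to banks word-interleaved, so consecutive
    words land in different banks; requests mapping to distinct banks
    proceed in parallel, while same-bank requests stall one cycle.
    Illustrative model only; bank count and mapping are hypothetical.
    """
    granted, stalled, busy = [], [], set()
    for master, addr in requests:
        bank = (addr // word_bytes) % num_banks  # word-interleaved mapping
        if bank in busy:
            stalled.append(master)   # bank conflict: retry next cycle
        else:
            busy.add(bank)
            granted.append(master)
    return granted, stalled
```

With the CPU and two DMA streams walking through adjacent buffers, the interleaving spreads their accesses over all four banks, so in the common case nobody stalls and latency stays deterministic.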

For a chip maker designing microcontrollers, licensing an ARM Cortex-M processor core provides numerous advantages. The very first is the ubiquity of the ARM core architecture, adopted in multiple market segments to support a variety of applications. If this chip maker wants to design in a new customer, the probability that the OEM has already used an ARM-based MCU is very high, and it’s very important for this OEM to be able to reuse existing code (software development carries heavy weight, accounting for 60% to 70% of the overall project cost). But this ubiquity generates a challenge: how do you differentiate from the competition when competitors can license exactly the same processor core?

Selecting a more aggressive technology node to provide better performance at lower cost is one option, but this advantage can disappear as soon as the competition also moves to that node. Integrating a larger amount of Flash is another option, which is very efficient if the product is designed on a technology that keeps the pricing low enough.

If the chip maker has designed on an aggressive technology node for higher performance and offers a larger amount of Flash than the competition, that may be enough differentiation. Complementing this with a smarter memory architecture, unencumbered by cache misses, interrupts, context swaps and other execution surprises that work against deterministic timing, brings even stronger differentiation.

If you want to more completely understand how Atmel has designed this SMART memory architecture for the Cortex-M7, I encourage you to read this white paper from Jacko Wilbrink and Lionel Perdigon entitled “Run Blazingly Fast Algorithms with Cortex-M7 Tightly Coupled Memories.” (You will have to register.) This paper describes MCUs integrating SRAM organized into four banks that can be used as general SRAM and for TCM, showing one example of a Cortex-M7 MCU being implemented in the Atmel | SMART SAM S70, SAM E70 and SAM V70/V71 families.


This post has been republished with permission from SemiWiki.com, where Eric Esteve is a principal blogger, as well as one of the four founding members of the site. This blog was originally shared on August 6, 2015.

“It’s not a feature, it’s a bug”


Embedded systems no longer need to be a ‘black box’ that leaves engineers guessing what may be happening, Percepio AB CEO Dr. Johan Kraft explains in his latest guest blog post.


Anyone involved with software development will have most likely heard (and perhaps even said) the phrase “it’s not a bug, it’s a feature” at some point, and while its origins remain a mystery, its sentiment is clear — it’s a bug that we haven’t seen before.

Intermittent ‘features’ in an embedded system can originate in either the software or hardware domain, often only becoming evident when certain conditions collide in both. In the hardware domain, the timings involved may be fractions of a nanosecond; where the logic is accessible, such as on an address line or data bus, there are instruments that can operate at high enough sample rates to let engineers visualize and verify such ‘glitches.’ In the software domain, this becomes much more challenging.

Sequential Processing

While parallel processing is being rapidly adopted across all applications, single-processor systems remain common in embedded systems, thanks partly to the continued increases in the performance of microcontroller cores. Embedded MCUs are now capable of executing a range of increasingly sophisticated Real-Time Operating Systems (RTOS), often including the ability to run various communication protocols for both wired and wireless interfaces.

Whether in a single- or multi-processing system, combining these tasks with the embedded system’s main application, written by the engineering team, can make embedded software builds large, complex and difficult to fault-find, particularly when visibility into the code’s execution is limited. It can also lead to the dreaded intermittent fault which, if part of the system’s operation is ‘hidden’, can make solving them even more challenging.

A typical example may be an unexplained delay in a scheduled task. Of course, an RTOS is intended to guarantee specific tasks happen at specific times but this can be dependent on the task’s priority and what else may be happening at any time. In one real-world example, where a sensor needed to be sampled every 5ms, it was found that occasionally the delay between samples reached 6.5ms, with no simple explanation as to the cause. In another example, a customer reported that their system exhibited random resets; the suspected cause was that the watchdog was expiring before it was serviced, but how could this be checked? In yet another example, a system running a TCP/IP stack showed slower response times to network requests after minor changes in the code, for no obvious reason.

These are typical examples of how embedded systems running complex software can behave in unforeseen ways, leaving engineering teams speculating on the causes and attempting to solve the problems with only empirical results from which to assess their efforts. In the case of intermittent faults or system performance fluctuations, this is clearly an inefficient and unreliable development method.

Trace Tools

The use of logging software embedded in a build in order to record certain actions isn’t new, of course, and it can offer a significantly improved level of visibility into a system. However, while the data generated by such trace software is undoubtedly valuable, exploiting that value isn’t always simple.

Analyzing trace data and visually rendering it in various ways is the key function of Percepio’s Tracealyzer tools. It offers visualization at many levels, ranging from an event list to high-level dependency graphs and advanced statistics.

Over 20 different graphical views are provided, showing different aspects of the software’s execution that are unavailable with debuggers alone, and as such it complements existing software debug tools in a way that is becoming essential in today’s complex embedded systems. It supports an increasing range of target operating systems.

Figure 1(a): It appears that the ControlTask may be disabling interrupts.


The main view in Tracealyzer, as shown in Figure 1(a) and 1(b), is a vertical timeline visualizing the execution of tasks/threads and interrupts. Other logged events, such as system calls, are displayed as annotations in this timeline, using horizontal colour-coded text labels. Several other timeline views are provided using horizontal orientation, and all horizontal views can be combined on a common horizontal timeline. While much important data is created by the operating system’s kernel, developers can also extend the tracing with User Events, which allow any event or data in a user’s application to be logged. Logging one is similar to calling the classic ‘printf’ C library function, but much faster, as the actual formatting is handled in the host-side application; User Events can therefore also be used in time-critical code such as interrupt handlers. And, of course, they can be correlated with other kernel-based events.
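The design idea behind such fast logging can be sketched as follows; this is an illustrative model, not Percepio’s actual API. The target stores only a handle to the format string plus the raw argument values and a timestamp, and the host tool expands the text later during analysis:

```python
import time

class TraceBuffer:
    """Target-side event recorder that defers formatting to the host.

    Instead of formatting text at log time (as printf would), we store
    a handle to a registered format string plus the raw argument
    values; the host tool renders readable text when the trace is
    analyzed. This keeps the log call cheap enough for an ISR."""
    def __init__(self):
        self.formats = []   # registered format strings
        self.events = []    # (timestamp, format_handle, args)

    def register(self, fmt):
        """Register a format string once, up front; returns a handle."""
        self.formats.append(fmt)
        return len(self.formats) - 1   # handle is just an index

    def log(self, handle, *args):
        """Hot path: append a small tuple, no string work at all."""
        self.events.append((time.monotonic(), handle, args))

    def render(self):
        """Host side: expand each event into readable text."""
        return [self.formats[h] % args for _, h, args in self.events]
```

For example, a task could register `"watchdog margin = %d ms"` at startup and then call `log(handle, margin)` on every watchdog reset, mirroring the Watchdog example discussed below.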

Figure 1(b): By changing the way ControlTask protects a critical section, SamplerTask is able to run as intended.


Tracealyzer understands the general meaning of many kernel calls, for instance locking a Mutex or writing to a message queue. This allows Tracealyzer to perform deep analysis to connect related events and visualize dependencies, e.g., which tasks communicate (see the communication flow graph, shown in Figure 3). This allows developers to quickly understand what’s really going on inside their system.

Insights

Returning to the first example, where a scheduled task was being inexplicably delayed intermittently, Tracealyzer was used to graphically show the task in question, time-correlated with other tasks. By invoking an exploded view of the task of interest, it was found that a lower priority task was incorrectly blocking the primary task from executing. It was discovered that the second task was disabling interrupts to protect a critical section unrelated to the primary task, which blocked the operating system scheduling. After changing the second task to using a Mutex instead, the primary task was able to meet its timing requirements. Figure 1(a) shows the SamplerTask being delayed by the (lower priority) ControlTask before the bug fix; Figure 1(b) confirms that SamplerTask is now occurring every 5ms as intended.

In the second example, User Events were used to not only record when the Watchdog was reset or when it expired, but also to log the remaining Watchdog timer value, thereby showing the time left in the Watchdog timer when it is reset. By inspecting the logged system calls it was found that the task in question did not only reset the Watchdog timer; it also posted a message to another task using a (fixed-size) message queue. The Watchdog resets seemed to occur while the Watchdog task was blocked by this message posting. Once realised, the question then became ‘why’. By visually exploring the operations on this message queue using the Kernel Object History view, it became clear that the message queue sometimes becomes full, as suspected. By correlating a view of the CPU load against how the Watchdog timer margin varied over time, as shown in Figure 2, it was found that Fixed Priority Scheduling was allowing a medium-priority task (ServerTask) to use so much CPU time that the message queue wasn’t always being read. Instead, it became full, leading to a Watchdog reset. The solution was in this case to modify the task priorities.

Figure 2: The CPU Load graph, correlated to the Watchdog Timer User Event, gives valuable insights.

In the last example, where a software modification caused increased response times to network requests, the Communication Flow view (Figure 3) showed that one particular task, Logger, was receiving frequent but single messages with diagnostic data to be written to a device file system, each causing a context switch. By modifying the task priorities, the messages were left to accumulate in the queue until the network request had finished, and were then handled in a batch. This drastically reduced the number of context switches during the handling of network requests, improving overall system responsiveness.

Figure 3: The Communication Flow reveals 5 tasks sending messages to Logger.

Conclusion

The complexity of embedded software is increasing rapidly, creating demand for better development tools. Runtime data can be recorded in various ways, but making sense of that data is the hard part, and that is where data visualization tools such as Tracealyzer come in.

Many companies have already used the tool in its many modes to discover what is really going on in their runtime systems. Some Tracealyzer users even include the recorder in production code, allowing them to gather invaluable data from real systems running in the field.

Embedded systems need no longer be a ‘black box’ that leaves engineers guessing at what may be happening; powerful visualization tools now turn that black box into an open one.

4 design tips for AVB in-car infotainment


AVB is clearly the choice of several automotive OEMs, says Gordon Bechtel, CTO, Media Systems, Harman Connected Services.


Audio Video Bridging (AVB) is a well-established standard for in-car infotainment, and there is a significant amount of activity for specifying and developing AVB solutions in automobiles. The primary use case for AVB is interconnecting all devices in a vehicle’s infotainment system. That includes the head unit, rear-seat entertainment systems, telematics unit, amplifier, central audio processor, as well as rear-, side- and front-view cameras.

The fact that these units are all interconnected with a common, standards-based technology that is certified by an independent market group — AVnu — is a brand new step for automotive OEMs. The AVnu Alliance facilitates a certified networking ecosystem for AVB products built on the Ethernet networking standard.

Figure 1 - AVB is an established technology for in-car infotainment

According to Gordon Bechtel, CTO, Media Systems, Harman Connected Services, AVB is clearly the choice of several automotive OEMs. His group at Harman develops core AVB stacks that can be ported into car infotainment products. Bechtel says that AVB is a big area of focus for Harman.

AVB Design Considerations

Harman Connected Services uses Atmel’s SAM V71 microcontrollers as communications co-processors working on the same circuit board with larger Linux-based application processors. The software firm writes code for the customized platforms that automotive OEMs need when they go beyond common reference designs.

Based on his experience with automotive infotainment systems, Bechtel has outlined the following AVB design dos and don’ts for automotive products:

1. Sub-microsecond accuracy: Every AVB element on the network is synchronized to the same accurate clock. The Ethernet hardware should feature a timestamping unit to ensure packets are presented in the right order. Here, Bechtel mentioned the Atmel | SMART SAM V71 MCU, whose screening registers provide advanced hardware filtering of inbound packets for routing to the correct receive queues.

2. Low latency: There is a lot of data involved in AVB, both in terms of bit rate and packet rate. AVB achieves low latency through bandwidth reservations for traffic classes, which, in turn, facilitate faster transfer of higher-priority packets. Design engineers should carefully shape the data to avoid packet bottlenecks as well as data overflow.

Figure 2 - Bechtel

Bechtel once more pointed to Atmel’s SAM V71 microcontrollers, which provide two priority queues with credit-based shaper (CBS) support, enabling hardware traffic shaping compliant with the 802.1Qav (FQTSS) specification for AVB.

3. 1588 timestamp unit: IEEE 1588 timestamping hardware is needed for correct and accurate IEEE 802.1AS support, which AVB requires for precision clock synchronization. IEEE 802.1AS carries out the time synchronization and is synonymous with the generalized Precision Time Protocol, or gPTP.

A timestamp compare unit and a large number of precision timer/counters are key to the synchronization AVB needs for listener presentation times and talker transmission rates, as well as for media clock recovery.

4. Tightly coupled memory (TCM): TCM is a configurable, high-performance memory system that allows zero-wait-state CPU access to dedicated data and instruction memory blocks. Careful use of TCM enables much more efficient data transfer, which is especially important for AVB Class A streams.

It’s worth noting that MCUs based on ARM Cortex-M7 architecture have added the TCM capability for fast and deterministic code execution. TCM is a key enabler in running audio and video streams in a controlled and timely manner.

AVB and Cortex-M7 MCUs

The Cortex-M7 is a high-performance core with almost double the power efficiency of the older Cortex-M4. It features a six-stage superscalar pipeline with branch prediction, while the M4 has a three-stage pipeline. Bechtel noted that these M7 features translate into more highly optimized code execution, which is important for Class A audio implementations with lower power consumption.

Again, Bechtel referred to the SAM V71 MCUs — which are based on the Cortex-M7 architecture — as particularly well suited for the smaller ECUs. “Rear-view cameras and power amplifiers are good examples where the V71 microcontroller would be a good fit,” he said. “Moreover, the V71 MCUs can meet the quick startup requirements needed by automotive OEMs.”

Figure 3 - Atmel's V71 is an M7 chip for Ethernet AVB networking and audio processing

Infotainment connectivity is based on Ethernet, yet most of the time the main processor does not integrate Ethernet AVB, so an M7 microcontroller like the V71 brings this capability to the design. In the head unit it drives the faceplate; in the telematics control unit it contains the modem used to make calls, so echo cancellation, which requires DSP capability, is a must.

Take the audio amplifier, for instance, which receives a specific audio format that has to be converted, filtered and modulated to match the requirements of each specific speaker in the car. This means infotainment system designers need both Ethernet and DSP capability at the same time, which Cortex-M7 based chips like the V71 provide at low power and low cost.

BitCloud ZigBee PRO SDK achieves Golden Unit status


Compatible with the Atmel | SMART SAM R21 and ATmega256RFR2, the BitCloud ZigBee PRO Software Development Kit has achieved Golden Unit status.


Atmel has announced that its BitCloud ZigBee PRO Software Development Kit (SDK) has achieved the prestigious Golden Unit status for the ZigBee PRO R21 standard. As an approved Golden Unit, the Atmel BitCloud solution will be used by ZigBee test houses to verify standards compliance for all future ZigBee 3.0 products. This helps guarantee interoperability for customers designing the latest connected lighting, security and comfort-control products for smart home applications.

banner-ZigBit-Modules-496x190

With improved security, interoperability and ease of use, the Atmel BitCloud SDK provides a comprehensive set of tools to quickly design and develop wireless products compliant with the ZigBee LightLink and ZigBee Home Automation profiles, as well as the upcoming ZigBee 3.0 standard. The BitCloud SDK includes full-featured reference applications, ZigBee PRO stack libraries and APIs, and user documentation, and it implements a reliable, scalable and secure wireless solution that supports large mesh networks of hundreds of devices, optimized for ultra-low power consumption with up to 15 years of battery life.

The BitCloud ZigBee PRO SDK fully supports Atmel | SMART SAM R21 devices, single-chip solutions integrating an Atmel | SMART ARM Cortex-M0+ based MCU with a high-performance IEEE 802.15.4 RF transceiver, available as standalone components or as production-ready certified modules. BitCloud is also compatible with the AVR ATmega256RFR2 wireless MCU, an ideal hardware platform delivering the industry’s lowest power consumption at 12.5 mA in active receive mode, combined with a receiver sensitivity of -101 dBm.

WC_256RFR2

“Intelligence, wireless connectivity and security are key elements to enable the anticipated growth of the Internet of Things market,” says Pierre Roux, Atmel director of wireless solutions. “Achieving the prestigious Golden Unit status for our BitCloud SDK assures designers that our wireless solutions are world class and will cater to next-generation solutions for this smart, connected world. We are excited to achieve this certification again.”