Tag Archives: 32-bit ARM Cortex-M7

Digital audio recording “you” with quality and ease


Instamic wants to do for microphones what the GoPro did for cameras. 


Many analog years ago, digitally recorded audio won the popularity contest. Nowadays, whether it’s from your mobile phone, infotainment system or personal audio device, every sound you hear comes from digitally encoded bits.

Digital audio has eliminated analog audio’s distortions and noise-related problems. Quite simply, people are drawn to recorded audio — music producers, creative artists and everyday consumers alike — and for all of them, high-quality audio conveys the clarity of the recorded moment. In today’s user interfaces, from media players and podcasts to tablets, streams of bits carry a world of information, audio included, readily available at every reach of a finger or ear.

The Miracle of Sound All Around Us

More and more, we are seeing the prolific expansion and seamless integration of the audio stack. What does this all mean, though? Screen time still captivates us, while voice recognition and audio are blended into the user pathways of UX. Popular apps like Evernote, as well as iOS and Android themselves, now adopt audio recording natively within their interfaces. These apps take voice input and use it to drive UX — cleverly weaving together experience, intention, outcome, commenting and moments.

Almost every sound you hear coming out of a speaker is digitally sampled and encoded. As more and more moments are recorded, whether as video or audio, we are seeing an increasing number of unique use cases for capturing a particular moment. These moments offer an on-demand periscope — referencing a historic timeline of ripples in our experience, memory, and journey through work, life, play, and what matters most to us.

referencing a historic timeline of ripples in our experience

For much of our listening pleasure, sound is now digital — whether it’s on your smartphone, computer, radio, television, home theater or in a concert hall. Across many electronic devices, audio recording is integral to advanced features that enhance old ways of doing things. Just look at visual voicemail, and how recording voicemails took the next leap once a better UX and advanced playback were offered: visual and digital voice recording, meshed with non-linear playback, took voice messages to the next level. I’d go so far as to point out that most people never hear analog recordings anymore.

Unless you’re a musician, or live with one, virtually all the music you hear live or recorded is digital. We now see the integration of audio and voice recording into all forms of day-to-day activity. Audio with depth is helping bring back some of those analog qualities: the shape and length of a sound wave can be captured more faithfully with greater bit depth and sample rate. With 24-bit embedded designs and digital audio recordings, we can achieve sound quality closer to what our ear can register and decode, bringing forth the finer granular details of high fidelity. But it’s not all about fidelity. The need to record audio — of ourselves or of our surrounding interactions — serves many use cases: a musician during the creative process, a senior suffering the stages of memory loss, a student cataloging lectures, an author recalling and commenting on plots during the writing process, and so on.

lectures and applications for audio recording
Why does bit depth matter, you ask? Bit depth refers to the number of bits used to represent each sample when a device captures audio. Below is a graph showing how bit depth translates into levels: there are 65,536 possible levels for 16-bit audio, while 24-bit offers 16,777,216. Now, let’s see how the depth is explained. The captured audio can be sliced into partitions at any moment in time, as shown in this graph. Every bit added counts toward greater resolution: the deeper the bit depth, the more levels are available to describe the signal, layering richer context onto the profile of the audio being recorded. Altogether, this describes a segment of audio frozen in a single slice or moment of time.

The second integral “high quality” factor is called sample rate. Together, bit depth and sample rate complete the higher-resolution audio model. The sample rate represents the number of times your audio is measured or “sampled” per second. For the typical CD standard, the sample rate is 44.1kHz, or 44,100 slices every second.
bit depth and sample rate explained
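To make that arithmetic concrete, here is a minimal C sketch — purely illustrative, not Instamic firmware — that computes the number of quantization levels for a given bit depth and the raw, uncompressed data rate for a given sample rate and channel count:

```c
#include <stdint.h>
#include <stdio.h>

/* Number of discrete amplitude levels for a given bit depth: 2^bits. */
static uint64_t quantization_levels(unsigned bits)
{
    return 1ULL << bits;
}

/* Raw (uncompressed) PCM data rate in bytes per second. */
static uint64_t pcm_bytes_per_second(unsigned sample_rate_hz, unsigned bits, unsigned channels)
{
    return (uint64_t)sample_rate_hz * (bits / 8u) * channels;
}

int main(void)
{
    printf("16-bit levels: %llu\n", (unsigned long long)quantization_levels(16)); /* 65,536 */
    printf("24-bit levels: %llu\n", (unsigned long long)quantization_levels(24)); /* 16,777,216 */

    /* 96kHz/24-bit mono, the high-resolution mode quoted for Instamic. */
    printf("96kHz/24-bit mono: %llu bytes/s\n",
           (unsigned long long)pcm_bytes_per_second(96000, 24, 1)); /* 288,000 bytes/s */
    return 0;
}
```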

Digital audio eliminated analog audio’s distortions and noise-related problems. In that sense digital is “perfect.” When analog recordings are copied, there are significant generation-to-generation losses, added distortion and noise; digital-to-digital copies are perfect clones. Some recording engineers believe digital doesn’t have a sound per se, and that it’s a completely transparent recording medium. Analog, with its distortions, noise and speed variations, imparts its own sound. Arguably, it is not perfect. This is why high-resolution audio and a form factor built for ease and usability go hand in hand.

As to whether digital sounds better than analog, that’s almost a moot point. The very nature of digital audio — the ability to slice it into segments, layer it, import it into other applications, and enhance or analyze its waveforms — has been remarkable and pivotal for many industries. In fact, digital audio is having a significant impact where distinct waveform shapes can be extracted to help identify voices and drive intelligent voice recognition. These encoding factors, coupled with deep learning, are ushering in a new era of digital interpretation and recognition.

Instamic-every-day-use
Setting aside the analog-versus-digital debate, Instamic founder Michelle Baggio moved ahead with the idea and recently launched a well-funded Indiegogo campaign for a new audio recorder designed to revive the instant usability and simplicity that has been squeezed out of digital recording. Thoughts and experiences can now be easily captured as a series of moments, and it is for this very reason that one can traverse them, experience by episode, stitching together what’s most meaningful into a mosaic of audio recordings that serves a purpose. Whether the application is medical, academic, business, music or film — the list goes on and on — even someone coping with memory impairment can find good use for Instamic.

Instamic isn’t just an ordinary microphone. It happens to be the smartest, smallest and most affordable digital audio recorder that is also easy to operate, pairing usability with the smartphone. It attracted over 2,500 backers and crowdfunding exceeding 539% of its original campaign goal. With that many backers and goals funded beyond expectations, there are clearly market and application factors pointing toward wider acceptance and adoption of audio recording tools like this. Instamic can function as the day-to-day voice logging tool of choice.

go-pro-likeness-recording-revolution
We have now leaped into the “Recording Revolution.” GoPro drove the video revolution, opening up a periscope into countless never-before-seen vantage points. Previously, only a handful of people had access to those views; then the adventures and passions of people around the world were shared for all to experience, giving an eagle’s eye into moments most of us would never otherwise have seen. The recording revolution is upon us and will grow. Instamic is a mic built and made for everyone. Not only does this recording device capture 24-bit audio, its sample rate matches industry high-resolution standards at 96kHz. That’s right — based on the sampling description above, that puts the recording at a high resolution of 96,000 slices of audio sampled per second.

Instamic Pro and Instamic

Instamic records at 96kHz/24-bit in both mono and dual-mono, while its Pro version even boasts stereo recording. This simple but advanced digital recorder features an omnidirectional polar pattern, which records and performs ideally given its small form factor. A peek inside reveals an architecture including minimal-phase digital filtering, zero-feedback circuitry, one of the “best sounding” DAC-enabled chips available — with a dual 2Msps, 12-bit DAC and analog comparator — and an all-discrete output buffer.

Instamic has the ideal form factor — it’s tiny and can be attached to virtually anything. As a standalone recorder, given the right price, it can very well replace conventional handheld and lavalier microphones. Packed with mounting options (magnet, velcro and tape) and a quick-release clip, the super-portable gadget can register hours of 48kHz/24-bit sound in mono and dual-mono mode, as well as in stereo with its Pro variant. A built-in, rechargeable battery allows for roughly four hours of uncompressed audio recording, with duration varying slightly depending on charge time, temperature and storage conditions.

Instamic has a sensitive frequency response of 50 to 18,000Hz. Try doing this with current smartphones or other devices, and their batteries will drain quickly. Instamic crams big recording power into a small, highly usable form factor that can be tucked into anything. Simplicity always seems to rule the day, especially for electronic devices looking to shape or better the way we do things on a day-to-day basis. What the GoPro did for cameras, this gadget wants to do for microphones.

What the GoPro did for cameras, this gadget wants to do for microphones

Given its compact design and minimal setup, Instamic is the perfect accessory for filmmakers, journalists and musicians, who will no longer need to lug around bulky, obtrusive equipment. Eliminating the need for cables, the wearable unit connects to its accompanying app over Bluetooth and enables users to control it remotely within a 30-foot radius, as well as to record with multiple Instamics simultaneously. What’s more, the mic has been designed with the latest Atmel | SMART SAM S70 MCU and carries 2GB to 8GB of internal memory.

Turning on the pocket-sized device requires a single tap of its logo, while another touch begins the recording. From there, Instamic automatically adjusts the gain on its own during the first 10 seconds and ensures that it remains at the optimal level. Tap and hold for a second and it will stop. If paired with a smartphone, Instamic can also be controlled through its app. When a user needs to transfer a recording to their desktop, the microUSB charging port doubles as the file transfer interface. Instamic comes in two models: Pro and Go. The Pro version’s waterproof, black shell makes it a suitable instrument for indoor filming sets, darker environments and even five feet of water. Meanwhile, the splash-resistant, white exterior of the Go remains inconspicuous in most bright, day-lit settings. Both can camouflage easily with custom design covers and can handle the windiest conditions wearing the Instamic Windshield.

Easy USB Charging and 4 hour use and recording
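The automatic gain adjustment described above can be pictured as a simple peak-tracking loop over a calibration window. The sketch below is purely illustrative — the names, thresholds and structure are assumptions, not Instamic’s actual firmware:

```c
#include <math.h>

/* Illustrative auto-gain: track the peak level over a calibration window
 * (e.g. the first 10 seconds of a recording) and scale the gain so the peak
 * sits near a target level. All values here are assumptions for illustration. */
typedef struct {
    float peak;        /* largest absolute sample seen so far */
    float gain;        /* current linear gain applied to the input */
    unsigned samples;  /* samples processed so far */
    unsigned window;   /* calibration window length in samples */
} agc_t;

static void agc_init(agc_t *a, unsigned sample_rate_hz, unsigned seconds)
{
    a->peak = 0.0f;
    a->gain = 1.0f;
    a->samples = 0;
    a->window = sample_rate_hz * seconds;
}

static float agc_process(agc_t *a, float in)
{
    const float target = 0.7f;               /* leave headroom below full scale */

    if (a->samples < a->window) {
        float mag = fabsf(in);
        if (mag > a->peak)
            a->peak = mag;
        a->samples++;
        if (a->samples == a->window && a->peak > 0.0f)
            a->gain = target / a->peak;       /* lock the gain after the window */
    }
    return in * a->gain;
}
```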
How is this done inside? Intrigued? You can head over to its Indiegogo page to delve a bit deeper. This Bay Area-based startup has already met its crowdfunding goals and is now quickly developing its products with the Atmel | SMART SAM S70, a high-performance ARM Cortex-M7 core-based MCU running up to 300MHz. The MCU comes with analog capability, fitting 12-bit ADCs of up to 24 channels with an analog front end offering offset error correction and gain control, as well as hardware averaging up to 16-bit resolution. The SAM S70 also includes a 2-channel, 2Msps, 12-bit DAC.

But that’s not all. The MCU combines high-capacity memory — up to 2MB of Flash and 384KB of SRAM — with DSP capabilities that can be extended well into the product roadmap. That roadmap includes the SAM S70 encoding and decoding the audio signals itself, leveraging deterministic code execution to expand the stereo functionality behind the omnidirectional polar pattern, and delivering signal processing quality from an MCU with the workhorse processing power of an MPU. This 32-bit ARM Cortex-M7 processor also features a floating point unit (FPU). With quality mapped to bit depth and sample rate, the number-crunching math required to compute these enormous layers of bits is substantial.

The FPU further bolsters high-quality audio by executing floating-point processing, rendering audio temporarily in a 32-bit floating-point format. The recorder works on audio in this format, where the extra bits provide generous headroom for audio mathematics in the digital domain, before the file passes through the 24-bit converters on output. “Floating point” scales the decimal point in a calculation, extending precision further still, and having 32 rather than 24 bits available for calculations yields an increasingly accurate result; with only 24 bits there would be little room left for the intermediate math. When the data finally hits the 24-bit converter, the extra 8 bits are “truncated,” or cut off. The mathematical result is simply more accurate, and as a result we get a high-resolution rendering of the audio.
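To illustrate that final 32-bit-float-to-24-bit step, here is a minimal, hypothetical conversion routine (not Instamic’s firmware): samples are processed in floating point, then clipped and rounded into signed 24-bit PCM only when written out.

```c
#include <stdint.h>
#include <math.h>

/* Convert a normalized 32-bit float sample (-1.0 .. +1.0) into signed 24-bit
 * PCM, clipping out-of-range values. The floating-point math (gain, mixing,
 * filtering) keeps extra headroom; precision beyond 24 bits is discarded here. */
static int32_t float_to_pcm24(float sample)
{
    const float full_scale = 8388607.0f;   /* 2^23 - 1, largest positive 24-bit value */
    float scaled = sample * full_scale;

    if (scaled > full_scale)
        scaled = full_scale;
    else if (scaled < -8388608.0f)         /* -2^23, most negative 24-bit value */
        scaled = -8388608.0f;

    return (int32_t)lrintf(scaled);        /* round to the nearest 24-bit integer */
}
```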

Instamic’s MEMS microphones offer a breakthrough innovation in sound sensing. The response of an omnidirectional microphone (of the kind used in sound studio environments) is generally considered to be a perfect sphere in three dimensions, and the smallest capsule diameter gives the best omnidirectional characteristics at high frequencies. Yes, indeed, there’s always something new to learn. This is the compelling reason that makes the MEMS microphone such a good omnidirectional microphone. Industry-wise, MEMS microphones are entering new application areas such as voice-enabled gaming, automotive voice systems, acoustic sensors for industrial and security applications, and medical telemetry. The unique construction of the MEMS microphone, combined with its performance and form factor, makes possible what was once unthinkable.

Instamic Pro Features and Functionality

instamic-pro-spec

MEMS Microphone Specifications

instamic-mems-microphone

Recorder Specifications

spec-recorder-instamic

Frequency Response Specifications

spec-frequency-instamic

Comparison Specifications

spec-specification-table-instamic-comparison

Comparisons at Scale

spec-comparisons-scale

Once again, Instamic stems from a well-funded pool of contributing patrons. The community has supported and validated this product’s potential and its fit to market. With that said, the demand is real. Shoot for the stars, right? Powered by Atmel’s latest Cortex-M7, Instamic is looking to become a household name when it comes to capturing high-quality sound anywhere, at any time, on anything.

ARM Keil ecosystem integrates the Atmel SAM ESV7


Keil is part of the ARM-wide ecosystem, enabling developers to speed up system release to the market. 


Even the best System-on-Chip (SoC) is useless without software, just as the best-designed software needs hardware to flourish. The “old” embedded world has exploded into many emergent markets like the IoT, wearables, and even automotive, which is no longer restricted to motor control or airbags as innovative products from entertainment to ADAS are being developed. What is the common denominator of these emergent products? Each of them requires more software functionality and fast, deterministic code execution, and consequently innovative hardware to support these requirements — such as the ARM Cortex-M7-based Atmel | SMART SAM ESV7.

AtmelChipLib Overview

ARM has released a complete software development environment for a range of ARM Cortex-M based MCU devices: Keil MDK. Keil is part of the ARM-wide ecosystem, enabling developers to speed up system release to the market. MDK includes the µVision IDE/Debugger and ARM C/C++ Compiler, along with essential middleware components and software packs. If you’re familiar with the Run-Time Environment stack description, you’ll recognize the various layers. Let’s focus on “CMSIS-Driver”. CMSIS is the standard software framework for Cortex-M MCUs, extending the SAM-ESV7 Chip Library with standardized drivers for middleware and generic component interfaces.
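For a flavor of what a standardized CMSIS-Driver interface looks like, here is a minimal sketch using the generic CMSIS-Driver USART API. The driver instance name and pin configuration come from the device pack, so treat this as a generic illustration rather than SAM-ESV7-specific code:

```c
#include "Driver_USART.h"              /* CMSIS-Driver USART interface */

extern ARM_DRIVER_USART Driver_USART0; /* instance exported by the device pack (name may vary) */

static void usart_event(uint32_t event)
{
    (void)event;                       /* handle ARM_USART_EVENT_SEND_COMPLETE, etc. */
}

void usart_demo(void)
{
    ARM_DRIVER_USART *uart = &Driver_USART0;

    uart->Initialize(usart_event);     /* register the event callback */
    uart->PowerControl(ARM_POWER_FULL);
    uart->Control(ARM_USART_MODE_ASYNCHRONOUS |
                  ARM_USART_DATA_BITS_8 |
                  ARM_USART_PARITY_NONE |
                  ARM_USART_STOP_BITS_1, 115200);
    uart->Control(ARM_USART_CONTROL_TX, 1);

    uart->Send("hello\r\n", 7);        /* non-blocking transmit of 7 bytes */
}
```

Because the same API is implemented for every supported device, middleware written against CMSIS-Driver can be retargeted from one Cortex-M MCU to another with little or no change.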

By definition, an MCU is designed to address multiple applications, and the SAM ESV7 is dedicated to supporting performance-demanding and DSP-intensive systems. Thanks to its 300MHz clock, the SAM ESV7 delivers up to 640 DMIPS, and its DSP performance is double that available in the Cortex-M4. A double-precision floating-point unit and a dual-issue instruction pipeline further position the Cortex-M7 for speed.

Atmel Cortex M7 based Dev board

Let’s review some of these applications where SAM ESV7 is the best choice…

Fingerprint Module

The goal is to provide a human biometric authentication module for office or house access control. The key design requirements are:

  • 300+ MHz CPU performance to run the recognition algorithms
  • Image sensor interface to read raw fingerprint image data from the sensor array
  • Low cost and small module size
  • Embedded Flash/memory to reduce BOM cost and module size
  • Memory interface to expand the module with external memory, just in case

The requirements for superior performance and an image sensor interface are essential needs, but what will make the difference is offering both a cheaper BOM and a smaller module size than the competition. The SAM S70 integrates up to 2MB of embedded Flash — twice as much as its direct competitor — which helps reduce both BOM cost and module size.

SAM S70 Finger Print

Automotive Radio System

Every cent counts in automotive design, and OEMs prefer using an MCU rather than an MPU, first of all for cost reasons. Building an attractive radio for tomorrow’s car requires developing high-performance DSP algorithms. Such algorithms used to be developed on expensive standard DSP parts, leading to a large module size — with external Flash and an MCU alongside — and obviously to a heavy BOM. In a 65nm embedded Flash process device, the Cortex-M7 can achieve a 1500 CoreMark score while running at 300MHz, and its DSP performance is double that available in the Cortex-M4. This DSP power can be used to manage eight channels of speaker processing, including six stages of biquads, delay, scaler, limiter and mute functions. That workload uses only 63% of the CPU, leaving enough room to support an Ethernet AVB stack — very popular in automotive.
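Speaker-processing stages like the biquads mentioned above are typically built on the CMSIS-DSP library that ships with Cortex-M tool chains. Below is a minimal single-channel sketch using the standard arm_biquad_cascade_df1_f32 functions; the coefficients are placeholders, not a tuned crossover:

```c
#include "arm_math.h"   /* CMSIS-DSP */

#define NUM_STAGES   6                   /* six biquad stages per channel, as in the example above */
#define BLOCK_SIZE   64                  /* samples processed per call */

/* 5 coefficients per stage: b0, b1, b2, a1, a2 (placeholder pass-through values). */
static float32_t coeffs[5 * NUM_STAGES] = {
    1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
};
static float32_t state[4 * NUM_STAGES];  /* 4 state variables per stage */
static arm_biquad_casd_df1_inst_f32 filter;

void speaker_channel_init(void)
{
    arm_biquad_cascade_df1_init_f32(&filter, NUM_STAGES, coeffs, state);
}

void speaker_channel_process(float32_t *in, float32_t *out)
{
    /* Run one audio block through the six-stage biquad cascade. */
    arm_biquad_cascade_df1_f32(&filter, in, out, BLOCK_SIZE);
}
```

On the Cortex-M7, the single-precision FPU and dual-issue pipeline are what let a cascade like this run for eight channels and still leave CPU headroom.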

One of the secret sauces of the Cortex-M7 architecture is to provide a way to bypass the standard execution mechanism using “tightly coupled memories,” or TCM. There is an excellent white paper describing TCM implementation in the SAM S70/E70 series, entitled “Run Blazingly Fast Algorithms with Cortex-M7 Tightly Coupled Memories” from Lionel Perdigon and Jacko Wilbrink, which you can find here.


This post has been republished with permission from SemiWiki.com, where Eric Esteve is a principal blogger as well as one of the four founding members of the site. This blog first appeared on SemiWiki on October 23, 2015.

4 design tips for AVB in-car infotainment


AVB is clearly the choice of several automotive OEMs, says Gordon Bechtel, CTO, Media Systems, Harman Connected Services.


Audio Video Bridging (AVB) is a well-established standard for in-car infotainment, and there is a significant amount of activity for specifying and developing AVB solutions in automobiles. The primary use case for AVB is interconnecting all devices in a vehicle’s infotainment system. That includes the head unit, rear-seat entertainment systems, telematics unit, amplifier, central audio processor, as well as rear-, side- and front-view cameras.

The fact that these units are all interconnected with a common, standards-based technology that is certified by an independent market group — AVnu — is a brand new step for the automotive OEMs. The AVnu Alliance facilitates a certified networking ecosystem for AVB products built into the Ethernet networking standard.

Figure 1 - AVB is an established technology for in-car infotainment

According to Gordon Bechtel, CTO, Media Systems, Harman Connected Services, AVB is clearly the choice of several automotive OEMs. His group at Harman develops core AVB stacks that can be ported into car infotainment products. Bechtel says that AVB is a big area of focus for Harman.

AVB Design Considerations

Harman Connected Services uses Atmel’s SAM V71 microcontrollers as communications co-processors, working on the same circuit board with larger Linux-based application processors. The software firm writes code for the customized reference platforms that automotive OEMs need to go beyond the common reference platforms.

Based on his experience with automotive infotainment systems, Bechtel has outlined the following AVB design dos and don’ts for automotive products:

1. Sub-microsecond accuracy: Every AVB element on the network is locked to the same accurate clock. The Ethernet hardware should feature a timestamp unit to ensure packets are handled in the right order. Here, Bechtel mentioned that the Atmel | SMART SAM V71 MCU boasts screening registers to ensure advanced hardware filtering of inbound packets for routing to the correct receive-end queues.

2. Low latency: There is a lot of data involved in AVB, both in terms of bit rate and packet rate. AVB achieves low latency through traffic reservations, which in turn facilitate faster packet transfer for higher-priority data. Design engineers should carefully shape the data to avoid packet bottlenecks as well as data overflow.

Figure 2 - Bechtel

Bechtel once more pointed to Atmel’s SAM V71 microcontrollers, which provide two priority queues with credit-based shaper (CBS) support, allowing hardware-based traffic shaping compliant with the 802.1Qav (FQTSS) specification for AVB.
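For intuition, the credit-based shaper defined by 802.1Qav works roughly as sketched below: credit accumulates at the idle slope while a frame waits, drains at the send slope during transmission, and a queued frame may only start when credit is non-negative. This is a conceptual illustration of the algorithm, not the SAM V71 register interface — the MCU implements the shaping in hardware:

```c
#include <stdbool.h>

/* Conceptual 802.1Qav credit-based shaper state for one traffic class.
 * Slopes are in credits (bits) per second; this illustrates the algorithm,
 * not driver code for any particular Ethernet MAC. */
typedef struct {
    double credit;       /* current credit, in bits */
    double idle_slope;   /* credit gained per second while frames wait */
    double send_slope;   /* credit lost per second while transmitting (negative) */
} cbs_t;

/* A queued frame may start transmission only when credit >= 0. */
static bool cbs_may_transmit(const cbs_t *q)
{
    return q->credit >= 0.0;
}

/* Advance the shaper state by dt seconds. */
static void cbs_update(cbs_t *q, double dt, bool transmitting, bool frame_waiting)
{
    if (transmitting)
        q->credit += q->send_slope * dt;   /* send_slope is negative */
    else if (frame_waiting)
        q->credit += q->idle_slope * dt;
    else if (q->credit > 0.0)
        q->credit = 0.0;                   /* idle with no backlog: positive credit resets */
}
```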

3. 1588 timestamp unit: IEEE 1588 is the protocol behind the correct and accurate 802.1AS (gPTP) support required by AVB for precision clock synchronization. IEEE 802.1AS carries out time synchronization and is synonymous with generalized Precision Time Protocol, or gPTP.

A timestamp compare unit and a large number of precision timer counters are key for the synchronization needed in AVB for listener presentation times and talker transmission rates, as well as for media clock recovery.

4. Tightly coupled memory (TCM): It’s a configurable high-performance memory access system that allows zero-wait CPU access to data and instruction memory blocks. Careful use of TCM enables much more efficient data transfer, which is especially important for AVB Class A streams.

It’s worth noting that MCUs based on ARM Cortex-M7 architecture have added the TCM capability for fast and deterministic code execution. TCM is a key enabler in running audio and video streams in a controlled and timely manner.

AVB and Cortex-M7 MCUs

The Cortex-M7 is a high-performance core with almost double the power efficiency of the older Cortex-M4. It features a six-stage superscalar pipeline with branch prediction — while the M4 has a three-stage pipeline.  Bechtel of Harman acknowledged that M7 features equate to more highly optimized code execution, which is important for Class A audio implementations with lower power consumption.

Again, Bechtel referred to the SAM V71 MCUs — which are based on the Cortex-M7 architecture — as particularly well suited for the smaller ECUs. “Rear-view cameras and power amplifiers are good examples where the V71 microcontroller would be a good fit,” he said. “Moreover, the V71 MCUs can meet the quick startup requirements needed by automotive OEMs.”

Figure 3 - Atmel's V71 is an M7 chip for Ethernet AVB networking and audio processing

The infotainment connectivity is based on Ethernet, and most of the time the main processor does not integrate Ethernet AVB, so M7 microcontrollers like the V71 bring this feature to the main processor. For the head unit, the MCU drives the face plate; for the telematics control unit, which contains the modem used to make calls, echo cancellation is a must — and that requires DSP capability.

Take the audio amplifier, for instance, which receives a specific audio format that has to be converted, filtered and modulated to match the requirement for each specific speaker in the car. This means infotainment system designers will need both Ethernet and DSP capability at the same time, which Cortex-M7 based chips like V71 provide at low power and low cost.

6 memory considerations for Cortex-M7-based IoT designs


Taking a closer look at the configurable memory aspects of Cortex-M7 microcontrollers.


Tightly coupled memory (TCM) is a salient feature in the Cortex-M7 lineup as it boosts the MCU’s performance by offering single cycle access for the CPU and by securing the high-priority latency-critical requests from the peripherals.

Cortex-M7-chip-diagramLG

The early MCU implementations based on ARM’s Cortex-M7 embedded processor core — like Atmel’s SAM E70 and S70 chips — have arrived in the market. So it’d be worthwhile to have a closer look at the configurable memory aspects of M7 microcontrollers and see how the TCMs enable the execution of deterministic code and fast transfer of real-time data at the full processor speed.

Here are some of the key findings regarding the advanced memory architecture of Cortex-M7 microcontrollers:

1. TCM is Configurable

First and foremost, the size of TCM is configurable. TCM, which is part of the MCU’s physical memory map, supports up to 16MB of tightly coupled memory. The configurability of the ARM Cortex-M7 core allows SoC architects to integrate a range of cache and TCM sizes, so that industrial and Internet of Things product developers can determine the amount of critical code and real-time data placed in TCM to meet the needs of the target application.

The Atmel | SMART Cortex-M7 architecture doesn’t specify what type of memory or how much memory should be provided; instead, it leaves these decisions to designers implementing M7 in a microcontroller as a venue for differentiation. Consequently, a flexible memory system can be optimized for performance, determinism and low latency, and thus can be tuned to specific application requirements.

2. Instruction TCM

Instruction TCM or ITCM holds critical code with deterministic execution for real-time processing applications such as audio encoding/decoding, audio processing and motor control. Using standard memory would lead to delays due to cache misses and interrupts, and would therefore hamper the deterministic timing required for real-time response and seamless audio and video performance.

Deterministic, critical software routines should be loaded through the 64-bit instruction memory port (ITCM), which supports the dual-issue processor architecture and provides single-cycle access for the CPU to boost MCU performance. However, developers need to carefully calibrate the amount of code that needs zero-wait execution performance in order to determine how much ITCM an MCU device requires.

The anatomy of TCM inside the M7 architecture

The anatomy of TCM inside the M7 architecture.

3. Data TCM

Data TCM or DTCM is used in fast data processing tasks like 2D bar-code decoding and fingerprint and voice recognition. Two data ports (DTCMs) provide simultaneous and parallel 32-bit accesses to real-time data. Both instruction TCM and data TCM — used for efficient access to on-chip Flash and external resources — must have the same size.

4. System RAM and TCM

System RAM, also known as general RAM, is employed for communications stacks related to networking, fieldbus, high-bandwidth bridging, USB, and so on. It implements peripheral data buffers, generally through direct memory access (DMA) engines, and can be accessed by masters without CPU intervention.

Here, product developers must keep in mind the memory access conflicts that arise from concurrent data transfers to both the CPU and the DMA. Developers must set clear priorities for latency-critical requests from the peripherals and carefully plan latency-critical data transfers, such as the transfer of a USB descriptor or a slow data-rate peripheral with a small local buffer. Accesses from the DMA and the caches are generally bursts to consecutive addresses, which optimizes system performance.

It’s worth noting that while system memory is logically separate from the TCM, microcontroller suppliers like Atmel are incorporating TCM and system RAM in a single SRAM block. That lets IoT developers share general-purpose tasks while splitting TCM and system RAM functions for specific use cases.

A single SRAM block for TCM and system memory allows higher flexibility and utilization

A single SRAM block for TCM and system memory allows higher flexibility and utilization.

5. TCM Loading

The Cortex-M7 uses a scattered RAM architecture that allows the MCU to maximize performance by dedicating RAM to critical tasks and data transfer. The TCM might be loaded from a number of sources, and these sources aren’t specified by the M7 architecture; it’s left to the MCU designers whether there is a single DMA or several data loading points from various streams like USB and video.

It’s imperative that, during the software build, IoT product developers identify which code segments and data blocks are allocated to the TCM. This is done by annotating the program source and applying linker settings so that the build places the code and data in the right memory regions.
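With a GCC-based toolchain, that allocation usually amounts to placing functions and buffers into named sections that the linker script maps onto the ITCM and DTCM address ranges. The section names below are hypothetical; the real names come from the device’s linker script:

```c
#include <stdint.h>

/* Hypothetical section names -- the actual names are defined by the project's
 * linker script, which maps them onto the ITCM and DTCM address ranges. */
#define ITCM_CODE  __attribute__((section(".itcm_code")))
#define DTCM_DATA  __attribute__((section(".dtcm_data")))

/* Latency-critical working buffer kept in DTCM for single-cycle access. */
DTCM_DATA static int32_t fir_state[256];

/* Time-critical routine executed from ITCM with deterministic timing. */
ITCM_CODE int32_t fir_step(int32_t sample)
{
    static uint32_t idx;

    fir_state[idx] = sample;
    idx = (idx + 1u) & 255u;

    int64_t acc = 0;
    for (uint32_t i = 0; i < 256u; i++)
        acc += fir_state[i];           /* trivial moving-average "filter" */

    return (int32_t)(acc >> 8);
}
```

At boot, the startup code typically copies these sections from Flash into the TCM before main() runs, so the routines execute from zero-wait memory from the first call.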

6. Why SRAM?

Flash memory can be attached to a TCM interface, but the Flash cannot run at the processor clock speed and will require caching. As a result, this will cause delays when cache misses occur, threatening the deterministic value proposition of the TCM technology.

DRAM technology is a theoretical choice but it’s cost prohibitive. That leaves SRAM as a viable candidate for fast, direct and uncached TCM access. SRAM can be easily embedded on a chip and permits random accesses at the speed of the processor. However, cost-per-bit of SRAM is higher than Flash and DRAM, which means it’s critical to keep the size of the TCM limited.

Atmel | SMART Cortex-M7 MCUs

Take the case of Atmel’s SMART SAM E70, S70 and V70/71 microcontrollers, which organize SRAM into four memory banks for the TCM and system SRAM parts. The company has recently started shipping volume units of its SAM E70 and S70 families for the IoT and industrial markets, and claims that these MCUs provide 50 percent better performance than the closest competitor.

SAM-E70_S70_BlockDiagram_Lg_929x516

Atmel’s M7-based microcontrollers offer up to 384KB of embedded SRAM that is configurable as TCM or system memory for providing IoT designs with higher flexibility and utilization. For instance, E70 and S70 microcontrollers organize 384KB of embedded SRAM into four ports to limit memory access conflicts. These MCUs allocate 256KB of SRAM for TCM functions — 128 KB for ITCM and DTCM each — to deliver zero wait access at 300MHz processor speed, while the remaining 128KB of SRAM can be configured as system memory running at 150MHz.

However, the availability of an SRAM block organized as a 384KB memory bank means that both system SRAM and TCM can be used at the same time. The large on-chip SRAM of 384KB is also critical for many IoT devices, since it enables them to run multiple communication stacks and applications on the same MCU without adding external memory. That’s a significant value proposition in the IoT realm, because avoiding external memories lowers the BOM cost, reduces the PCB footprint and eliminates the complexity of high-speed PCB design.

Video: Pat Sullivan talks ARM Cortex-M7 at ARM TechCon

As reported on Bits & Pieces, ARM recently unveiled a new 32-bit Cortex-M7 microcontroller (MCU) targeted at high-end, next-gen embedded applications.

After being named one of the early lead licensees of the processor, we announced a new family of Atmel | SMART ARM Cortex-M7-based MCUs, which are well positioned between our existing ARM Cortex-M-based MCUs and Cortex-A-based MPUs. The new devices will address high-growth markets including the Internet of Things (IoT) and wearables, as well as automotive and industrial applications that require both high performance and power efficiency.


During ARM TechCon 2014, Atmel’s Pat Sullivan had the chance to catch up with Dominic Pajak of ARM to discuss the company’s newly-introduced Atmel | SMART ARM Cortex-M7-based processor.

“We are proud to be a lead partner in the Cortex-M7 product. We think it’s a great device and really like the performance of it. It actually sits really well between the M4 and A5/A7 portfolios, ” Sullivan told Pajak. “I see this as a really nice filler for us. It allows our customers working in both areas to have a bridge product and a really nice roadmap moving forward.”

As to which IoT segments the Atmel Cortex-M7 processors will be used, “We see it in mid-range wearable applications, as well as healthcare devices in that area,” Sullivan notes.

Shortly thereafter, Sullivan joined fellow industry heavyweights (ST Micro and Freescale) for a standing-room only panel on the microcontroller. During the session, Sullivan said he sees the Cortex-M7 also succeeding in networking and gateway arenas.

“We see it addressing a lot of the system integration, performance issues, and power issues that we have. We also see it working in networking, Internet of Things and smart energy. We think this particular core is well suited for the areas where we see the highest growth rate.”


“Consistent architecture with high-performance is one of the most important things we see in ARM Cortex-M7.” He later added, “Huge data is driving a connected home and it’s coming sooner than we think.”

Sullivan concluded, “We’re all going to be in a more connected world in the future, good and bad. We may not even recognize it.”


While sampling to select customers is currently underway, general availability of the Xplained kit is expected in early 2015. Stay tuned!