Category Archives: Application Highlights

3D printing for sheet metal, sort of

Incremental_sheet_forming

Incremental sheet forming makes a single sheet metal part by pushing a polished ball against the metal while under CNC control.

My mechanical engineer buddy Dave Ruigh came across a Ford Motor video of how they can prototype a single sheet metal part using CNC (computer numerical control). It’s technically called “Two Point Incremental Sheet Forming.”

Dave noted: “I see a Faro logo on the stylus head (they make 3D digitizers). Looks like they are generating the toolpath in Catia V5. These are Fanuc hexapod robots. Pretty damned slick.”

Then audio guru Steve Williams chimed in: “Is this truly 3D printing? Is there a class of this that involves plastic sheet deformation as an alternative to sheet metal stamping, which was sort of what they were comparing? What is the plastic, and how common is the sheet deformation (presumably through heat) method compared to depositing layers of material as in normal 3D stuff?” To this Dave replied:

“They are forming metal sheets with this process, not plastic. 3D printing is just a made-up buzzword that broadly covers any rapid prototyping technique. I guess we could call it ‘unconventional fabrication technology,’ or UFT, if you would prefer. That said, you might do a similar process with plastic sheet using heat. Plastics tend to deform nonlinearly though (they stretch a lot, then spring back), which makes predicting their formed shape difficult.”

“Guess we’re gonna have to call it ‘Incremental Sheet Forming.’ Specifically, ‘Two Point Incremental Sheet Forming.’ Ford claims this tech is patented, but I’ve yet to find it. This work at the Computer and Automation Research Institute of the Hungarian Academy of Sciences does seem to predate the Ford work.”

This is a slap-my-forehead, “why didn’t I think of that” technology. When I was in the auto biz we did short-run prototyping with Kirksite dies. Instead of H3 tool steel, the die was machined out of a high-strength zinc + 4% aluminum alloy that had a brand name of Kirksite. It was invented in 1929 and called Zamak by the Germans. Thing is, how often do you want just one prototype part? I always said you need three. One to hold, one to install and compare against the old part, and one that gets shipped to the show in Duluth so the sales guys can peddle it before it’s ready to sell.

So Dave Ruigh was the guy that told me how modern tool and die folks just carve the male form in carbon with a 5-axis machine and then EDM (electrical discharge machine) the tool steel to near-net shape. Polish it up and stamp away. So now I assume you could just high-speed machine (another thing Dave taught me) the Kirksite, mount it into a press and bang out 10 to 500 parts depending on how rude the die had to get with the sheet metal.

The major problem with this incremental forming is that it will not tell you whether a production die can make the same shape in high volume. When you prototype something, you should also be prototyping whether you can make more than one. So if Ferrari wants to make some goofy fighter-plane-looking chin spoiler, this “incremental sheet forming” is ideal. They are only going to make 5 parts total. Better yet, when some rich yuppie prangs the car as he drives home from the dealership, the fine folks at Ferrari can slooooooowly make another one for him and charge him the requisite $10 or $20 grand of machine time it takes.

What do you figure, Dave? A big expensive machine like the one in the video needs to make $150 per hour of spindle time? A stylus or a spindle, either way you have to pay for the machine. So I wonder: if a part that you can incrementally form costs $10k, could you make 100 parts for $20k using Kirksite?

Oh, I suspect that Ford’s claim of “first” is because they have a lower cup that follows the stylus, whereas the Hungarians just pushed the sheet metal into a female die.

And here is 26 glorious minutes melting steel and stamping it out the old-fashioned way in the 1936 Flint Michigan GM plant.

Aaaarrrrrgggg matey, that thar is real sheet metal work…

Hot August Nights Fever? Atmel Automotive Infographic

People love their cars. It’s one of those near-universal facts. Whether they live in big cities or small rural hamlets, drive a Mini or a Hummer, there is just something about the sexy vroom vroom of an engine that excites people on a primal level.

Perhaps it’s the destructive force in us that is drawn to what is basically a controlled explosion on wheels. Perhaps it’s something to do with an automobile’s sleek and contoured chassis – or the human need for speed.

Or maybe, it’s because there is a certain zen to be found in tinkering with an engine. Of souping up and optimizing an already lean, mean machine, and making it purr. Somewhere in all of us is an engineer who simply wants to solve puzzles – and what greater puzzle to solve than the many moving parts to be found under the hood?

We at Atmel are especially passionate about the automotive space, having been one of the first semiconductor companies to enter the market, embracing both the productive and the creative passion from the get-go.

Atmel_August Auto_Final

Telefunken (the pre-predecessor of Atmel Automotive) was founded as early as 1903, while the Heilbronn fab in Germany, acquired by Atmel in the 1980s, was founded way back in 1960.

Atmel’s first success in automotive was (rather fittingly) the electronic ignition IC which, in 1979/1980, was installed in every Volkswagen car.

Another early milestone along Atmel’s automotive roadmap was, ironically, braking. A start-to-stop scenario, so to speak.

The market for connected vehicles is expected to grow to a whopping $53 billion by 2018, with consumers demanding more and more connectivity each year.

A 2011 study by Deloitte determined that 46% of people between the ages of 18 and 24 cited connectivity as “extremely important” to them when it came to cars, with 37% wanting to stay as connected as possible while in their vehicles. A resounding 65% identified remote vehicle control as an important feature in their next automotive purchase, while 77% favored remote diagnostics minimizing dealer visits. And let’s face it, who can blame them?

A 2013 study by Cisco went even further, positing that vehicle-to-vehicle (V2V) communications could enable cars to detect each other’s presence and location, helping avoid accidents, lower road costs, and decrease carbon emissions. The report also found that intelligent cars would lead to 7.5% less time wasted in traffic congestion and 4% lower costs for vehicle fuel.

With over 1 billion passenger cars careening through the world’s streets already, increased digitization can’t come fast enough!

Today, Atmel supplies all 10 of the top 10 tier 1 automotive electronic suppliers in the world, not only with microcontrollers (MCUs), but with touch sensor technology too. Indeed, Atmel’s latest touch innovation, the bendable, flexible, printed wonder that is Xsense, has now been fully qualified and is ready to ramp, meaning sexy curved glass dashboards are closer than you’d imagine… Not bad for a feature originally developed as a piece of wood attached to the front of a horse drawn carriage to prevent mud from splattering the driver!

Atmel is also renowned for being a leading car access supplier, meaning we make the chips that enable cool remote keyless entry (RKE) systems with immobilizers, to reduce the risk of anyone stealing your steel beauty away from you. In fact, Atmel has already delivered over 250 million ICs for this specific application, so that’s a whole lot of key fobs! Speaking of key fobs, here’s a fun fact: holding a remote car key to your head doubles its range because the human skull acts as an amplifier.

Moving from cool keyfobs to total hotness, it’s also worth noting that Atmel sells some of the highest temperature resistant parts in the market, some of which can handle heat of up to 200°C.

Last, but certainly not least, Atmel boasts the world’s largest portfolio of Local Interconnect Network (LIN) devices, for communication between components in vehicles. The firm’s devices have OEM approvals from all major car manufacturers worldwide, which is certainly something to be proud of.

So next time you find yourself on that long and winding road, kicking into high gear and hugging those curves, spare a thought for the components, because when it comes to cars, the devil really is in the details.

Car-to-car communication

There are a lot of great things on the horizon for MCU makers like Atmel. The Internet of Things (IoT) is going to be a huge boon for companies like us that make both microcontrollers and radio chips. Just last week I read that you can consider an automobile just another “thing” in the IoT. So it was with great interest that I read an article about how the National Transportation Safety Board (NTSB) is encouraging manufacturers to design cars that communicate with each other to make them safer.

Car-to-X_Daimler

The car-to-x system warns of road works, congestion, obstacles and dangerous weather (courtesy Daimler).

This is based on observations and research of accidents that could have been avoided if vehicles could communicate without driver intervention. Needless to say, the US automakers are not pushing it. “Mitch Bainwol, the [Alliance of Automobile Manufacturers] president and chief executive, raised doubts that such systems could be feasible in the near term.” I sent this article to Susanne, a co-worker in Atmel’s automotive group. She notes: “…not that long way off as you may think: Daimler will launch this year the first car ever with intelligent drive function including car-to-car communication.” The Daimler Car-to-X system is the wireless exchange of information between vehicles, and between vehicles and transport infrastructure. Daimler has been testing the system since the spring of 2012.

Car-to-X_2_Daimler

In the Daimler Car-to-X system, obstacles are shown on the vehicle’s display (courtesy Daimler).

A little research shows that the European automakers are out ahead with this technology. There is a consortium of Mercedes-Benz/Daimler, BMW, Audi, Volkswagen, Ford, and Opel testing real-world systems. They call it simTD (Safe Intelligent Mobility – Test Field Germany). Volkswagen and BMW independently came up with smart intersection technology back in 2011.

When you look at the tragic train accident in Spain, most likely caused by operator negligence, you can see how smart transportation can offer immense benefits to the public. If rail corners had wireless transmitters, the curve could override the irresponsible or incompetent throttle input of the human driving the train. That is independent of the Internet of Things, where a car can look up real-time road conditions. At the SAE Convergence show a few years back, I saw one automaker talk about how a car can connect to the Internet to see the grade of the highway it is on. That will help it plan the shift points of the transmission for best safety and fuel economy.

It won’t take many instances of showing we can save the lives of innocent passengers, or children on school buses, before the public demands car-to-X communication. Fuel economy and convenience will be added benefits. When the auto industry is ready, Atmel will be there to enable the technology.

An introduction to Kevin Ashton’s recent IoT keynote

Recently, a number of industry heavyweights have taken a keen interest in the Internet of Things (IoT). Essentially, the IoT involves various nodes collectively generating a tremendous amount of data, with a strong emphasis on things being connected. On a small scale, a Formula 1 constructor such as McLaren uses a cluster of sensor nodes to transmit vital telemetry from the car to the pit crew and garage, then to race engineers, and ultimately back to R&D centers. During a race this all happens in real time: machine logs and other relevant data converge to keep the vehicle, and the driver and engineering team it serves, running at optimal speed, informing the minor setting adjustments that balance the physics of engine and car to shave off fractions of a second. Those fractions equate to wins and competitiveness on the circuit. In microcosm, this is the world of the Industrial Internet and the Internet of Things.

Now let’s imagine this same scenario, albeit on a global scale. Data gathered at crucial “pressure points” can be used to optimize various processes for a wide variety of applications, scaling all the way from consumer devices to manufacturing lines. To be sure, an engine or a critical component of a high-efficiency diesel is capable of transmitting information in real time to dealerships and manufacturers, generating added value and increasing consumer confidence in a brand.

Sounds like such a scenario is years away? Not really; this is already happening at GE and other large Fortune 500s. Then again, there are still many frontiers for innovation. As in aviation, it’s more about building smarter planes than aspiring to a revolution in design: planes capable of transmitting data and implementing actions in real time thanks to evolved processes, automation, and micro-computing.

Likewise, applications combined with embedded designs also yield improved output. Given the multitude of mixed and digital signals, efficiency and computing quality also play vital roles in the larger system. The GE jet engine featured in one particular plane can process 5,000 data samples per second. From large systems down to the embedded board level, it all plays together like a symphony. To carry the analogy further, the main cast are the architects and product extraordinaires who combine intelligent machine data, application logic, cloud services, and smart embedded designs to achieve the effect of an autonomous nervous system.

Remember, there are dependencies across the stack and the layers of technology, down to the byte level. This helps planes arrive at their destinations with less fuel, and keeps them soaring through the sky, taking you wherever you want to go. Ultimately, a system like this can save millions, especially across an entire fleet of aircraft. It is truly about intelligent business: connectivity woven into a fabric of communication across embedded systems. Clearly, the marriage of machine data and operational use cases is drawing closer to realization.

“When you’ve got that much data, it had better be good. And reducing the CPU cycles cuts energy use, especially important in applications that use energy harvesting or are battery powered. And that is why Atmel offers a wide range of products mapping to more than the usual embedded design ‘digital palette’ of IoT building blocks. The market needs illustrations and further collaboration; diagrams that show what plays where in the IoT and who covers what layers,” says Brian Hammill, Sr. Atmel Staff Field Applications Engineer.

“Something like the OSI model showing that we the chip vendors live and cover the low level physical layer and some cover additional layers of the end nodes with software stacks. Then, at some point, there is the cloud layer above the application layer in the embedded devices where data gets picked up and made available for backend processing. And above that, you have pieces that analyze, correlate, store, and visualize data and groups of data. Showing exactly where various players (Atmel, ARM mbed (Sensinode), Open Platform for IoT, Ayla Networks, Thingsquare, Zigbee, and other entities and technology) exist and what parts of the overall IoT they cover and make up.”

Atmel offers a product line with high-end analog-to-digital converter features. For example, on Atmel’s SAM D20, an ARM Cortex-M0+ based MCU, a hardware averaging feature facilitates oversampling, trading sample rate for higher resolution. The demand for high-resolution sampling runs congruent to many real-world sensor requirements, and doing the averaging in hardware keeps costs lean by avoiding extra software overhead. Across the millions of components and bills of materials that make up the world’s embedded systems, that kind of saving matters to engineers, designers, architects, and manufacturing managers alike, and it flows straight to ROI. Expanding the design envelope further, Atmel microcontrollers have optimized power consumption. Brian Hammill concurs: “Atmel offers several MCU families with performance under 150 microamperes/MHz (the SAM4L is under 90 uA/MHz), with very low sleep current and flexible power modes that allow good optimization between power consumption, wakeup sources, wakeup time, and maintaining processor resources and memory.”

Geographically, there seems to be a very strong healthcare pull for IoT in Norway, the Netherlands, Germany, and Sweden; this extends to Finland, and to parts of Asia, as described in Rob van Kranenburg’s accounts of IoT in Shanghai and Wuxi. Therein lie regional differences, mixed with governance and political support. It is also very apparent that Europe and Asia place an important emphasis on IoT initiatives.

Elsewhere, this is going to happen from the bottom up (groups akin to Apache and Eclipse for the early web, open source, and IDEs; now IoT-A and the IoT Forum) in conjunction with top-down efforts (the Fortune 500s) across industry. But first, collaboration must work out the details of architecture, data science, and scalability. This is contingent on both legacy systems and modern applications synchronizing and standardizing within frameworks conceived by open, organizing bodies (meant to unify and standardize) such as IoT-A and IoT-I. Indeed, events like IoT Week in Helsinki bring together thought leaders, technologists, and organizations, all working to unify and promote IoT architecture, IP and cognitive technologies, and semantic interoperability.

In the spirit of what is being achieved by the various bodies collaborating in Helsinki, Brian Hammill asserts: “The goal of a semiconductor company used to be to provide silicon. Today it is more: we need development tools as well as software stacks. The future means we also need to provide the middleware, or some form of protocol interoperability, for what goes between the embedded devices and the customers’ applications. I think an IoT Toolkit achieves that in its design. Atmel also offers 802.15.4 radios, especially the differentiated sub-GHz AT86RF212B versus other solutions that have shorter range and consume more power.

We also must provide end application tools for demonstration and testing, which can then serve as starter applications for customers to build upon.”

There will be large enterprise software managing data in the IoT. Vendors such as SAS are providing applications at the top end to manage and present data in useful ways, especially when it comes to national healthcare. Then there are companies that already know how to deal with big data, like Google, and major metering corporations such as Elster, Itron, Landis+Gyr, and Trilliant. Back in the day, meter data management (MDM) was the closest thing to big data because nobody had thought about or cared to network so many devices.

We tend to think of the IoT as a stereotype of sorts: forcing internet-based interaction onto objects. However, it is really about configuring the web to add functionality for “things,” all while fundamentally protecting privacy and security for a wide range of objects and devices, helping us shift to the new Internet era. Currently, a number of organizations and standards bodies (such as the IETF) are working to build out official standards that can be ratified and put into engineering practice. Really, it’s all starting to come together, as illustrated by the recent IoT Week in Helsinki. Here is IoT’s very own original champion, a leader who has been promoting the Internet of Things (IoT) for 15 years: Kevin Ashton’s opening talk for Internet of Things Week in Helsinki (video).

iot-week-partners

Remarks at the opening of Third Internet of Things Week, Helsinki, June 17, 2013:

Thank you, and thank you for asking me to speak at the Third Internet of Things Week. I am sorry I can’t be with you in Helsinki. This is a vibrant and growing community of stakeholders. I am proud to have been a part of it for about 15 years now.

One of the most important things that is going to happen this week is the work on IoT-A. It is really important to have a reference architecture for the Internet of Things. And one of the reasons is that for most of those 15 years, we’ve been talking about the Internet of Things as something in the future, and, thanks to amazing work by this community (I would particularly like to recognize Rob van Kranenburg and Gérald Santucci, and the work of the European Union, which has been amazing for many, many years now), the Internet of Things is not the future anymore. The Internet of Things is the present. It is here, now.

I was with an RFID company a month ago who told me that they had sold 2 billion RFID tags last year and were expecting to sell 3 billion RFID tags this year.
rfid-tags

So, in just 2 years, this one company has sold almost as many RFID tags as there are people on the planet. And, of course, RFID is just one tiny part of the Internet of Things, which includes many sensors, many actuators, 3D printing, and some amazing work in mobile computing and mobile sensing platforms: from modern automobiles, which are really sensors on wheels now, and will become more so as we move into an age of driverless cars, to the amazing mobile devices we all have in our pockets, which I know some of you are looking at right now. Then there are sensor platforms in the air. There is some really amazing work being done in the civilian sector with drones, or “unmanned aerial vehicles,” that are not weapons of war or tools of government surveillance but sensor platforms for other things.

And all this amazing technology, which is being brought to life right now, is connected together by the Internet, and we can only imagine what is coming next. But one thing I know for sure is, now that the Internet of Things is the present and not the future, we have a whole new set of problems to solve. And they’re big problems. And they’re to do with architecture, and scalability, and data science. How do we make sure that all the information flowing from these sensors to these control systems is synchronized and harmonized, and can be synthesized in a way that brings meaning to data? It is great that the Internet of Things is here. But we have to recognize we have a lot more work to do.

It is not just important to do the work. It is important to understand why the work is important. The Internet of Things is a world changing technology like no other. We need it now more than ever. There are immeasurable economic benefits and the world needs economic benefits right now. But there is another piece that we mustn’t lose sight of. We depend on things. We can’t eat data. We can’t put data in our cars to make them go. Data will not keep us warm.

And there are more people needing more things than ever before. So unless we bring the power of our information technology — which, today, is mainly based around entertainment, and personal communication, and photographs, and emails — unless we bring the power of our information technology to the world of things, we won’t have enough things to go around.

The human race is going to continue to grow. The quality of our lives is going to continue to grow. The length of our lives is going to continue to grow. And so the task for this new generation of technology and this new generation of technologists is to bring tools to bear on the problems of scaling the human race. It is really that simple. Every generation has a challenge, and this is ours. If we do not succeed, people are going to be hungry, people are going to be sick, people are going to be cold, people are going to be thirsty, and the problems that we suffer from will be more than economic.

I have no doubt that we have to build this network and no doubt [it] is going to help us solve the problems of future generations by doing a much more effective job of how we manage the stuff that we depend on for survival. So, I hope everyone has a great week. It is really important work. I am delighted to be a small part of it. I am delighted that you all are in Helsinki right now. May you meet new people, make new friends, build great new technology. Have a great week.

 

Francis Lau and Wayne Yamaguchi on ADC tips

My buddies Francis Lau and Wayne Yamaguchi read my recent article about the 12-bit ADCs in the new Atmel SAM D20 ARM chip.

Wayne_Yamaguchi_Francis_Lau

Wayne Yamaguchi (left) and Francis Lau admire the CNC milling machine in Wayne’s garage.

 

They both had some good tips. Francis has worked for a famous Silicon Valley brain-wave sensor company. He writes:

I was trying to squeeze more resolution out of a 10-bit ADC on an Atmel AVR in the brainwave sensing module. Along with power synchronization, you can do a couple of other firmware-level tricks. One is to oversample and then do a simple average. You could do curve fitting, but that will take more computing power, rather than just dropping some bits. If the signal is repeating, you can also sample it multiple times over different cycles and average those.

The Atmel datasheet also tells you that in order to get those 10 bits, rather than the 7.5 bits ENOB (effective number of bits) you typically get, you really need to turn off the other digital clocks in the chip and let it rest a bit before doing the acquisition. I tried this, and it does make those little 2 bits of noise quiet down.

My buddy Wayne, who I met at HP years ago before they split off Agilent, has worked on all kinds of analog signal chains. He designs LED lighting that is controlled by Atmel AVR parts. Wayne writes:

For the ATtiny series, the A/D resolution is not that bad, and noise in general should not be an issue. I have several boards where the uP is on the bottom side or adjacent to the DC/DC driver that drives up to 3A to the LED off a battery pack.

Of special note is the bit resolution. If you use the internal 1.1V reference, then the resolution is 1.1V/1024, or approximately 1mV. By using a precision voltage reference for VCC of the MCU, you can use the supply as the A/D reference instead of the internal reference. A 2.5V reference changes the bit resolution to about 2.5mV and is less noise sensitive. I have tested a single reading under these conditions and found that the voltage read back is very stable, even with the DC-DC converter running. On my GDuP board, the uP is directly under the converter and switching inductor: the converter components are on the top side of the board, and the bottom-side components are for the uP. The board is 0.55″ round and can drive up to 1.5A constant current to the LED. The GDuP is a 2-layer board with standard 1 oz copper foil. Of course, the final code does not rely on a single reading. Personally, I typically average 4 readings. For one customer I was consulting for, I had to increase the resolution to monitor the battery drain more accurately. There I averaged more readings to extend the range to 12 bits.

So there you have some good examples from experienced engineers who battle such analog issues every day. The important thing to remember is that if you really want 10 or 12 bits, you have to do the system design to remove noise, and maybe oversample, average, dither, or use a bunch of other tricks to get the nominal bit depth of the ADC. Please understand, Atmel and our competitors are not lying when we say we have 10-bit ADCs that only give 7.5 ENOB. You really do get 10 bits at low sample rates under noise-free conditions. We cannot estimate how fast you are sampling in your particular system, nor how much noise you have in your environment. All a chip manufacturer can do is measure the performance under ideal conditions, both at DC and at a sample rate where we can tell you the ENOB. It is up to you to make sure your design has the margins and error budget to deliver the accuracy you need. Remember, the rated ADC bits are a resolution, not an accuracy. It is not trivial to ensure measurement accuracy; just read the Keithley low-level measurement handbook.

Wayne Yamaguchi talks burning fuses and setting lock bits in source

I had lunch with my pal Wayne Yamaguchi last week, who has several products he makes based on Atmel AVR parts. Wayne mentioned that he has found products where someone forgot to burn the fuse and lock bits, and he could read the code inside the part. He admits that it is easy to get confused if you are switching between developing code for one project and burning chips for products that are shipping. You have to click down a few menus to ensure the bits get set. Instead, he says, he puts the instructions to set the fuse and lock bits in his source code, and then he can program the parts with the .ELF file, which will set the bits in the parts.

Wanye_Yamaguchi_Francis_Lau_texting_sfw

Here are Wayne Yamaguchi (left) and my crack protégé Francis Lau exchanging numbers. Francis was showing Wayne his FreedomPop piggyback phone that lets him make free calls. We all worked together in a startup 12 years ago.

So Wayne dropped me a note and said:

Here’s some info on embedding fuse settings in the source code.

#include <avr/fuse.h>
#include <avr/lock.h>

FUSES =
{
//  .low = (unsigned char)(FUSE_CKDIV8 & FUSE_SUT0 & FUSE_CKSEL3 & FUSE_CKSEL2 & FUSE_CKSEL0),
//  .low = (unsigned char)(FUSE_CKDIV8 & FUSE_SUT0 & FUSE_SUT1 & FUSE_CKSEL3 & FUSE_CKSEL2 & FUSE_CKSEL0),
    .low = LFUSE_DEFAULT,
    .high = HFUSE_DEFAULT,
//  .high = (unsigned char)(FUSE_BODLEVEL0 & FUSE_SPIEN),
    .extended = EFUSE_DEFAULT,
};

LOCKBITS = (LB_MODE_3);

/*
To extract the fuse/lock bits from the ELF file, type the following at the command prompt:
avr-objdump -s -j .fuse <ELF file>
avr-objdump -s -j .lock <ELF file>
*/

You should be able to find more info in the WinAVR documentation. It took me forever to find it. Like I mentioned, I use the .ELF file to program the part, which includes the fuse and lock bit settings.

If you are like Wayne and in a mixed design-and-manufacturing small business environment, you too might prefer to put the fuse and lock bit instructions in your source code. Give it a try and let us know what you think.

Dave Mathis with more on FCC certification

My buddy Dave Mathis learned a lot about FCC certification of products when he consulted for Aerielle, a Silicon Valley company that makes wireless audio devices. He shared with me some of his hard-learned knowledge this week.

His strongest point is that when the FCC tells you a section of spectrum is unlicensed, it does not mean it is unregulated. Unlicensed means that the end user of your product does not have to fill out forms and send them to the FCC in order to use the product. Things like wireless microphones and radio stations do need a license from the FCC. But realize that your unlicensed wireless gizmo still requires certification. That is where you or a lab measures the RF coming off your product and submits results to the FCC showing the product does not exceed radiated power limits and does not have excessive harmonic spurs in the broadcast signal. You are supposed to measure out to the tenth harmonic. For a 2.4GHz product, that is quite an expensive spectrum analyzer you need.

Most labs charge about $10,000 to certify a device. But if you cheat and the FCC decides to prosecute, the penalty is $10,000 per device that you have sold. You are supposed to get a conditional license that lets you have five devices to test and prototype. When you pass your lab test or give the FCC your test results, they assign you a certification number. What infuriates Dave is that many cheap imports just copy a number off a different product or invent one, and some open-source and kit vendors don’t even bother with that; they simply leave the number off entirely.

Open-source_RF_module

This RF Digital module on the Open Source RF board is FCC certified, as evidenced by the ID number printed on the case.

Now there is an interesting wrinkle in the FCC rules. The RF portion of your device is an intentional radiator. If that radiator is certified by the maker, then the microprocessor you add to it is considered an unintentional radiator, so you don’t have to test and certify the module-MCU combination. If you start with an Atmel ZigBit 802.15.4/ZigBee module, the micro is included. An Anaren AIR module is just a radio, but you can connect it to an MCU and still be covered by the FCC ID on the module. The same goes for the RF Digital module. Best of all, Dave thinks you might even be covered for a switching power supply in your product (check with your test lab, don’t trust us) as long as you have a certified RF module.

Also be aware that the FCC has slightly different rules for RF kits that plug in. One important principle is that the modules are certified with a known antenna. You are not allowed to lengthen or change the antenna in any way. So don’t think you can cheap out and just plop a wireless chip on a board and guess at an antenna. You might get away with it, but one day some ISM (industrial, scientific, and medical) band interference will inconvenience a politician, and then the FCC will come down on us like a ton of bricks. So you may be far better off paying more for a pre-certified RF module than going through the hassle of having it tested or testing it yourself. If your design budget allows $10,000 for testing, great, but any little change to the PCB or antenna will require a re-test if you play by the rules.

The value of microcontrollers (MCUs) with dual-bank flash

Written by Brian Hammill

Atmel, along with a number of other industry heavyweights, recently introduced a slew of Cortex-M microcontrollers (MCUs) equipped with a dual-bank flash feature. While single-bank flash is sufficient for numerous applications, the dual-bank feature offers significant value in specific scenarios. So let’s discuss the added benefit of dual-bank flash.

Fig 3: Dual bank flash provides a fail safe method of implementing remote firmware upgrades


First, we need to understand the role of flash in an MCU.  Nearly 100% of the time, the flash memory in your MCU is in read mode: the processor core is almost always fetching instructions to execute out of the flash, the exceptions being when code runs from RAM (internal or external) or from ROM.  The catch is that typical flash memory cannot be read while it is being written.  As such, during firmware upgrades and data storage operations, the processor core cannot execute code from the flash.  Either the processor has to wait for the write operation to complete, or the core must continue executing from other physical memory such as RAM or ROM.

In Atmel’s single-bank SAM3 and SAM4 family flash MCUs, this problem has been solved in a somewhat novel manner by providing flash programming code in the factory-programmed ROM.  Whenever the firmware wants to write the flash, it buffers the data to be written and calls a routine in ROM.  The processor core then executes from ROM while the flash is being written.  Since flash erase and programming operations can take milliseconds (a very long time for an MCU core running at up to 150 MHz), the ROM routine may have to sit in a do-nothing loop while the flash operation completes.
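The buffer-then-call-ROM pattern can be sketched as follows. This is a host-side simulation under assumed names: rom_write_page, flash_busy and the 64-byte page size are all illustrative, not a real vendor API.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 64  /* illustrative page size, not a SAM3/SAM4 value */

/* Host-side simulation of a flash page and its busy flag. */
static uint8_t flash_page[PAGE_SIZE];
static volatile int flash_busy;

/* Stand-in for the ROM routine: it performs the write, then spins in a
 * do-nothing loop until the (simulated) flash operation completes. */
static void rom_write_page(const uint8_t *buf)
{
    flash_busy = 1;
    memcpy(flash_page, buf, PAGE_SIZE);   /* the erase/program step     */
    flash_busy = 0;                       /* "hardware" clears the flag */
    while (flash_busy) { }                /* wait for flash ready       */
}

/* Application-level helper: buffer the data, then call the ROM code,
 * so the core is not fetching from the flash being written. */
static void flash_write_page(const uint8_t *data, size_t len)
{
    uint8_t buf[PAGE_SIZE];
    memset(buf, 0xFF, PAGE_SIZE);         /* erased flash reads 0xFF    */
    memcpy(buf, data, len < PAGE_SIZE ? len : PAGE_SIZE);
    rom_write_page(buf);
}
```

On a real part, rom_write_page would be a fixed entry point in the factory ROM and the busy-wait would poll a flash controller status register.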

Admittedly there are limitations, but this method generally works just fine for systems with external storage such as serial flash, which retains downloaded firmware images until they can be written to the internal flash.  It also works well in systems that infrequently write a few bytes of data to the flash.

Firmware upgrades can be risky, especially in applications where firmware images are downloaded across slow, unreliable wireless links, or where systems are prone to power failures. In a single-bank flash system, ensuring a reliable firmware upgrade means there is a part of the flash that you never erase or write over. The code contained in that part of the flash knows how to detect corrupted code in the rest of the flash.

Checksums, CRCs, and even digital signatures are common ways to determine the validity of the flash image on boot or reset.  If the check comes out bad, the code in the part of flash that is never overwritten knows to look for a backup image and attempt to reprogram the application.  The backup image can be located in an external memory such as a serial flash or, if there is enough space, in an unused part of the internal flash.
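As a concrete sketch of that check, here is a bit-by-bit CRC-32 (IEEE polynomial) and the kind of validity test a loader might run on boot. The image layout and function names are illustrative assumptions, not Atmel's actual ROM or boot code.

```c
#include <stddef.h>
#include <stdint.h>

/* Bit-by-bit CRC-32 (IEEE 802.3 polynomial, reflected form).
 * A real bootloader would likely use a table-driven or hardware CRC. */
static uint32_t crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}

/* Hypothetical image layout: the build records the expected CRC of
 * the application, and the never-overwritten loader checks it here. */
static int image_is_valid(const uint8_t *image, size_t code_len,
                          uint32_t expected_crc)
{
    return crc32(image, code_len) == expected_crc;
}
```

The standard CRC-32 check value for the ASCII string "123456789" is 0xCBF43926, which makes a handy self-test for the implementation.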

Managing backup images in internal flash or external serial flash can be done reliably in a well-planned system with single-bank flash.  The key is "well-planned," because the firmware engineer has to jump through some hoops: changing the interrupt table ordinarily means changing the very lowest flash addresses, yet that part of flash must stay unchanged over the life of the product in the field.  So it is necessary to have the fixed interrupt vectors point at defined locations where the actual interrupt service routines are located.  The actual ISRs can then be changed when the application is changed by a firmware update, although this can lead to size restrictions or wasted flash space between the ISRs.
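That vector-indirection trick can be sketched on the host like this. All names here are hypothetical; on a real part the trampoline would live in the never-erased flash region and the slot would sit at a linker-defined address inside the updatable application image.

```c
/* Fixed "vector" region: these trampolines never change. Each one
 * jumps through a pointer stored at a defined location that the
 * application image is allowed to rewrite during a firmware update. */

typedef void (*isr_t)(void);

/* Defined location the fixed vector reads; rewritten with the app. */
static isr_t timer_isr_slot;

/* This is what the hardware vector table points at, forever. */
static void timer_vector_trampoline(void)
{
    timer_isr_slot();   /* indirect jump into the current application */
}

/* Two application versions providing different actual ISRs. */
static int ticks_v1, ticks_v2;
static void timer_isr_v1(void) { ticks_v1++; }
static void timer_isr_v2(void) { ticks_v2 += 2; }
```

A firmware update replaces timer_isr_slot (along with the rest of the application) without ever touching the vector table itself, so a failed update can never corrupt the vectors the recovery code depends on.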

Zigbee Smart Energy Profile

The much anticipated Zigbee Smart Energy Profile 2.0 was recently released. Representing an effort spanning more than three years, this milestone includes contributions from NIST, IETF and the Zigbee Alliance. Various companies also participated in the initiative, including utility, meter, silicon and software stack vendors.

Smart Energy – the application profile that drove the Zigbee Alliance’s development of Zigbee IP (ZIP) – is the first public profile requiring ZIP instead of the current Zigbee and Zigbee PRO underlying stacks. ZIP and SEP 2.0 offer TCP/IP-based interoperability for smart energy networks, thereby facilitating participation in the Internet of Things (IoT) without the need for special gateways. In fact, ZIP is designed to be physical layer (PHY) agnostic and is capable of running across various platforms including 802.15.4 wireless, WiFi, Power Line Carrier, Ethernet and more.

SEP 2.0 is built using numerous mainstream protocols such as TLS/HTTPS, XML, EXI, mDNS and REST. Each SEP 2.0 device boasts an optimized HTTP server serving up and responding to data objects defined by an XML schema. Security is ensured by familiar HTTPS with strong authentication, while an RFC-compliant IPv6 stack provides the networking, with specific routing and translation layers for the wireless PHY.  The SEP 2.0 presentation from the Zigbee Alliance is available here [PDF].

Two recommended implementation strategies for SEP 2.0 devices are Single Chip and Multi-Phy. Single Chip implementations use a dedicated microcontroller and RF transceiver (or a combined SoC) running a dedicated stack. This strategy works particularly well for adding Zigbee SEP 2.0 support to low- to mid-range devices with no other network or TCP/IP stack. A good example might be a thermostat or load control device, both of which require communications with other smart energy devices even if they are equipped with a small processor dedicated to the control and UI functions of the device.

The Multi-Phy implementation –  a new way of looking at Zigbee – offers advantages in devices equipped with multiple network interfaces and/or a capable processor such as an Atmel SAM4, SAM9, or SAMA5 MPU or MCU. In such cases, the 802.15.4 transceiver (like the AT86RF233) becomes the network interface PHY layer underneath the IPv6 stack and SEP 2.0 layers running on the processor. Since the IPv6 stack is a compliant implementation, other network PHYs are also supported by the stack. Running two or more physical interfaces with a single processor is certainly not an issue, as devices that communicate via Zigbee, WiFi, PLC, and Ethernet can be designed. Because a single processor and IPv6 stack are used, the cost will ultimately be lower than duplicating these functions in a separate chip dedicated to Zigbee SEP 2.0.

Single Chip and Multi-Phy implementation


The multi-phy implementation is also ideal for gateway devices bridging different physical layers. And since SEP 2.0 is built using standard web protocols, once you bridge the smart energy network to the Internet, managing your home energy devices from a tablet or smartphone is no stretch at all and brings us closer to the reality of the Internet of Things (IoT).

Atmel, along with software stack partner Exegin Technologies, offers robust and compliant solutions for Zigbee IP and SEP 2.0. There is already interest from leading networking and utility companies, with deployment of certified devices expected before the end of 2013. The critical design decision most of us have to consider? Whether to accept the cost and complexity of a single-chip Zigbee solution, or lower the cost with a software stack and radio transceiver solution that shares resources and allows for multiple networks.

Getting real in a virtual world

We recently released the first simulator for our ARM-based SAM microcontrollers – allowing users to observe a cycle accurate simulation of Atmel’s new ARM Cortex-M0+ based SAM D20 MCU.

Essentially, it offers a cycle-accurate simulation of the entire MCU, not just the core but the peripherals as well (the digital ones, not the analog ones). The simulator – which includes all processor and I/O registers – is available as a debug target just like a real MCU in the Atmel Studio development environment.

Yes, running code while watching the I/O registers certainly sounds sweet indeed. But how useful is it when nothing is connected to the pins of the MCU? Well, the simulator actually supports external file stimulus, meaning every pin of the MCU model can be read and written based on a simple text file with full cycle accuracy. Perhaps most importantly, the stimuli are non-intrusive, allowing users to debug a system in “slow motion” – as the MCU and stimuli stop and start completely in synch.

Don’t feel like writing your own stimuli file or want to collaborate on using file stimuli? We’ve set up a project on Atmel Spaces – the collaborative workspace – with example stimuli files here.

Atmel Spaces


Still, one can get the real SAM D20 on an Xplained Pro eval kit for $39 – so why bother with a virtual model?

For starters, a full-featured (time limited) trial version of the SAM D20 simulator is available for instant download in the Atmel Gallery. To try out the SAM D20, you don’t need to wait for hardware to be shipped.

SAM D20 simulator is available for instant download in the Atmel Gallery


The Xplained Pro board is populated with the largest device – the SAMD20J18 in a 64-pin package – whereas the simulator supports all SAM D20 device variants.

In addition, there are a few things you can’t – or don’t want to – do with the real device. With cycle-accurate, non-intrusive file stimuli, you can run and debug the entire system in “slow motion.” On real hardware, when you hit a breakpoint, the MCU stops; however, any external component in your system continues to run. On the simulator with file stimuli, the entire system stops – and resumes – in synch. This gives you new debugging capabilities in applications that can be destructive to the hardware, such as motor control or high-current power switching.

Other key benefits of the simulator over real hardware include precise measurement of execution times (based on clock cycles), use in regression testing, and the ability to start development before a custom board is available.

As noted above, the SAM D20 simulator is the first ARM simulator to be released by Atmel, but it certainly won’t be the last. To be sure, we plan on providing fully accurate simulator models of new chips even before physical engineering samples go live.

In an industry where everyone is angling for an advantage by bringing their products to market faster, being able to kick off development with a new MCU weeks or months before it is physically available can be invaluable. So try it out – the SAM D20 simulator is available here in the Atmel Gallery.