
The home lab of Bo Lojek

I was touring Atmel’s fab in Colorado Springs, so I made a point of contacting Bo Lojek, the author of the great book History of Semiconductor Engineering. Although Bo is now a professor at the University of Colorado, he worked at Atmel for 15 years. I was honored that he invited me to his home in Colorado Springs. Well, I have a pretty good home lab, but Bo’s lab just blew me away. Bo said he wanted to be an engineer from the time he was 7 years old. It runs in the family; his dad was an engineer too.

So Bo told me that he built his house in Colorado Springs. If one of my Silicon Valley buddies says this, he means that he had a custom floor-plan home built by a homebuilder. For Bo, it means he had an engineer design the house to his specs, using metal studs, and Bo himself constructed the house, driving all 37,000 self-tapping drywall screws. I think he said it was 3600 square feet. Yes, it’s an engineer’s paradise.

This is what meets you in the foyer just inside the front door of Bo’s house. Bo said if I came back in the daytime I could check out his collection of Dumont scopes in the garage.

Every engineer worth his salt needs a Data General Eclipse computer in the hallway, just for data processing emergencies. Bo has arranged for all his stuff to go to the University of Colorado when he dies. It will be great to keep this museum together. It will also be a great excuse to visit Colorado Springs, other than to meet the space aliens that the Stargate people have inside the NORAD mountain.

Bo has some early computer boards nicely framed on the wall.

Lojek has a huge collection of voltmeters, including this Cubic model V-46A. It uses telephone stepper relays and a handful of transistors to measure voltage. Pretty cool for 1960.

Propped up on Bo Lojek’s bookshelf are some vacuum tube modules from a very early computer.

And let’s enjoy Bo checking out the whole bookshelf. His house is not only engineer paradise, it’s college professor paradise.

While Bo does not have the disorganization of dear departed Bob Pease, he does have a few things littering the floor. I used to use the same Data I/O programmers to program the microcontrollers I designed in my consulting work.

It does not disturb me that Lojek has a stack of early Tektronix mainframe scopes. What bothers me is I have several friends that have the same sort of stack.

How about these early 2N1302 transistors from honored competitor Texas Instruments?

Lojek has drawer after drawer full of electronic components, including these vacuum tube computer boards.

Bo told me that when Bob Pease visited his house, he could not tear him away from these two analog computers. I should mention that I knew of Bo because Pease told me what a cool guy he was. Bob knew Bo because Bob edited Bo’s book. Since English is Bo’s second language that was a lot of work, but Pease was happy to do it since it was such an important contribution from such a cool guy.

Here is a close-up of the analog computer that so entranced Bob Pease.

All this cool stuff above is just stacked like cordwood all over the house. This is where we finally got to Bo Lojek’s lab bench. Bo told me he likes to write or read for a while, but then he has to go to the bench to do some experimentation. It reminds me so much of my mentor Bob Pease, who had an equal love for working with his hands and a soldering iron.

Every surface in Bo Lojek’s house is a treasure trove of memorabilia and electronic equipment.

Here is a very early computer board that used “air gap” integrated circuits. Analog Devices’ Barrie Gilbert told me that he got into electronics because surplus WWII magnetrons were so beautiful to look at he had to learn how they worked.

And how about this, a Bob Widlar business card? I love the title “ROAD AGENT”. Widlar had style.

And when your engineer friend tells you he has a walk-in closet— this is what he means.

Lojek has an artistic streak. Amongst the pretty glass are a handful of very early galvanometers, some from the 1800s.

More cool galvos and such. I wonder if the founder of Digi-Key has that same telegraph key? Ronald Stordahl started Digi-Key by selling electronic telegraph-key kits to ham radio operators.

Here Bo Lojek admires a framed set of Minuteman missile circuit boards. Jim Williams had an interconnected set in his living room. Check out the Minuteman missile PCBs and Jim Williams in this video.

OK, so I lied. That picture earlier, the one I called Bo Lojek’s lab bench. That was just the emergency downstairs lab bench, useful for quick jobs. Here is the real lab bench. Next time I get to his house, I will fire up that big soldering iron and put it down right before taking the picture, so there will be a wisp of smoke coming off of it, like a cowboy’s six-shooter.

That main bench above has a side bench on another wall.

And books, boy do college professors love books.

It was a real treat to see Bo. He said he is going to try to make it to the next Analog Aficionados party, so I will remind him so he can be among like-minded souls out here in Silicon Valley. The party will be Feb 8, 2014, the Saturday before the IEEE ISSCC conference.

DC distribution in your house and 42-volt cars

I spotted an article in Electronic Products about ac-to-dc converters that fit inside a wall plug. At least that was the intent of the article. Unfortunately it started with a comment about how so much of the stuff in houses runs off dc, yet according to the author, we waste energy by distributing ac and then having every gizmo make dc inside of it. He noted, “A far more efficient solution would be a central dc-grid supply that would power all of your home electronics appliances, as one large PSU wastes less energy than many separate ac/dc converters.” It’s the old Edison versus Tesla/Steinmetz argument over a century later. Edison wanted to distribute dc, since he thought it safer. In fact the problem with dc is transmission losses. Steinmetz and Tesla wanted to distribute ac, since it is easy to convert up and down. You can step it up to kilovolts to transport it across long distances and then step it down to run your toaster or iPod. Now let’s examine the argument that it would be simpler to make one big batch of dc and wire it to all the gizmos in your house.
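The step-up argument is easy to put in numbers. For a fixed load power over a fixed wire resistance, the current is P/V, so the I²R loss in the wires collapses as the square of the distribution voltage. Here is a minimal sketch; the load power, wire resistance, and voltages are made-up illustrative values, not from the article:

```python
# I^2 * R line loss for the same delivered power at different voltages.
# All numbers below are illustrative assumptions.
def line_loss_watts(power_w, volts, line_ohms):
    current = power_w / volts          # current the load draws at this voltage
    return current ** 2 * line_ohms    # power burned in the wiring

P, R = 1500.0, 0.5                     # a 1.5 kW load over 0.5 ohms of wire
for v in (12.0, 120.0, 7200.0):        # low-voltage dc, house ac, a feeder line
    print(f"{v:7.0f} V -> {line_loss_watts(P, R, v):10.3f} W lost in the wires")
```

At 12 V the wires burn more power than the load gets; at distribution voltages the loss is negligible, which is exactly why being able to step voltage up and down won the argument for ac.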

RECOM_power-supply

First, let’s pay the rent and give credit where credit is due. These RECOM power supplies are really neat. Rather than making you read 1000 words, here is the deal: 3 Watts, universal input, isolated, fits in a wall plug, CE and UL approvals. So that is a few syllables more than a haiku, but still most of what you need to know. Oh, I forgot the most important spec: 20 bucks in single quantity at Digi-Key.

OK, back to dc versus ac power distribution. So Edison thought ac was more dangerous for electrocution, but that is not true. In fact the real danger with electricity is in starting fires. That is why dc distribution is so tricky. When you get an arc with ac, it is self-quenching: 50 and 60Hz power goes to zero 100 or 120 times a second, and that really helps extinguish the arc.

Electric_arc

Lesson 1: I designed a 48-volt 200W power supply. I was testing its short-circuit capability. I took the output and ran it across a metal file, like my mentor showed me. He maintains that dragging the wires across a bastard-cut file is even more effective at finding control-loop problems than just touching the wires together. Then the wires did touch together, and when they parted, I got a nice ¼-inch arc that just stayed there, melting the copper wire strands. See, 48 volts is a nice arc-welding voltage. Once you start an arc it just burns and burns.

Lesson 2: When I worked at GMC Truck and Coach, we made trucks and buses with 24V dc power. All the relays and switches would fail much more quickly. We could not use the dirt-cheap relays and switches used on 12V cars; they would fail within a year. As a note, the 24V headlamps and tail lamps failed more often too, since the filaments were twice as long and hence much more delicate and prone to breakage. They also sagged and were hard to aim or focus.

Lesson 3: 42-volt cars. There has been an MIT professor pushing 42-volt systems in cars for over two decades. At first it was supposed to save cost because you could make the wires thinner. But we used 18-gauge wire in cars even for milliamp signals, since 18-gauge wires did not break when dragged through a hole in the body during assembly. So then the rationale was that 42-volt systems could run electrically operated intake and exhaust valves in the engine. Well, we still don’t have electrical valves, although I think they use them in Formula 1 racing. And it turns out you can operate them with 12V if you have to. The real reason 42V cars are not here gets back to that arcing in relays and switches. With 42V cars, every single load has to be switched with transistors; you just can’t use relays or contacts. That might still pay out, since many loads these days are handled with FETs anyway. But here is the deal: you can use 30V FETs in a 12V car, but you need 200V FETs to handle 36V cars. (The charging voltage is 42; the system just uses three 12V batteries, so the uncharged voltage is 36.) But the die size of a FET goes up as the square of the voltage, so tripling the voltage makes the FET die nine times bigger. So you don’t get any real cost savings with 42-volt cars if you still need 18-gauge wires and can’t use relays or switches. And worse yet, all the loads you control with FETs cost nine times as much to switch. Sorry, engineering is science crossed with economics, and college professors never appreciate that cost is king to an engineer.
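The die-size argument above is just arithmetic, and it is worth making explicit. Under the square-law assumption the article uses, the relative area (and roughly the relative cost) of a switch scales as the voltage ratio squared:

```python
# The article's square-law cost argument, made explicit:
# FET die area (and roughly cost) scales as the square of the
# required voltage rating under this assumption.
def relative_die_area(v_new, v_old):
    return (v_new / v_old) ** 2

# Tripling the nominal system voltage (12 V -> 36 V) means each
# transistor switch needs roughly nine times the silicon.
print(relative_die_area(36, 12))
```

So even before you count the unchanged wire gauge, every mechanical switch you replace with a FET carries a 9x silicon penalty, which is the economic wall the paragraph describes.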

So now think about distributing dc in your house. If you use 12 Volts you need copper 10 times thicker and more expensive than the wires that carry 120V. If you up the voltage, even a little, you can’t use any mechanical switches or relays anywhere. On top of that, you still have this incredibly lethal wiring that can kill you in a flash, or will arc like crazy if it gets shorted. Heck even with ac distribution, the NSA data center keeps exploding since they can’t quench the arcs.

Put on the headphones for this little video:

It has taken a while to kill off Edison’s dc distribution. The last bit of it was only decommissioned in 2007. The 120-volt ac power in your house is darn-near perfect. Our good-ol’-American power is much safer than 240V European power. I have read that 60Hz has less chance to screw up your heart when you get shocked than 50Hz. And old-time CRT (cathode ray tube) televisions were brighter since American TV refreshed 60 times a second instead of 50. And the frame rates of our video are 60, which is smoother. No, give me 120V 60Hz. It is not perfect for any one thing, but it is a darn-near-perfect compromise for everything it has to do. If you want to improve power in the home, let’s go to 400Hz like the airplane people.

Now don’t think that dc is all bad. It makes sense to distribute dc if you have to send power to an island or through a single cable. With the wires so close together, the ac losses go way up, and it makes sense to distribute dc. I hear that semiconductors are almost cheap enough that it might make sense to transmit dc over long distances over land, but you will have to run it into an inverter so that you end up with ac that you can distribute to homes.

And I saw a Fairchild presentation where they claim their SuperFET has broken the square law relationship between die size and breakdown voltage. So with that and a way to reliably run thin-gauge wires in automobiles, maybe it does make sense to go to 42V cars. But remember, you now need to have 42V bulbs and such. Maybe with LEDs the bulbs last forever, and who cares what voltage they are. The economics could all change in a few years. But it is economics, not “neat” that determines what will happen.

So please don’t take any of this as a fixed absolute statement. After all, the trucks and buses I worked on were 24V because we needed that much to run a starter motor that could turn over a Diesel engine. But that higher voltage was a pain in every other part of the truck, including when the circuit breakers would arc, catch fire, and burn down the truck, killing some poor guy in the sleeper cab. The world is full of specialists that only think about one small aspect of a problem. To be a good systems engineer you have to look at the whole picture, all while keeping cost, service, and reliability in mind.

Batteries with potential 40-year life

I just saw an ad for a Tadiran battery that claims a 40-year life. This is for a primary battery, not a rechargeable. The claim is based on a 1% per year self-discharge rate. So the math is pretty basic— 40 years at 1% per year leaves more than 50% of the charge remaining to do your bidding. Now the ad, being marketing and all, does not say if it’s 1% of rated capacity per year, or 1% of remaining capacity per year. Either way, you should have plenty of charge left if you figure your power budget with a factor of two over rating to allow for that self-discharge.
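The two readings of “1% per year” give slightly different answers, and it is worth checking that both clear the 50% bar. A quick sketch:

```python
# "1 % per year" self-discharge, read two ways:
# 1 % of rated capacity each year (linear), or
# 1 % of whatever remains each year (compound).
def remaining_linear(years, rate=0.01):
    return max(0.0, 1.0 - rate * years)

def remaining_compound(years, rate=0.01):
    return (1.0 - rate) ** years

print(remaining_linear(40))    # 0.60 of rated capacity left
print(remaining_compound(40))  # about 0.669 of rated capacity left
```

Either reading leaves well over half the charge after 40 years, so the ad’s arithmetic holds up; the factor-of-two power budget covers the difference between the two interpretations with room to spare.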

Tadiran-lithium-thionyl-chloride-battery

Tadiran’s previous lifetime champs were also lithium thionyl chloride (Li/SOCl2) cells. They would claim 15-year lifespans for those. SAFT makes lithium thionyl chloride cells too. I assume Tadiran has made further improvements to get to such a low self-discharge rate for this line, which they call lithium inorganic. But I note the Tadiran ad has the words “…in certain applications.” You see, they can’t tell where or how you will use the batteries. If you leave flux all over the board so that there are leakage paths, you won’t get the 40-year life. If you run them at hot or cold temperatures, you won’t get the 40-year life. If you take out the current in high pulses instead of a gentle steady current, you won’t get the 40-year life. It is not Tadiran’s fault. They have to give you the optimum spec— that is, for a battery with no leakage paths other than its own case, measured at a comfortable temperature in a dry environment.

When I was at EDN I wrote about the 15-year batteries. An alert reader notified me of a scandal in Houston, Texas, since the gas meters needed new batteries much sooner than expected. Once again, it was not the battery maker’s fault. Houston is extremely humid, almost tropical. The batteries in the meters were exposed to this humidity and high temperature, and their life was much shorter.

I designed the power system for an automotive diagnostic tool when I consulted at HP. I thought I had all the battery quiescent currents figured out in a neat little spreadsheet. Then I prototyped the design. The leakage current was much higher than my spreadsheet showed. It turns out battery current was sneaking through the body diode of a back-to-back FET and then into a gate pull-down resistor. I used a 1 MΩ resistor, but 12 volts into a 1 MΩ resistor is still 12 μA. That is way more than the 200nA memory retention current of an AVR XMEGA in shutdown, so don’t let some power-supply leakage path screw up your battery life calculations like I did.
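The numbers in that story make the point by themselves. Using the figures from the paragraph above, the sneak path through one pull-down resistor dwarfs the MCU’s retention current:

```python
# Leakage budget from the story above: 12 V across a 1 Mohm gate
# pull-down, compared to a sleeping MCU's 200 nA retention current.
V_BATT = 12.0
R_PULLDOWN = 1e6           # 1 Mohm gate pull-down resistor
I_MCU_RETENTION = 200e-9   # 200 nA memory-retention current

i_leak = V_BATT / R_PULLDOWN   # Ohm's law: the current the sneak path draws
print(f"sneak path: {i_leak * 1e6:.0f} uA, "
      f"{i_leak / I_MCU_RETENTION:.0f}x the MCU's retention current")
```

One forgotten resistor draws sixty times the current of the part you so carefully put to sleep, which is why the spreadsheet has to include every dc path, not just the chips’ datasheet numbers.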

In 2007 I did a follow-on post about smart meter batteries. The broken first link in it is the EDN article I linked above. So just remember, it is your job, not Tadiran’s, to ensure that the battery life is what you expect in a smart meter. Tadiran can give you the battery, and Atmel can give you the MCU and smart meter ICs, but you have to verify the leakage and current consumption in your exact application, running your exact code, with your exact manufacturing methods. My buddy Eric Schlaepfer, now at Google, was over at Maxim when a customer contacted him and called Maxim liars because the customer was measuring much greater power consumption on one of Maxim’s micro-amp supervisor chips. It turns out the customer was letting the PCB get contaminated with sweaty conductive fingerprints in assembly. The leakage through those fingerprints on the PCB was passing way more current than the integrated circuit.

So brush up on the Keithley low-level measurement handbook (pdf), so you can measure those nanoamperes. And be sure to test your system in temperature and humidity chambers that simulate the real world. And then take measurements in the field to validate all your assumptions. Then and only then will you get 40-year battery life in your products.

Automotive circuit design headaches

I wrote an article for Electronic Design magazine about Bob Pease and his solenoid driver circuit. Former National Semiconductor employee Myles H. Kitchen was nice enough to drop me an encouraging note.

“Thanks for your great article on Bob Pease and the solenoid drivers. Having worked with Bob in the late 1970s and early 1980s at National Semiconductor, I came to appreciate his wisdom and simplicity for addressing issues that seemed simple, but were really quite involved. As someone who’s worked on automotive electronics my entire career, an issue such as a solenoid driver is critical. I recall when testing early automotive product designs at one company, we would put the module under test in a car, and then turn on the 4-way flashers to see if operation was affected, or if it stopped working completely. The combination of multiple inductive and high-current resistive loads operating on and off at several hertz would play havoc with the power supply, and immediately point out design deficiencies in module power supplies, regulation, protection, and noise immunity…. some of which could be traced to poor relay or solenoid driver circuits.  Surviving the 4-way flasher test was only a quick way to see how robust the new design might be, but it was a quick indicator if we had things right up to that point. I miss Bob and his ramblings in ED, but hope to see more of your work in the future.  Loved it.”

Well, having been an automotive engineer at both GM and Ford before moving out to Silicon Valley, I found Myles’s note sparked a flood of memories. His four-way flasher story was prophetic. When I was in college at GMI (General Motors Institute), one of my pals worked at Delco. They were just coming out with the integrated electronic voltage regulator in the back of the alternator, circa 1973. So all the executives were standing around at a demo, and after they oohed and aahed and congratulated themselves, my buddy got in the car and, knowing what Myles knows, cycled the air conditioning switch a few times. The “Charge” light promptly came on.

Auto-warning-lights

I asked my fellow student if he was in trouble or if they hated him for causing the failure, and to GM’s credit, he told me “No, they were actually glad I found it before it went into production.” It must have been some serious egg on some faces, though. After that, survival after repeated AC clutch cycling became part of the spec for the voltage regulator. I bet four-way flashers are included as well.

I later worked on anti-lock brakes for GMC heavy-duty trucks. This was way before anti-lock brakes on cars, about 1975. We dutifully shielded all the wires to the sensors with expensive braided cable. When we pulled the truck out on the road, the brakes started modulating, with the truck just sitting there. We realized that the entire 24V power system was a pretty nice antenna and that noise can get into a module from the power side as easily as from the sensors. We begged the government to give us more time, and they did. Indeed, I didn’t know if they ever got antilock brakes working on heavy trucks. Let me check, yeah, wow, it’s still called MVSS 121 (motor vehicle safety standard) and it finally went into effect in 1997. That was at least a 20-year delay in getting it working.

I told Bob Reay over at Linear Tech that automotive design was the toughest, because you had military temperature and vibration requirements, but consumer cost. He added another factor: the chips for automotive have to yield well, since you need to ship millions. What a crazy challenge.

When I thanked Myles Kitchen for his kind words and told him the above stories, he responded with a great story about load dump. The phenomenon called load dump is usually caused by a mechanic who is troubleshooting the battery and charging system of a car. You get the car running, rev it up a bit, and yank off the battery cable. If the car keeps running, that means the alternator and regulator are OK; it is just a bad battery. Thing is, the alternator is often putting full output into this bad battery. And when you yank the cable off the battery, the voltage regulator controlling the alternator cannot react instantly. So there is a huge overvoltage spike as all the stored energy in the alternator’s magnetic field has to dissipate into whatever loads are still connected, like your radio. A load dump can put over 100 volts on the electrical system. And it is not a fast spike; it can last for hundreds of milliseconds. Smart mechanics just leave the battery cable on and hook up a voltmeter to see if the alternator is putting 13.75 to 14.2 volts on the battery. So Myles recounts:

“Thanks for your email.  Yes, sounds like we’ve run up against many of the common automotive issues in our time.  I’ll add one brief anecdote here.  When I worked at Motorola’s automotive division, I certainly learned all about what a load dump is, but I’d never really heard of anyone experiencing one first-hand and what it could do.  One day, our admin complained that her 70’s vintage Plymouth Duster wasn’t running right, and that her headlamps and radio quit working.  She had been driving it the night before when something went wrong.  We brought it into the garage at Motorola, and found that she had a very discharged battery with very loose battery connections. You could just lift them off with your hand.  As a result, her battery was discharged, and when she hit a Chicago pothole it all went bad.  The resulting load dump had blown out every light bulb filament in the car, along with the radio.  Only the alternator/regulator had survived.  The ignition was still a points and condenser system, or that would have probably died as well.  A new battery, tight connections, and a bunch of replacement bulbs got her back on the road again.  And, I’ve never doubted the need for a load-dump-tolerant design since!”

Those are wise words from someone who has been there and seen it first-hand. And I wonder if the voltage regulator in that old Duster was a mechanical points type. In the early days we automotive engineers would try to protect each individual component from load dump. The radio would have a Zener diode clamp, and so would the cruise control module. Then manufacturers put a big Zener clamp right in the voltage regulator that clamps the voltage on the whole car. Maybe the whole car was too low an impedance to clamp in one place, because now I see there are a lot of smaller distributed TVS (transient voltage suppressor) clamps that you use to protect the circuitry of your own module.

There are two other approaches. One, you can just disconnect your circuit with a high-voltage FET when the load dump happens:

Overvoltage-cut-out-circuit

I used this circuit to keep automotive overvoltage from destroying an LT1513 chip I used as a battery charger. When the DC Bus voltage exceeds the 24V Zener plus the base-emitter drop of Q10, it turns Q10 on and that turns Q12 off and protects downstream circuitry from overvoltage.
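The trip point of that cutoff circuit is just the Zener voltage plus the base-emitter drop, and the behavior is easy to sketch. The 0.65 V base-emitter drop below is my assumed typical value, not a number from the article:

```python
# Behavioral sketch of the overvoltage cutoff: the series pass device
# (Q12) is switched off once the dc bus exceeds the 24 V Zener voltage
# plus Q10's base-emitter drop (~0.65 V assumed here).
V_ZENER = 24.0   # Zener voltage from the circuit description
V_BE = 0.65      # assumed typical base-emitter drop for Q10

def q12_is_on(v_bus):
    # Below the trip point Q10 stays off, so Q12 conducts and power
    # passes through; above it, Q10 turns on and Q12 disconnects the load.
    return v_bus < (V_ZENER + V_BE)

for v in (13.8, 24.5, 28.0, 100.0):   # normal, near-trip, overvoltage, load dump
    state = "powered" if q12_is_on(v) else "disconnected"
    print(f"{v:6.1f} V bus -> downstream {state}")
```

The nice property of this approach, versus a clamp, is that during a 100 V load dump the pass FET simply opens, so nothing downstream has to absorb the energy.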

Alternative two, you can put a high-voltage regulator in front of your circuit that will maintain power to your circuit through the load dump, at the risk that the pass transistor will overheat since it is dropping a lot of voltage while passing current during the load dump. Linear Tech makes such a part.

There is one more tip for every engineer regarding automotive electronics. Remember that there are laws that make auto manufacturers offer service parts for 10 or 15 years. So no matter what your application, you might consider using an automotive part like Atmel’s line of MCUs, memory, CAN/LIN bus, and RF remote controls. We state that we will be making many of these parts for over a decade. If you design them into your industrial, medical or scientific application (ISM) you can have some assurance you can still get the part for years, or at least a pin-for-pin compatible part. That means no board spins. On top of that assurance, most of the parts have extended temperature range, which might help in your application as well. Since we make the parts for high-volume automotive customers, they are usually priced very reasonably.

Boot Linux in a second

When I worked at EDN Magazine I wrote up a story about MontaVista Software. They had gotten a real-time Linux to boot in under a second. This was for an automotive dashboard; the Linux system was displaying a gauge, so it had to start working as soon as you turned the key. Since I just fired up two Atmel MPU (microprocessor unit) demo boards that could support Linux, I thought it would be cool to bring the article to the attention of our MPU group.

It turns out that Atmel 3rd-party partner Timesys was way ahead of me. Frederic in our MCU group pointed me to a video where you can see our Atmel SAMA5D33 eval board booting in a couple of seconds (mp4). Note that this eval board is not just driving a passive display like an instrument cluster. It also has a full user interface that takes touch, mouse, and keyboard inputs. Frederic noted: “An application without a UI will certainly boot in less than a second.”

Linux-fast-boot_Atmel-SAMA5D33

Timesys can get a real-time Linux to boot in less than 3 seconds. It would be even faster if you don’t need a user interface like touch, keyboard, or mouse.

Speaking of big-iron MPUs with external memory, be sure to check out ARM TechCon this week in Silicon Valley. Atmel will be there, and I see MontaVista is an exhibitor as well. I will be at the Atmel booth on and off, as well as checking out some of the conference.

Precision resistors and tolerance stackup in general

This must be the season for great graphics. After seeing the solar cell output over temperature graph a couple days ago, today I see this great article about the reality of using precision resistors. It is from the great folks at Vishay, by way of my former co-workers at ECN Magazine.

Resistor-tolerence

Vishay shows what can happen to their beautiful resistors once you and your customers get your grubby hands on them. TCR means temperature coefficient of resistance.

The same chart got used in an article in EDN, where I worked. The graph also saw use in an Electronic Design article about foil and thin-film resistors. The mother lode was from a Vishay app note by Yuval Hernik.

If you are using a resistor to measure current you should not trivialize the accuracy problems that come with the real world. You can see in the chart that the ±0.05% resistor you buy from Vishay can end up being a ±1% resistor after a few years in the field. It’s not Vishay’s fault. They did not stress the resistor by soldering it into the board. They didn’t expose it to the humidity and temperature gradients that damage the device. They didn’t drop it and shock it and over-voltage it.

The point of this is that you can’t build a product that specs ±0.05% accuracy if you start with ±0.05% resistors. Your customers don’t care what you buy from Vishay and they don’t care what you built. They care about what they use, perhaps years later, at some horrible temperature in some inhospitable humidity at some astronomical altitude. When I was at Analog Devices they had a test for voltage references that had been running for years. Years! This was to evaluate the long-term drift that the parts would exhibit. I am happy to say that the ADI parts seemed better than most.
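This is tolerance stackup in a nutshell: the initial tolerance is only one of several error terms, and you have to combine them. Here is a sketch of the two standard ways to do that; the individual error terms below are illustrative placeholders, not Vishay’s numbers:

```python
# Tolerance stackup, two common ways to combine independent error terms:
# worst-case (sum of magnitudes) and root-sum-square (statistical, valid
# when the terms are independent and roughly random).
import math

def worst_case(terms_pct):
    return sum(abs(t) for t in terms_pct)

def rss(terms_pct):
    return math.sqrt(sum(t * t for t in terms_pct))

# Illustrative error budget for one sense resistor, in percent:
# initial tolerance, TCR over the operating range, soldering shift, life drift.
terms = [0.05, 0.25, 0.3, 0.5]
print(f"worst case: +/-{worst_case(terms):.2f} %")
print(f"RSS:        +/-{rss(terms):.2f} %")
```

Either way you slice it, the ±0.05% you paid for has become the smallest term in the budget, which is exactly what the Vishay chart is telling you.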

And here is the thing— when it comes to these drift problems, no one can tell you what is going on. We simply don’t understand the physics of it. I contend we really don’t understand noise either, but that is an argument for another day. But drift, which you can think of as “dc noise” if you want to mess with your head, is a universal problem. We older folks that used to wait for tube radios to “warm up” seem more comfortable with the concept. But op-amps and maybe even discrete components have to settle in as well. This is not the few microseconds it takes for the internal circuits to start working. It is the minutes or days it takes for the amplifier to come to its final dc offset error.

I have several pals that are trying to make their own test equipment to save money, or just to build things, Maker-movement style. That is fine if you don’t really have to trust it. Believe me, Fluke and Agilent and Tektronix earn every penny they ask you for. This is why I am wary of cheap knock-off test equipment. I would rather buy used name-brand equipment that I can trust to stay accurate over its lifetime.

As to these resistor tolerance issues, one answer is that you calibrate the product every time it’s turned on, or even more often. When I did automotive test equipment at HP (before Agilent split off) my solution was to use the best voltage reference that money can buy. Back then it was Thaler. Since then (1998) I found out that the Thaler part I used was a National Semiconductor part that was hand-selected by Thaler. No matter where you get it, you have to have a low-drift and low TC (temperature coefficient) part. I also used very good initial accuracy parts, since I did not want to have to calibrate the board the first time in the factory.

This way, I had the acquisition system measure its own reference. That way I could calibrate any errors or drift in the attenuator resistors. The other aspect was using a very good crystal. This way you know voltage and time. Most everything else you can derive in firmware. I called it “a rock and a ref,” since rock was slang for the quartz crystals. I still remember Bob Shaw asking me what pots had to be adjusted on the board for manufacturing. I told him there were no trim pots or trim capacitors. He was astonished. I told him about a rock and a ref. I joked that if he really wanted pots I could add them back in. He told me no, and thanked me for designing something that did not need factory calibration, since it just calibrated itself. The other horrible thing about pots is that they are terribly unreliable components. Only electrolytic and tantalum capacitors are worse. If you have vibration, pots are a really bad idea.
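The “rock and a ref” self-calibration idea can be sketched in a few lines. This is my own simplified illustration of the scheme, not the actual HP firmware; the reference voltage and readings are hypothetical values:

```python
# "A rock and a ref," sketched: periodically measure the onboard low-drift
# reference through the same attenuator and ADC path as the signals,
# derive a gain correction, and apply it to every channel reading.
V_REF_TRUE = 2.500   # known value of the low-drift voltage reference

def gain_correction(adc_reading_of_ref):
    # If attenuator drift makes the system read the reference high,
    # this factor scales all readings back down (and vice versa).
    return V_REF_TRUE / adc_reading_of_ref

def corrected(raw_reading, k):
    return raw_reading * k

k = gain_correction(2.525)    # attenuator has drifted +1 %
print(corrected(10.10, k))    # a 10.10 V raw reading corrects back to 10.00 V
```

With voltage pinned by the reference and time pinned by the crystal, everything else really can be derived in firmware, which is why the board needed no trim pots or factory calibration.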

OK, product pitch time, these accuracy problems are why you should think about using Atmel AFE (analog front ends). We make them for the smart power meters. And I don’t mean to imply that Atmel is the only outfit. All the semiconductor makers make AFEs for various tasks. If it can offload your accuracy problems with calibration or the precise accuracy that comes with semiconductor processes, it is always a good deal to pay for an integrated solution rather than build it yourself. For years I told National Semi that people would pay for precise ratiometric resistors. It took Linear Technology to actually make the parts.

Benchmarks for embedded processors

Crack applications engineer Bob Martin was walking by just now and we got to talking about people we both knew from our National Semiconductor days. One name that came up was Markus Levy. Bob told me about EEMBC® — the Embedded Microprocessor Benchmark Consortium.

EEMBC

When I read up on the organization, I was delighted to see that Markus started work on embedded benchmarks while he was at EDN magazine, where I also worked as an editor for 5 years. Back in 1996, it was clear that the old Dhrystone MIPS benchmark was not really meaningful for embedded systems. So Markus got a bunch of industry companies together and proposed new benchmarks. They signed up 12 members right off the bat, which funded the development of real-world benchmarks suitable for phones, tablets, routers, and other embedded systems. As their about page explains:

“EEMBC benchmarks are built upon objective, clearly defined, application-based criteria. The EEMBC benchmarks reflect real-world applications and have expanded beyond processor benchmarks, also heavily focusing on benchmarks for smartphones/tablets and browsers (including Android platforms) and networking firewall appliances.”

I was glad to see that not only is Atmel a member, but so is ARM, who invented the cores used in Atmel's 32-bit SAM line of microprocessors and microcontrollers. When you look at Atmel's benchmark results, you can see our original 8051 processors get a score of 0.1. An 8-bit AVR MCU like the ATmega644 gets a benchmark score of 0.54. In contrast, our ARM-core SAM3 and SAM4 chips score up to 3.3. When I looked at a competitor's Cortex-M4 offerings, I was delighted to see they ranged from 2.0 to 2.8, significantly slower than Atmel's Cortex-M4-based SAM4 chips.

This is congruent with what I hear in the hallways here at Atmel. We didn't just slap some counter-timers on an ARM core and release it. We took the time to do it right, adapting and improving the really cool peripheral system from our XMEGA 8-bit micros. I assume these benchmarks are just for raw speed, but the cool thing about Atmel's peripheral event system is that you can have peripherals interact and do DMA without waking up the CPU core and sucking up a lot of power. Still, it's nice that the benchmark shows us as faster. It might mean you can get some chunk of code to execute faster and then put the micro to sleep, saving power overall. This can be non-intuitive: if the micro's compiler creates more efficient code, you can get way more done with the same amount of power, or less. I know this is true for the 8- and 32-bit AVR processors. The AVR was invented and crafted by hardware engineers who understood the importance of C and computer science in general. Although the entire AVR line did not spring fully-formed from the head of Thor, there were some really crafty Norwegians involved.
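Here is the back-of-the-envelope math on why a faster, hungrier core can still win on energy. The power and timing numbers below are invented for illustration; real figures come from the datasheet of whatever MCU you pick.

```python
# "Run fast, then sleep" energy arithmetic, with made-up numbers.

def task_energy_mj(active_mw, sleep_mw, task_ms, period_ms):
    """Energy per wake-up period: active burst plus sleep for the remainder.

    Energy = power * time, so we sum the two phases and convert to millijoules.
    """
    sleep_ms = period_ms - task_ms
    return (active_mw * task_ms + sleep_mw * sleep_ms) / 1000.0

# Hypothetical slow core: 20 mW active, needs 8 ms for the task.
# Hypothetical fast core: 30 mW active, finishes the same task in 3 ms.
# Both sleep at 0.05 mW and wake up every 100 ms.
slow = task_energy_mj(20.0, 0.05, 8.0, 100.0)   # ~0.165 mJ per period
fast = task_energy_mj(30.0, 0.05, 3.0, 100.0)   # ~0.095 mJ per period
print(slow, fast)   # the hungrier core still wins because it sleeps longer
```

The fast core burns 50% more power while awake, yet uses noticeably less energy per period, because it spends more of each period asleep.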

While the ARM-core SAM chips run ARM instruction sets, they too are optimized for compiled code. After all, AVR showed the world how to do this in 1996. And with Atmel's peripheral concepts, the SAM chips are really something. Check out the new SAM D20 Cortex-M0+ micro for a nice inexpensive chip that can do a whole lot on minimal power.

Current sensing for smart meters and solar panels

In a recent edition of Electronic Products there was a fantastic I/V (current/voltage) diagram of a solar panel. It may have originated at Allegro, where the authors of the article work. It confirmed something I have suspected for a long time: the power output of a solar panel falls as it gets hotter. I will put a low-res version of the graph below, but you really should look at the EP article.

Solar-cell_I-V_curve_sfw

This diagram shows how you get less power out of a hot solar cell. Dotted lines are power out, equivalent to the area under the I/V operating point.

This connects with my realization that a solar cell is like any other photodiode. The forward voltage goes down as it gets hotter, but I was not sure what happened on the reverse side of the I/V curve. With what we call a photodiode, you are usually trying to measure light, not draw power from it. So in many photodiode amplifiers, you short the diode into a virtual ground. With no voltage across it, it is not making any power, but its current output is very linear with respect to the light falling on the diode. And note that this current is a reverse current in the diode. You can think of it as a reverse leakage current that gets way worse when light hits the diode. Indeed, the baseline leakage is called dark current.

Photodide_I-V_curve

A photodiode I/V curve.

So here is the diode I/V curve you might see published in a photodiode amplifier book. Note that if you short the diode, its output has to fall on the –I axis. If you put a negative bias on the diode, and still keep it working into a virtual node so there is no voltage generated across it, you get the leftmost response. The negative (aka reverse) bias does not materially change the output, but it does greatly lower the diode's capacitance, since a photodiode is also a varactor. If you hook a photodiode, which is pretty much any diode there is, to a resistor, it will make current, but that current flowing in the resistor will also make a voltage across the diode. That gives you the resistive-load response in the chart. The value of the resistor sets the slope of that load line. Note that the output is no longer linear; doubling the light does not double the output current.
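If you want to see that load-line nonlinearity for yourself, here is a small numerical sketch. The diode model and every part value below are illustrative assumptions, not measurements of any real photodiode.

```python
import math

# Solve the photodiode load line numerically: the photocurrent splits
# between the diode's own forward conduction and the load resistor.

I_S = 1e-9      # diode saturation current, amps (assumed)
V_T = 0.026     # thermal voltage at room temperature, volts
R_LOAD = 50e3   # load resistor, ohms (assumed)

def load_line_current(i_photo):
    """Current delivered to R_LOAD for a given photocurrent i_photo.

    Solves i_photo - I_S*(exp(V/V_T) - 1) - V/R_LOAD = 0 for the diode
    voltage V by bisection, then returns the resistor current V/R_LOAD.
    """
    f = lambda v: i_photo - I_S * math.expm1(v / V_T) - v / R_LOAD
    lo, hi = 0.0, 1.0            # f(lo) > 0, f(hi) < 0 brackets the root
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo / R_LOAD

i1 = load_line_current(10e-6)   # some amount of light
i2 = load_line_current(20e-6)   # twice the light
print(i2 / i1)                  # well under 2: the diode steals the excess
```

Doubling the photocurrent raises the voltage across the diode, which turns the diode on harder, so the resistor sees much less than double the current.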

Realize that the Allegro solar cell curve is showing you the bottom right quadrant of the generalized photodiode curve above. What the solar cell folks do is redefine positive current as what actually comes out of the cell, as opposed to defining positive current as a forward diode current. So if you imagine flipping the photodiode curve around its x-axis, and then tossing out the left side and the whole bottom half as well, you get the Allegro curve. Note that shining light on a solar cell or photodiode will never make forward diode current, but it will affect the operating point if you are putting forward current into the diode.

And note that you can't get any power out of a cell unless you get both current and voltage at the same time. Short the solar cell and you get the most current, but no power. Leave the cell open circuit and you get the most voltage, but with no current flowing you are not getting power either. So what you want to do is change the load on the cell until its operating point on the I/V curve has the most area under it.
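Since power is just voltage times current, finding the best operating point from a swept I/V curve is a one-liner. The sample points below are made up, but shaped roughly like a real cell's curve:

```python
# Find the maximum power point from hypothetical (volts, amps) samples,
# swept from short circuit toward open circuit.

curve = [
    (0.0, 3.00), (0.1, 2.99), (0.2, 2.97), (0.3, 2.93),
    (0.4, 2.85), (0.45, 2.70), (0.5, 2.30), (0.55, 1.40), (0.6, 0.0),
]

def max_power_point(iv_samples):
    """Return (V, I, P) for the sample with the largest rectangle V*I."""
    v, i = max(iv_samples, key=lambda p: p[0] * p[1])
    return v, i, v * i

print(max_power_point(curve))
```

Note that the endpoints of the curve (the short-circuit and open-circuit points) each yield zero power, exactly as the text says.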

Solar-cell_I-V_MPPT

The red rectangle is smaller since it does not have enough voltage. The blue rectangle is smaller because it does not have enough current. The green rectangle has the maximum area and hence is the MPP (maximum power point) of the solar cell.

So that is what the MPP (maximum power point) and MPPT (maximum power point tracking) concepts are all about. You get no power if you short the cell or leave it unconnected. What you are trying to do is maximize the area under the operating point, because power is current times voltage, just like area is X times Y. The MPP chart I hacked up above shows three different operating points. You can see that the big dot corresponds to the rectangle with the greatest area. If your magnificent Atmel microcontroller multiplies out the voltage and current in real time, it can dither the operating point of the dc-dc converter that is taking the solar cell power and putting it into a battery or onto the ac line. This is the “T” in MPPT. By tracking the maximum power point, you get the most power you can from any particular solar cell, at any particular temperature, at any given illumination.
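A classic way to do that dithering is “perturb and observe”: nudge the operating point, and keep moving in whichever direction raises the power. Here is a minimal sketch, with a toy cell model standing in for the real voltage and current sensors:

```python
# Perturb-and-observe MPPT sketch. cell_current() is an illustrative
# stand-in for reading the current sensor at a given operating voltage.

def cell_current(v):
    """Toy solar-cell model: current sags sharply near open circuit (0.6 V)."""
    return max(0.0, 3.0 * (1.0 - (v / 0.6) ** 8))

def perturb_and_observe(v_start, step=0.005, iterations=200):
    """Dither the operating voltage, reversing whenever power drops."""
    v, direction = v_start, 1.0
    p_last = v * cell_current(v)
    for _ in range(iterations):
        v += direction * step          # nudge the converter's set point
        p = v * cell_current(v)        # measure V times I
        if p < p_last:                 # power fell, so reverse direction
            direction = -direction
        p_last = p
    return v, p_last

v_mpp, p_mpp = perturb_and_observe(0.3)
print(v_mpp, p_mpp)   # settles into a small limit cycle around the MPP
```

In a real inverter the loop never stops dithering; that is what lets it track the MPP as temperature and illumination change.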

Now please read that Electronic Products article about measuring current, since you may want to use those Allegro current sensors in your MPPT inverter, smart meter, or other application. Atmel makes microcontrollers with security features, and some have integrated power line communications (PLC) modems. We also have parts that integrate the AFE, so you don't need these external parts in your smart meter. So if you need to measure, log, report, and control current, keep Atmel in mind.

Atmel_Smart-energy-metrology-platform

Oh, and in case somebody hasn't thought of it yet, it seems obvious to one skilled in the art that you can combine the shaded evaporative cooling systems that spray water on your roof beneath shutters with solar panels as the shutters, so now you are cooling both the roof and your panels. Step C, more power.

Evaluating the SAM9N12 and SAMA5D3 MPUs

I was lucky enough to catch a presentation on our big-iron MPU (microprocessor unit) chips. Atmel is rightly famous for our MCUs, microcontroller units that have flash memory inside the chip. That includes our 8- and 32-bit AVRs and our ARM-core SAM D20, SAM3, and SAM4. Indeed, one of the cool things Atmel did was license the ARM7TDMI Thumb core and make it into our SAM7 series MCUs. But Atmel makes MPUs as well; these use external memory. These parts, such as the SAM9N series and the newer SAMA5D3, are much more powerful than the average microcontroller of any make.

SAM9N12-EK_SAM5D3x-MB

The SAM9N12 eval board (left), and the SAMA5D3 eval board. These are complete computers that sip just a few watts of power.

SAM9N12-EK_SAM5D3x-MB_backside

The SAM9N12 eval board (left), and the SAMA5D3 eval board from the back.

SAM5D3x-MB_jumpers

You can tell Atmel has experienced hardware folks. We put the SAMA5D3x-EK jumper settings right on the silkscreen. Nice.

You could use the parts to make a human-machine interface (HMI) for industrial control, or a kiosk, or one of those super-fancy thermostats. Bar code scanners or gateways and routers can be fashioned from the A5, since it has good on-board communication. The SAMA5D3x can run Linux just fine. It can even do Android, but there it is better for “headless” applications, since Android's interpreted-language overhead makes it hard for the A5 to both run the code and drive the LCD display plus touch interface at the same time.

SAM9N12-block-diagram

The SAM9N12 block diagram. The eval board has even more functions, including a Zigbee module socket. There is also a pot or volume knob on the board not shown here.

And be sure to consider the older SAM9N12. It's not as powerful, but as you would expect, it uses even less power to do its thing. Right now (2013) the SAM9N12-EK eval kit is discounted and you can pick one up for $199 from the Atmel store. I could not find a power spec on the eval kit, so I brought in my handy Kill A Watt meter.

SAM9N12-EK_boot-screen

The Kill A Watt never goes above 2W as the SAM9N12-EK boots and runs. Ignore that old Atmel logo—this was an old board lying around, although we still use this logo on our chips as a distinctive mark.

I was delighted to see the Kill A Watt never got above 2 watts. And that is 2W from the wall outlet, including the losses in the wall wart transformer. This just astonishes me. The pre-loaded app on the SAM9N12-EK runs Linux and boots into a slide show. You can select Qt display driver demos and several graphics displays to show off the capabilities of the chip. There is a resistive touch screen on the LCD. It does not work anywhere near as well as Atmel's capacitive touch screens, but it comes with the LCD module.

I fired up the SAMA5D3x-EK as well, and was pleased to see the Kill A Watt only showed 3W coming from the wall outlet. For as powerful as the SAMA5 is, this is an amazing achievement.

SAMA5D3-EK_launch-screen

The SAMA5D3 uses 3 watts while providing a full operating system and Ethernet connectivity.

A quick check at the Atmel Store shows the SAMA5D3-EK goes for $595. That is not pocket change, but remember this thing has the power of the desktop computer you used a few years ago. And we give you the schematics, the design files, and sample applications to get you started. One great thing about the SAMA5D3x board is that the CPU and memory are on their own module. When I talked to the head of the business unit, he explained that we thought this was the best way to give customers a leg up on their development. Now you don't have to worry about the touchy PCB layout of the CPU and memory system; you can buy it as a module, even in higher volumes, from Atmel's third-party partners.

So this time I just wanted to give an overview of these powerful Atmel chips. Next I will fire up each board and show it in more detail. And stay tuned—Atmel has even more powerful chips and systems coming, and I will be sure to tell you all about them.

Made in Space 3D printing startup speaks at Atmel

Friday saw quite a buzz here at Atmel when the founders of the start-up Made in Space participated in a speaking event.

Made-in-Space_Atmel-sponsor

Atmel hosted start-up Made in Space to talk about their 3-D printer.

The first-floor training room was packed. In attendance were the Mayor of Mountain View, a retired astronaut, and people from NBC News. Made in Space founder Jason Dunn talked about how useful it would be to have a manufacturing method in space. In keeping with the recent craze for 3D printing, Made in Space is well along the way to sending a 3D printer to space.

Made-in-Space_Jason-Dunn

Jason Dunn expands and explains his rationale for putting a 3D printer in space.

At first the team tried to adapt an existing 3D printer for space use. They rented time on those parabolic flights where you are weightless for a minute or two. Every 3D printer they tried had severe limitations. Indeed, a recent review in Product Design and Development indicates that many 3D printers don't work on Earth, much less in orbit. You can see how, if a 3D printer needs to be precisely leveled in order not to damage itself, there is little chance it would ever work in space. And don't forget a 3D printer intended for space use will need to withstand the G-forces of launch.

Made-in-Space-diagram

There was a definite startup vibe in the room. I’ve been to those edgy companies that scribble directly on the wall. I guess brown paper serves when you are on the road.

Now, last time I checked, it was $10,000 a pound to put something into orbit. So the business case for 3D printing in space is that you make parts as you need them. Jason maintains that 3D printing could make 30% of the spare parts on the Space Station. I find that a little hard to believe. Let's face it, 3D printing makes inferior structural components that have nowhere near the properties of injection molded or machined parts. The space program uses Delrin, polyimide, and thermoset high-performance engineering plastics. To my knowledge the “additive string” type of printer cannot use these high-zoot engineering thermoplastics. Even if it could, the resulting parts are never as strong as an injection molded part.

Made-in-Space-crowd

There was a healthy crowd at the Atmel-sponsored function.

Still, you can see how compelling it is to be able to manufacture in space. You can check out Jason's TEDx talk to see his vision. The second he started his presentation here at Atmel, I could not help but think of the Apollo 13 disaster. If only those astronauts had had a 3D printer, they could have easily made a part to adapt the Command Module CO2 scrubber canisters to the Lunar Module design. Sure enough, the Made in Space people also thought of this scenario. So they gave an intern the job of designing and building a part that would have done the job. It took him less than an hour to design the part, and the printer built it in a few hours more. That sure would have lowered the blood pressure of those three stranded astronauts. And Jason noted that it is the ground crew that can design the parts, further offloading the astronauts so they can concentrate on the space-based tasks they need to get done.