Tag Archives: engineering

This engineer is building a 45-foot-long “mega processor” in his home

As the race for the fastest and tiniest chip continues, one engineer has gone in the entirely opposite direction. 

While most companies and Makers are racing to build the smallest microprocessors, UK-based engineer James Newman has decided to go the opposite route by developing a 45-foot (14-meter) “mega processor” inside his home. The machine, which is intended to make the hardware hidden inside even the smallest of MCUs visible, will consist of 14,000 individual transistors and 3,500 LEDs, and will stand approximately 6.5 feet high.

(Source: James Newman)

As first reported by the BBC, Newman says that the 16-bit machine will operate the same way as any standard microprocessor found in today’s computers. However, the engineer reveals that the mega-processor will only be capable of speeds no greater than 25 kHz. To put that into perspective, the speed of an average desktop PC is around 2.5 GHz, roughly 100,000 times faster.

Newman hopes to complete the project by the end of 2015; the first programs he intends to run on his unit will likely be Tetris, Pong, Noughts and Crosses, and John Conway’s Game of Life.

(Source: James Newman)

“When it’s set up and running in the living room, there won’t be much space for living,” the engineer told BBC News. “One of the fantasies is to line the hallway with it.”

To date, the project has been quite the undertaking, having already consumed over three years of development time and $32,000 in supplies.

(Source: James Newman)

Intrigued? You can follow along with the engineer’s entire build on his website.

These kirigami-based, stretchable batteries may power future wearables

Stretchable batteries inspired by origami could one day power smartwatches and other wearable devices, researchers say.

Researchers at Arizona State University have created a battery that can stretch up to 150% of its original size, opening the door to a wide range of potential applications in wearable technology. Using kirigami, a variant of origami that combines folding with cutting, the team was able to transform a larger battery into several smaller ones through a series of folds and cuts. As smart fabrics, watches and other devices continue to emerge, companies will surely be eager to embrace such an easily integrated, flexible power supply as opposed to its much more rigid siblings.


The team, led by associate professor Hanqing Jiang, developed the battery by coating slurries of graphite and lithium cobalt oxide onto sheets of aluminum foil to make the negative and positive electrodes, then adding bends and cuts to establish the kirigami patterns. The result? A battery that could stretch while still maintaining full functionality.

Although engineers have used origami as inspiration for foldable, flexible batteries in the past, the team says this marks the first time a lithium-ion battery has been made truly stretchable.

“Energy-storage device architecture based on origami patterns has so far been able to yield batteries that can change only from simple folded to unfolded positions. They can flex, but not actually stretch,” the researchers explain.

To test the design, the kirigami-driven prototype battery was sewn into an elastic wristband attached to a Samsung Gear 2 smartwatch. Even as the strap was stretched in various ways, the battery continued to power the watch and its functions, including video playback.

You’ll want to see it in action below!

Percepio Trace: Improving response time

Discover how a developer used Tracealyzer to compare runtime behaviors and improve response time.

With time-to-market pressures constantly on the rise, advanced visualization support is a necessity nowadays. For those unfamiliar with Percepio, the company has set out to accelerate embedded software development through world-leading RTOS tracing tools. Tracealyzer gives Makers, engineers and developers alike a new level of insight into the run-time world, allowing for improved designs, faster troubleshooting and higher performance. What has made it such a popular choice among the community is that it works with a wide range of operating systems, including Linux and FreeRTOS, among several others.


When developing advanced multi-threaded software systems, a traditional debugger is often insufficient for understanding the behavior of the integrated system, especially regarding timing issues. Tracealyzer is able to visualize the run-time behavior through more than 20 innovative views that complement the debugger perspective. These views are interconnected in intuitive ways, making the visualization system powerful and easy to navigate. Beyond that, it integrates seamlessly with Atmel Studio 6.2, providing optimized insight into the run-time behavior of embedded software through advanced trace visualization.

Over the next couple of months, we will be sharing step-by-step tutorials from the Percepio team, collected directly from actual user experiences with Tracealyzer. The previous segment showed how a developer used Tracealyzer to solve an issue with a randomly occurring reset; today, we’re exploring how the tool can help improve response time.

In this scenario, a user had developed a networked system containing a TCP/IP stack, a Flash file system and an RTOS running on an ARM Cortex-M4 microcontroller. The system comprised several RTOS tasks, including a server-style task that responds to network requests and a log file spooler task. Response time on network requests had often been an issue, and when testing their latest build, the system responded even slower than before. So, as one can imagine, they really wanted to figure this out!

But when comparing the code of the previous and new versions, they could not find any obvious reason for the server task's increased response time. There were some minor changes due to refactoring, but no significant functions had been added. However, since other tasks had higher scheduling priority than the server task, there could be many other causes. Therefore, they decided to use Tracealyzer to compare the runtime behaviors of the earlier version and the new version, in order to see the differences.

They recorded traces of both versions under similar conditions and began the comparison at the highest level of abstraction, i.e., the statistics report (below). This report can display CPU usage, number of executions and scheduling priorities, as well as metrics like execution time and response time, calculated for each execution of each task and interrupt.


As expected, the statistics report revealed that response times were, in fact, higher in the new version, about 50% higher on average. The execution times of the server task were quite similar, only about 7% higher in the latter. The remaining difference in response time therefore had to come from interference by other tasks.

To determine what was causing this disparity, one can simply click on the extreme values in the statistics report. This focuses the main trace view on the corresponding locations, enabling a user to see the details. By opening two parallel instances of Tracealyzer, one for each trace, you can compare the two side by side, as illustrated below.


Since the application's server task performs several services, two user events were added to mark the points where the specific request is received and answered, labeled “ServerLog.” The zoom levels are identical, so you can clearly see the higher response time in the new version. What’s more, this also shows that the logger task preempts the server task 11 times, compared to only six times in the earlier version, a pretty significant difference. Moreover, it appears that the logger task runs at a higher priority than the server task, meaning every logging call preempts the server task.

So, it seems new logging calls were added in the new version, causing the logger task to interfere more with the server task. To see what is being logged, add a user event in the logger task to show the messages in the trace view. Perhaps some can be removed to improve performance?


Now, it’s evident that other tasks also generate logging messages that affect the server task's response time, for instance the ADC_0 task. To see all tasks sending messages to the logger task, one can use the communication flow view, as illustrated below.


The communication flow view is a dependency graph showing a summary of all operations on message queues, semaphores and other kernel objects. Here, the view covers the entire trace, but it can also be generated for a selected interval (and likewise for the statistics report). For example, a user can see how the server task interacts with the TCP/IP stack. Note the interrupt handler named “RX_ISR” that triggers the server task using a semaphore, such as when there is new data on the server socket, and the TX task that transmits over the network.

But back to the logger task: the communication flow reveals five tasks that send logging messages. Double-clicking the “LoggerQueue” node in the graph opens the Kernel Object History view, which shows all operations on this message queue.


As expected, you can see that the logger task receives messages frequently, one at a time, and blocks after each message, as indicated by the “red light.”

Is this really a good design? It is probably not necessary to write the logging messages to file one by one. By raising the scheduling priority of the server task above that of the logger task, the server task would not be preempted as frequently and would therefore be able to respond faster. The logging messages would simply be buffered in LoggerQueue until the server task (and any other high-priority tasks) had completed; only then would the logger task resume and process all buffered messages in a batch.

They tried exactly that, and the screenshot below shows the server task instance with the highest response time after its scheduling priority was raised above the logger task's.


The highest response time is now just 5.4 ms instead of 7.5 ms, which is even faster than in the earlier version (5.7 ms) despite the additional logging. This is because the logger task no longer preempts the server task, but instead processes all pending messages in a batch once the server task has finished. Here, one can also see “event labels” for the message queue operations. As expected, there are several “xQueueSend” calls in sequence, without any blocking (which would appear as red labels) or task preemptions. There are still preemptions by the ADC tasks, but these no longer cause extra activations of the logger task. Problem solved!

The screenshot below displays LoggerQueue after the priority change. In the right column, one can see how the messages are buffered in the queue, enabling the server task to respond as quickly as possible, with the logging messages then processed in a batch.


These tiny robots can carry loads 100 times their weight

Inspired by a gecko, one tiny bot can pull objects that are nearly 2,000 times heavier than itself. 

Whoever said big things can’t come in small packages has surely never seen these robots. That’s because Stanford University engineers have built miniature bots capable of hauling things that weigh over 100 times more than themselves.


Impressively, the strongest of the bots, which are aptly named MicroTugs, weighs only 12 grams yet is capable of pulling objects that are nearly 2,000 times its weight. Another, a 9-gram climbing robot, can carry over a kilogram vertically up glass. To put these feats into perspective, co-creator David Christensen says they are the equivalent of a person dragging a blue whale and climbing up a skyscraper while lugging an elephant, respectively. Even a 20-milligram bot can tote up to 500 milligrams, roughly the weight of a paper clip.

How can this be, you ask? The robots borrow techniques from inchworms and geckos as they traverse their terrain. Inspired by the gecko, the engineers covered the robots’ feet with tiny rubber spikes that bend when pressure is applied and straighten out when the robot picks its foot back up. The team of researchers also adopted the inchworm’s method of locomotion: while one half of its body moves forward, the other stays in place to support the heavy load being pulled. This allows the bot to climb walls without losing its grip, New Scientist explains.

“This work demonstrates a new type of small robot that can apply orders of magnitude more force than it weighs. This is in stark contrast to previous small robots that have become progressively better at moving and sensing, but lacked the ability to change the world through the application of human-scale loads,” the pair of engineers write.


Just think: a robot bringing your coffee across your desk when it’s out of reach, or picking up a pen dropped on the floor. That’s not the end-game, though. In the future, the team hopes that machines like these could prove useful in factories, on construction sites, and even in emergency scenarios. For instance, one might carry a rope ladder up to a person trapped on a high floor of a burning building.


The mighty bots will be presented next month at the International Conference on Robotics and Automation in Seattle. Intrigued? Delve deeper into the Stanford engineers’ research and development here, and be sure to watch them in action below!

Robot Garden hopes to make coding more accessible for everyone

This robotic garden demonstrates distributed algorithms with more than 100 origami robots that can crawl, swim and blossom.

Created by MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and the Department of Mechanical Engineering, the aptly named Robot Garden is defined as “a system that functions as a visual embodiment of distributed algorithms, as well as an aesthetically appealing way to get more young students, and particularly girls, interested in programming.”


At its core, the project is a tablet-operated system that illustrates MIT’s cutting-edge research on distributed algorithms using robotic sheep that were created through traditional print-and-fold origami techniques, origami flowers (including lilies, tulips and birds of paradise) that are embedded with printable motors enabling them to ‘blossom’ and change colors, as well as magnet-powered robotic ducks that fold into shape by being heated in an oven.

“Students can see their commands running in a physical environment, which tangibly links their coding efforts to the real world. It’s meant to be a launchpad for schools to demonstrate basic concepts about algorithms and programming,” explains Lindsay Sanneman, a lead author on the recently-accepted paper for the 2015 International Conference on Robotics and Automation.

The project consists of 16 different tiles, each connected to an Atmel-based Arduino board and programmed using search algorithms that explore the space in different ways. The garden itself can be controlled by any Bluetooth-enabled device, either by clicking on flowers individually or through a more advanced “control by code” feature that lets users add their own commands and execute sequences in real-time. In fact, users can interact with the garden through a computer interface, allowing them to select a tile and inflate/deflate the flower or change the color of its petals.


“The garden tests distributed algorithms for over 100 distinct robots, which gives us a very large-scale platform for experimentation,” says CSAIL Director Daniela Rus, who is also a co-author of the paper. “At the same time, we hope that it also helps introduce students to topics like graph theory and networking in a way that’s both beautiful and engaging.”

The project was recently displayed at CSAIL’s “Hour of Code” back in December, where it surely did its part in inspiring kids to get interested in STEM-related disciplines. In the near future, the researchers hope to make the garden operable by multiple devices simultaneously, and may even experiment with interactive auditory components such as microphones and music that would sync to movements.

Interested? Head over to MIT’s official page here, and be sure to watch the garden in action below.

Video: Vegard Wollan reflects on life and innovation

In the final segment of my interview with AVR microcontroller creator Vegard Wollan, I asked about his background and innovation at Atmel.

In response to my question of how he views his expertise, Vegard noted that he started out as a computer architect and digital designer. The ease-of-use DNA in the AVR product line is easy to trace when Vegard adds that he soon came to see himself as someone who could make life easier for embedded designers. I think this focus on the customer pervades all of Atmel to this day.


Vegard Wollan reflects on his history of innovation at Atmel.

I went on to ask Vegard what he does in his spare time. His response? Exercising and boating off the beautiful, dramatic Norwegian coastline. I think physical activity is a key thing. In fact, I wish someone had warned me as a young man that engineering has an occupational hazard: you can make a good living sitting at a desk. This was less true when I was an automotive engineer, as I had to go to the experimental garage and walk around Ford’s giant complex in Dearborn, Michigan. Nowadays, we all seem chained to a computer and stuck in a chair all day long. So, exercise and boating sound like a great way to stay active and balance our lives a little bit!

As I pictured Vegard sailing around Norway looking at beautiful sunsets, I wondered if that was what inspired him to be so innovative. He responded that the primary source of innovation at Atmel is working with a team of creative, innovative people. I think this is true in most human endeavors. When I asked my dad why some restaurants had really good service, he noted that good people like to work with other good people. So Vegard is spot-on, and quite humble, in noting that innovation comes from a team, not any single person.

Want to learn more about the backstory of AVR? You can tune-in to the entire 14-part series here.

When it comes to firmware, when in doubt don’t leave it out!

Product design teams endeavor to plan the safe launch of electronics products so as to avoid re-discovering issues that should have been learned from the previous project. Many users of Serial Electrically Erasable Programmable Read-Only Memories (SEEPROMs) are working with such components for the first time and therefore may not be aware of the potential pitfalls. Here is a personal story from several years ago, when I was asked to support a customer working on an issue over a weekend. (You may have already guessed that the call that came to me that weekend was from my boss’s boss’s boss.)


Here’s the issue that was described to me over the phone by the customer engineers (hardware and firmware) while they were in their laboratory troubleshooting:

We exchanged emails with DSO (digital storage oscilloscope) captures of the serial protocol, after which I would request another DSO capture or two. Once we were drilling down to the issue, a customer firmware engineer held the phone line while the customer hardware engineers made more measurements. The firmware engineer asked me, “Why would someone drive the SEEPROM /CS signal low (true) and then back high (false) with no clocks or data in?” I quickly whipped out, “That is a chip select toggle that is utilized to recover from power interruption of the host microcontroller or from a protocol violation, and we have a Jurassic-period FAQ about that buried deep in our website.” The firmware engineer said, “Uh oh, I didn’t know why anyone would do that, so I took it out.” Soon, the hardware engineers emailed me a DSO capture showing a protocol violation followed by no communication from the SEEPROM. I announced that the firmware engineer had the solution to this issue and should be able to produce a new firmware build to mitigate this situation in the future.

Several product lines were brought to a standstill because the task of reducing firmware lines of code took precedence over understanding why the code was there to begin with. Numerous engineers (including myself) have worked weekends unnecessarily. The moral of the story is that if you have product firmware that communicates properly with an Atmel SEEPROM and you do not know why a few lines of code exist, then you may want to ask yourself about the expected benefit of modifying that code before you throw the baby out with the bath water. Sometimes things are there for a reason that may not be all that obvious.

Stick to the adage: “When in doubt, don’t leave it out.”

Oh, and one more thing… Please comment your firmware source files adequately to help the next firmware developer. Remember that person may just end up being a future version of you!

This blog was written by Clay Tomlinson, Atmel Staff Applications Engineer