Tag Archives: Google

Google and Levi’s to make smart clothing that controls your devices


Google wants to turn your jeans into actual smarty pants.


Controlling a tablet from the sleeve of your jacket or answering a call with a tap of your jeans isn’t something you can normally do — yet, at least. However, if Google’s ATAP division has its way, that will change. That’s because at I/O 2015, the company revealed its grand plan for making clothing much more connected, ultimately ushering in the first wave of “smarty” pants.


During its presentation, Google announced an initiative to weave touch and gesture connectivity into any textile, such as denim or wool, using standard industrial looms. Project Jacquard will enable everyday garments, and even furniture, to be transformed into interactive surfaces that can serve as trackpads and buttons for controlling existing apps, phone features and more.

While wrist-adorned devices have stolen most of the wearable spotlight as of late, the emergence of less invasive devices holds the true potential to disrupt the space. So much so, in fact, that shipments of electronic textiles are projected to skyrocket over the next five years, with more than 10 million articles of smart clothing shipped annually. Undoubtedly, a figure like that will only be possible with a little help from Google.


These wearable objects will work by receiving information directly from the conductive material and then transmitting the data to a nearby device or computer over low-powered Wi-Fi. Meanwhile, LEDs, haptics and other embedded outputs provide feedback to the user, seamlessly connecting them to the digital world.
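To make that data flow concrete, here is a minimal, hypothetical sketch in C of the sense-transmit-feedback loop: sample the conductive grid, classify a gesture, relay it wirelessly and confirm with LED/haptic feedback. Every name, grid size and threshold here is invented for illustration; Google has not published a Jacquard firmware API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { GESTURE_NONE, GESTURE_TAP, GESTURE_SWIPE } gesture_t;

/* Stub: on real hardware this would sample capacitance from the woven grid. */
static void read_grid(uint16_t grid[8][8]) { grid[3][4] = 612; }

/* Stub classifier: any cell above a threshold counts as a tap. */
static gesture_t classify(uint16_t grid[8][8]) {
    for (int r = 0; r < 8; r++)
        for (int c = 0; c < 8; c++)
            if (grid[r][c] > 500) return GESTURE_TAP;
    return GESTURE_NONE;
}

/* Stub: would hand the event off to a low-power radio stack. */
static bool radio_send(gesture_t g) { printf("sent gesture %d\n", g); return true; }

int main(void) {
    uint16_t grid[8][8] = {{0}};
    read_grid(grid);
    gesture_t g = classify(grid);
    if (g != GESTURE_NONE && radio_send(g))
        printf("LED pulse + haptic buzz\n"); /* feedback sewn into the garment */
    return 0;
}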

In order to make this a reality, the first thing the team had to do was create a yarn that could be produced and woven into clothes at mass scale. And so, ATAP has been working on a textile that combines ultra-thin metallic alloys with common yarns such as cotton, polyester or silk. The end result is a fabric that’s strong enough to be employed in everyday clothing and home interior items, all while looking good enough for actual use.


Keep in mind, this isn’t the first time conductive material has been woven into fabric. If you recall, scientists in Shanghai recently developed battery technology that could be woven into cotton. However, Google has much bigger aspirations: making these sorts of products less of a novelty and more of an everyday thing. And so, the tech giant has tapped Levi’s as Project Jacquard’s launch partner.

Levi’s believes that smart clothing could one day allow people to interact more with the physical world around them instead of constantly staring down at their mobile screens. Not to mention, having a big-name fashion brand behind them will surely help Google expedite the process of bringing smart clothing mainstream.

Whether this comes to fruition or not, one thing is for certain: the success of wearables hinges on unobtrusiveness. Take Tappur, for example, which is turning the human body into a musical instrument, electronics controller and gamepad. Or Maker Katia Vega, who introduced James Bond-like technology that lets users discreetly open applications, send preset messages and broadcast their location with a stroke of their hair.

Perhaps Google’s Ivan Poupyrev sums up the future of this technology best: “You would not call it a wearable, you would call it a jacket.”


Intrigued? Head over to the project’s official page here.

Google patents customizable robot personalities


Newly patented system would allow users to download the personality of a celebrity or a deceased loved one to a robot.


Google has been granted a patent that would allow the company to develop downloadable robot personalities drawn from the cloud, such as that of your favorite celebrity or even a deceased loved one.


“The robot personality may also be modifiable within a base personality construct (i.e., a default-persona) to provide states or moods representing transitory conditions of happiness, fear, surprise, perplexion (e.g., the Woody Allen robot), thoughtfulness, derision (e.g., the Rodney Dangerfield robot), and so forth,” the filing reveals.

Just as you would download an app, Google’s patent details how a user could download various actions and personalities. The robot would use information from a person’s mobile devices, such as calendar entries, emails, text messages, call logs, web browsing history and TV viewing schedules, to determine a personality to take on that would suit the user. Beyond that, friends will even be able to clone their robots and exchange aspects of their personalities.

“The personality and state may be shared with other robots so as to clone this robot within another device or devices. In this manner, a user may travel to another city, and download within a robot in that city (another ‘skin’) the personality and state matching the user’s ‘home location’ robot. The robot personality thereby becomes transportable or transferable,” the document continues.

Google also outlines a number of examples where the robot can learn human behavior and adapt accordingly, whether that’s knowing a user is grumpy when it’s raining outside, needs coffee before heading off to work, or can’t eat particular meals due to food allergies.

“For example, the user may be allergic to mangos and may update a user-profile to include such information. Simultaneously, a robot may update the user-profile to include that the user is also allergic to peanuts. When the dining fare is French cooking, the robot may be queued to adopt the persona of Julia Child.”

Based on the information in its user-profile, the robot can even adopt a butler persona and offer up suggestions. Meanwhile, users can interact with the robot and tell it when it has done something wrong, and the robot can also be programmed to provide a desired look.
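As a rough illustration of how profile signals might map to a persona, here is a hedged C sketch. The context fields and persona strings are invented; only the French-cooking example comes from the filing itself.

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool raining;              /* the patent's grumpy-when-raining example */
    bool before_morning_coffee;
    bool serving_french_food;  /* the filing's Julia Child example */
} user_context_t;

static const char *choose_persona(const user_context_t *ctx) {
    if (ctx->serving_french_food)   return "Julia Child";
    if (ctx->raining)               return "quiet, tactful butler";
    if (ctx->before_morning_coffee) return "minimal-chatter assistant";
    return "default persona";
}

int main(void) {
    user_context_t ctx = { .raining = true };
    printf("persona: %s\n", choose_persona(&ctx));
    return 0;
}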

Robots that mimic humans are still very much in their infancy, and truthfully there’s no telling where this technology can go — especially when backed by giants like Google. And while there’s no guarantee that this patent will ever come to fruition, it may very well be the next step in making human-robot relationships a reality. Intrigued? You can read the entire patent filing here.

Google patents a wearable odor-sensing (and masking) device


Like a Glade Plug-in for your armpits? 


While most of the wearable devices on the market today have been geared towards tracking activity levels, monitoring sleep habits or even analyzing fitness routines, we may be on the cusp of a new era in body-adorned gadgetry. That’s because Google has received a patent for a movement-tracker that activates a web-connected air freshener to emit a fragrance to mask any offending odors caused by physical activity. In other words, you’ll no longer need to smell your armpits to ensure that you’re free of B.O.


How it works is pretty self-explanatory. Should the device find you on the smelly side, it will give off a nice-smelling fragrance to deodorize you. What’s more, the gizmo also plugs into your social media accounts to help you steer clear of any friends who may be in your vicinity. If someone is nearby, the device will send the not-so-fresh-smelling wearer a map with a route to navigate around those folks. The one-of-a-kind gadget is also equipped with a tiny fan to ensure the fragrance gets to the right place.

“When a user is wearing the fragrance emission device and begins to exert himself or herself, an activity module within the device may detect the physical exertion. The activity module may detect a rise in sweat levels, an increase in body odor or body temperature, or any other parameter that may indicate the user is exercising or otherwise exerting themself,” Patent No. 8,950,238 reads.

When the activity module determines that the user is performing a physical activity, it alerts the device’s built-in predictor. This predictor then uses the information provided by the activity module to predict when the user will generate body odor in the future, and when a fragrance will need to be applied to the user.

“For the purposes of brevity, the material applied to the user will be described as a fragrance, however, the material applied may also be an odor neutralizer, which would serve to neutralize or eliminate the body odor generated by the user instead of covering it up with a fragrance. In some embodiments, the predictor may also use information stored within the device regarding past instances where a fragrance was emitted, combine that information with the current information supplied by the activity module, determine when body odor will be generated by the user, and dispense an appropriate amount of fragrance at an appropriate time,” the patent document explains.

Once the predictor determines when the user will begin to generate body odor, an optional alert module located within the device may alert the user of the situation and let the user know when the fragrance will be emitted. The user will then have the opportunity to override the impending fragrance emission, based on the current circumstances of the user. This will particularly come in handy if, say, you are planning on showering immediately after a high-intensity workout.

“Should the user choose to reject the fragrance emission, a suppressor located within the device will cancel the scheduled fragrance emission such that the material dispenser will not dispense the fragrance at the scheduled time. Should the user choose to accept the fragrance emission instead, the suppressor will not cancel the scheduled fragrance emission, and the material dispenser will dispense the fragrance at the scheduled time.”
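Pieced together, the patent describes a small pipeline: the activity module detects exertion, the predictor schedules an emission, the optional alert module warns the user, and the suppressor cancels on override. Below is a minimal C sketch of that flow; the thresholds, timings and names are assumptions, not anything lifted from the filing.

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  sweat_level;   /* from the activity module's sensors (0-100) */
    int  body_temp_c;
    bool user_overrode; /* user rejected the scheduled emission */
} device_state_t;

/* Activity module: detect physical exertion from the sensor readings. */
static bool exertion_detected(const device_state_t *s) {
    return s->sweat_level > 70 || s->body_temp_c > 38;
}

/* Predictor: estimate minutes until body odor. (The patent also folds in
   past emission history; that part is omitted here.) */
static int minutes_until_odor(const device_state_t *s) {
    return exertion_detected(s) ? 10 : -1; /* -1 means no emission needed */
}

int main(void) {
    device_state_t s = { .sweat_level = 85, .body_temp_c = 37 };
    int eta = minutes_until_odor(&s);
    if (eta >= 0) {
        printf("alert: fragrance scheduled in %d min\n", eta); /* alert module */
        if (s.user_overrode)
            printf("suppressor: emission cancelled\n");
        else
            printf("dispenser: fragrance emitted\n");
    }
    return 0;
}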

Of course, this doesn’t mean that the Google concept will actually make it to product form. Still, given the proliferation of fitness-focused apps and wearables, the world is becoming more active and therefore will get a bit stinkier, too. Interested in learning more? You can find the entire patent here.

It’s a bird… It’s a plane… No, it’s a Google drone!

Google’s top research laboratory is hard at work developing a fleet of drones that will be able to take to the skies to deliver packages to consumers’ front steps. The Mountain View, California-based company is the latest to announce the testing of delivery drones, following the likes of Amazon, UPS and Domino’s Pizza.


The project is being developed at Google X, the company’s clandestine tech research arm, which is also responsible for its self-driving car. Project Wing has been running for two years, but was kept secret until now. Google said a 5-foot-wide single-wing prototype had carried supplies including candy bars, dog treats, cattle vaccines, water and radios to farmers in Queensland, Australia earlier this month.

The prototype stands 2.5 feet tall and boasts four propellers that move into different positions for different stages of flight, with packages placed into an opening located in the middle of the wing. The company said that its long-term goal was to develop drones that could be used for disaster relief by delivering items such as medicine and batteries to folks in areas that conventional vehicles cannot reach.

“Even just a few of these, being able to shuttle nearly continuously could service a very large number of people in an emergency situation,” explained Astro Teller, Captain of Moonshots – Google X’s name for big-thinking projects.

Google began working on drones in 2011 and said it expected it would “take years to develop a service with multiple vehicles flying multiple deliveries per day,” the Wall Street Journal writes. While the technology may be ready, the legal logistics may not be. The FAA has mostly outlawed the commercial use of drones, reserving the right to fly these unmanned vehicles for hobbyists and researchers.

Though you may not receive a drone-delivered package this year, that may all soon change. A number of companies, including Amazon, 3D Robotics, Parrot and DJI Innovations, recently came together to form a UAV coalition in hopes of facilitating development.


“Self-flying vehicles could open up entirely new approaches to moving goods, including options that are cheaper, faster, less wasteful and more environmentally sensitive than what’s possible today,” Google notes.

Google hopes the helicopter-like vehicles will be able to drop off items generally weighing less than 5 pounds within a 10-mile radius of its warehouses in about 30 minutes, with visions of the drones flying programmed routes at altitudes of 130 to 200 feet at the push of a button.
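Those constraints reduce to a simple eligibility check. The C snippet below merely encodes the quoted numbers (under 5 pounds, within 10 miles) as an illustration; it is not Google’s actual dispatch logic.

#include <stdbool.h>
#include <stdio.h>

/* Encode the quoted delivery envelope: payloads under 5 lb, within 10 mi. */
static bool delivery_eligible(double weight_lb, double distance_mi) {
    return weight_lb < 5.0 && distance_mi <= 10.0;
}

int main(void) {
    printf("2 lb at 6 mi: %s\n", delivery_eligible(2.0, 6.0) ? "eligible" : "no");
    printf("8 lb at 3 mi: %s\n", delivery_eligible(8.0, 3.0) ? "eligible" : "no");
    return 0;
}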

Expedited, more efficient delivery is just one of many applications UAVs could offer society. In fact, according to ex-Wired editor and 3D Robotics CEO Chris Anderson, the (AVR-powered) DIY drone community will soon have more than 15,000 drones flying, compared to some 7,000 drones in use worldwide by military forces.

In the future, with a global drone fleet, Google anticipates that it will be able to convey goods to consumers on the same day an order is placed. Talk about speedy delivery!

ATmega328P inside the Nexus Q

While talking to one of my Google buddies at the eFlea, I learned that there is an ATmega328P inside the Google Nexus Q media streaming device. I asked what it did, and he explained that there is a row of LEDs around the device and Google wanted those LEDs to light and flash in sequence the second you applied power. A perfect application for a Flash microcontroller that boots in microseconds.

I was concerned that this was a Google secret until a quick check on the Internet turned up a post from the great folks at iFixit. It verifies that there is an ATmega328P inside the Nexus Q, and you can even see the Atmel logo in the picture.


The Atmel ATmega328P is used to flash the LEDs around the periphery of the Google Nexus Q. It’s the bigger chip at the top right (courtesy iFixit).
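For flavor, here is roughly what such a power-on LED chase looks like in AVR C: the chip comes out of reset and immediately starts rotating a bit pattern across a port. The clock speed and pin wiring are assumptions; the actual Nexus Q firmware has not been published.

#define F_CPU 8000000UL   /* assumed clock frequency */
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB = 0xFF;          /* assume the ring LEDs hang off PORTB */
    uint8_t pattern = 0x01;

    for (;;) {
        PORTB = pattern;                                      /* light one LED */
        pattern = (uint8_t)((pattern << 1) | (pattern >> 7)); /* rotate left */
        _delay_ms(50);
    }
}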

Atmel talks ARM, IoT and sensors on Google+



On Tuesday, February 18, ARM hosted a live Google+ Hangout panel with executives from Atmel, Freescale and Sensor Platforms.

Participating panelists included:

  • Will Tu – Director of Embedded Segment Marketing at ARM
  • Diya Soubra – CPU Product Marketing Manager for Cortex-M ARM Processors at ARM
  • Adrian Woolley – Director of Strategy and Business Development at Atmel’s Microcontroller Business Unit
  • Mike Stanley – Manager of Freescale’s Sensor Solutions Division
  • Kevin A. Shaw – CTO of Sensor Platforms

As you can see in the video above, the panelists discussed various software and hardware design techniques that help IoT developers strike a precise balance between low power consumption and high software complexity in sensor-enabled devices.

“When Atmel designs its microcontrollers, we make sure we have a very good understanding [of particular] applications. [We] optimize the hardware and peripherals [accordingly], developing ICs around the software and [specific] implementations,” Woolley explained.

“[We] understand how software algorithms work, how sensors work and optimize our microcontrollers to operate at extremely low power levels. Atmel puts a lot of intelligence around peripherals in both mobile and IoT, so we don’t need to wake them up anymore than is absolutely necessary. When activated, our MCUs efficiently process data with a minimal amount of battery power.”
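On an AVR like the ATmega328P, that “wake only when necessary” pattern looks something like the sketch below: park the core in power-down and let a peripheral interrupt wake it. Using a pin-change interrupt as the wake source is just one example of many.

#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>

ISR(PCINT0_vect)
{
    /* A sensor asserted its interrupt line; grab and process its data here. */
}

int main(void)
{
    PCICR  |= (1 << PCIE0);  /* enable pin-change interrupt bank 0 */
    PCMSK0 |= (1 << PCINT0); /* wake on the sensor's interrupt pin (PB0) */
    sei();

    set_sleep_mode(SLEEP_MODE_PWR_DOWN);
    for (;;) {
        sleep_mode();  /* CPU halts; only the wake source stays alive */
        /* Execution resumes here after the ISR runs; then sleep again. */
    }
}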

Interested in learning more about Atmel’s comprehensive ARM-based MCU and MPU portfolios? You can check out our official ARM product page here.

Ready-to-deploy Android ports with Atmel



Writing for DigiKey, Maury Wright notes that Google’s flagship Android operating system is typically associated with the smartphone and tablet markets. However, says Wright, the software platform’s surging popularity has created opportunities for innovative design teams.

“First, designers can develop companion products for the Android ecosystem that rely on low-cost microcontrollers (MCUs) and provide value-added functionality,” he explained.

 “Second, designers can adapt the Android platform as a basis for their own system designs, [as] the smartphone experience has raised the expectation for user interfaces in specialty embedded systems.”

Indeed, Wright recommends that developers consider Android as the OS of choice for an embedded system design, simply because most people have become quite comfortable interacting with an intuitive touch-based user interface.

“While you might not think a specialized embedded system – say a portable data-acquisition system or an industrial controller – needs the sophistication of the Android interface, users accustomed to these systems may prefer Android,” he continued.

“Moreover, Android comes with features such as an intuitive GPS application that could come in handy in an embedded system. Design teams can quickly develop an intuitive interface for custom applications. Additionally, Android may reduce development time and deliver a more compelling end product.”

As such, designers might want to consider porting Android to any number of high-end MPUs.

“Atmel already offers some [MPUs] with a ready-to-deploy Android port,” said Wright. “The AT91SAM9G45 and AT91SAM9M10 [MPUs] are based on the ARM926 processor core, [with] Atmel offering support for Android, along with Linux and the embedded version of Microsoft Windows.”

As Wright points out, both the AT91SAM9G45 and AT91SAM9M10 boast a robust peripheral set. In terms of connectivity, the MPUs integrate a high-speed 480-Mbit/s USB interface that can operate in host or device mode, a 10/100-Mbit/s Ethernet MAC, along with multiple UART, SPI and TWI (two-wire interface such as I²C) ports.

The ICs include other typical MCU peripherals, such as a 10-bit A/D converter, four 16-bit PWM controllers, six 32-bit timers and general-purpose I/O.
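Since these MPUs typically run Linux (with Android layered on top), the TWI port mentioned above is usually reached through the kernel’s standard i2c-dev interface rather than bare registers. A minimal userspace read might look like the following; the bus number and the 0x48 device address are assumptions for illustration.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int fd = open("/dev/i2c-0", O_RDWR);   /* TWI bus 0 (assumed) */
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {  /* hypothetical sensor address */
        perror("ioctl"); close(fd); return 1;
    }

    unsigned char buf[2];
    if (read(fd, buf, 2) == 2)             /* clock two bytes off the bus */
        printf("sensor raw: 0x%02x%02x\n", buf[0], buf[1]);

    close(fd);
    return 0;
}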

On the memory side, there is an integrated boot ROM and a small on-chip SRAM array. As expected, the MPUs also include a number of features that come in handy in a touch-based system, such as an integrated LCD controller that supports screens with resolutions up to 1280 x 860 pixels with 2D graphics acceleration, plus an interface for resistive touch screens.

Meanwhile, the AT91SAM9M10 adds camera and audio interfaces, along with a video decoder capable of handling D1 (720 x 576-pixel) or WVGA (800 x 480-pixel) streams at 30 frames per second.

“For design teams who want to jump-start an Android project, Atmel also offers the AT91SAM9M10-G45-EK Evaluation Kit,” Wright added. “The kit includes an AT91SAM9M10 processor, a 480×272-pixel LCD with a resistive touch panel and easy interface to all on-chip peripherals. The kit and [MPUs] come with support for Android 2.1.”

Interested in learning more about Atmel’s microprocessors? You can check out the AT91SAM9G45 here, the AT91SAM9M10 here and our Atmel ARM-based portfolio here.