Using the GPU in Android to process live camera images

Updated 18th May 2011 with a description of Android’s Renderscript feature.

There has been a lot of research into using GPUs (the graphics processing units that accelerate video cards) to build supercomputers.

However, there was an interesting demo in the keynote at Google I/O 2011, the Google developer conference. They used the GPU in an Android tablet to process live camera data, providing real-time image processing.

http://www.youtube.com/watch?v=OxzucwjFEEs#t=16m30 (starts at 16m30)

This has interesting implications not only for virtual reality processing but also for, say, decoding 1D and 2D barcodes and recognising documents in real time – with performance like an expensive 2D imager but at lower power and cost.

Update: The APIs for accessing the GPU are available in Honeycomb via Renderscript. This is described on the Android Developers Blog here. It lets you write compute scripts in a C99-derived language. If the GPU is not available the script will be run on the main CPU instead – this decision is made at runtime, allowing programs relying on Renderscript to run on a device without a GPU (albeit slowly).
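To make that concrete, here is a minimal sketch of what such a script looks like, modelled on the HelloCompute sample shipped with the Android SDK. The package name is a placeholder, and the kernel simply converts each camera pixel to greyscale as a stand-in for real image processing:

    #pragma version(1)
    #pragma rs java_package_name(com.example.gpucamera)

    // Weights for the standard luminance (greyscale) conversion.
    const static float3 gMonoMult = {0.299f, 0.587f, 0.114f};

    // Called once per pixel by the Renderscript runtime, which decides
    // at launch time which processor to run it on.
    void root(const uchar4 *v_in, uchar4 *v_out, const void *usrData,
              uint32_t x, uint32_t y) {
        float4 f4 = rsUnpackColor8888(*v_in);
        float3 mono = dot(f4.rgb, gMonoMult);
        *v_out = rsPackColorTo8888(mono);
    }

On the Java side you create input and output Allocations (for example from camera frames or bitmaps), bind them to the script, and kick off the per-pixel pass; the same APK then runs unchanged whatever processor the work lands on.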
