Monday, 21 November 2016

Google's Project Tango

How Google's Project Tango will change what your smartphone sees through its camera


What is Tango exactly? Its AR features can enhance your everyday life, according to Lenovo, which says that AR and VR are about to boom in the next five years. But before that happens, both need to become relevant and meaningful. Tango's features are supposed to disappear into the background of your everyday life.

Tango is supposed to make this happen with the ability to go beyond simple navigation via Google Maps. It's meant to understand the room you're in through three core capabilities: motion tracking, depth perception, and area learning.

It could have a profound impact on designing and measuring virtual prototype objects while standing in a real-world environment. In demos, the phone was used like a window to accurately size up furniture before adding it to a room.

The Lenovo Phab 2 Pro is the very first Project Tango phone; Lenovo and Google started working on it in January 2015.

Project Tango: what does it do?

Project Tango is a new way of identifying where your phone is without using GPS or any other external signals. It's all done by the phone itself, using its motion sensors paired with the Tango-specific cameras. That means indoor navigation is now possible on a phone or tablet - something we've never seen done on mobile devices before. Google Maps really could be everywhere soon.
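
For developers, this surfaced as a simple pose-update callback. The sketch below is modelled on Google's hello_motion_tracking sample from the 2016 Tango Java SDK; the class and method names follow that sample as published, but the SDK changed between releases, so treat the exact signatures as an approximation rather than gospel.

import java.util.ArrayList;

import android.app.Activity;

import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.Tango.OnTangoUpdateListener;
import com.google.atap.tangoservice.TangoConfig;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoEvent;
import com.google.atap.tangoservice.TangoPoseData;
import com.google.atap.tangoservice.TangoXyzIjData;

public class IndoorPositionActivity extends Activity {
    private Tango mTango;

    @Override
    protected void onResume() {
        super.onResume();
        // The Tango service binds asynchronously; the Runnable fires once it's ready.
        mTango = new Tango(this, new Runnable() {
            @Override
            public void run() {
                TangoConfig config = mTango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
                config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
                mTango.connect(config);

                // Ask for the device's pose relative to where tracking started:
                // no GPS involved, just the cameras and inertial sensors.
                ArrayList<TangoCoordinateFramePair> framePairs =
                        new ArrayList<TangoCoordinateFramePair>();
                framePairs.add(new TangoCoordinateFramePair(
                        TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
                        TangoPoseData.COORDINATE_FRAME_DEVICE));

                mTango.connectListener(framePairs, new OnTangoUpdateListener() {
                    @Override
                    public void onPoseAvailable(TangoPoseData pose) {
                        // pose.translation[0..2] is x, y, z in metres
                        // from the point where tracking began.
                    }

                    @Override
                    public void onXyzIjAvailable(TangoXyzIjData xyzIj) { }

                    @Override
                    public void onFrameAvailable(int cameraId) { }

                    @Override
                    public void onTangoEvent(TangoEvent event) { }
                });
            }
        });
    }
}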

It'll also make 3D mapping and measuring up physical spaces easier than ever before. On top of that, it will be a big step in the right direction for augmented reality and 3D worlds.

Project Tango: how does it work?

Three types of technology come together to make Project Tango work, and the first is motion tracking. It uses the camera to track visual features in the environment, alongside accelerometer and gyroscope data, to work out where you are in a room. If you move, Project Tango will know where and how you've moved.
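
Tango's real tracker is a full visual-inertial odometry system, but a toy complementary filter gives a feel for why the camera and the inertial sensors are paired: the gyroscope is smooth but drifts over time, while visual estimates are drift-free but noisy. Everything below is a hypothetical illustration, not Tango code.

/**
 * Toy illustration of visual-inertial fusion (not Tango's actual algorithm).
 * Tracks a single heading angle by integrating the gyro and letting
 * occasional camera-based estimates correct the accumulated drift.
 */
public final class HeadingFuser {
    private double headingRad = 0.0;          // current fused estimate
    private static final double ALPHA = 0.98; // short-term trust in the gyro

    /** Call at IMU rate: integrate the gyro's angular velocity. */
    public void onGyro(double angularVelRadPerSec, double dtSec) {
        headingRad += angularVelRadPerSec * dtSec; // accurate short-term, drifts long-term
    }

    /** Call whenever the camera produces an absolute heading estimate. */
    public void onVisualHeading(double visualHeadingRad) {
        // Complementary filter: mostly keep the gyro value, nudge toward vision.
        headingRad = ALPHA * headingRad + (1 - ALPHA) * visualHeadingRad;
    }

    public double heading() { return headingRad; }
}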

Second is area learning, where Project Tango will take information it has gathered in the past and enhance it with other elements, such as notes or points of interest in a location.
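
Tango stores what it learns about a space so it can recognise that space later; an app could then key its own data to fixed positions inside it. The structure below is a hypothetical app-level sketch of that idea, not part of the Tango API.

import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch: pinning notes to coordinates in a learned area. */
public final class AreaNotes {
    static final class Note {
        final double x, y, z;  // position in the learned area's frame, metres
        final String text;

        Note(double x, double y, double z, String text) {
            this.x = x; this.y = y; this.z = z; this.text = text;
        }
    }

    private final List<Note> notes = new ArrayList<Note>();

    void add(double x, double y, double z, String text) {
        notes.add(new Note(x, y, z, text));
    }

    /** Return every note within radiusMetres of the device's current position. */
    List<Note> near(double px, double py, double pz, double radiusMetres) {
        List<Note> hits = new ArrayList<Note>();
        for (Note n : notes) {
            double dx = n.x - px, dy = n.y - py, dz = n.z - pz;
            if (Math.sqrt(dx * dx + dy * dy + dz * dz) <= radiusMetres) {
                hits.add(n);
            }
        }
        return hits;
    }
}

Because the phone can relocalise itself in a learned area, the same note would reappear at the same physical spot on every visit.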

Then there's depth perception, which tracks how far away surfaces are. All three then combine to build a 3D image of the environment you're in.
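
Turning the depth camera's per-pixel distances into 3D points is standard pinhole-camera maths rather than anything Tango-specific. A minimal sketch, with made-up calibration values:

/**
 * Standard pinhole back-projection: convert one pixel of a depth image
 * into a 3D point in the camera's coordinate frame.
 * fx, fy, cx, cy are the camera's intrinsic calibration values.
 */
public final class DepthToPoint {
    static double[] backProject(double u, double v, double depthMetres,
                                double fx, double fy, double cx, double cy) {
        double x = (u - cx) * depthMetres / fx;
        double y = (v - cy) * depthMetres / fy;
        return new double[] { x, y, depthMetres };
    }

    public static void main(String[] args) {
        // Illustrative intrinsics only; real values come from calibration.
        double[] p = backProject(640, 360, 2.0, 1042.0, 1042.0, 637.0, 354.0);
        System.out.printf("Point: %.3f, %.3f, %.3f m%n", p[0], p[1], p[2]);
    }
}

Do this for every pixel, transform the points by the device's pose, and the result is the 3D model of the room described above.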
