
Augmented Reality in Mobile Devices (with App Tutorial for Android!)

What the heck is “augmented reality”? It’s become quite a popular term in the past few years thanks to Google Glass, but the idea is older than the first Android phone. Do you remember the Terminator movies? Our hero had vision that mapped the nearby area and displayed additional information about objects or people.


And I think this is the best way to show you what AR really is. The definition of AR comes from the researcher Ronald Azuma, who in 1997 identified the three characteristics that define augmented reality:

  • it combines the real and the virtual world
  • it’s interactive in real time
  • it’s registered in 3D (virtual content is aligned with real-world objects)

It is very important not to confuse augmented reality with virtual reality, because the two technologies are just not the same. The crucial idea of AR is to overlay digitised information on the real (offline) world. Sounds great, doesn’t it? This might seem like science fiction... Is it even possible with today’s technology to display all the necessary details? Actually, it is - check out Project Tango or Intel’s RealSense camera. Of course, these are very sophisticated technologies and are not yet available to everyone, but it doesn’t have to be like this. Let me show you some cool examples that you can try in your daily life:

There are devices on the market which, when installed in your car, transform the front windscreen into a heads-up display showing navigation information, safety alerts, and more.

Who hasn’t heard of Google Glass? It’s a popular wearable made by Google. You can also think of it as a kind of personal heads-up display. But what’s really great about it is that its SDK enables you to create your own AR apps. For example, if you need some motivation to start jogging every day, you can choose the “Zombie Chase” mode in the RaceYourself app. Every time you look back, you can see how close you are to being devoured by the horde of zombies chasing you!


But let's move on to technology that is already commonplace and talk about some readily available Android apps. Smartphones are always by our side, and can provide important information in real time. The most interesting experiences are when you can connect what you see or hear with digitally processed data. For example, Google recently implemented an AR feature in their mobile Translate app. It automatically translates text visible via the camera view. And what’s most amazing is that everything happens in real time!


Given what we’ve learned so far about augmented reality, you might be inclined to think that building it is quite difficult: it seems to require either modern hardware or powerful image-processing algorithms combined with sophisticated software. But does it have to be like this? Let’s take a look at Yelp. It’s an app that provides information about nearby restaurants (reviews, etc.). What’s really interesting is its Monocle feature, which overlays information about places of interest on your camera view.


Now let’s ask how difficult it would be to implement an application similar to Yelp’s Monocle. The most important question is how to detect points of interest and present them on the screen. Image recognition algorithms might be what comes to mind, but let’s be realistic - running them on a mobile device is very demanding. So we look to other sciences for an answer. It turns out that geodesy offers us a simple solution.

So let’s get to work! In the following tutorial, I will show you how to make a very simple app that identifies our point of interest in the camera preview. For reference, please check the accompanying demo video.

A bit of theory

(The geodesy theory below is based on the book: A. Jagielski, Geodezja I, GEODPIS, 2005.)

Wikipedia explains the azimuth as:

“The azimuth is the angle formed between a reference direction (North) and a line from the observer to a point of interest projected on the same plane as the reference direction orthogonal to the zenith”


So basically, what we will do is attempt to recognize the destination point by comparing the azimuth calculated from the basic properties of a right-angle triangle with the actual azimuth the device is pointing at. Let’s list what we need to achieve this:

  • get the GPS location of the device
  • get the GPS location of the destination point
  • calculate the theoretical azimuth based on GPS data
  • get the real azimuth of the device
  • compare both azimuths based on accuracy and call an event

Now the question is how to calculate the azimuth. It turns out to be quite trivial because we will ignore the Earth’s curvature and treat it as a flat surface:

[Diagram: points A and B on a flat plane, with the latitude and longitude differences forming the two legs of a right-angle triangle]

 

As you can see, we have a simple right-angle triangle and we can calculate the angle φ between points A and B from the simple equation:

tan φ = |Δlongitude| / |Δlatitude|,  i.e.  φ = arctan( |lonB - lonA| / |latB - latA| )

The table below presents the relationship between the angle φ and the azimuth A_AB, depending on the quadrant in which point B lies relative to point A (values are given in grads, where 100g = 90°, with degrees in brackets):

Quadrant | sign of Δlatitude | sign of Δlongitude | azimuth A_AB
I        | +                 | +                  | φ
II       | -                 | +                  | 200g - φ (180° - φ)
III      | -                 | -                  | 200g + φ (180° + φ)
IV       | +                 | -                  | 400g - φ (360° - φ)
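To make this concrete: if the destination lies 0.01° further north and 0.01° further east than we are, then φ = arctan(0.01 / 0.01) = 45°, and because both differences are positive we are in the first quadrant, so A_AB is simply 45° (i.e. 50g, pointing north-east).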

Implementation time!

In this tutorial, I will not describe how to get the location and azimuth orientation of the device, because this is very well documented and there are a lot of tutorials online. For reference, please check Sensors Overview (especially TYPE_ORIENTATION and TYPE_ROTATION_VECTOR) and Location Strategies.
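That said, to make the later snippets easier to follow, here is a minimal sketch of the azimuth side. The class and interface names (MyCurrentAzimuth, OnAzimuthChangedListener) are my own assumptions, not necessarily what you will find in the final project; the location side can follow the standard LocationManager pattern from the Location Strategies guide.

import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Fired whenever the device azimuth changes (hypothetical interface, reused throughout this tutorial).
interface OnAzimuthChangedListener {
    void onAzimuthChanged(float oldAzimuth, float newAzimuth);
}

// Wraps the rotation-vector sensor and reports the device azimuth in degrees (0 = North, 90 = East).
class MyCurrentAzimuth implements SensorEventListener {

    private final SensorManager sensorManager;
    private final Sensor rotationSensor;
    private final OnAzimuthChangedListener listener;
    private float azimuth; // last reported value, 0..360 degrees

    MyCurrentAzimuth(Context context, OnAzimuthChangedListener listener) {
        this.sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        this.rotationSensor = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
        this.listener = listener;
    }

    void start() {
        sensorManager.registerListener(this, rotationSensor, SensorManager.SENSOR_DELAY_UI);
    }

    void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // rotation vector -> rotation matrix -> orientation angles (azimuth, pitch, roll)
        float[] rotationMatrix = new float[9];
        float[] orientation = new float[3];
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        SensorManager.getOrientation(rotationMatrix, orientation);

        float newAzimuth = ((float) Math.toDegrees(orientation[0]) + 360) % 360;
        listener.onAzimuthChanged(azimuth, newAzimuth);
        azimuth = newAzimuth;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // not needed for this example
    }
}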

Once you’ve prepared the data from the sensors, it’s time to implement CameraViewActivity. The first and most important thing is to implement SurfaceHolder.Callback, which lets us draw the camera preview in our layout. SurfaceHolder.Callback declares the three methods responsible for this: surfaceCreated(), surfaceChanged() and surfaceDestroyed().
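A minimal sketch of such an activity might look like this (it uses the classic android.hardware.Camera API; the layout file and resource ids are my own and appear in the next snippet):

import android.app.Activity;
import android.hardware.Camera;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

import java.io.IOException;

public class CameraViewActivity extends Activity implements SurfaceHolder.Callback {

    private Camera camera;
    private boolean isCameraViewOn = false;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_camera_view);

        // the SurfaceView from the layout below will display the camera preview
        SurfaceView surfaceView = (SurfaceView) findViewById(R.id.camera_view);
        surfaceView.getHolder().addCallback(this);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        // the surface is ready: open the camera and point its preview at the surface
        camera = Camera.open();
        try {
            camera.setPreviewDisplay(holder);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // restart the preview whenever the surface size or format changes
        if (isCameraViewOn) {
            camera.stopPreview();
        }
        camera.startPreview();
        isCameraViewOn = true;
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        // release the camera so other apps can use it
        camera.stopPreview();
        camera.release();
        camera = null;
        isCameraViewOn = false;
    }
}

Remember that the app will also need the CAMERA permission (and ACCESS_FINE_LOCATION for GPS) declared in AndroidManifest.xml.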

The next step is to create an XML layout file. This is quite simple: 
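A sketch of that layout, assuming the ids referenced in the activity above and a simple drawable of your own for the pointer icon:

<!-- res/layout/activity_camera_view.xml -->
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- full-screen camera preview -->
    <SurfaceView
        android:id="@+id/camera_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <!-- pointer shown only when the point of interest is in front of us -->
    <ImageView
        android:id="@+id/poi_pointer"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center"
        android:src="@drawable/poi_pointer"
        android:visibility="invisible" />

</FrameLayout>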

Once we have the whole presentation layer, we need to connect it with the sensor data providers. In this case, this just means implementing and initialising the proper listeners: OnLocationChangedListener and OnAzimuthChangedListener.
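As a sketch (again, the names are mine), the location counterpart of the interface defined earlier, plus the wiring in the activity, could look like this. MyCurrentLocation is assumed to be a small wrapper around LocationManager, analogous to MyCurrentAzimuth and written along the lines of the Location Strategies guide (not shown here); the activity declaration simply extends the one from the previous snippet:

import android.location.Location;

// Fired when a new GPS fix arrives (hypothetical interface).
interface OnLocationChangedListener {
    void onLocationChanged(Location currentLocation);
}

public class CameraViewActivity extends Activity
        implements SurfaceHolder.Callback, OnLocationChangedListener, OnAzimuthChangedListener {

    private MyCurrentLocation myCurrentLocation; // wrapper around LocationManager (not shown)
    private MyCurrentAzimuth myCurrentAzimuth;   // wrapper around the rotation-vector sensor

    @Override
    protected void onResume() {
        super.onResume();
        myCurrentLocation = new MyCurrentLocation(this, this);
        myCurrentAzimuth = new MyCurrentAzimuth(this, this);
        myCurrentLocation.start();
        myCurrentAzimuth.start();
    }

    @Override
    protected void onPause() {
        super.onPause();
        myCurrentLocation.stop();
        myCurrentAzimuth.stop();
    }

    // onLocationChanged() and onAzimuthChanged() are shown below;
    // the SurfaceHolder.Callback methods stay as in the previous snippet
}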

Once we’ve got the current location, we can calculate the theoretical azimuth based on the theory presented earlier.
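In the activity, that could be as simple as the following sketch (poiLatitude and poiLongitude are assumed fields holding the coordinates of our point of interest):

private double myLatitude;
private double myLongitude;
private double poiLatitude;   // coordinates of the point of interest,
private double poiLongitude;  // e.g. hard-coded or loaded from a server
private double azimuthTheoretical;
private double azimuthReal;

@Override
public void onLocationChanged(Location currentLocation) {
    // remember where we are and recompute the azimuth towards the point of interest
    myLatitude = currentLocation.getLatitude();
    myLongitude = currentLocation.getLongitude();
    azimuthTheoretical = calculateTheoreticalAzimuth();
}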

 

When the orientation of the phone changes, we need to calculate the accuracy range, compare both angles and, if the real azimuth falls within that range, show a pointer on the screen. Of course, this approach is quite trivial, but it illustrates the idea.
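A sketch of that logic, reusing the fields introduced above and the helper methods defined in the next snippets:

@Override
public void onAzimuthChanged(float oldAzimuth, float newAzimuth) {
    azimuthReal = newAzimuth;

    // the range of real azimuths that still counts as "looking at the POI"
    List<Double> minMax = calculateAzimuthAccuracy(azimuthTheoretical);
    double minAngle = minMax.get(0);
    double maxAngle = minMax.get(1);

    ImageView pointer = (ImageView) findViewById(R.id.poi_pointer);
    if (isBetween(minAngle, maxAngle, azimuthReal)) {
        pointer.setVisibility(View.VISIBLE);   // the POI is roughly in front of us
    } else {
        pointer.setVisibility(View.INVISIBLE);
    }
}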

 

A method that calculates the azimuth between the two points might look like the following:
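One possible implementation, following the flat-surface approximation and the quadrant table from the theory section (the method and field names are my own):

// Azimuth from our position to the POI, in degrees, measured clockwise from North.
public double calculateTheoreticalAzimuth() {
    // flat-surface approximation: the latitude/longitude differences
    // are the two legs of a right-angle triangle
    double dX = poiLatitude - myLatitude;    // "north" leg
    double dY = poiLongitude - myLongitude;  // "east" leg

    // degenerate cases: the POI lies exactly north/south/east/west of us (or at our position)
    if (dX == 0 && dY == 0) return 0;
    if (dX == 0) return dY > 0 ? 90 : 270;
    if (dY == 0) return dX > 0 ? 0 : 180;

    // the angle phi of the triangle
    double phiAngle = Math.toDegrees(Math.atan(Math.abs(dY) / Math.abs(dX)));

    // convert phi to an azimuth according to the quadrant, as in the table above
    if (dX > 0 && dY > 0) {            // quadrant I
        return phiAngle;
    } else if (dX < 0 && dY > 0) {     // quadrant II
        return 180 - phiAngle;
    } else if (dX < 0 && dY < 0) {     // quadrant III
        return 180 + phiAngle;
    } else {                           // quadrant IV (dX > 0, dY < 0)
        return 360 - phiAngle;
    }
}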

 

To compare the real and the theoretical azimuth, we will use the isBetween() and calculateAzimuthAccuracy() methods:
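A possible sketch of those two helpers; the 5-degree tolerance is an arbitrary choice:

private static final double AZIMUTH_ACCURACY = 5; // tolerance in degrees on each side

// Returns [minAngle, maxAngle] - the range of real azimuths that counts as a hit.
private List<Double> calculateAzimuthAccuracy(double azimuth) {
    double minAngle = azimuth - AZIMUTH_ACCURACY;
    double maxAngle = azimuth + AZIMUTH_ACCURACY;

    // keep both bounds in the 0..360 range (the range may wrap around North)
    if (minAngle < 0) {
        minAngle += 360;
    }
    if (maxAngle >= 360) {
        maxAngle -= 360;
    }

    List<Double> minMax = new ArrayList<>();
    minMax.add(minAngle);
    minMax.add(maxAngle);
    return minMax;
}

// True if azimuth lies within [minAngle, maxAngle], handling the wrap around 0/360 degrees.
private boolean isBetween(double minAngle, double maxAngle, double azimuth) {
    if (minAngle > maxAngle) {
        // the range crosses North, e.g. [355, 5]
        return azimuth >= minAngle || azimuth <= maxAngle;
    }
    return azimuth >= minAngle && azimuth <= maxAngle;
}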

 

Things to consider

Of course, this tutorial is quite basic. My goal was to show you that it doesn’t take much to make a simple AR app. I would like to point out some things that should be considered during further implementation:

  • The accuracy of the sensors - unfortunately it’s not perfect, mainly because of the magnetic field emitted by the device itself.
  • It would probably be necessary to implement a simple low-pass filter to stabilise indicators on the screen. A very nice example is provided here: http://phrogz.net/js/framerate-independent-low-pass-filter.html. The formula is written in JavaScript, but it’s quite language agnostic.
  • You will probably also need to implement a distance filter to show only the closest points. To achieve this, simply calculate the distance between points:
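A sketch using Android’s built-in Location.distanceBetween() helper (the 1 km threshold is just an example):

// Straight-line distance in metres between our position and the POI.
private float distanceToPoi() {
    float[] results = new float[1];
    Location.distanceBetween(myLatitude, myLongitude, poiLatitude, poiLongitude, results);
    return results[0];
}

// ...and in onAzimuthChanged(), show the pointer only if the POI is close enough:
// boolean isCloseEnough = distanceToPoi() < 1000;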


And that’s all you need to know to start your augmented reality journey. I really encourage you to use your imagination and creativity to deliver life-changing apps. The potential is huge, and it’s our job as developers to make it happen! Good luck!

You can find the whole Android project with AR implementation on my GitHub.
