Develop your Virtual Reality App with Google VR (Practical exercise)

To create virtual reality experiences in our apps, Google has made the Google VR platform available to developers.

Lately we have been immersed in image processing, but in this post, as a brief parenthesis, we will introduce ourselves to virtual reality. Thanks to affordable headsets like the Cardboard, high-end ones like the Oculus Rift or HTC Vive, and the intermediate but entertaining PlayStation VR, 2016 saw an enormous expansion of virtual reality. But don’t worry, we won’t be left behind thanks to Google VR.

Google CardBoard Virtual Reality

This platform provides everything you need to develop virtual reality applications, including libraries, examples, design guidelines, etc. All of it is available through the following APIs:

  • Unity: API that allows you to add VR support to a Unity3D application or create one from scratch.
  • Android: Allows you to create applications that display 3D photos and videos, spatial audio, detect head movements, etc.
  • iOS: Allows you to create virtual reality experiences natively using Objective-C.
  • Unreal Engine: Natively supports Google VR for virtual reality experiences in mobile applications.

For each API we have documentation and examples that we can use to familiarize ourselves with this technology. As in all previous posts, we will focus on the Android version.

In this case, the API will make it easier for the developer to:

  • Correct lens distortion.
  • Spatial audio.
  • Head tracking.
  • 3D calibration.
  • Side-by-side viewing.
  • Stereo geometry setup.
  • User event handling.

Among the examples available for consultation, there are several applications that can be very useful for adding virtual reality content to our own app, including the most common use cases: viewing 360° photos and videos.

Next, we’ll create a really simple app where we’ll display a 360° photo. The user will be able to interact by moving the phone, by dragging the image, or by using Virtual Reality glasses. Let’s begin!

Download content to display

The first thing we need to do is get valid content to display. Rather than using the example image provided by Google, ideally we should use one of our own. Creating panoramic or 360° photos can be difficult (mainly getting good quality images), so we’ll take advantage of Google Street View images.

Thanks to streetviewdownload.eu we can download any image captured by Google and use it in our application in a very simple way. You can see the steps to follow in this video.

In our case, we have downloaded an image close to Everest Base Camp.

Close-up image of Everest Base Camp

Playing with the code

  1. We create a new application with the “Empty Activity” template and set the minimum SDK to 25 (it is mandatory).
  2. We make sure that jcenter() is in the project’s gradle:
    allprojects {
        repositories {
            jcenter()
        }
    }
  3. In the gradle of the application, in the dependencies section, we add the following and synchronize:
    compile 'com.google.vr:sdk-panowidget:1.70.0'
  4. It’s time to add the jpg that we want to show in the project. In the project tree, right-click on app, select New/Folder/Assets Folder, and copy the file into the new assets folder. In our case it is everest.jpg.
  5. In the activity layout we empty the main layout and add the following view (the layout attributes are a suggestion; the id must match the one we will use in the code):

    <com.google.vr.sdk.widgets.pano.VrPanoramaView
        android:id="@+id/pano_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    This view is in the library added in step 3 and is the 360 degree image viewer.

  6. We open the MainActivity and add the variables that we are going to use:
    private static final String TAG = MainActivity.class.getSimpleName();
    private VrPanoramaView panoWidgetView;
    public boolean loadImageSuccessful;
    private Uri fileUri;
    private Options panoOptions = new Options();
    private ImageLoaderTask backgroundImageLoaderTask;

    We will use the tag in the logs.
    We will have the variable of the panorama view of the layout, a boolean to know if the image has been loaded correctly, a Uri for the file to load, an Options to set the characteristics of the panorama and a thread to load the image in parallel, avoiding blocking the main thread.

  7. In the onCreate method, after setContentView, we add this code:
    panoWidgetView = (VrPanoramaView) findViewById(R.id.pano_view);
    panoWidgetView.setEventListener(new ActivityEventListener());
    handleIntent(getIntent());

    We initialize the panorama view, register the listener that will tell us whether the image loaded correctly (defined in step 11), and process the intent that started the activity (the handleIntent method of the next step).

  8. We add the handleIntent method:
    private void handleIntent(Intent intent) {
        if (Intent.ACTION_VIEW.equals(intent.getAction())) {
            Log.i(TAG, "ACTION_VIEW Intent received");
            fileUri = intent.getData();
            if (fileUri == null) {
                Log.w(TAG, "No data uri specified. Use \"-d /path/filename\".");
            } else {
                Log.i(TAG, "Using file " + fileUri.toString());
            }
            panoOptions.inputType = intent.getIntExtra("inputType", Options.TYPE_MONO);
            Log.i(TAG, "Options.inputType = " + panoOptions.inputType);
        } else {
            Log.i(TAG, "Intent is not ACTION_VIEW. Using default pano image.");
            fileUri = null;
            panoOptions.inputType = Options.TYPE_MONO;
        }
        // Cancel any load still running from a previous intent.
        if (backgroundImageLoaderTask != null) {
            backgroundImageLoaderTask.cancel(true);
        }
        backgroundImageLoaderTask = new ImageLoaderTask();
        backgroundImageLoaderTask.execute(Pair.create(fileUri, panoOptions));
    }

    If the intent has a file to load, it is prepared and loaded in a separate thread with specific options.

  9. We add the lifecycle methods we want to override:
    @Override
    protected void onNewIntent(Intent intent) {
        Log.i(TAG, this.hashCode() + ".onNewIntent()");
        // Save the intent and load the new image.
        setIntent(intent);
        handleIntent(intent);
    }

    @Override
    protected void onPause() {
        panoWidgetView.pauseRendering();
        super.onPause();
    }

    @Override
    protected void onResume() {
        super.onResume();
        panoWidgetView.resumeRendering();
    }

    @Override
    protected void onDestroy() {
        // Destroy the widget and free its memory.
        panoWidgetView.shutdown();
        // Cancel the image-loading task if it is still running.
        if (backgroundImageLoaderTask != null) {
            backgroundImageLoaderTask.cancel(true);
        }
        super.onDestroy();
    }

    These methods pause and resume the rendering of the panorama together with the activity, and on destruction release the widget and cancel the thread in charge of loading the image.

  10. We add the task that will be responsible for loading the image in the background:
    class ImageLoaderTask extends AsyncTask<Pair<Uri, Options>, Void, Boolean> {

        @Override
        protected Boolean doInBackground(Pair<Uri, Options>... fileInformation) {
            Options panoOptions = null;  // It's safe to use null VrPanoramaView.Options.
            InputStream istr = null;
            if (fileInformation == null || fileInformation.length < 1
                    || fileInformation[0] == null || fileInformation[0].first == null) {
                // No uri was given: fall back to the image bundled in assets.
                AssetManager assetManager = getAssets();
                try {
                    istr = assetManager.open("everest.jpg");
                    panoOptions = new Options();
                    panoOptions.inputType = Options.TYPE_MONO;
                } catch (IOException e) {
                    Log.e(TAG, "Could not decode default bitmap: " + e);
                    return false;
                }
            } else {
                try {
                    istr = new FileInputStream(new File(fileInformation[0].first.getPath()));
                    panoOptions = fileInformation[0].second;
                } catch (IOException e) {
                    Log.e(TAG, "Could not load file: " + e);
                    return false;
                }
            }
            panoWidgetView.loadImageFromBitmap(BitmapFactory.decodeStream(istr), panoOptions);
            try {
                istr.close();
            } catch (IOException e) {
                Log.e(TAG, "Could not close input stream: " + e);
            }
            return true;
        }
    }

    It loads the image, either from the assets or from the file received in the intent, and hands the decoded bitmap to the panorama view.

  11. To finish, we must add the panorama listener class. We will use it to know whether the image has loaded correctly:
    private class ActivityEventListener extends VrPanoramaEventListener {

        @Override
        public void onLoadSuccess() {
            loadImageSuccessful = true;
        }

        @Override
        public void onLoadError(String errorMessage) {
            loadImageSuccessful = false;
            Toast.makeText(
                    MainActivity.this, "Error loading pano: " + errorMessage, Toast.LENGTH_LONG)
                    .show();
            Log.e(TAG, "Error loading pano: " + errorMessage);
        }
    }
  12. We run the app and enjoy the 360° image.
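Once the app is installed, the ACTION_VIEW branch of the handleIntent method from step 8 can also be exercised from the command line with an explicit intent. A sketch, assuming a device is connected; the package name com.example.vrpano and the image path are placeholders for your own, and the app would also need permission to read the file:

```shell
# Launch the viewer with an explicit image and a mono inputType extra.
# --ei passes the integer extra that handleIntent reads ("inputType").
adb shell am start -a android.intent.action.VIEW \
  -n com.example.vrpano/.MainActivity \
  -d "file:///sdcard/Download/everest.jpg" \
  --ei inputType 1
```

Without the -d option the app falls back to the everest.jpg bundled in assets, as implemented in step 10.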

Everest base camp video

As you can see, using these resources makes visualization easy in different ways. If we put the image in full screen, we can switch to Viewer mode (Cardboard) without having to worry about programming it ourselves.
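If we prefer to jump into stereo mode directly from code instead of through the on-screen control, the widget exposes a display-mode setter inherited from VrWidgetView. A one-line sketch (not verified against every SDK version, so treat the constant name as an assumption):

```java
// Switch the panorama widget to full-screen stereo (Cardboard) rendering.
panoWidgetView.setDisplayMode(VrWidgetView.DisplayMode.FULLSCREEN_STEREO);
```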

In this way, very simply, we can add virtual reality components to our native applications.

In future posts we will see more options that Google VR offers us. In the meantime, keep programming hard!

You can download the code of this example.