ARCore Cloud Anchor Testing

Mat Wright
4 min read · Feb 4, 2020

Introduction

I recently decided to test drive Google’s ARCore Extensions for the Unity game engine to find out how Cloud Anchors work. Cloud anchors allow AR (Augmented Reality) apps to create virtual objects within a physical space and persist them across multiple user sessions. This means that different users should be able to see and interact with the same virtual objects at the same time from within an AR app.

ARCore Cloud Anchor Setup

To get started, I followed a short ARCore Cloud Anchors course in Google Codelabs. This helped me to build a barebones AR cloud anchor app. To my delight, everything worked on the first build. The instructions provided within the Codelabs course were extremely clear and helpful.

Once deployed to my phone (a Google Pixel 3 XL), I could create AR objects in the physical space of my room from within my test app. Closing and reopening the app would wipe my previously created objects, after which I could call Google’s Cloud Anchor API to reinstate them.
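
For anyone curious, the placement step looks roughly like this in an AR Foundation scene. This is a minimal sketch rather than the Codelab’s exact code; the prefab field and component wiring are my own illustration.

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: place a prefab on a detected plane where the user taps.
// Assumes ARRaycastManager, ARPlaneManager and ARAnchorManager sit on
// the AR Session Origin; placedPrefab is a hypothetical field.
public class TapToPlace : MonoBehaviour
{
    public GameObject placedPrefab;
    public ARRaycastManager raycastManager;
    public ARPlaneManager planeManager;
    public ARAnchorManager anchorManager;

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        // Raycast against detected planes only.
        if (raycastManager.Raycast(Input.GetTouch(0).position, hits,
                                   TrackableType.PlaneWithinPolygon))
        {
            ARPlane plane = planeManager.GetPlane(hits[0].trackableId);

            // Anchoring to the plane keeps the object pinned as ARCore
            // refines its understanding of the scene.
            ARAnchor anchor = anchorManager.AttachAnchor(plane, hits[0].pose);
            Instantiate(placedPrefab, anchor.transform);
        }
    }
}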

The process worked correctly: despite closing and restarting my app, objects would reappear in their initial position. Well, almost. I found that the objects would move about quite a bit unless the device was positioned and pointed at the exact same place where the anchor was first created. In fact, with or without cloud anchors, I generally found my AR test experiences to be quite jittery.

It would be totally unfair to compare my barebones test app’s AR experience with the kind of equipment being developed for the medical and manufacturing industries. That said, a 2019 study concludes that while there is growing interest in commercial AR for high-precision manual tasks, attention should be paid to the limitations of the available technology.

ARCore Testing Feedback

I was pleasantly surprised by the ease of setup. The documentation and Codelabs course were very clear and helpful. However, there are still serious limitations. For example, cloud anchors only last 24 hours before they are deleted, although Google is working towards allowing permanent cloud anchors in the future.

Before an anchor can be persisted to the cloud, ARCore needs to gather information about the scene, which requires around 30 seconds of scanning. ARCore uses a technique similar to photogrammetry-based 3D modelling, in which large numbers of points of interest are identified within the scene. These points are tracked across different perspectives as the user moves about, allowing ARCore to construct a 3D model of the scene made up of horizontal and vertical planes.
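
Later versions of ARCore Extensions expose a way to check whether enough of the scene has been scanned before you attempt to host. Something along these lines; note this API wasn’t part of my original test build, so treat the exact surface as an assumption:

using UnityEngine;
using UnityEngine.XR.ARFoundation;
using Google.XR.ARCoreExtensions;

// Sketch: estimate how well ARCore has mapped the area around the
// camera before hosting. EstimateFeatureMapQualityForHosting is an
// ARAnchorManager extension method from ARCore Extensions.
public class MappingQualityCheck : MonoBehaviour
{
    public ARAnchorManager anchorManager;
    public Camera arCamera;

    public bool ReadyToHost()
    {
        var cameraPose = new Pose(arCamera.transform.position,
                                  arCamera.transform.rotation);
        FeatureMapQuality quality =
            anchorManager.EstimateFeatureMapQualityForHosting(cameraPose);

        // Host only once the surrounding feature map is Sufficient or Good.
        return quality != FeatureMapQuality.Insufficient;
    }
}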

I found that this process did not produce a totally accurate model of my room. It could determine a large tabletop and the floor plane quite well but couldn’t identify smaller surfaces with much consistency. When the user places a 3D object in the scene, the object remains more or less in the same place, although it does jitter somewhat.
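
If you want to see what ARCore has actually mapped, you can listen for plane detection events. A quick sketch (the component wiring is mine):

using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: log detected planes to see what ARCore has mapped.
// In my tests this was reliably the floor and tabletop, while
// smaller surfaces appeared inconsistently.
public class PlaneLogger : MonoBehaviour
{
    public ARPlaneManager planeManager;

    void OnEnable()  => planeManager.planesChanged += OnPlanesChanged;
    void OnDisable() => planeManager.planesChanged -= OnPlanesChanged;

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        foreach (var plane in args.added)
            Debug.Log($"New {plane.alignment} plane, " +
                      $"size {plane.size.x:F2}m x {plane.size.y:F2}m");
    }
}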

To save the 3D object’s position to the cloud, an API call is made which uploads the scene mapping data along with the object’s pose.
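
In the Unity ARCore Extensions, that call is a one-liner followed by polling. A sketch assuming the 1.x HostCloudAnchor extension method used by the Codelab at the time (newer releases have since moved to async variants):

using UnityEngine;
using UnityEngine.XR.ARFoundation;
using Google.XR.ARCoreExtensions;

// Sketch: host a local anchor to the Cloud Anchor service and poll
// for completion.
public class AnchorHoster : MonoBehaviour
{
    public ARAnchorManager anchorManager;
    ARCloudAnchor cloudAnchor;

    public void Host(ARAnchor localAnchor)
    {
        // Kicks off the upload of mapping data plus the anchor's pose.
        cloudAnchor = anchorManager.HostCloudAnchor(localAnchor);
    }

    void Update()
    {
        if (cloudAnchor == null) return;

        // Poll until hosting completes; in my tests this took 5-20 seconds.
        CloudAnchorState state = cloudAnchor.cloudAnchorState;
        if (state == CloudAnchorState.Success)
        {
            Debug.Log("Hosted! Share this ID: " + cloudAnchor.cloudAnchorId);
            cloudAnchor = null;
        }
        else if (state != CloudAnchorState.TaskInProgress)
        {
            Debug.LogError("Hosting failed: " + state);
            cloudAnchor = null;
        }
    }
}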

Next, the API returns an ID for the newly persisted cloud anchor. In my tests, this process took anywhere between 5 and 20 seconds. Once the ID is returned, it can be used to share the 3D object within the scene across different user sessions. To test this, I copied the ID and closed my test app. I fired it back up again and my scene was now empty. I walked around the scene for 30 seconds to recreate the mapping, then sent a request to the Google Cloud Anchor API to fetch the anchor with my saved ID. This return trip was generally quite quick, usually just a second or two over my office wifi connection. The 3D object is then recreated in the scene! Wow! That’s cool! I would stress, though, that it only appeared in the correct position (or at all) when I was in the same position as where it was originally created. When I tried to call the API from the other side of the table, the cloud anchor resolution returned zero matches.
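
The resolve side mirrors the hosting side. Again, a sketch assuming the ARCore Extensions 1.x API, with my own field names:

using UnityEngine;
using UnityEngine.XR.ARFoundation;
using Google.XR.ARCoreExtensions;

// Sketch: resolve a previously hosted cloud anchor from its ID and
// re-create the content at the resolved pose.
public class AnchorResolver : MonoBehaviour
{
    public ARAnchorManager anchorManager;
    public GameObject contentPrefab;
    ARCloudAnchor cloudAnchor;

    public void Resolve(string cloudAnchorId)
    {
        // Sends the saved ID plus the device's current mapping data;
        // ARCore matches it against the hosted anchor's feature map.
        cloudAnchor = anchorManager.ResolveCloudAnchorId(cloudAnchorId);
    }

    void Update()
    {
        if (cloudAnchor == null) return;

        if (cloudAnchor.cloudAnchorState == CloudAnchorState.Success)
        {
            Instantiate(contentPrefab, cloudAnchor.transform);
            cloudAnchor = null;
        }
        else if (cloudAnchor.cloudAnchorState != CloudAnchorState.TaskInProgress)
        {
            Debug.LogError("Resolve failed: " + cloudAnchor.cloudAnchorState);
            cloudAnchor = null;
        }
    }
}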

Overall Conclusion

So, I’m impressed with the progress being made with cloud-based AR anchors. But I have to concede that I feel reluctant to get too excited, just yet, about rolling out an app whose central premise is a shared, multi-session AR experience. In my view, such an app would need to be limited to a small number of AR objects in a well-defined physical space, where the users are actively searching for objects that they know to be there and where there are limited vantage points from which the scene could be visualised. It wouldn’t be a trivial matter, for example, to reveal previously persisted 3D objects to a user as they walk down the street, because the scene would be continuously changing; but if you know where to look in advance (as well as from which position, etc.) it works pretty well.

Video Walkthrough

The video below presents my testing of ARCore Cloud Anchors.

References

Perceptual Limits of Optical See-Through Visors for Augmented Reality Guidance of Manual Tasks (2019)
https://ieeexplore.ieee.org/document/8707062

Shared AR Experiences With Cloud Anchors: Google Cloud Anchors Overview
https://developers.google.com/ar/develop/java/cloud-anchors/overview-android

Originally published at https://blog.matwright.dev on February 4, 2020.

Mat Wright

Creative Technologist & Developer working with Augmented Reality | Flutter | Unity | Web — Twitter : @_MatWright_