
macrumors bot (Original poster)

iOS 12, set to be unveiled at the Worldwide Developers Conference on Monday, will include ARKit 2.0, an upgrade to the existing ARKit 1.5 SDK that's available to developers to allow them to build augmented reality experiences in their apps.

ARKit 2.0, according to a new report from Reuters, will include a feature that's designed to let two iPhone users share an augmented reality experience, with the same objects displayed on multiple screens.


This is in line with previous rumors from Bloomberg that have said Apple is working on both multiplayer augmented reality gameplay and object permanence, which would allow a virtual object to remain in place across multiple app sessions.

Apple is aiming to allow two people to share data so they can see the same virtual object in the same space via each individual device, with Apple designing the feature in a privacy-friendly way.

Apple's multiplayer system, unlike similar offerings from Google, does not require users to share scans of their homes and personal spaces, working via a phone-to-phone system.
Apple designed its two-player system to work phone-to-phone in part because of those privacy concerns, one of the people familiar with the matter said. The approach, which has not been previously reported, differs from Google's, which requires scans of a player's environment to be sent to, and stored in, the cloud.
Full details on how Apple's multiplayer augmented reality system will work are unknown, and it's not yet clear if it works with three or more players. Apple will share more information on the feature on Monday.

Augmented reality has been a major focus for Apple over the course of the last two years, with Apple CEO Tim Cook calling AR "big and profound." "We're high on AR in the long run," Cook said in 2016.

Apple unveiled ARKit, its augmented reality framework, alongside iOS 11 at WWDC 2017, and has since improved it with the launch of ARKit 1.5 in March as part of iOS 11.3, which added mapping for irregularly shaped surfaces, vertical surface placement, and image recognition. With the additional changes coming in iOS 12, developers should be able to do a whole lot more with augmented reality.

Article Link: ARKit 2.0 Will Let Two iPhones See the Same Virtual Object
 
Apple is "in AR for the long run," just like they are with privacy. Compare them to Google on privacy in any area. I think they've made the right choices there, but it'll only pay off later.
 
The sheer processing power required to render the same object, with the same vector, on two different phones, with different perspectives, is going to be insane. I wouldn't be surprised if they end up having to require a nearby Mac for processing, acting as a LAN server.

Edit after the fact: I guess I was wrong, they don't need an external synchronizer. Cool.
 
The sheer processing power required to render the same object, with the same vector, on two different phones, with different perspectives, is going to be insane. I wouldn't be surprised if they end up having to require a nearby Mac for processing, acting as a LAN server.
What's difficult in terms of processing about that? It seems like they just have to agree on where the object should be, then they can do their own thing.
 
The sheer processing power required to render the same object, with the same vector, on two different phones, with different perspectives, is going to be insane. I wouldn't be surprised if they end up having to require a nearby Mac for processing, acting as a LAN server.

Why would it need drastically increased processing power? Maybe I'm missing something, but seeing as all the processing will be occurring locally (as it does currently), surely the only change would be transmitting the ‘anchor point’ of the 3D models?
 
The sheer processing power required to render the same object, with the same vector, on two different phones, with different perspectives, is going to be insane. I wouldn't be surprised if they end up having to require a nearby Mac for processing, acting as a LAN server.
Not really. I'd imagine it would just require the two phones doing all the work and communicating with one another via iCloud.
 
What's difficult in terms of processing about that? It seems like they just have to agree on where the object should be, then they can do their own thing.

Yep. There won't be any extra processing for rendering because each device would do its own rendering which is what happens now.

There may be extra processing for synchronizing data between the devices, which I guess could be done directly between devices or through a server.
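That synchronization payload could be tiny. As a back-of-the-envelope sketch (plain Python, not ARKit code — Apple hadn't detailed the actual mechanism at this point), agreeing on an anchor might amount to exchanging a single 4x4 pose transform, a few dozen bytes, while all rendering stays local on each phone:

```python
import json

# A shared anchor pose: 4x4 row-major transform (rotation + translation)
# placing the virtual object in a common coordinate frame.
anchor_pose = [
    [1.0, 0.0, 0.0, 0.25],   # x axis, x translation (metres)
    [0.0, 1.0, 0.0, 0.00],   # y axis, y translation
    [0.0, 0.0, 1.0, -0.50],  # z axis, z translation
    [0.0, 0.0, 0.0, 1.0],
]

def encode_anchor(pose):
    """What device A would send: just 16 floats, well under a kilobyte."""
    return json.dumps({"anchor": pose}).encode("utf-8")

def decode_anchor(payload):
    """Device B reconstructs the identical pose and renders locally."""
    return json.loads(payload.decode("utf-8"))["anchor"]

payload = encode_anchor(anchor_pose)
received = decode_anchor(payload)
assert received == anchor_pose
print(f"synchronized {len(payload)} bytes")
```

Whether the bytes travel device-to-device or through a server, the sync traffic is negligible next to the per-frame rendering each phone already does on its own.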
 
What I find interesting is the prospect of later presenting information or graphics with a sense of depth in glasses. Then, for example, descriptions could be superimposed on an object at its actual distance. What's presented here is a typical silly toy, a preliminary stage, nothing more. I'll save my applause for when it's earned; I can wait.
 
Great tech demo. Meanwhile, still trying to find practical uses for AR in the real world.
Pretty straightforward to me. Walk down the street, see a business, point your phone at it, and see the business's info: website, hours, phone number, etc., without even having to do a Google search. Lets you "business shop" in the blink of an eye. I'd really like this, tbh. :| It could probably integrate with Yelp, and use image recognition and OCR for business logos/names, as well as use location to help identify the business if it's a small local thing.
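As a sketch of how that lookup could work (purely hypothetical — the business names, coordinates, and matching rule below are invented for illustration), pairing a GPS fix with OCR'd sign text might look like:

```python
import math

# Invented sample listings standing in for a Yelp-style nearby-business query.
NEARBY = [
    {"name": "Blue Bottle Cafe", "lat": 37.7765, "lon": -122.4231},
    {"name": "Mission Hardware", "lat": 37.7769, "lon": -122.4240},
]

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in metres (equirectangular; fine at city scale)."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))
    return 6371000 * math.hypot(dlat, dlon)

def match_business(ocr_text, lat, lon, max_m=100):
    """Return the closest listing whose name appears in the recognized sign text."""
    hits = [b for b in NEARBY
            if b["name"].lower() in ocr_text.lower()
            and distance_m(lat, lon, b["lat"], b["lon"]) <= max_m]
    return min(hits, key=lambda b: distance_m(lat, lon, b["lat"], b["lon"]),
               default=None)

hit = match_business("BLUE BOTTLE CAFE - est. 2002", 37.7764, -122.4233)
print(hit["name"] if hit else "no match")  # Blue Bottle Cafe
```

The location filter does most of the disambiguation; OCR only has to break ties among the handful of businesses within sight.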
 
The sheer processing power required to render the same object, with the same vector, on two different phones, with different perspectives, is going to be insane. I wouldn't be surprised if they end up having to require a nearby Mac for processing, acting as a LAN server.

That’s one of the reasons why Apple invests heavily in A-series (Ax) chip development.
 
Pretty straightforward to me. Walk down the street, see a business, point your phone at it and see the business's info, website, hours, phone number, etc, without even having to do a google search. Lets you "business shop" in the blink of an eye. I'd really like this, tbh. :| Could probably simply integrate with Yelp, and use image recognition for business logos/names, as well as use location to aid in finding the business, if it's a small local thing.
If only someone could invent something even better than this. Something tangible, in the real world, without requiring special hardware and software. Maybe we could take a board or paper and write those things on it? Then somehow attach it to the building with nails or adhesive. We could call it a "sign" and anyone could use it! Will probably never work though because people's phones are always blocking their view of the world.
 
If only someone could invent something even better than this. Something tangible, in the real world, without requiring special hardware and software. Maybe we could take a board or paper and write those things on it? Then somehow attach it to the building with nails or adhesive. We could call it a "sign" and anyone could use it! Will probably never work though because people's phones are always blocking their view of the world.
Adding signs for every piece of information one might want to know about a business is highly impractical. With such an app, there is no limit to the amount of information that can be exposed, by simply pointing the camera at the business.
 
Adding signs for every piece of information one might want to know about a business is highly impractical. With such an app, there is no limit to the amount of information that can be exposed, by simply pointing the camera at the business.
I was being facetious.
 
Pretty straightforward to me. Walk down the street, see a business, point your phone at it and see the business's info, website, hours, phone number, etc, without even having to do a google search. Lets you "business shop" in the blink of an eye. I'd really like this, tbh. :| Could probably simply integrate with Yelp, and use image recognition and OCR for business logos/names, as well as use location to aid in finding the business, if it's a small local thing.
But it doesn't exist. I still haven't seen anyone deploy a useful AR application for consumers. Industry, maybe. I don't doubt that there will be some use, but it's surely not there now.
 
Please no iPad Pro notch. Please please! I need to buy one... please no!
 
Great for games but I can see some apps taking advantage of this tech for other purposes.
 
If only someone could invent something even better than this. Something tangible, in the real world, without requiring special hardware and software. Maybe we could take a board or paper and write those things on it? Then somehow attach it to the building with nails or adhesive. We could call it a "sign" and anyone could use it! Will probably never work though because people's phones are always blocking their view of the world.

I don’t think they will put up signs in your language with honest reviews and prices, or tell you whether folks you know are inside. I mean, maybe they are fast with the pen :)
 
You aren’t thinking nearly deeply enough. The uses are countless.

Oh, I've watched plenty of sci-fi movies in my life. I can think of millions of ways we could use AR. But fortunately for everyone, we don't live in my mind. We live in the real world, where the actual applications don't really exist yet. So while it's cool that Apple is enhancing a feature, they're enhancing something a majority of people don't use.
 
The sheer processing power required to render the same object, with the same vector, on two different phones, with different perspectives, is going to be insane. I wouldn't be surprised if they end up having to require a nearby Mac for processing, acting as a LAN server.

No. Each phone needs simply to agree on the physical location and orientation of the virtual object. Then each phone renders it the same as each phone renders any other virtual object.
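To make that concrete, here's a toy illustration (plain homogeneous-coordinate math in Python, not ARKit API): both phones hold the identical world-space pose for the object, and each one independently transforms it into its own camera space, so no rendering work ever needs to be shared.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-vector (homogeneous coordinates)."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# The agreed-upon world-space position of the virtual object (homogeneous).
object_world = [0.0, 0.0, -1.0, 1.0]

def view_matrix(cam_x, cam_z):
    """View matrix for a camera at (cam_x, 0, cam_z) looking down -z.
    (Translation only, for simplicity; a real camera pose would also rotate.)"""
    return [
        [1, 0, 0, -cam_x],
        [0, 1, 0, 0],
        [0, 0, 1, -cam_z],
        [0, 0, 0, 1],
    ]

# Two phones standing in different places look at the same world point...
phone_a = mat_vec(view_matrix(-0.5, 0.0), object_world)
phone_b = mat_vec(view_matrix(0.5, 0.5), object_world)

# ...and get different view-space coordinates, so each renders its own
# perspective of the one shared object.
print(phone_a)  # [0.5, 0.0, -1.0, 1.0]
print(phone_b)  # [-0.5, 0.0, -1.5, 1.0]
```

The only thing the phones must agree on is `object_world` (plus a common coordinate frame); everything downstream is the per-device rendering they already do today.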
 