
BornAgainMac

macrumors 604
Feb 4, 2004
7,302
5,311
Florida Resident
I would rather have a Terminator app. You point the iPhone at people or things and it identifies those objects with all known information. Wake me up when they have that on the App Store.
 

sparks9

macrumors 6502a
Jan 29, 2003
602
0
Copenhagen
Alright, it's a cool demonstration of what technology can do, but who in their right minds would use something like this? :confused:

So you lift up your phone, and look at the world through your camera lens, so it can tell you what each building is, even though there are signs that say it anyway? :confused:

Maybe I'm missing something, but this seems pointless.

you are missing the point.

tech like this will revolutionize the way we interact with the world.
 

Porco

macrumors 68040
Mar 28, 2005
3,318
6,927
The best feature is when you hear it say "Aren't you a little short for a stormtrooper?".
 

Small White Car

macrumors G4
Aug 29, 2006
10,966
1,463
Washington DC
Alright, it's a cool demonstration of what technology can do, but who in their right minds would use something like this? :confused:

You see 4 restaurants in front of you. Let's say you want to look at all 4 menus to check prices, then call them all to see who has the shortest wait.

Using this program you could do all that in a minute. Using Google Maps and the web, that sounds like a 10-minute task to me.

Or maybe you're in another country and you can't read the signs on the buildings and want to know what's what. You've been told that the emergency dentist you need is on this block; in fact, Google Maps says you're in front of it, but you see 6 offices in front of you, all with signs you can't read. This would be useful for that.

Or maybe you're closer to home...right in front of your favorite movie theater. Yeah, you COULD bring up the theater web page and navigate through its menu to see showtimes, but why do that when you can just point your phone at it and hit one button? You'll get showtimes in 10 seconds instead of 90.
 

4np

macrumors 6502a
Feb 23, 2005
972
2
The Netherlands
Cool stuff always comes from the Dutch, right?

Heiniken
TomTom
and now: Layar

Let's just hope that the Layar guy talking at WWDC next year - :D - doesn't have as heavy a Dutch accent as the TomTom guy did :D (yes, even the Dutch noticed that).

I wonder if you misspelled Heineken on purpose? ;) Used to work for one of them, though :p
 

circuslove

macrumors newbie
Jun 16, 2009
1
0
you are missing the point.

tech like this will revolutionize the way we interact with the world.

you could superimpose virtual objects and people and play virtual games in the real world: think GTA, but you're really walking through a real city, using your phone's view to explore what's virtually placed within it and interact with it.
 

lars666

macrumors 65816
Jul 13, 2008
1,202
1,325
You see 4 restaurants in front of you. Let's say you want to look at all 4 menus to check prices, then call them all to see who has the shortest wait.

Using this program you could do all that in a minute. Using Google Maps and the web, that sounds like a 10-minute task to me.

Or maybe you're in another country and you can't read the signs on the buildings and want to know what's what. You've been told that the emergency dentist you need is on this block; in fact, Google Maps says you're in front of it, but you see 6 offices in front of you, all with signs you can't read. This would be useful for that.

Or maybe you're closer to home...right in front of your favorite movie theater. Yeah, you COULD bring up the theater web page and navigate through its menu to see showtimes, but why do that when you can just point your phone at it and hit one button? You'll get showtimes in 10 seconds instead of 90.

Very smart ideas, like it!!
 

bmk

macrumors regular
Oct 29, 2007
165
13
Paris
For those who are pooh-poohing this sort of thing, there are lots of good examples of how it could be used in a very cool way. Here's one example:

Imagine walking around a museum. You see the paintings on the walls, and they have basic info like who the artist was and the title of the work. But then you point your iPhone at a painting and click the "Get Info" button in your geospatially aware browser. The browser checks your coordinates, determines that you're in the Louvre, figures out what painting you're looking at, and then displays detailed information, including (possibly) voice narration delivered through your headphones.

I don't want to pooh-pooh this totally, and the example you give is one of the better and more practical uses for it, but even your example suffers from the inherent drawback of the whole idea: you need people and companies to be constantly updating their 'layar' information to account for changes. It is hard enough for most companies (and individuals) to keep their web pages up to date, let alone the complex and time-dependent information that would make such a system brilliant if it could actually work. To take your example, what happens when a museum lends a picture to some travelling exhibition, or changes its displays, or acquires a new work? All that layar information would have to be reconfigured on a constant basis.

I would like it to work, but at the moment I can't see how it is going to be practical for third-party content providers to maintain reliable information.
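Maintenance questions aside, the lookup flow in the museum example above is mechanically simple. Here is a minimal sketch, assuming invented Venue and Painting types and an arbitrary 200 m match radius; none of this reflects Layar's actual data model, and real painting identification would need far better data than raw GPS provides indoors:

```swift
import CoreLocation

// Hedged sketch of the two lookups behind that "Get Info" tap.
struct Painting {
    let title: String
    let details: String
    let narrationURL: URL?    // optional audio-guide track
}

struct Venue {
    let name: String
    let location: CLLocation
    let paintings: [Painting]
}

// Step 1: which venue is the user standing in (or at least next to)?
func venue(near position: CLLocation, in venues: [Venue]) -> Venue? {
    venues.first { $0.location.distance(from: position) < 200 }
}

// Step 2: once the painting has been identified by whatever means,
// "Get Info" just surfaces the stored record.
func info(forPaintingTitled title: String, at venue: Venue) -> Painting? {
    venue.paintings.first { $0.title == title }
}
```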
 
Sorry to be the wet blanket, but to my knowledge the iPhone, even with 3.0, DOES NOT provide a LIVE VIDEO STREAM from the camera in a fashion that developers are able to use. I'm hoping this has changed, but just as you cannot create an iPhone app that automatically shoots stop-motion photography (the user must initiate taking the picture), you cannot create an app that puts a layer over live video.

Here's hoping Apple has fixed or will fix this. Layar can look at the iPhone all it likes, but a "compass" is the least of what the device needs to be capable of. With access to the live camera feed, it's possible to simulate a compass using relative positioning (just mark "north" and use the frame-to-frame shifts to "move" the scenery relative to that position).

Pasted from another forum, Apple says:
"Embed a video recorder into your application using the new interface used in the Camera application. The new interface provides a switch to toggle between still photos and video mode, giving your users the flexibility to capture the moment the way they want. Once the video has been captured, users can choose the videos they want from the updated Media Picker."
This sounds like a more functionally rich version of their "camera" API, which presents users with the ability to choose a photo from their photo album or take a picture with the camera, but gives devs NO control over the camera itself. It's like having the "browse file" control in HTML, but not giving the web page the ability to automatically upload files from your hard drive without your say-so.

While the security model is comforting, it does NOT allow for "stop-motion video", "automatic timed shots", "remote automatic upload of live camera imagery" (which would be a nice addition to 'Find My iPhone'), or... most of all... "augmented reality".

~ CB
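For what it's worth, Apple did expose a hook for exactly this in a later OS release: UIImagePickerController's cameraOverlayView lets an app hide the stock controls and draw its own views over the live preview. A minimal sketch in modern Swift (the label text is just a placeholder, not anything Layar ships):

```swift
import UIKit

// Minimal sketch: hide the stock camera controls and draw a view
// on top of the live preview via cameraOverlayView.
final class ARPreviewController: UIViewController,
                                 UIImagePickerControllerDelegate,
                                 UINavigationControllerDelegate {

    func presentCameraWithOverlay() {
        // Bail out on hardware (or the Simulator) without a camera.
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }

        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.delegate = self
        picker.showsCameraControls = false        // hide the shutter UI

        let label = UILabel(frame: UIScreen.main.bounds)
        label.text = "Café de Flore · 120 m"      // placeholder POI label
        label.textAlignment = .center
        label.textColor = .white
        picker.cameraOverlayView = label          // rendered above the live preview

        present(picker, animated: true)
    }
}
```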
 

roocka

macrumors regular
Sep 11, 2007
134
0
Indianapolis
Wow.. This is super cool..

I have to hand it to the company that developed this. This is the future. It's strange when you see it for the first time and truly begin to understand the potential for technology. It's neat to know that there is still so much left to unlock. The human mind is such a brilliant thing. It's sad that people have to ruin their lives with meth and heroin searching for a false appreciation when technology today is unlocking so many doors that are interesting and wonderful.

I can't wait to get my hands on this. It will make traveling so much more interesting because you will just be able to wander off the beaten path and find your way home later.

So cool.
 

zombitronic

macrumors 65816
Feb 9, 2007
1,127
39
I'm glad that some of us get this. This will be important. These location info apps will seem quaint compared to the type of AR that will truly create infinite layers of reality. Of course, we'll need many more sensors in our devices before we get a true alternate layer of the world. LIDAR, maybe, or at least stereo cameras to judge distances and objects and better AI to really "understand" what's coming into a camera lens. Just because this is an early version of a new technology, don't underestimate what it will become.
 

clayj

macrumors 604
Jan 14, 2005
7,622
1,168
visiting from downstream
I don't want to pooh-pooh this totally, and the example you give is one of the better and more practical uses for it, but even your example suffers from the inherent drawback of the whole idea: you need people and companies to be constantly updating their 'layar' information to account for changes. It is hard enough for most companies (and individuals) to keep their web pages up to date, let alone the complex and time-dependent information that would make such a system brilliant if it could actually work. To take your example, what happens when a museum lends a picture to some travelling exhibition, or changes its displays, or acquires a new work? All that layar information would have to be reconfigured on a constant basis.

I would like it to work, but at the moment I can't see how it is going to be practical for third-party content providers to maintain reliable information.
It goes without saying that this sort of thing will largely be an incremental process, aside from a few "hardcore" organizations that will support it full-out just because they think it's cool. Making this sort of process easy to maintain is simply a matter of good database design and software connectivity. Many restaurants, for example, now publish their daily menus to their web sites as PDF files. That doesn't mean someone is scanning the menu each day, turning it into a PDF, and posting it on the site; it means someone has set up an automated process that takes the menu, converts it from Word format into a PDF, and pushes it to the web site, all without any human interaction. The system is set up to maintain itself.

In the case of a museum, they could have a database of their own which tracks the comings and goings and relocations of work, simply based on someone updating the database whenever something is hung or taken down or sent on tour. Proper automation in the background could keep the geospatial database up to date without any interaction beyond someone typing in the change somewhere.

This is by no means insurmountable, and it's not even hard to do. It's simply a matter of intelligent software design.
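As a rough illustration of the kind of hands-off automation described here, a minimal sketch follows; the Artwork record, the feed URL, and the PUT endpoint are all invented for illustration and don't correspond to Layar's real publishing API:

```swift
import Foundation

// Hypothetical sketch of the publishing step: whenever the collection database
// changes, re-encode the works that are on display and push them to the
// museum's layer feed.
struct Artwork: Codable {
    let title: String
    let artist: String
    let latitude: Double      // gallery coordinates of the work's current spot
    let longitude: Double
    let onDisplay: Bool       // false while on loan, on tour, or in storage
}

func publishLayerFeed(_ works: [Artwork]) throws {
    let visible = works.filter { $0.onDisplay }
    let body = try JSONEncoder().encode(visible)

    var request = URLRequest(url: URL(string: "https://museum.example/layar-feed")!)
    request.httpMethod = "PUT"
    request.httpBody = body
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    URLSession.shared.dataTask(with: request).resume()   // fire-and-forget upload
}
```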
 

ipoppy

macrumors 6502
Oct 12, 2006
423
9
UK
That could make me feel like 007 :D. I'm not buying the 3GS because iChat isn't there, but this thing makes me think again.
Hmm... nah, I'll wait another 12 months, and you, Apple, don't you dare NOT get iChat onto the iPhone by then :mad:
 

bengst

macrumors member
Jan 15, 2008
64
4
Malang, East Java - ID
Layar means "Screen"

Nice technology.
Imagine being in a store, looking at some items, and at the same time being able to pull up their information from the net :D

FYI, in Indonesian, "Layar" means "screen".
 

QCassidy352

macrumors G5
Mar 20, 2003
12,029
6,049
Bay Area
This is amazing. Of course, it would require a great deal of work to make it practical (my first worry is that the GPS must be VERY accurate for this to work), but in theory it's awesome.
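To see why the accuracy worry is real, here is a minimal sketch of the placement math an app like this has to get right (the 60° field of view and the coordinates are illustrative, not taken from Layar): a label's horizontal position comes from the difference between the compass heading and the bearing from the GPS fix to the point of interest, so small errors in either one shift every label on screen.

```swift
import CoreLocation

// Where (0...1 across the screen) should a POI label go, given the user's
// GPS fix and compass heading? Returns nil if the POI is off-screen.
func screenX(for poi: CLLocationCoordinate2D,
             user: CLLocationCoordinate2D,
             headingDegrees: Double,
             cameraFOVDegrees: Double = 60) -> Double? {
    // Bearing from user to POI, in degrees clockwise from north.
    let lat1 = user.latitude * Double.pi / 180
    let lat2 = poi.latitude * Double.pi / 180
    let dLon = (poi.longitude - user.longitude) * Double.pi / 180
    let y = sin(dLon) * cos(lat2)
    let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
    let bearing = (atan2(y, x) * 180 / Double.pi + 360).truncatingRemainder(dividingBy: 360)

    // Angle between where the camera points and where the POI actually is.
    var delta = bearing - headingDegrees
    if delta > 180 { delta -= 360 }
    if delta < -180 { delta += 360 }

    // Off-screen if outside the camera's field of view.
    guard abs(delta) <= cameraFOVDegrees / 2 else { return nil }
    return 0.5 + delta / cameraFOVDegrees   // 0 = left edge, 1 = right edge
}
```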
 

PsudoPowerPoint

macrumors regular
Sep 18, 2008
124
0
San Diego CA, USA
Alright, it's a cool demonstration of what technology can do, but who in their right minds would use something like this? :confused:

So you lift up your phone, and look at the world through your camera lens, so it can tell you what each building is, even though there are signs that say it anyway? :confused:

Maybe I'm missing something, but this seems pointless.

Different people interact with the same GUI in different ways, and GUIs typically provide multiple options for most tasks. While this display mode may not do anything for you, it may do a lot for some people. I know people who don't do well with maps; most of them navigate by landmarks instead, and Layar provides a simple way to spot landmarks to navigate by. All the GPS units I'm familiar with provide a street view or highway view for navigation as an alternative to a map; Layar goes a step beyond by overlaying that view on a real-time image.

Beyond that, the value of this sort of augmented-reality concept is to show what's not visible on the surface. My favorite fun application would be using augmented reality to show a historic view of a location. Imagine an application that leads the user to a series of sites where historic photographs were taken and then, when you arrive at each one, overlays that photo on the present reality. This would be awesome for a large outdoor site like Gettysburg.
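A small sketch of that historic-photo idea, assuming a hypothetical HistoricSite type and an arbitrary 50 m trigger radius: pick the nearest photographed spot and fade the old photograph over the live view as the user closes in on it.

```swift
import CoreLocation
import UIKit

// Invented type: one historic photograph and the spot it was taken from.
struct HistoricSite {
    let photo: UIImage
    let location: CLLocation
}

// Returns the photo to blend over the live view, plus an alpha that grows
// as the user approaches the spot where it was originally taken.
func overlay(for user: CLLocation,
             sites: [HistoricSite],
             triggerRadius: CLLocationDistance = 50) -> (UIImage, CGFloat)? {
    guard let nearest = sites.min(by: {
        $0.location.distance(from: user) < $1.location.distance(from: user)
    }) else { return nil }

    let distance = nearest.location.distance(from: user)
    guard distance <= triggerRadius else { return nil }
    let alpha = CGFloat(1 - distance / triggerRadius)   // 0 far away, 1 on the spot
    return (nearest.photo, alpha)
}
```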
 
In the case of a museum, they could have a database of their own which tracks the comings and goings and relocations of work, simply based on someone updating the database whenever something is hung or taken down or sent on tour. Proper automation in the background could keep the geospatial database up to date without any interaction beyond someone typing in the change somewhere.

This is by no means insurmountable, and it's not even hard to do. It's simply a matter of intelligent software design.
A museum is NOT a realistic candidate for most consumer-accessible augmented-reality implementations. Augmented reality is much more useful for buildings, landmarks, outdoor sculpture, locations, etc. At the macro level, someone would need to invent an entirely new device for highly accurate interior positioning. This would NOT be GPS, but some other invention that likely works through relative positioning within a structure. I've mused about such an invention for a while, possibly something that uses RFIDs.

For museums and malls, I think the best you could possibly get without a Bluetooth or 30-pin accessory to help it is "where am I", which would identify the general area of the building you're in and supply you with information on what stores or exhibits are near you.

This is different from augmented reality, which allows you to view THROUGH your device's camera, and have labels and information placed on top of their images.

Another possible route would be to let users identify which room they're standing in, and where they're standing in the room. Then N/S/E/W positioning could easily handle the rest, if you could get past the non-starter of laying an interface atop the live video feed in iPhone OS.

As I noted earlier, even given an ideal example, I highly doubt the iPhone OS officially supports augmented reality (without an accessory). On a jailbroken phone, people can access unofficial, unsupported APIs (as they did to get early video recording), but that's certainly not an option for anything sold through the App Store.

~ CB
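A rough sketch of the room-plus-heading fallback described above; the Wall and Exhibit types and the 90° sectors are invented here, but they show how far a plain lookup gets you once the user has told the app which room they're in:

```swift
import Foundation

// Hypothetical catalogue: each exhibit is filed under a room and the wall it
// hangs on, and the device's compass heading picks which wall's exhibits to list.
enum Wall { case north, east, south, west }

struct Exhibit {
    let name: String
    let room: String
    let wall: Wall
}

// Map a 0–360° heading onto the four walls (±45° around each direction).
func wall(forHeading degrees: Double) -> Wall {
    switch (degrees + 45).truncatingRemainder(dividingBy: 360) {
    case 0..<90:    return .north
    case 90..<180:  return .east
    case 180..<270: return .south
    default:        return .west
    }
}

// No camera or GPS needed: room name plus heading narrows the list.
func exhibitsFacing(room: String, headingDegrees: Double,
                    in catalogue: [Exhibit]) -> [Exhibit] {
    let facing = wall(forHeading: headingDegrees)
    return catalogue.filter { $0.room == room && $0.wall == facing }
}
```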
 