OK, if you say so. The good news, for you, is that AR is something you'll never need to deal with in building/home design, and you can keep on doing what you've been doing. And no worries, mate, that's certainly OK.

I know my architect will be all over it, jumping in with both feet.


you got an architect on retainer? neat.
 

Nope - I should have said "my architect friend." After going through the design process, dealing with the city planning and building depts, construction, etc., we became friends. I think he liked that my wife and I came in having already done our own programming (needs, requirements, etc.) and were pretty hands-on during all phases. And like me, he's pretty tech-oriented.
 
You make them sound more glamorous than they are. Neither of these requires any level of precision. It's not like the surgical assistance is overlaying a line that says "cut here". It's simply visual aids that keep the surgeon from having to look away to see them.

Why do they have to be glamorous? I can also see a broad range of uses opening up with the tech even without “down to the mm” precision, though that can be addressed too if the need justifies the effort.

I’ve also rarely had an issue with the Measure app being consistently off across multiple measurements. At the same time, I know better than to think Apple has put in the effort to make it as accurate as a set of calipers.
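
For what it's worth, averaging repeated measurements is the first thing I'd reach for to tighten that up. Here's a minimal Swift sketch of the idea, assuming an ARKit raycast-based measurement like what presumably sits under the Measure app; the helper names (worldPosition, measureDistance) are mine, not anything from Apple's actual implementation:

```swift
import ARKit
import simd

// Hypothetical sketch, not Apple's code: raycast a screen point against
// detected geometry and return the hit location in world space.
func worldPosition(in view: ARSCNView, at point: CGPoint) -> SIMD3<Float>? {
    guard let query = view.raycastQuery(from: point,
                                        allowing: .estimatedPlane,
                                        alignment: .any),
          let hit = view.session.raycast(query).first else { return nil }
    let c = hit.worldTransform.columns.3
    return SIMD3<Float>(c.x, c.y, c.z)
}

// Average several samples per endpoint before taking the distance.
// (In a real app you'd collect one sample per frame, e.g. from the
// session delegate, rather than in a tight loop like this.)
func measureDistance(in view: ARSCNView,
                     from a: CGPoint, to b: CGPoint,
                     samples: Int = 5) -> Float? {
    func averaged(_ p: CGPoint) -> SIMD3<Float>? {
        let hits = (0..<samples).compactMap { _ in worldPosition(in: view, at: p) }
        guard !hits.isEmpty else { return nil }
        return hits.reduce(SIMD3<Float>(repeating: 0), +) / Float(hits.count)
    }
    guard let pa = averaged(a), let pb = averaged(b) else { return nil }
    return simd_distance(pa, pb)  // metres
}
```

Even with averaging, the depth estimate from the sensors sets the accuracy floor, so calipers it is not.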
 
Can't get excited about AR. It just seems gimmicky (and an excellent way of inducing nausea and accidents).

Can't get excited about 'everyday carry' computers. It just seems gimmicky (and an excellent way of inducing backaches and accidents).

*swaps 2006 iBook for 2008 iPhone*

Ohhhhhh...

Your point being...???

Maybe you’re too young to remember the world before the iPhone arrived.

Pretty much nobody wanted the connected world with them away from home.

Because almost nothing we did online was something we would want to do away from home.

All of that changed with the smartphone. Putting the connected world in our pocket and making the tech so much more frictionless to use made it explode into what we use (and can’t stop using) today.

Our ACCESS to the connected world radically shaped the fundamental technology and capabilities of that world.

AR is going to be 100x more frictionless. Not with the v1 tech demo of the glasses we see next week, but with v5 (same as the v1 tech-demo iPhone which, if you remember, literally couldn’t run a third-party app for its first year).

You’re not going to need to think about an app, pull your device out of your pocket and hold it in front of your face.

It’s just going to be there. An always-on HUD embedding the connected world (and all its insanely capable new AI content) into the real world of your everyday life.

AR is going to change FAR beyond its current novelties. Far beyond what we can even imagine we’ll use it for.

Just like tech changed when the last radically new interface arrived (the pocket computer).
 
Maybe you’re too young to remember the world before the iPhone arrived.
...
AR is going to change FAR beyond its current novelties. Far beyond what we can even imagine we’ll use it for.

Just like tech changed when the last radically new interface arrived (the pocket computer).

LOL - have you read my signature? I have been computing since the days of punch cards. Perhaps that is why I am a little jaded. Technology advances are often fads and not that useful. AR has been around for some time but hasn't really made an impact. There are other examples: 3D video and my personal favourite, OpenDoc, come to mind. I guess we'll see. I wouldn't mind being proven wrong.
 
LOL - have you read my signature? I have been computing since the days of punch cards.
...

You’ve been comparing radically different things. Not apples to oranges, but saucepans to oranges.

AR glasses are an INTERFACE medium that embeds content into the real world.

Those other things are content (consumed via different interface media).

The difference is incredibly vast. And as I explained, new interface media radically shape the content we consume.

Content on an AR handheld device will look as different from content on AR glasses as MySpace in 2007 does from the Meta app ecosystem today. (For good and bad.)

Today alone, a billion people will do 100 things in the connected world that in 2007 they couldn’t even have imagined doing, let alone WANTING (needing?) to do. And certainly not through a microblogging website!
 
You’ve been comparing radically different things. Not apples to oranges, but saucepans to oranges.
...

Maybe you’re too young to remember the world before the iPhone arrived.
...


Hm, yes. I think the averse reactions of a lot of people to AR/VR are based on their current alienation from reality through smartphone usage and the always-connected world, as you say.

Being always online is somewhat tiring. If you cannot disconnect, your mind can scatter, as we see a lot today. People are running down rabbit holes into algorithms that are always looking to create more "heat" and interest.


I can see that with AI/AR it will be possible to alter your view of the world completely. If you do not like something in the real world, you can change it for yourself. Or you can subscribe to a worldview that will take over your entire world, literally.


This is a nascent, early device, but where the tech is heading is now absolutely and completely impossible to see clearly, in a way one still could just 4 years ago…


I think humanity CAN go down a road where we connect to AI and essentially become part of AI, or we become beings that can expand our minds beyond our skulls… I did consider what I could do if my brain were able to grow beyond its physical limitations… Would I become a supergenius? How would it feel to have absolute access to infinite memory and thinking?

Some are trying to make a brain/computer interface… And it will likely come too, sooner rather than later. How would it feel to access a system like that?

The similarities between 1923 and 2023 are very striking… The industrial revolution was afoot and everything was changing drastically. Everyone was moving from backbreaking farm labour to huge industrial factories, and the progress was rapid. I imagine it was difficult then to imagine what life would be like in 20 years. And the same is true now. Right now it is almost impossible to see what computers will look like in 20 years; 4 years ago I would not have assumed there would be big changes beyond smaller and faster… But now… I wouldn’t hazard a guess.
 
I wouldn’t hazard a guess, but I can tell you what I hope.


1. Humanity bands together to fix the environment we depend on to survive… This is not hippie ********, just plain logic and basic biology. IF we are too far gone, I hope we can band together to figure out how we survive and thrive…


2. My kid doesn’t have to suffer in a world plagued by famine and massive war…


And finally, I hope I can do my part for those two things to be the future.
 
My base M1 MBP struggles with memory when running After Effects and other Adobe apps, especially at the same time. It also struggles with storage, as Adobe gobbles up disk cache, and render times in After Effects are relatively slow. Waiting 20 minutes for a render makes me not want to render at all, and then I notice I need to re-render because I did something wrong, and that's another 20 minutes.
Sounds like you'd be better off with a Windows PC, with as much RAM as you want, as much drive space as your heart desires, and any discrete GPU you want, for both video editing and AI. Not to mention you can upgrade any part of it later. For power users there is no better option. If you're just surfing the web, by all means stay with Apple.
 
I'm more excited about the Quest 3 for $500! That actually sounds like a good deal for what you get.
 
...
Some are trying to make a brain/computer interface… And it will likely come too, sooner rather than later.
I am a neuroscientist. I wish I shared your optimism. True brain-machine interfaces are likely to be experimental medical devices implanted through invasive surgery, and they are likely to leave more scar tissue the more sensors they have. The non-invasive approach, currently using EEG or near-infrared measurements on the scalp, is like trying to understand what is going on in a football game by listening to the roar of the crowd from outside the stadium. You can tell something is going on, but there is not a lot of information about what it is (albeit we probably have not reached the maximum information capture from signal processing yet).

The brain is a complex 3-D structure. The problem is measuring neural activity in the interior without damaging the overlying tissue and creating scars. People have been trying to get around the issue since the 1970s, but with very limited success. I think the best hope is optical techniques (e.g., inserting genes to make neurons bioluminescent when they are electrically active), but then one needs a wavelength of light that penetrates brain, blood, and bone (or you have to make an optical window in the skull), without frying the brain with heat. And of course the intensity of any light will fall with the square of the distance of the path from the light source to the sensor.
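
To put a number on that last point, here's a back-of-the-envelope sketch in Swift (the 1 µW source power is an arbitrary made-up figure, and this deliberately ignores the absorption and scattering real tissue adds on top of geometric falloff):

```swift
// Inverse-square falloff for an isotropic point source: I = P / (4πr²).
func intensity(powerWatts: Double, distanceMetres r: Double) -> Double {
    powerWatts / (4 * Double.pi * r * r)  // W/m²
}

let p = 1e-6  // hypothetical 1 µW bioluminescent source
for mm in [1.0, 2.0, 5.0, 10.0] {
    let r = mm / 1000  // convert mm to metres
    print("at \(mm) mm: \(intensity(powerWatts: p, distanceMetres: r)) W/m²")
}
// Doubling the path length quarters the signal, which is why sensor
// placement (or an optical window in the skull) matters so much.
```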

I'm not sure which will come first: a true non-invasive brain machine interface or faster-than-light travel...
 
I am a neuroscientist. I wish I shared your optimism. True brain-machine interfaces are likely to be experimental medical devices implanted through invasive surgery, and they are likely to leave more scar tissue the more sensors they have.
...
I'm not a neuroscientist (I'm a biomedical engineer) and I've been trying to make the same point for a long time. It's difficult to convey, from an electromechanical standpoint, how difficult it would be to tap into the brain on a meaningful level, and I don't think most people would tolerate the surgery necessary to make the interface.
 
I am a neuroscientist. I wish I shared your optimism. True brain-machine interfaces are likely to be experimental medical devices implanted through invasive surgery, and they are likely to leave more scar tissue the more sensors they have. The non-invasive approach, currently using EEG or near-infrared measurements on the scalp, is like trying to understand what is going on in a football game by listening to the roar of the crowd from outside the stadium.
Great analogy.

I'm not sure which will come first: a true non-invasive brain machine interface or faster-than-light travel...
FTL travel will require new physics, while a brain interface will 'merely' (ha!) require an enormous leap in imaging sophistication.

The interface would be easy once technology catches up to the point where we can wear a cap that basically acts as an always-on fMRI, with high-resolution, high-framerate imaging of brain activation. You then feed that into an AI processing algorithm trained for each user. At that point you could 'easily' (ha!) detect previously trained command-level thoughts.

Imagine the personal synesthesia there would be. When I think about the close command, I might evoke the feel-imagery of a trapdoor dropping the window down into a rancor pit. But another person might evoke the sound of the word "close", while another might evoke the blip and bitter taste of a soap bubble popping out of existence. Yet another might have some non-physical abstract idea of closure, a related variant of the conceptual 'exit', both within some ineffable and unrepresentable category that makes sense only to their wetware.

To the interface all three would be unique 3D timelapse maps of brain activation particular to each user.
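
Just to make that concrete, here's a toy sketch of the per-user decoding step (in Swift, since we're on an Apple forum): treat each activation map as a flat vector of voxel intensities, learn one centroid per command from labelled training examples, and classify new maps by cosine similarity. Entirely hypothetical and far cruder than a real decoder would be, but it shows why three users' wildly different 'close' imagery is no problem; each user simply gets their own centroids:

```swift
struct ThoughtDecoder {
    private var centroids: [String: [Double]] = [:]  // command -> mean activation map

    // Learn one centroid per command from labelled (command, activation map) pairs.
    mutating func train(_ examples: [(command: String, pattern: [Double])]) {
        for (command, group) in Dictionary(grouping: examples, by: { $0.command }) {
            let dim = group[0].pattern.count
            var mean = [Double](repeating: 0, count: dim)
            for example in group {
                for i in 0..<dim { mean[i] += example.pattern[i] }
            }
            centroids[command] = mean.map { $0 / Double(group.count) }
        }
    }

    // Classify a new activation map as the command whose centroid it most resembles.
    func decode(_ pattern: [Double]) -> String? {
        centroids.max { cosine($0.value, pattern) < cosine($1.value, pattern) }?.key
    }

    private func cosine(_ a: [Double], _ b: [Double]) -> Double {
        let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
        let na = (a.reduce(0) { $0 + $1 * $1 }).squareRoot()
        let nb = (b.reduce(0) { $0 + $1 * $1 }).squareRoot()
        return (na > 0 && nb > 0) ? dot / (na * nb) : 0
    }
}
```

Whether the user evokes a rancor pit or a soap bubble, train() only ever sees that user's own maps, so the commands never need to mean the same thing across brains.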

Think of the neuroscience that would come from studying that alone. (Let alone the untold discoveries of actually having such a magical wearable always-on fMRI.)

Both that and FTL are sci-fi, to be sure. But I bet there's a chance we'll see the BrainVis at some point in the lives of some people walking around today. The technological leaps that come every 30 years or so are truly wondrous and almost always impossible to imagine for the people alive 60 years before they happen.
 
Great analogy.
...
The interface would be easy once technology catches up to the point where we can wear a cap that basically acts as an always-on fMRI, with high-resolution, high-framerate imaging of brain activation.
...
fMRI with ultrahigh magnetic fields (and therefore very high spatial resolution) is already being developed, but there are limits. Also remember that fMRI does not measure brain activity. It measures blood oxygenation levels, and, through a series of uncertain assumptions and statistical calculations, estimates the underlying brain activity from that (and using standard techniques researchers once found 'activity' in a dead salmon - the techniques have been prone to false positives). Also, fMRI is good for imaging the cerebral cortex, but smaller structures beneath the surface of the brain are far harder to image.
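
The dead-salmon point is worth dwelling on, because the statistics are the weak link. Here's a deliberate caricature of the per-voxel test in Swift (real pipelines use GLMs with hemodynamic response models and multiple-comparison corrections, not raw correlations like this): correlate each voxel's time series with the task timing and call it 'active' past a threshold. Run that on pure noise and you still get 'activations':

```swift
// Pearson correlation between two equal-length series.
func pearson(_ x: [Double], _ y: [Double]) -> Double {
    let n = Double(x.count)
    let mx = x.reduce(0, +) / n, my = y.reduce(0, +) / n
    var sxy = 0.0, sxx = 0.0, syy = 0.0
    for i in 0..<x.count {
        sxy += (x[i] - mx) * (y[i] - my)
        sxx += (x[i] - mx) * (x[i] - mx)
        syy += (y[i] - my) * (y[i] - my)
    }
    return sxy / (sxx * syy).squareRoot()
}

// Boxcar task design: 10 scans on, 10 scans off, 100 scans total.
let task: [Double] = (0..<100).map { ($0 / 10) % 2 == 0 ? 1.0 : 0.0 }

// 10,000 voxels of pure noise, a 'dead salmon' in miniature.
var falsePositives = 0
for _ in 0..<10_000 {
    let voxel = (0..<100).map { _ in Double.random(in: -1...1) }
    if abs(pearson(task, voxel)) > 0.25 { falsePositives += 1 }  // naive uncorrected threshold
}
print("\(falsePositives) 'active' voxels found in noise")  // expect roughly a hundred
```

Proper analyses correct for exactly this, which was the salmon paper's point, but every correction also costs statistical power.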

I still think a brain-machine interface will be optical, but it will likely require genetic manipulation of brain cells so they change their optical characteristics when active, and require trepanned holes in the skull to make an optical window. I think only patients will get this. I just don't see an accurate, comprehensive brain-machine interface ever becoming a consumer product, but we'll see. I infamously mocked Android when it first came out, so...
 
fMRI with ultrahigh magnetic fields (and therefore very high spatial resolution) is already being developed, but there are limits.
...
I think you read my post ("that basically acts as an always-on fMRI") the way somebody might have read a description of an fMRI, before it existed, as "basically a video X-ray of the brain", and then talked about all the ways X-rays are ill-suited to the application.

This functional brain-activation mapping device isn't likely to use X-rays or magnetic resonance (at least not the way we currently use them). It will sound as whiz-bang futuristic to us today as a modern fMRI would have sounded to somebody in 1960. But it will be grounded in the ever-exponentially-evolving physics that we use today.
 