I'm not stuck on anything. But you need to give me a reason why someone would use AR. Just saying it will be the future is not a great argument.

You can lower the price to $500, you can make it weightless, but if you don't have a reason to use it, it will never be anything more than a niche product for people who like techy things.
I did, several in fact; you just don’t like them. Nothing else to add to the conversation. No matter what I come up with, you’ll respond “muh phone can do that” or “That’s a commercial use, that doesn’t count.” You don’t see it, that’s fine. It’s not like I’m a member of the AR consortium trying to gain an R&D grant. 😄

Just remember, “no one would ever want a computer at home,” yet as people were saying that, the Commodore 64 became the largest selling computer of all time.
 
I was just reading in another thread how an official "hack" to avoid Apple AI is to boot off an external drive.
(Apple Intelligence doesn't work at all when booting off an external drive)

Ironically, this makes buying the base M4 Mac mini an incredibly great move.

Save a ton by not paying Tim's storage upgrade extortion pricing, and get the benefit of "not being able to use Apple AI" by booting off a larger, cheaper, and just-as-fast external NVMe drive in an enclosure.

It's the built in "off switch" for Apple AI :D
Can’t you just turn Apple Intelligence off?
 
I did, several in fact; you just don’t like them. Nothing else to add to the conversation. No matter what I come up with, you’ll respond “muh phone can do that” or “That’s a commercial use, that doesn’t count.”
That's pretty much my point. You can't say AR is the future if you can't give me a groundbreaking feature, one every average Joe will want to use it for. Meta has been at this for 10 years and still can't come up with any real reason to use their devices. They demoed their new AR glasses, but again they didn't show any real reason to use them. I hoped Apple would show us something, but they gave us an incredible nothingburger. Outside of niches, AR and VR are DOA unless that groundbreaking use case comes to fruition.
 
Actually a ton of people fell for that grift/hype and it's pretty amazing. There's a sucker born every minute, especially when it comes to new technology
I wouldn’t call it grift/hype. There are real gains being made. These headwinds were to be expected. But, this will also likely lead to breakthroughs in the approach that these companies are taking. This is how innovation works.
 
Said only by those who don't understand the massive limitations of AI technology
Feels ironic that you'd say that to me given how regularly I say that to others. Lots of people in management seem convinced that AI is ready to take over all office and programming jobs.

The only jobs it's ready to take over in that domain are jobs that shouldn't exist at all - I keep telling them that if AI can handle it, it's a sign that it's totally pointless busywork that should just be eliminated.

Generating images or audio though? They're astonishingly good at that already, and I haven't heard people saying that AI is about to plateau on those tasks even though they're getting close to the point of matching professionals while generating output thousands of times faster.
 
It’s 2-3 years away at most. Enjoy.
I’d love to see an explanation or sources that lead you to believe it. Please cite research that says we are remotely close that isn’t sponsored or affiliated with a company who would benefit from further investment by publishing hyped projections. There’s a reason so many OpenAI people left.

“Enjoy” is kind of low effort trolling me, instead of doing that why don’t you point to some evidence so I can learn why you think this?

I work in the field and have studied and implemented research in this area for a living, have you? If so I’d love a detailed reply even more, and I’m being serious because I’m sure there are things I’ve missed but I frankly doubt you’re deeply familiar with the disciplines. I’d love to be wrong and learn some things though, the internet is great that way when it’s used for knowledge instead of arguing.

Feels ironic that you'd say that to me given how regularly I say that to others. Lots of people in management seem convinced that AI is ready to take over all office and programming jobs.

The only jobs it's ready to take over in that domain are jobs that shouldn't exist at all - I keep telling them that if AI can handle it, it's a sign that it's totally pointless busywork that should just be eliminated.

Generating images or audio though? They're astonishingly good at that already, and I haven't heard people saying that AI is about to plateau on those tasks even though they're getting close to the point of matching professionals while generating output thousands of times faster.
Programming is one thing generative models, coupled with other technology, will be good at. I saw deterministic source code generation in labs years ago, and worked directly with the person who invented the method; that work predated transformer models by a few years.

It’s a lot easier to verify that generated code runs correctly than it is to judge whether a generated song is evocative or a piece of art is novel. Especially in the case of generative “AI” art, where novelty is almost impossible and iteration doesn’t really work very well.

That said, there will be augmentations to creation tools, but those jobs and art aren’t going away, just the low-quality stuff like background music or bad-looking app icons or emojis. Humans are extremely good at pattern recognition, so even the best “AI art” starts to look the same to us after a while.
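The verifiability point above can be sketched in a few lines (hypothetical names, not anyone's actual pipeline): generated code can be checked mechanically against a specification, while there is no equivalent oracle for "is this song evocative?"

```python
def satisfies_spec(candidate, spec):
    """Mechanically check a generated function against (input, expected) pairs.
    This is the kind of automatic oracle code has and art lacks."""
    return all(candidate(x) == expected for x, expected in spec)

# A spec for "absolute value", plus two hypothetical machine-generated candidates.
spec = [(-3, 3), (0, 0), (5, 5)]
good_candidate = lambda x: x if x >= 0 else -x
bad_candidate = lambda x: x  # wrong for negative inputs

print(satisfies_spec(good_candidate, spec))  # True
print(satisfies_spec(bad_candidate, spec))   # False
```

A generator can loop against a check like this until the spec passes; there's no comparable loop for "make the song more evocative."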


Kurzweil should have stuck to talking about keyboards.
He was absolutely ahead of his time there, I’ll give him that. Those synthesizers are pretty interesting even compared to what we have today.

Kurzweil was correct with some of the timeline in his first singularity book but that has more to do with the definition of what “AI” is and especially the notion of the Turing Test being good enough to base any related definition on being flawed. Just because it can fool a human with probabilistic text does not mean it is literally artificially intelligent. I really wish we had a better name for this technology.

One of my fears and assumptions is that before the decade is over we will have someone “claim” they have AGI, and they’re going to massage the definition because they made some agent really good at a specific domain which should be disqualifying. Generalized Intelligence is going to be very tough to crack and will require a lot of disparate disciplines to be involved working together, not just computer scientists. It may also require biological processors which are starting to be developed and used in small tests.

My personal opinion is that we’re at least 15-20 years out from true AGI, and that is likely overconfident.
 
It's not. Extreme predictions about tech fail 99% of the time because people can't change. Technology always comes up against this big barrier called the human body and human behaviour. The internet took off 30 years ago and we still have people with 1st-century beliefs. Habits, desires, ambitions, and needs change very slowly, and sometimes don't change at all. Sometimes tech is abused for outrageous crimes, which turns people off tech.

Kurzweil should have stuck to talking about keyboards.
AGI in 2-3 years. It’s already reached parity with humans in multiple domains
 
That's pretty much my point. You can't say AR is the future if you can't give me a groundbreaking feature, one every average Joe will want to use it for. Meta has been at this for 10 years and still can't come up with any real reason to use their devices. They demoed their new AR glasses, but again they didn't show any real reason to use them. I hoped Apple would show us something, but they gave us an incredible nothingburger. Outside of niches, AR and VR are DOA unless that groundbreaking use case comes to fruition.

Which ironically leads me back to my point.

"Who ever gets this right wins the game."

I never said I had the answers or the perfect use case. Nor did I say it's right around the corner. There are people way smarter than me and way dumber than me working on it. Outside of diving, I never thought I was going to wear a device on my wrist ever again, and yet here I am. 😉
 
Generating images or audio though? They're astonishingly good at that already, and I haven't heard people saying that AI is about to plateau on those tasks even though they're getting close to the point of matching professionals while generating output thousands of times faster.

They're only "good" at it on a very superficial level. On a professional level, they're terrible, and I say that as someone who works in a media job who's experimented with AI image/video generation a lot.

They have the same general limitations as LLMs. If you prompt for things they haven't seen in their training data, you often end up with unusable garbage. If you try to correct it, it's a roll of the dice whether you'll be able to prompt your way to a correct image, because the AI has no understanding of what's actually wrong with the image. If you want something very specific, good luck. If you want specific text in a specific font in a specific place...nope.

Generative AI is only really good at generating variations of generic imagery of which there are already millions of examples of all over the internet. If you want a photo of a dog running in some flowers, then sure. No problem. It's really quite terrible at generating very specific complex imagery that exactly matches a human's intended prompt.
 
They're only "good" at it on a very superficial level. On a professional level, they're terrible, and I say that as someone who works in a media job who's experimented with AI image/video generation a lot.

They have the same general limitations as LLMs. If you prompt for things they haven't seen in their training data, you often end up with unusable garbage. If you try to correct it, it's a roll of the dice whether you'll be able to prompt your way to a correct image, because the AI has no understanding of what's actually wrong with the image. If you want something very specific, good luck. If you want specific text in a specific font in a specific place...nope.

Generative AI is only really good at generating variations of generic imagery of which there are already millions of examples of all over the internet. If you want a photo of a dog running in some flowers, then sure. No problem. It's really quite terrible at generating very specific complex imagery that exactly matches a human's intended prompt.
And I’ll add that things are only going to get worse as time goes on, because models will be learning more garbage from previously generated AI garbage. If things keep going as they are, entropy will eventually break “AI”.
 
Don’t know if it is an already existing term, but I call this “AI rot”. As more and more AI-generated material is put out in the world, AI models will train themselves on already AI-generated material, and the results will get worse. LLMs may have already peaked, not for lack of progress in technology but because of degradation in the available input data. AI is literally eating itself.

Are you thinking of what people have been calling 'model collapse'?

"Using artificial intelligence (AI)-generated datasets to train future generations of machine learning models can contaminate their results, a concept known as 'model collapse'."
 