It appears that Apple has been smart about this. Rather than AI for AI's sake, they're trying to deploy it to enhance the usability of their products. They have a reason and a purpose for integrating AI. Other companies, it seems, are struggling with how exactly to expand its uses.
Apple hasn't been smart, they just missed the boat and now they're trying to play catch-up. But the results so far are pathetic. Image Playground is a joke, the notification summaries are so bad the whole internet is laughing about them, and text generation is abysmal. They have a lot of work to do if they want to make Apple 'intelligence' even just vaguely useful.
 
Apple hasn't been smart, they just missed the boat and now they're trying to play catch-up. But the results so far are pathetic. Image Playground is a joke, the notification summaries are so bad the whole internet is laughing about them, and text generation is abysmal. They have a lot of work to do if they want to make Apple 'intelligence' even just vaguely useful.

I just turned it off, on my iPhone and Mac, and I don't feel I'm missing anything. It's just not useful. I don't work in content generation, and I don't work in tech.

I never use Siri anyway.
 
So what you're telling me is that AI will be good enough at writing code to help software developers do their job but not good enough to replace them?

As a software developer I think this is excellent news.
Code generation isn't the issue. How many more libraries do you need out there? Macros have been around for years. The problem is what it's always been: Garbage In, Garbage Out. Is the code human-readable, or does it require a symbol table? What about the next person who picks up that piece of code? What if you have a developer with an implicit bias towards certain socioeconomic groups who buries that "common knowledge" in the code? Yeah, I harp on that side a bit because it's the scariest to me. People are inherently lazy and just accept what the computer tells them, regardless of whether it's right or wrong. Like I just said, we envision Data and we'll get M5 (Star Trek reference).
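To make the readability point concrete, here's a toy Swift comparison (a made-up invoicing example, not from any real codebase): both functions return the same number, but only one of them lets the next person see, and question, the assumption buried inside.

```swift
// Contrived example: two functionally identical ways to total an invoice.

// Opaque: single-letter names and a magic number. You need tribal knowledge
// (or a symbol table) to know what 1.0825 means or where it applies.
func f(_ a: [Double]) -> Double {
    return a.reduce(0, +) * 1.0825
}

// Readable: the assumption is named, so the next maintainer can see it,
// question it, and change it without reverse-engineering the author's intent.
let assumedSalesTaxRate = 0.0825  // hypothetical rate, purely for illustration

func invoiceTotal(lineItems: [Double]) -> Double {
    let subtotal = lineItems.reduce(0, +)
    return subtotal * (1 + assumedSalesTaxRate)
}
```

Same output either way; the difference is whether the person after you has any chance of catching a wrong assumption before it ships.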
 
I turned off all AI on my phone and Macs and got rid of Copilot on my work PC. No thank you, at least not yet.

I have Copilot disabled as much as possible, with a non-admin account, on my work PC. IT didn't disable it via group policy for some reason, so Microsoft keeps trying to push it, but if you try to use it, it times out. They have Copilot and all the other AI options blocked at the network level. Ah well, AI sucks anyway.
 
I think for maybe over a year now, a lot of the big breakthroughs in generative AI have been in music (like Suno) or video (Kling, Runway, etc). Text-based AI has become a field of diminishing returns, even if tech twitter still likes to argue whether Claude or ChatGPT is better. :p

(at least until another big breakthrough comes along, however many years that takes)
 
I turned off all AI on my phone and Macs and got rid of Copilot on my work PC. No thank you, at least not yet.

I was just reading in another thread how an official "hack" to avoid Apple AI is to boot off an external drive.
(Apple Intelligence doesn't work at all when booting off an external drive)

Ironically, this makes buying the base M4 Mac mini an incredibly great move.

Save a ton by not paying Tim's storage upgrade extortion pricing, and get the benefit of "not being able to use Apple AI" by booting off a larger, cheaper, and just-as-fast external NVMe drive in an enclosure.

It's the built in "off switch" for Apple AI :D
 
Apple hasn't been smart, they just missed the boat and now they're trying to play catch-up. But the results so far are pathetic. Image Playground is a joke, the notification summaries are so bad the whole internet is laughing about them, and text generation is abysmal. They have a lot of work to do if they want to make Apple 'intelligence' even just vaguely useful.
There is no known "work" that can be done; there are hard limits in computer science right now that restrict the applications this technology can be used for. Scaling alone cannot and will not solve those problems.

CRITICALLY: GenAI CAN NOT SUMMARIZE.

Yet every company under the sun is rolling out summarization features, because it 'appears' like it can. Apple never should have shipped that feature; the rest of it is fine, but it is possibly the first instance of Apple software about which I can confidently, without hyperbole, say "Steve Jobs would never have shipped this."

Steve understood OOP in the '80s. I do not think some members of Apple's current leadership understand the research and the boundaries of this technology, which Steve took the time to do by making people explain it to him until he was satisfied. That critical piece is missing now, and we are all suffering for it, whether we're Apple users or not.
 
What're you talking about? In my experience it does a good enough job of it.
Without getting into the technical weeds: it cannot synthesize novel insight or pull out pertinent points without missing things; a human can. Even with Mixture of Experts, even with retrieval-augmented generation, even with the latest advances in the field. I follow this work closely and have been in the industry for a while at advanced research companies.

I don’t mean this as an insult to you specifically, but “in my experience it does a good enough job” is exactly why we’re in this situation.


Steve would have tested this, seen it was ****, gotten the technical explanation that "we don't know when or if it will improve to the point where it won't make errors," and shelved the damn thing. Instead we all have to suffer for the next few years while the public figures that out and investors and shills make an ass ton of money off of a fad.

History rhymes, anyone skeptical should research expert systems and how they were going to change everything. I’m not saying this as an old person either, I was barely an infant when that was around but I know the history and I know the current technology and have been paid to review thousands of papers in this and adjacent areas.

If anyone wants to argue the point, give me a single example of a peer-reviewed paper written in the last 2 years that covers summarization and how it’s solved for. I’ll gladly wait and appreciate the effort, because I have access to non-public portals and as far as I’ve seen there are exactly zero.
 
I agree with just about all that you said. My disagreement is on AR. Whoever gets this right wins the game.
Can you please tell me legitimate use cases for everyday people to use AR? There is nothing I have seen from AR that would ever make me think this tech is really useful.

Don't give me commercial applications. I want everyday needs that my phone or watch can't already do 90% as well or better.

Also not trying to be an a$$ but everyone keeps talking about AR as the future but never gives any real world use cases. At least you can watch movies and play games with VR.
 
What're you talking about? In my experience it does a good enough job of it.

[Six screenshots attached, 2024-11-13]
 
This news is delightful. And not entirely unanticipated. The fact is, to a significant degree these neural networks must learn with the same relative effort that a biological brain does, and any shortcuts taken so far are purely the product of spoon-feeding information that is "conclusive" and "affirmative" to bypass the actual training effort that would otherwise be required using human confirmation.

I've speculated about ways this might be accelerated, but haven't settled on an approach that would be substantively improved over current modes of training. And frankly, I don't believe we should push for these models to become too much better than they already are, given the human cost of such a thing existing at all.
 
AI intelligent enough to hide its true capabilities from us is what we should worry about
 
Yeah, that's what I was referring to when I said Apple notification summaries are a joke. That doesn't mean LLMs can't summarise, just that Apple's poor implementation can't.
 
Western world is full of gullible sheep who consider themselves "informed" because of what some schmuck says in their nightly news monologue while the intense orchestra music plays in the background. People are talking, with complete sincerity, about "the threat of AI" as if GPT4 was some kind of rogue nation state with an agenda.

"AI" is the most disappointing nothingburger of a "next big thing" I've seen in my lifetime.
 
Western world is full of gullible sheep who consider themselves "informed" because of what some schmuck says in their nightly news monologue while the intense orchestra music plays in the background. People are talking, with complete sincerity, about "the threat of AI" as if GPT4 was some kind of rogue nation state with an agenda.

"AI" is the most disappointing nothingburger of a "next big thing" I've seen in my lifetime.
“AI” is definitely replacing many jobs, which is the one thing that came true.

And it’s because executives don’t understand the drawbacks or limitations at all. There’s going to be a huge whiplash in the tech jobs market in a few years barring some major miracle and they will be scrambling for experts who left the field entirely.

Anything for short-term gain though.
 
I think a big problem is the type of data they are inputting to create the AI model--the old garbage in, garbage out problem. Might want to seriously consider getting multiple cultural and political points of view input over the next year or so, in my humble opinion.
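For what it's worth, here's a minimal sketch of what even looking at that input would mean (the Document type and sourceRegion tag are hypothetical, not any real pipeline): tally how skewed the corpus is by source before you train on it.

```swift
// Minimal sketch: audit corpus composition before training so the
// "garbage in" part is at least visible. All names here are hypothetical.
struct Document {
    let text: String
    let sourceRegion: String   // hypothetical metadata tag, e.g. "NA", "EU", "Asia"
}

// Returns each region's share of the corpus, e.g. ["NA": 0.7, "EU": 0.2, "Asia": 0.1].
func regionShare(of corpus: [Document]) -> [String: Double] {
    let counts = Dictionary(grouping: corpus, by: { $0.sourceRegion })
        .mapValues { Double($0.count) }
    let total = counts.values.reduce(0, +)
    guard total > 0 else { return [:] }
    return counts.mapValues { $0 / total }
}
```

If one slice dominates, that's your bias baked in before a single parameter gets trained.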
 
So AI is pretty dead unless someone comes up with a way to improve it. We need to come to a point where AI comes up with ways to improve itself. Then that new AI will come up with even better ways to improve itself, and so on.

I never understood the appeal of using AI to remove something from a photo.
 
This isn't a CPU, power, or model issue; it's an approach issue. We (the royal we) aren't building machines that learn.

No idea what cap means.
Apparently it's short for "high capping," which is to show off or lie to make yourself look good. Apparently the word "exaggerate" wasn't good enough. We decided to model AI on characters who thought that words like "exaggerate" didn't tell it like it was.
 