We really need to stop calling this smoke-and-mirrors stuff AI. Selecting the best-suited answer based on previous data comes across as an insultingly reductive way of synthesising intelligence.

Obvious, but it has to be said: these stories only gain traction because the word AI is mentioned.

Make the bad IT nerd future go away, please. Who is this **** really enriching at the end of the day, outside of super duper productive wankstains?


Edit: Outside of research of course!
 
Strange how so many find the feature not useful. I've found it tremendously useful, especially for Slack and GitHub notifications that usually arrive in big batches; it generally summarizes them quite well, saving me time looking through them.

I don't really get news notifications, so it can't really **** up there, and for people's messages I don't think it should save you from reading them, but it may at least summarize the topic well enough.
But for grouped notifications it classifies, summarizes, and then saves you from having to scroll through them. I think it is very useful for that.

Though I think a great AI feature will be when it finally works for remote-controlling a phone. I cycle, and obviously have to stop the bike to use my phone, but just let me tell Siri to read out a condensed version of the message and then do things for me. Somehow it is still utterly useless in such cases (probably partly because mics and wind are not ideal conditions). When will hands-free finally be usable? When that happens, AI will have provided some real benefit (to my life at least).
 
OK, so it has its flaws, but sometimes it's worth putting up with a few flaws when, on the whole, it effectively solves a significant problem.

...remind me again what problem this solves?
 
Turned it off and it's staying off. I miss quite a few things because of the summaries, which aren't good to begin with. This constant shoving of AI down our throats is getting old. It's a toy. That's about it.
 
Was the source article well written? Could be a case of junk in, junk out. It would be ironic if it’s from an AI-written article.

The article is from the BBC… one of the largest news organizations in the world, as measured by the number of full-time journalists and reporters…

They don’t use AI to write their articles.
 
OK, so it has its flaws, but sometimes it's worth putting up with a few flaws when, on the whole, it effectively solves a significant problem.

...remind me again what problem this solves?
Making the preponderance of information in our daily lives more manageable. Now maybe some don’t want that. And that’s fine.
 
These definitely need a lot of work... So far the only AI feature I like is transcription of Voice Memos. Thankfully that does save me some time.
 


Apple is facing calls to remove its AI-powered notification summaries feature after it generated false headlines about a high-profile murder case, drawing criticism from a major journalism organization.

[Image: iOS notification summary of BBC News headlines]

Updated to iOS 18.2? Then you may have received this notification (image credit: BBC News)


Reporters Without Borders (RSF) has urged Apple to disable the Apple Intelligence notification feature, which rolled out globally last week as part of its iOS 18.2 software update. The request comes after the feature created a misleading headline suggesting that murder suspect Luigi Mangione had shot himself, incorrectly attributing the false information to BBC News.

Mangione in fact remains under maximum security at Huntingdon State Correctional Institution in Huntingdon County, Pennsylvania, after having been charged with first-degree murder in the killing of healthcare insurance CEO Brian Thompson in New York.

The BBC has confirmed that it filed a complaint with Apple regarding the headline incident. RSF has since argued that summaries of this type prove that "generative AI services are still too immature to produce reliable information for the public."

Vincent Berthier, head of RSF's technology and journalism desk, said that "AIs are probability machines, and facts can't be decided by a roll of the dice." He called the automated production of false information "a danger to the public's right to reliable information."

This isn't an isolated incident, either. The New York Times reportedly experienced a similar issue when Apple Intelligence incorrectly summarized an article about Israeli Prime Minister Benjamin Netanyahu, creating a notification claiming he had been arrested when the original article discussed an arrest warrant from the International Criminal Court.

Apple's AI feature aims to reduce notification overload by condensing alerts into brief summaries, and is currently available on iPhone 15 Pro, iPhone 16 models, and select iPads and Macs running the latest operating system versions. The summarization feature is enabled by default, but users can manually disable it through their device settings.

Apple has not yet commented on the controversy or indicated whether it plans to modify or remove the feature.

(Via BBC News.)

Article Link: Apple Faces Criticism Over AI-Generated News Headline Summaries
I have definitely noticed stuff like this. The problem is it tries to summarize all grouped notifications as one sentence. Of course they’re completely different emails, news stories, messages, etc., but it mashes them all together into one supposedly cohesive sentence.
 
The other day I disabled Mail preview summarization because I noticed some weirdly out-of-context phrases in a message summary. I wouldn't mind an occasional wrong summary for marketing fluff, but for personal messages, no thanks.
 
What did the notification say? Bet it was written in bad English.

Trash in, trash out.

Even if we imagine the British Broadcasting Corporation was writing in "bad English", in what way is Apple's LLM not at fault for incorrectly interpreting it anyway?

If the model cannot factually summarise, it should just give up instead of hallucinating lies. No one (but pundits and Wall Street) asked for this; this is Apple pushing faulty technology onto its users, and somehow people act like it's everybody's fault but Apple's.
 
[…]

If the model cannot factually summarise, it should just give up instead of hallucinating lies. No one (but pundits and Wall Street) asked for this, …
Pundits and Wall Street are part of the universe of people that make up Apple's world. This is where we are headed, and Apple would be damned if they do and damned if they don't.

So if a feature isn’t worthwhile, don’t use it.
 
I don’t understand the point of summaries.

Take something that takes 6 seconds to read and make it take 4 seconds? But then, just in case it’s wrong or incomplete, you read the full thing to be sure, so now your 6-second task took 10 seconds?


I’ve still yet to see a convincing demonstration of an actual, tangible, useful benefit to ANYTHING AI-related.

Someone, please show me.
I found a use. Comparison of car features among all brands.

“Which cars sold in the USA are offered in fastback versions?”

“Which AUDI models have DCC?”
 
I have to disagree, especially as a teacher. Sure, people can opt out of using or relying on these tools, but guess what, that's not generally how humanity works. When we have a convenience, we use it. Especially the generations that come into the world always having it.

You might not respect or appreciate all that teachers do, but at least understand that we're already fighting a battle to have students use their brains rather than ChatGPT.

As someone who has taught (admittedly only mathematical subjects at the university level), I do appreciate what teachers do to try to get students to use their brains.

I'm struck, however, by the similarities I'm hearing now with the warnings we heard about calculators and television in the classroom in the '70s, PCs and Word Processors in the '80s, the Internet and Search Engines in the '90s, and Tablets and Mobile phones in the '00s and '10s. Now education is being challenged by advances in AI.

The challenge is not to get students to use their brains in the way you were taught. The world you were raised in, one without liberal access to AI, is gone. You won't be relevant if you endeavor to pull your students back into the way you learned.

Teach them to think in the new world they’ll inhabit, where they can access information previously only available to academics and those with advanced degrees. Instead of spending hours and days poring over books, they can interrogate AI and receive personalized instruction. Teach them logic and how to synthesize answers into deeper understanding and new questions. This will enable them to surpass our achievements.
 
Remember, Apple Intelligence is still taking up space on your device whether you can or want to use it or not.
Yeah, which is egregious when you have the lowest storage option on a Mac. There's no reason to have this crap wasting space on my SSD.
 
I found a use. Comparison of car features among all brands.

“Which cars sold in the USA are offered in fastback versions?”

“Which AUDI models have DCC?”
Do you really actually trust the result enough to go and spend $50,000 on a car? What if it missed a model that you would have really liked?

What if it provided a list of features only available in a different market?

You have to go and search for yourself anyway to avoid the chance of wrong information, so did you save time or spend even more time?
 
You have to go and search for yourself anyway to avoid the chance of wrong information, so did you save time or spend even more time?

That's not how people work. That's not what the behavioral sciences show. Look up satisficing.
 
I’ll dive into semantics just for a moment…
Is what iOS is generating a “headline”, or is it a summary? Because if we are calling it a headline about a news article, Apple/iOS shouldn’t be changing it at all, at least if it’s based on a news article with its own headline.
News organizations have started doing a lot of clickbait in their actual headlines (where you won't get the relevant information until you click and read). I hate the AI craze, but using AI to generate actually useful headlines to combat the clickbait that the original sources put out is a form of fighting fire with fire that I can at least understand.
 
As someone who has taught (admittedly only mathematical subjects at the university level), I do appreciate what teachers do to try to get students to use their brains.

I'm struck, however, by the similarities I'm hearing now with the warnings we heard about calculators and television in the classroom in the '70s, PCs and Word Processors in the '80s, the Internet and Search Engines in the '90s, and Tablets and Mobile phones in the '00s and '10s. Now education is being challenged by advances in AI.

The challenge is not to get students to use their brains in the way you were taught. The world you were raised in, one without liberal access to AI, is gone. You won't be relevant if you endeavor to pull your students back into the way you learned.

Teach them to think in the new world they’ll inhabit, where they can access information previously only available to academics and those with advanced degrees. Instead of spending hours and days poring over books, they can interrogate AI and receive personalized instruction. Teach them logic and how to synthesize answers into deeper understanding and new questions. This will enable them to surpass our achievements.
You’re not wrong. We have to prepare students for the world that they live and work in, and that now includes AI. But as is always the case, technology advances faster than society. As things are now, AI is often allowing students to achieve success with little effort, knowledge, or learning. Educators and the education system itself are still learning how to adapt, just as we’ve had to learn how to adapt to the existence of smartphones. I’m not one of those educators who believes that they should be flat out banned from classrooms, for exactly the reason you say. There’s no sense in preparing students for the world of 30 years ago.
 
Do you really actually trust the result enough to go and spend $50,000 on a car?
Yes. But I would go to the dealer, test drive, and then look at brochures.
What if it missed a model that you would have really liked?
I’ll find out at the dealers.
What if it provided a list of features only available in a different market?
I’ll find out at the dealers.
You have to go and search for yourself anyway to avoid the chance of wrong information, so did you save time or spend even more time?
Most people I know wouldn’t blindly do a $50k wire transfer to the dealer. All this stuff is a starting point. Not the finish line.
 