> For the Smart Reply feature, the AI is...

<jaundiced eye> ...Apple is insistent that everyone acknowledge it as "smart" and "intelligent".

> ...programmed to identify relevant questions from an email and generate concise answers. The prompt for this feature is as follows: ...
>
> The Memories feature in Apple Photos, which creates video stories from user photos, follows another set of detailed guidelines. The AI is instructed to generate stories that are positive and free of any controversial or harmful content...
In short, it's a pablum-generator.
 
I like how polite the prompts are. Do LLMs work differently if you don’t say please?
 
It's like the girl/guy at your local coffee shop - you might get better service if you're nice to them.
It’s a machine though, it doesn’t have feelings/sensibilities/ego that can be affected by politeness or lack thereof like a person does.
 
If you tell an AI "draw a house, but do not draw a horse", invariably you end up with a tiny horse somewhere in the drawing of the house. You cannot negative prompt!
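For what it's worth, the image-generation stacks that do support negation handle it out-of-band: the negation goes in a separate parameter that steers sampling away from a concept, rather than as a "do not draw" sentence inside the prompt. A minimal sketch with Hugging Face diffusers (the model id and prompts are purely illustrative):

```python
# Minimal sketch: the negation is passed as a separate parameter,
# not written as "do not draw a horse" inside the prompt itself.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model id
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a cozy house on a hillside",
    negative_prompt="horse",  # steers sampling away from the concept
    num_inference_steps=30,
).images[0]

image.save("house.png")
```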
 
> "Each chapter is a JSON with these keys and values in order"

Great. Good start: you've got a bug in your prompt. A JSON *array* (or list) is what you wanted to say there... Sigh.
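For anyone wondering about the distinction: a JSON object is a set of key/value pairs (where key order carries no meaning), while a JSON array is an ordered list. A quick sketch in Python, with made-up key names for illustration:

```python
import json

# A JSON *object*: named key/value pairs; key order carries no meaning.
chapter = {"title": "Summer trip", "mood": "uplifting"}  # keys are illustrative

# A JSON *array* (a list): an ordered sequence, here one element per chapter.
chapters = [
    {"title": "Summer trip", "mood": "uplifting"},
    {"title": "Birthday", "mood": "joyful"},
]

print(json.dumps(chapters, indent=2))
```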
 
Leave it up to Apple to put an end to a problem that the entire AI industry and the Terminator universe have been unable to solve: tell the AI to "Just stop it."
 
I don't get this take at all. You seem to be claiming that the only valid way to program is through inscrutable code. It's inevitable, imo, that all programming will transition to natural language in the not-so-distant future.
I don't see this as inevitable at all, because natural human language is very much not geared toward being specific and exact. Look at engineering specification documents or legal contracts: they're often huge wobbling piles of nearly redundant statements, because if they were written simply and succinctly, they would be too vague for purpose and allow all sorts of (mis)interpretation. Natural language might be okay for simple cases where a benign interpreter can gather some context—for instance, if a HomePod in the room "Bedroom" hears "Siri, turn off the lights" but the Apple TV in the room "Living Room" is having user interaction, maybe don't turn off every light in HomeKit—but there are uncountably more scenarios where making intent crystal clear would require reams of natural language to do the job of a few lines of code.
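To make the contrast concrete, here's a rough sketch of the "few lines of code" version of that lights example. The `home.lights_in()` and `light.set_power()` calls are entirely made-up stand-ins, not a real HomeKit binding; the point is only that the scope of "the lights" is stated explicitly instead of being left to interpretation:

```python
# Hypothetical sketch: `home.lights_in(room)` and `light.set_power(...)` are
# invented stand-ins, not a real HomeKit API.
def turn_off_lights(home, request_source_room):
    """Turn off lights only in the room the request came from."""
    for light in home.lights_in(request_source_room):
        light.set_power(False)
```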

Then again, my prompt contains "You are a curmudgeonly engineer and coder", so maybe I just need to get better at ignoring previous instructions...
 
I don't see this as inevitable at all, because natural human language is very much not geared toward being specific and exact. ...
Exactly that. Is "You can't have too much..." an encouragement to have more, or a warning that you should have less?
 
Good to know. Waiting to try out Apple Intelligence later this year.
 
"There are two three things you don't want to see being made - sausage, legislation and AI"
-Otto von Bismarck, German Chancellor

Wise guy, but he should have said this..


"There are three four things you don't want to see being made - sausage, legislation, BMW’s, and AI"
-Otto von Bismarck, German Chancellor


Doubt he’s ever worked on a modern German car.
 
So Apple's dicking around with this, when they have yet to produce a competent file search. Phenomenal.
 
"The Memories feature in Apple Photos, which creates video stories from user photos, follows another set of detailed guidelines. The AI is instructed to generate stories that are positive and free of any controversial or harmful content..."

So how do you tell it that your cat died, so stop creating videos featuring her every week?
 
It’s a machine though, it doesn’t have feelings/sensibilities/ego that can be affected by politeness or lack thereof like a person does.

They also don't understand analogies, but there are people who don't either.

It just sees you being nice and copies it, based on interactions it was trained on where niceness was met with niceness.
 
I love how you can, apparently, reduce AI hallucinations and lying just by telling them not to hallucinate or make up facts.
It sounds dumb but some people swear they get better results if they say things like "please get this right or my boss will fire me." Could be placebo. Who knows.
 