I don't get this take at all. You seem to be claiming that the only valid way to program is through inscrutable code. It's inevitable, imo, that all programming will transition to natural language in the not-so-distant future.
It's like an employee I wouldn't hire: someone bright but unstable. They offer AI tools on my job management app, and after playing with them and getting wildly wrong results, there is no way I would embed them in my data. It's amazing technology, but it is not ready yet. Nowhere near ready.
 
The interesting thing about this is how unsophisticated these prompts are. They are using none of the modern techniques that have emerged from research. Compare this to the Anthropic leaked artifacts prompt, here: https://gist.github.com/dedlim/6bf6d81f77c19e20cd40594aa09e3ecd

I notice they are zero-shot prompts, too. We normally work with general models, so we do multi-shot. I wonder if Apple can get good results without examples (and with very basic prompts) because they are using small, task-specific models.
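To make the zero-shot vs. multi-shot distinction concrete, here is a toy sketch in Python. The summarization task and the example emails are invented for illustration; nothing here is from Apple's actual prompts.

```python
# Hypothetical sketch contrasting zero-shot and few-shot (multi-shot) prompts.
# A zero-shot prompt states the task with no worked examples.
zero_shot = (
    "Summarize the following email in one sentence.\n"
    "Email: {email}\n"
    "Summary:"
)

# A few-shot prompt prepends worked examples so a general-purpose model
# can infer the expected format and tone from them.
few_shot = (
    "Summarize each email in one sentence.\n\n"
    "Email: The team meeting moved from 3pm to 4pm on Thursday.\n"
    "Summary: Thursday's team meeting now starts at 4pm.\n\n"
    "Email: Reminder that expense reports are due Friday.\n"
    "Summary: Expense reports are due Friday.\n\n"
    "Email: {email}\n"
    "Summary:"
)

print(zero_shot.format(email="Lunch is cancelled today."))
```

A small task-specific model that was fine-tuned on summarization may not need the worked examples at all, which would explain why Apple can get away with the zero-shot style.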
 
The interesting thing about this is how unsophisticated these prompts are. They are using none of the modern techniques that have emerged from research. Compare this to the Anthropic leaked artifacts prompt, here: https://gist.github.com/dedlim/6bf6d81f77c19e20cd40594aa09e3ecd

I notice they are zero-shot prompts, too. We normally work with general models, so we do multi-shot. I wonder if Apple can get good results without examples (and with very basic prompts) because they are using small, task-specific models.

I wonder how easily Apple could update these in the background. Will we see Apple Intelligence getting better due to Apple silently tweaking their prompt prefixes?
 
I wonder how easily Apple could update these in the background. Will we see Apple Intelligence getting better due to Apple silently tweaking their prompt prefixes?
Given that this is part of the package pushed to the device, I think the only way an update would happen is via a release. That being said, this is in active development, so I would think they'd be able to dramatically improve performance with better prompting. Depends on the model, of course.
 
I don't get this take at all. You seem to be claiming that the only valid way to program is through inscrutable code. It's inevitable, imo, that all programming will transition to natural language in the not-so-distant future.
Code doesn't have to be inscrutable, but it does have to allow you – at least sometimes – to specify expected behavior in a manner that leaves no room for interpretation. Which is generally much harder and slower to achieve using natural language, at least if you care about getting reliable, exact, and repeatable results.
 
The questions should be short, no more than 8 words. The answers should be short as well, around 2 words.
I understand why they are limiting the AI features to English. This will require a whole lot of language-specific customization if they go about it in that way.
 
I am pretty sure the prompt "Do not hallucinate" will not work. The AI does not know if it's hallucinating or not. That's the problem.
Paradoxically, this still tends to reduce hallucinations to some degree, for reasons that aren’t entirely clear. Similar to “think step by step” and the like.
 
I still find it surreal that we are now instructing computers to do things using natural language, even at the back-end level, instead of using programming code.

Absolutely! We have truly reached a new level of human-computer interaction!

Of course, all of the filler words are immediately dropped when that natural-language pre-prompt is processed. Root words are extracted, and the intent is established. That's why repeating words like "mail" and "output JSON" can be useful, as it raises the importance of those words thereby "tuning" the prompt.

Reading these prompts also highlights why many people might have had trouble with Siri over the years. Earlier versions of Siri could not process the kind of loosely assembled natural-language requests that today's and tomorrow's Siri can. It wasn't a fault of the technology; the technology simply needed to evolve, and we're now there.
 
Who needs hallucination? In time, without a continuous feed of information created by humans, these systems will simply pollute each other.

 
These instructions may replace thousands of lines of code for a programmer.

If this works, it’s pretty amazing.
If it works. Right now it sounds like trusting a machine to do guesswork, which makes me very uncomfortable about whether I can trust or rely on it.
 
I love how you can, apparently, reduce AI hallucinations and lying just by telling them not to hallucinate or make up facts.
 
I still find it surreal that we are now instructing computers to do things using natural language, even at the back-end level, instead of using programming code.
Apple is almost certainly doing more. But the English-language stuff is easy to find, so this is what they found. We get some hints that more is done, because the instructions say "provide a list of dictionaries", which has a very exact meaning if you know Python or related languages. They also say "output only in JSON". From this we know that Apple is not showing the AI's output directly to the user; they are doing something else before the user sees it. Maybe simple formatting, but "something."
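The point about "output only in JSON" implying a post-processing step can be sketched in a few lines of Python. The raw model reply below is hypothetical, invented to match the "list of dictionaries" phrasing; the idea is only that strict JSON lets the app parse the output and render its own UI rather than showing raw model text.

```python
import json

# Hypothetical raw reply from a model that was told to
# "provide a list of dictionaries" and "output only in JSON".
model_reply = (
    '[{"question": "Favorite color?", "answer": "Blue"},'
    ' {"question": "Coffee or tea?", "answer": "Tea"}]'
)

# Because the output is strict JSON, the app can validate and parse it,
# then format the result however it likes before the user ever sees it.
items = json.loads(model_reply)
for item in items:
    print(f"{item['question']} -> {item['answer']}")
```

If the model ever emits something that isn't valid JSON, `json.loads` raises an error, which also gives the app a natural place to catch and retry or fall back, another reason never to pipe model output straight to the screen.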
 
This whole thing comes off as preposterous. That these instructions have to be laid out like that, like a reminder to someone suffering from a mental breakdown, means LLMs are nowhere near ready for prime time. Useful, sure; cool, yeah; but there is still something way wrong. You can't really deploy these systems for anything major or vital until they figure these things out.

You _also_ need some context to understand what problem is being solved. Don't you form a similar set of ground rules when, say, responding to an email? E.g.

"I'm writing a response to an email from my boss. Try to answer all questions in a precise and productive manner. Use professional language, avoid filthy or risqué comments and avoid pointing out the original sender is a moron."

I keep thinking that perhaps LLMs are really not the way forward for AI. I think Yann LeCun is right. He seems like one of the only level-headed guys in the AI space.
And Apple is correct in not rushing these features like Google and others. This stuff is just not ready for prime time.
LLMs are absolutely a way forward. I don't think they are going to get us to generalized AI; that is, they aren't a single step to the destination. The current LLM boom is basically a result of everybody expecting an asymptotic curve where LLMs stop improving, but they have actually managed to continue improving steadily even when ludicrous amounts of memory and compute are thrown at them.

A lot of research and funding is going toward improving efficiency and creating entirely new techniques. We don't know where things will end up once the hype dies down. But each AI push before this one has given us broadly useful technologies: for image recognition, for improving sensor data, for mapping based on erroneous LIDAR data, for adaptive directions in navigation systems, and so on.

Unlike OpenAI keeping servers running on Microsoft's dime, Apple wants this running predominantly on local hardware, on every phone and Mac released from now on. As such, the final 18.1 version is probably a decent yardstick for where this tech is without the VC hype.
 
I don't get this take at all. You seem to be claiming that the only valid way to program is through inscrutable code. It's inevitable, imo, that all programming will transition to natural language in the not-so-distant future.
Code doesn't have to be inscrutable, but it does have to allow you – at least sometimes - to specify expected behavior in a manner that leaves no room for interpretation. Which is generally much harder and slower to achieve using natural language, at least if you care about getting reliable, exact and repeatable results.
The path forward, at the current level of intelligence, is not to use natural language as the tool itself but to use it to help build specific tools. This is how you get a language model to answer math questions, for instance: you give it tools to use internally. A lot of the progression being researched in LLMs is how far you can get toward generalized AI by having models build specialized tools for other models to use as part of answering sophisticated questions.

In the cases prompted here, that isn't possible; I can't build an if/then statement to do natural-language summarization. That's why I instead process it using a language model.

But if I am trying to get a certain task done? I could use LLM to build a one-time or a multi-use tool, such as a shortcut. I use intelligence features in Xcode to help generate Swift code, rather than pretending LLM execution can replace predictable, optimized, hopefully human-reviewed code.
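The "give the model tools" idea described above can be sketched as a toy dispatcher. Everything here is simulated: the `tool_request` dict stands in for the structured tool call a real model would emit, and `calculator` is a made-up deterministic tool the harness routes it to.

```python
# Toy sketch of routing a model's tool request to deterministic code,
# instead of letting the model guess at arithmetic in free text.

def calculator(expression):
    # A deterministic tool: evaluate a basic arithmetic expression.
    # The character whitelist keeps eval restricted to plain arithmetic.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported expression")
    return eval(expression)

# Simulated tool request, shaped like the JSON a model might emit.
tool_request = {"tool": "calculator", "input": "(17 * 24) + 3"}

# The harness looks up the requested tool and runs it; the numeric
# result would then be fed back to the model for the final answer.
tools = {"calculator": calculator}
result = tools[tool_request["tool"]](tool_request["input"])
print(result)  # 411
```

The key design point is the one made above: the model never computes the answer itself, it only decides which tool to call and with what input, and the predictable code does the rest.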
 
I don't get this take at all. You seem to be claiming that the only valid way to program is through inscrutable code. It's inevitable, imo, that all programming will transition to natural language in the not-so-distant future.

100% certain it will. Human/machine interaction has already moved from a programming interface (a command line) to languages humans understand: pointing at things, and English (and many other languages). Programming computers will follow the same path. There's no need for humans to know how to "speak computer". Only a fraction of people know how to program; a revolution will happen when everyone is able to, the same revolution that happened after the GUI, when everyone could use a computer.
 
So I think the thing with saying "do not hallucinate" is similar to other weird prompts that data scientists have uncovered can improve results:

Saying please and thank you, for whatever reason, gives better answers. This may be because, in the context of human language, being polite when asking questions generally results in better answers online. Just think of the forums. If someone is being nice, they are likely to get a nice response. If they are rude, they are likely to get people telling them to go Google it themselves.

Same for telling it to work step by step. IDK if that still applies to the current version of ChatGPT, but for a while telling it to do that yielded better results, presumably because, again, those words appeared near instances of humans solving problems better by breaking them down into smaller steps and working through them.

They are learning from us, and what's weird is that even the people building these things aren't exactly sure how it all works. So my guess is that telling it not to hallucinate probably reduces the chances of it hallucinating by some non-trivial amount. Even improving your responses by a few percent by adding that line to your prompts is probably worth it.

I am pretty sure the prompt "Do not hallucinate" will not work. The AI does not know if it's hallucinating or not. That's the problem.

"Do not hallucinate."

oh i didn't realise it's that easy

“Do not hallucinate”

Apple cracked the AI code. Can’t believe no one tried this before.

Wait, so their entire AI is writing prompts to some custom ChatGPT API?
lol

"do not hallucinate" makes it even funnier lol

Does saying “do not hallucinate” really work? How can anything know if it’s hallucinating or not? Isn’t that the definition of a hallucination?

If that works then wouldn’t it be covered by “do not make up factual information?”

Definitely not. But imagine if that's all it took—thousands of engineers are working on the hallucination problem and all you have to do is say "do not hallucinate" 😂
 