Sorry to be harsh but you don’t understand how LLMs work.

They don’t understand what you’re saying. They calculate a statistically likely response to your input

This means they aren’t going to be able to _do_ anything like your examples above because that would require them to understand your input

This isn’t possible with current large language models because they don’t understand anything. Adding that could require entirely new engineering
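For the curious, the "statistically likely response" idea can be illustrated with a toy bigram model (real LLMs use transformers over tokens and billions of parameters, but the predict-the-likely-next-piece principle is the same; the tiny corpus here is made up):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    # Return the statistically most likely continuation seen in training.
    return following[prev].most_common(1)[0][0]

print(next_word("the"))  # "cat" appears after "the" most often in this corpus
```

Generating a whole reply is just repeating that step, picking one likely word after another.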
Define "understand". How do they manage to write programs/scripts based on the spec you give them if they "do not understand" what you are saying? Statistical miracle?
 
  • Disagree
Reactions: maxoakland
Define "understand". How do they manage to write programs/scripts based on the spec you give them if they "do not understand" what you are saying? Statistical miracle?
@maxoakland already covered this.

It generates a response that statistically matches an expected response to the prompt and previous prompts in the conversation.

It isn’t understanding in the traditional sense in which we use the word. This is simulated understanding, but that’s about it.

Don’t fall into the trap of personifying it and then inflating its capabilities beyond what it is capable of doing.
 
@maxoakland already covered this.

It generates a response that statistically matches an expected response to the prompt and previous prompts in the conversation.

It isn’t understanding in the traditional sense in which we use the word. This is simulated understanding, but that’s about it.

Don’t fall into the trap of personifying it and then inflating its capabilities beyond what it is capable of doing.
That's just nonsense. The only way to test understanding is to ask questions, and ChatGPT gives answers demonstrating excellent understanding of many issues. Besides, even if you believe it's just "simulated understanding" (though what would "real" understanding be then?), who cares? It produces excellent results (better than most people with "real" understanding). I'll take that. Apple superfans, in the meantime, are going to wait until Apple comes up with "real" understanding (which, no doubt, can be achieved only on Apple devices).
 
  • Like
Reactions: Good User Name
The advantage of an Apple LLM is obviously going to be:

  • Training a more limited (Siri-replacement) model that can then be run locally on your device, operated primarily via voice.
  • Which will never use your data to train it (and is how they keep us buying iPhones)
  • For a 'privacy first' approach to LLMs:
    • i.e. you completely control what it's doing
    • (no doubt it will use your iCloud data store to remember stuff and work cross-device).
  • We can expect it to talk to apps on your phone via a locally run API (which will then presumably use their own remote API calls with your permission).
    • Though Apple being Apple, I'd expect them to have 'preferred partners' in each market for certain tasks ($$$$$)
  • Google, OpenAI, Microsoft, et al. will continue to run AIs in the cloud that can run anywhere, hook up to cloud APIs, and will likely continue to use your interactions to train them.
So if I'm right, Apple's approach will be a significant point of difference.

P.S. a lot of people on this thread seem to be invested in calling out LLMs as a scam.

They've obviously been released a little before they're fully ready, to gain interest and drive investment; i.e. their models need further refinement.

I'm not a computer scientist, but I can't see any reason why LLMs can't keep scaling up, with their models further refined into an incredibly useful and powerful tool.

AGI is an entirely different thing and that's further away on the horizon.
 
What are you asking it? Examples?

Because of healthcare regulations such as HIPAA.

Studies have already shown that GPT4 is better than the average physician at diagnosing rare diseases.

I'm not going to start posting my chats on here, but if you want to dig it was referencing using thionamides in Thyroid Storm. The bit about physicians and rare disease makes sense because the average physician doesn't diagnose rare diseases; they send the patient to an infection specialist. My physician buddies wouldn't even try because they aren't trained in it.

None of this matters anyway, as some people, including myself, just don't have much use for a natural-language LLM. I've tried and tried to make GPT useful for something and gave up; same with Siri and Alexa. The best thing I got from it was a couple of Excel macros.

I just don't need an all-encompassing digital assistant, not to mention the privacy implications.
 
“AI is apparently "absolutely critical" to Apple, with Cook confirming that Apple has been investigating generative AI "for years."”

Lmao! We all know that Tim totally missed the boat on generative AI.
They’re using a transformer-based generative AI in the iOS 17 keyboard. It’s cross-trained on your text so it predicts what you’re going to write, and unlike MS’s it’s on-device rather than cloud-based.

There’s also loads and loads of AI in the camera and photos apps, in Apple Music (mostly server-side), in iCloud mail’s spam filtering, and so on, and has been for years. The speech recognition and generation capabilities are also heavily dependent on AI

What Apple isn’t doing much (if anything) on is generalised chatbots: they’re a neat gimmick, but not actually a useful or entertaining product yet, and Apple has almost always focused on producing a product.

Something I assume they’re working on, but are lacking in, is natural text-to-command processing. In some ways Siri’s text-to-command processing isn’t as convincing as an old text adventure’s (though those were far more rigidly coded), and the underlying commands are a horribly irregular mess, not well integrated with Automator actions or AppleScript even on the Mac.
 
They’re using a transformer-based generative AI in the iOS 17 keyboard. It’s cross-trained on your text so it predicts what you’re going to write, and unlike MS’s it’s on-device rather than cloud-based.

There’s also loads and loads of AI in the camera and photos apps, in Apple Music (mostly server-side), in iCloud mail’s spam filtering, and so on, and has been for years. The speech recognition and generation capabilities are also heavily dependent on AI

What Apple isn’t doing much (if anything) on is generalised chatbots: they’re a neat gimmick, but not actually a useful or entertaining product yet, and Apple has almost always focused on producing a product.

Something I assume they’re working on, but are lacking in, is natural text-to-command processing. In some ways Siri’s text-to-command processing isn’t as convincing as an old text adventure’s (though those were far more rigidly coded), and the underlying commands are a horribly irregular mess, not well integrated with Automator actions or AppleScript even on the Mac.
Thanks for the update Philip. Appreciate it 🤙
 
That's just nonsense. The only way to test understanding is to ask questions, and ChatGPT gives answers demonstrating excellent understanding of many issues. Besides, even if you believe it's just "simulated understanding" (though what would "real" understanding be then?), who cares? It produces excellent results (better than most people with "real" understanding). I'll take that. Apple superfans, in the meantime, are going to wait until Apple comes up with "real" understanding (which, no doubt, can be achieved only on Apple devices).
It isn’t nonsense.

Understanding goes much deeper and captures nuance and subtlety. It is the difference between being able to parrot things and understanding them.

These models can parrot correctly formatted responses learned from their training data, but since they can’t understand the conversation they can’t ensure accuracy or correctness.
 
Generative AI in its current form is just an experimental beta. It doesn’t meet usual product reliability standards. Apple is doing the right thing by not treating users like guinea pigs by seeing what they do with a half-baked product. Generative AI isn’t a threat at all. The only real threat is a possible general AI system that may come at some point in the future, but it’s a threat to everyone, not just Apple.
 
Define "understand". How do they manage to write programs/scripts based on the spec you give them if they "do not understand" what you are saying? Statistical miracle?
Yes it literally is a statistical “miracle”. It takes in massive amounts of examples and then calculates a statistically likely response to your input

How did you think we were at the point where computers can “understand” what you’re saying? That’s science fiction

Smart people have said it before: computers are going to fool a _lot_ of people into thinking they’re sentient long before they are, if they ever are at all
 
  • Like
Reactions: Pinkyyy 💜🍎
Yes it literally is a statistical “miracle”. It takes in massive amounts of examples and then calculates a statistically likely response to your input

How did you think we were at the point where computers can “understand” what you’re saying? That’s science fiction

Smart people have said it before: computers are going to fool a _lot_ of people into thinking they’re sentient long before they are, if they ever are at all
One can make a stronger, or at least more enlightening, statement.
Most programmers do drudge work, which consists essentially of writing code very similar to stuff that has been written a million times before (compare, e.g., a cabinet fitter installing more or less the same thing every day, where only small details like organization or color change). People like that say ChatGPT helps them a lot, because it can act as a tool both to find code similar to the task at hand and to make the more-or-less-mechanical modifications required to customize it.

But some programmers are operating at the leading edge, doing things no one has done before. And people like that find ChatGPT basically useless, because there's nothing for the statistics to work on: no corpus of pre-existing code that does the job and can trivially be tweaked.

Remember: most people think a math PhD is someone who uses a calculator a lot. They have ZERO concept of what people who are truly creative in science are doing, and no ability or competence to judge the difference between "ChatGPT helped me write a linked-list class" and "ChatGPT helped me design an initialization protocol with one fewer packet for MPTCP".
 
Yes it literally is a statistical “miracle”. It takes in massive amounts of examples and then calculates a statistically likely response to your input

How did you think we were at the point where computers can “understand” what you’re saying? That’s science fiction

Smart people have said it before: computers are going to fool a _lot_ of people into thinking they’re sentient long before they are, if they ever are at all
I am familiar with how neural networks work. It's not really a statistical thing. Here is an example of a request I submitted to ChatGPT4:

Can you produce TikZ code that draws a green equilateral triangle with one (lower) edge being horizontal. Add another equilateral triangle (red) which has one (upper) edge being horizontal and all its vertices placed in the centers of the green triangle edges.

Bot replied:

Here is a TikZ code that draws the requested triangles:

\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{shapes.geometric}

\begin{document}
\begin{tikzpicture}
% Draw green equilateral triangle
\node[regular polygon, regular polygon sides=3, minimum size=4cm, draw=green, fill=green!30] (green_triangle) {};

% Draw red equilateral triangle
\node[regular polygon, regular polygon sides=3, minimum size=2cm, draw=red, fill=red!30, rotate=180] (red_triangle) at (green_triangle.center) {};

% Place vertices of red triangle at centers of green triangle edges
\path (green_triangle.corner 1) -- (green_triangle.corner 2) coordinate[midway] (A);
\path (green_triangle.corner 2) -- (green_triangle.corner 3) coordinate[midway] (B);
\path (green_triangle.corner 3) -- (green_triangle.corner 1) coordinate[midway] (C);

\node[regular polygon, regular polygon sides=3, minimum size=2cm, draw=red, fill=red!30, rotate=180] (red_triangle) at (barycentric cs:A=1,B=1,C=1) {};
\end{tikzpicture}
\end{document}


When loaded into the online TeX Viewer, this code produces the following picture:


tri.png


I'd say the bot understood me very well. Now, not all similar requests get handled equally well, but the results always indicate at least some level of understanding.
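(As an aside: the same medial-triangle figure can also be written by hand with explicit coordinates, no node trickery needed. A minimal sketch, assuming a side length of 4, so the green triangle's edge midpoints sit at (2,0), (1,√3) and (3,√3):)

```latex
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  % Green equilateral triangle: lower edge horizontal, side 4, height 2*sqrt(3)
  \draw[green!60!black, fill=green!30]
    (0,0) -- (4,0) -- (2,{2*sqrt(3)}) -- cycle;
  % Red (medial) triangle: vertices at the midpoints of the green edges,
  % giving a horizontal upper edge and a downward-pointing apex
  \draw[red, fill=red!30]
    (2,0) -- (3,{sqrt(3)}) -- (1,{sqrt(3)}) -- cycle;
\end{tikzpicture}
\end{document}
```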

To make things more interesting, here is an additional conversation I had with the bot.

Me: Now add a blue circle on top of the green triangle.
Bot:
Here is the updated TikZ code that adds a blue circle on top of the green triangle:

<TikZ code that draws the following>

tri1.png

In this code, I added a blue circle on top of the green triangle by creating a new circle node at the center of the green triangle. The size of the circle is set to approximately the height of the green triangle (3.464cm) to make it fit nicely within the triangle.

This was not what I wanted but one can see that I did not express my request very well. So...

Me: I wanted the circle to be fully outside the green triangle, above it (by Y axis) and touching the triangle's top vertex.
Bot:
I apologize for the misunderstanding. Here is the updated TikZ code that places the blue circle fully outside the green triangle, above it, and touching the triangle's top vertex:
<new version of TikZ code draws this>



tri2.png


In this updated code, I first calculated the radius of the circle that is needed to touch the top vertex of the green triangle. Then, I created a new circle node above the green triangle's top vertex by using the yshift option. The size of the circle is set to twice the calculated radius to make it touch the triangle's top vertex.

Was it very smart? Not really (although ChatGPT knowing TikZ is a nice bonus; this thing knows a lot of languages). But the bot definitely behaves as if it could understand me. It does not matter what epithets one attaches to the term "understanding": things are clearly moving in the right direction. This bot is light years ahead of Siri. Give it five more years and we could get a phenomenal tool. And it is useful already. So it's no surprise that Apple wants to get into this game. They have the money, but they lack both the expertise and the reputation as an AI player.
 
AI is the biggest threat to Apple's entire ecosystem.

GPT4 is demonstrating that an LLM can be your super personal assistant. And if LLMs can be your super personal assistant, your iPhone is suddenly no longer that important.

For example, need to plan and book a trip? Just tell your LLM to do it. It can even book tickets for you.

The flight is canceled and you're stranded? No problem. Ask your LLM to automatically call the airline's customer service to get a refund and check whether their terms of service will provide a hotel for you. If so, have the LLM book a hotel.

None of this requires anything other than a screen, an internet connection, and your LLM personal assistant. You don't need to juggle different apps to do this. Just tell your LLM what you want. Therefore, it doesn't matter if you're using an iPhone or a $50 junk phone.
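The "just tell your LLM" workflow described above is usually built as a tool-calling loop. A minimal sketch, with a keyword stub standing in for the model and entirely hypothetical tool names (a real assistant would send the request and the tool list to a hosted LLM and get back a structured tool call):

```python
# Hypothetical tools the assistant may invoke; names are made up for illustration.
def request_refund():
    return "Refund requested with the airline."

def book_hotel(nights):
    return f"Hotel booked for {nights} night(s)."

TOOLS = {"request_refund": request_refund, "book_hotel": book_hotel}

def fake_model(request):
    # Stand-in for the LLM's tool choice: a real model returns a structured
    # tool call; here we just match keywords to show the control flow.
    if "refund" in request.lower():
        return "request_refund", {}
    if "hotel" in request.lower():
        return "book_hotel", {"nights": 1}
    return None, {}

def run_assistant(request):
    name, args = fake_model(request)
    if name is None:
        return "No tool needed."
    return TOOLS[name](**args)

print(run_assistant("My flight was canceled, get me a refund"))
```

The phone's job in this picture shrinks to hosting the screen and the tool plumbing, which is exactly the threat being described.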
Wow, this would be really scary, if true..
You know, the thing is, the current AI/ML technology is not even perfect yet. ChatGPT currently has some outdated information and tends to make stuff up sometimes. It’s not reliable at all.

Who knows what it’ll possibly evolve into in the future, but that would definitely be really scary in general, regardless of the Apple ecosystem. I just can’t imagine that we will somehow end up living in a world where machines do everything for us..

While AI/ML does have some benefits, it still scares me. It can already generate almost anything: photos, videos, art, music, etc. And while it’s not perfect, since it does have this ability, I’m concerned about what it could evolve into in the future. If anyone can generate art and music using AI, what would happen to the ACTUAL humans who have those skills? As an artist myself, that thought alone makes me very concerned.. 🙁

Now that kind of reminds me of the WALL-E movie, with all those obese people just lying there with a digital screen in front of them while the robots did basically everything for them.. :oops:
 
If anyone can generate art and music using AI, what would happen to the ACTUAL humans who have those skills? As an artist myself, that thought alone makes me very concerned.. 🙁

Now that kind of reminds me of the WALL-E movie, with all those obese people just lying there with a digital screen in front of them while the robots did basically everything for them.. :oops:
That's the key question. I think artists (and all other creatives and workers who could be replaced by AI) should unionize and put in their contracts that companies can't use AI. Otherwise, most jobs are going to be replaced in a race to the bottom using AI.

We can already see that Hollywood wants to do that, and writers and actors are smart enough to be striking over it. Other industries should follow suit or we're screwed.
 
If just an ounce of GPT's ability to hold a conversation and accurately transcribe words makes it into Siri, then I'm happy.

The number of sentences that make current Siri go look something up in Safari, instead of speaking an answer or doing the thing I asked, is staggering and far beyond acceptable.

The only way to use Siri today is by learning which commands it accepts and how to pronounce and structure the sentence so that it fits and triggers the response you want.

Not much of a virtual assistant when you have to assist it in assisting you.
Exactly. This is my main issue with Siri: most of the time it doesn’t understand what you’re saying unless you use different wording.

And the “I found this on the web” answer instead of giving an actual answer.. yeah, they do need to improve that as well.

I think Siri is a great assistant for controlling simple stuff, such as setting alarms, notes, making calls, sending messages, playing music, etc. But when it comes to asking her a question, it’s not so great when it can’t pick up an answer and instead throws a bunch of web results and calls it a day. This does need to be improved. Better integration with third-party apps would also be very welcome.
 
That's the key question. I think artists (and all other creatives and workers who could be replaced by AI) should unionize and put in their contracts that companies can't use AI. Otherwise, most jobs are going to be replaced in a race to the bottom using AI.

We can already see that Hollywood wants to do that, and writers and actors are smart enough to be striking over it. Other industries should follow suit or we're screwed.
Yeah, I totally agree. I feel like AI just keeps getting more dangerous as it evolves, which is why it scares me. It could end up replacing lots of jobs, leaving many people jobless..
Thinking about what could happen if it keeps evolving always makes me realize how dangerous it could become. Not saying it isn’t useful, but to me it seems more dangerous than useful.

So yeah, I think the use of AI should somehow be limited by companies and people before it gets out of hand..
 
AI servers ingest massive amounts of data and require huge computing resources and datasets to generate responses right now. Nvidia's A100 offers up to 80 GB of onboard memory and costs ~$15K; their deep-learning H100 card can cost $45K. How would you be able to have a handheld device with limited resources perform the same functions while respecting privacy by not connecting to enterprise servers with their huge datasets and CPU/GPU power?
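Part of the answer is that inference on a small, quantized model needs far less memory than the clusters used for training. Rough back-of-envelope math (model sizes and bytes-per-weight here are illustrative assumptions, not any vendor's spec):

```python
def weight_memory_gb(params_billion, bytes_per_weight):
    # Memory just for the weights; activations and the KV cache add more.
    return params_billion * 1e9 * bytes_per_weight / 1e9

# A 7B-parameter model quantized to ~4 bits (0.5 bytes) per weight:
print(weight_memory_gb(7, 0.5))   # 3.5 (GB) - conceivable on a phone-class device
# The same model at fp16 (2 bytes per weight):
print(weight_memory_gb(7, 2.0))   # 14.0 (GB) - laptop/desktop territory
```

That gap is why an on-device assistant would be expected to use a much smaller model than the cloud services run, not a GPT-4-class one.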
 
  • Like
Reactions: Pinkyyy 💜🍎
Exactly. This is my main issue with Siri: most of the time it doesn’t understand what you’re saying unless you use different wording.

And the “I found this on the web” answer instead of giving an actual answer.. yeah, they do need to improve that as well.

I think Siri is a great assistant for controlling simple stuff, such as setting alarms, notes, making calls, sending messages, playing music, etc. But when it comes to asking her a question, it’s not so great when it can’t pick up an answer and instead throws a bunch of web results and calls it a day. This does need to be improved. Better integration with third-party apps would also be very welcome.
I think those simple things, related to the hardware and services Apple already provides, are the important bits and the most likely target for an improved, “smarter” Siri. Turning it into a web-search answer bot gets into areas that Apple isn’t prepared to go and that most of us won’t want. Better to use a dedicated search bot for that.

Also, if Apple were ever able to get an LLM-style AI running locally, it would be more feasible to limit its scope to local actions that can be predicted and that might not need to send things off to a data center.
 
  • Like
Reactions: Pinkyyy 💜🍎
Of course, Apple should train an LLM, and then work at beefing up its Neural Engine (not sure if that's needed) to run inference on that LLM locally. That's the promise from @EMostaque about LLMs in the near future.

You have to figure Apple can take the lead on 'local disconnected inference' with Apple Silicon SoCs in a phone that can make this possible. Assuming that capability is no more than three years away... Who else can get there before them?
 