So ChatGPT, which has been around for 3 years, can talk to multiple people at once, and Siri, which has been around for 14 years, can barely speak to one 😂

If you knew anything about computing, you’d know Siri (like other typical voice assistants) is not built on large language models and pre-dates the transformer models used by ChatGPT, Gemini and Claude today.

You really do not want large language models doing what Siri, Alexa and Bixby do. They are too expensive, compute-intensive and error-prone for that. You’ll just have people arguing with their phones all the time while burning the world down with GPUs.

But hey…Jensen Huang will be able to afford another few million leather jackets.
 
No, it does do calculations. In fact, you can even ask it to show you step by step how it arrived at the answer. Heck, you can even ask it to write a program that does the calculation.

I tried Gemini 3 Pro in Cursor this week. It started to randomly delete parts of the codebase it was not supposed to touch, and when I asked why, it apologized harder than a salaryman caught drunk at work.
 
This is interesting. I created my own app which lets two users have a back-and-forth session with Apple Intelligence (in a storytelling setting). Basically, the differentiating factor wasn’t that you could get an AI to write a story, but that two people could take turns prompting an AI to write a story together as an interactive experience. Looks like that functionality will soon be native in ChatGPT. I can’t mention my app by name in here or I’ll get banned lol.
 
No, it does do calculations. In fact, you can even ask it to show you step by step how it arrived at the answer. Heck, you can even ask it to write a program that does the calculation.

This is how out of touch commentators on MacRumors are about the state of AI.
It’s fine to be keen on AI, knock yourself out, but you should educate yourself about how these models really work.
They don’t think or calculate anything.

As for your previous post asking me for an example, I can give you one from this morning.
For context, I am a software developer, and I am very much aware of the strengths and (imo very big) weaknesses.

I was expanding the functionality of a certain service class which listens to updates from a few different repositories and does certain things with the data, in order to separate the concerns of different parts of the codebase into one point of access.

Now the problem, or rather the idea of what was to be built, wasn’t exactly trivial, and the solutions offered by the AI were, on paper, not stupid, and technically worked. But they massively overcomplicated not only the functions themselves but the responsibilities of the class. We went around in prompt circles trying to fix this problem, as is always the case.
The fix?
Taking my dog for a walk and coming up with a completely different way to architect the solution, using a separate class entirely.
I would have gone around in circles with the AI for hours and ended up with an abomination of coupling and overcomplexity, because they can’t, I repeat, can’t reason, think, come up with ideas or real solutions.

It heard me say: I want to add this feature/functionality to this context, and it spewed out the appropriate slop for that scenario.
At no point is it possible for it to say: hang on a sec, is this a good idea? Maybe you should put this somewhere else or think about it differently. Because, why? It can’t think. It can’t have ideas.
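To make that concrete, here is a minimal sketch of the shape I’m describing (the names are invented for illustration; this is not my actual code):

```swift
// Hypothetical stand-ins for the real repositories; names invented
// purely for illustration.
protocol UserRepository {
    func onUpdate(_ handler: @escaping ([String]) -> Void)
}

protocol OrderRepository {
    func onUpdate(_ handler: @escaping ([Int]) -> Void)
}

// The "one point of access" service: it listens to updates from
// several repositories and republishes their data. The AI kept
// bolting every new responsibility onto this one class.
final class DataAccessService {
    private(set) var users: [String] = []
    private(set) var orders: [Int] = []

    init(userRepo: UserRepository, orderRepo: OrderRepository) {
        userRepo.onUpdate { [weak self] in self?.users = $0 }
        orderRepo.onUpdate { [weak self] in self?.orders = $0 }
    }
}

// The dog-walk fix: the new feature gets its own class with one
// narrow responsibility, instead of growing DataAccessService further.
final class OrderSummaryBuilder {
    func summary(for orders: [Int]) -> String {
        "Total \(orders.reduce(0, +)) across \(orders.count) orders"
    }
}
```

Every proposal the AI made amounted to growing the first class; the actual fix was the second one.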
 
No, it does do calculations. In fact, you can even ask it to show you step by step how it arrived at the answer. Heck, you can even ask it to write a program that does the calculation.

This is how out of touch commentators on MacRumors are about the state of AI.

Ah yes, the "get educated" response common with AI and crypto bots.

Just because Gemini shows you how it came up with an answer does not mean it calculates.

Even so, it just looks up formulas and plugs in variables. The same as you would. You don't need a degree to solve "university" math problems.
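To be clear about what "write a program" gets you, here's roughly the shape of thing that comes back: a textbook formula with the variables plugged in (a hypothetical sketch in Swift, example values made up):

```swift
// Quadratic formula for ax² + bx + c = 0: a known formula, variables
// plugged in. The example values below are made up.
func quadraticRoots(a: Double, b: Double, c: Double) -> (Double, Double)? {
    guard a != 0 else { return nil }            // not a quadratic
    let discriminant = b * b - 4 * a * c
    guard discriminant >= 0 else { return nil } // no real roots
    let sqrtD = discriminant.squareRoot()
    return ((-b + sqrtD) / (2 * a), (-b - sqrtD) / (2 * a))
}

if let roots = quadraticRoots(a: 1, b: -3, c: 2) {
    print(roots) // (2.0, 1.0)
}
```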
 
It’s fine to be keen on AI, knock yourself out, but you should educate yourself about how these models really work.
They don’t think or calculate anything.

As for your previous post asking me for an example, I can give you one from this morning.
For context, I am a software developer, and I am very much aware of the strengths and (imo very big) weaknesses.

I was expanding the functionality of a certain service class which listens to updates from a few different repositories and does certain things with the data, in order to separate the concerns of different parts of the codebase into one point of access.

Now the problem, or rather the idea of what was to be built, wasn’t exactly trivial, and the solutions offered by the AI were, on paper, not stupid, and technically worked. But they massively overcomplicated not only the functions themselves but the responsibilities of the class. We went around in prompt circles trying to fix this problem, as is always the case.
The fix?
Taking my dog for a walk and coming up with a completely different way to architect the solution, using a separate class entirely.
I would have gone around in circles with the AI for hours and ended up with an abomination of coupling and overcomplexity, because they can’t, I repeat, can’t reason, think, come up with ideas or real solutions.

It heard me say: I want to add this feature/functionality to this context, and it spewed out the appropriate slop for that scenario.
At no point is it possible for it to say: hang on a sec, is this a good idea? Maybe you should put this somewhere else or think about it differently. Because, why? It can’t think. It can’t have ideas.
100% agree on this. Code architecture/structure is an art, and these AI yokes have crayons. One really has to be firm with high-level implementation in prompts or it’ll go to town on the code like you said. The amount of times I have to state “Don’t butcher my code structure or formatting” is frustrating.

Dunno how a person could vibe code; it does not feel mentally healthy.

Myself, I normally just use gen AI to rubber-duck, generate util functions, create boilerplate code or create/format mock data. Usually it’s mostly convoluted ***** that comes back on any unguided ideas (probably from scraping bad answers from Stack Overflow), but there’s enough there to get the brain moving if you’re drawing blanks.
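For a sense of scale, this is the level of thing I mean — a hypothetical example of the util-function and mock-data chores I’d hand off:

```swift
import Foundation

// A small util function: byte count to a human-readable size string.
func formattedSize(bytes: Int) -> String {
    let units = ["B", "KB", "MB", "GB"]
    var value = Double(bytes)
    var unitIndex = 0
    while value >= 1024, unitIndex < units.count - 1 {
        value /= 1024
        unitIndex += 1
    }
    return String(format: "%.1f %@", value, units[unitIndex])
}

// Throwaway mock data: tedious to type by hand, trivial to generate.
struct MockUser {
    let id: Int
    let name: String
}

let mockUsers = (1...5).map { MockUser(id: $0, name: "User \($0)") }

print(formattedSize(bytes: 1_536_000)) // "1.5 MB"
```

Low stakes, easy to review, and it saves typing. That’s the sweet spot.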
 

If you knew anything about computing, you’d know Siri (like other typical voice assistants) is not built on large language models and pre-dates the transformer models used by ChatGPT, Gemini and Claude today.

You really do not want large language models doing what Siri, Alexa and Bixby do. They are too expensive, compute-intensive and error-prone for that. You’ll just have people arguing with their phones all the time while burning the world down with GPUs.

But hey…Jensen Huang will be able to afford another few million leather jackets.
Yeah, I know nothing about computing. That’s why I am on a tech forum 🙃. And clearly, if you knew anything about humour…
 
It’s fine to be keen on AI, knock yourself out, but you should educate yourself about how these models really work.
They don’t think or calculate anything.

As for your previous post asking me for an example, I can give you one from this morning.
For context, I am a software developer, and I am very much aware of the strengths and (imo very big) weaknesses.

I was expanding the functionality of a certain service class which listens to updates from a few different repositories and does certain things with the data, in order to separate the concerns of different parts of the codebase into one point of access.

Now the problem, or rather the idea of what was to be built, wasn’t exactly trivial, and the solutions offered by the AI were, on paper, not stupid, and technically worked. But they massively overcomplicated not only the functions themselves but the responsibilities of the class. We went around in prompt circles trying to fix this problem, as is always the case.
The fix?
Taking my dog for a walk and coming up with a completely different way to architect the solution, using a separate class entirely.
I would have gone around in circles with the AI for hours and ended up with an abomination of coupling and overcomplexity, because they can’t, I repeat, can’t reason, think, come up with ideas or real solutions.

It heard me say: I want to add this feature/functionality to this context, and it spewed out the appropriate slop for that scenario.
At no point is it possible for it to say: hang on a sec, is this a good idea? Maybe you should put this somewhere else or think about it differently. Because, why? It can’t think. It can’t have ideas.
This is a huge exaggeration. The fact that the AI can do it at all, whether it is up to your standard or not, is insane. Just 3 years ago, you'd have been dreaming that an AI could remotely do this.

To be honest, it sounds like a prompt-skill issue on your side rather than an AI problem. You'll learn to write better prompts.
 
This is a huge exaggeration. The fact that the AI can do it at all, whether it is up to your standard or not, is insane. Just 3 years ago, you'd have been dreaming that an AI could remotely do this.

To be honest, it sounds like a prompt-skill issue on your side rather than an AI problem. You'll learn to write better prompts.

You have your opinion, which I can tell is very much set in stone, so whatever.

It’s handy for some small stuff: boilerplate and things which require no thought. But that’s the wall.
I personally force myself to use it here and there just to make sure I’m up to date on its capabilities.

It’s not about prompt skills; it’s the fundamental limits of the technology.

Sure, you can vibe code a small, simple app, but good luck trying to scale it or maintain it down the road.
For large-scale production codebases, it’s only good for small, targeted snippets.

The danger is how it dumbs you down; even when you are experienced or proficient in a language, too much use can start to mess with your thinking skills.
Using it on something you don’t understand is downright dangerous, and you will never, ever get good.
 
You have your opinion, which I can tell is very much set in stone, so whatever.

It’s handy for some small stuff: boilerplate and things which require no thought. But that’s the wall.
I personally force myself to use it here and there just to make sure I’m up to date on its capabilities.

It’s not about prompt skills; it’s the fundamental limits of the technology.

Sure, you can vibe code a small, simple app, but good luck trying to scale it or maintain it down the road.
For large-scale production codebases, it’s only good for small, targeted snippets.

The danger is how it dumbs you down; even when you are experienced or proficient in a language, too much use can start to mess with your thinking skills.
Using it on something you don’t understand is downright dangerous, and you will never, ever get good.
It seems like your opinion is also set in stone.

How long have coding agents truly been mainstream? 4 or 5 months? Give it 1-2 years. You might be out of a job.
 
So, since LLMs still aren't living up to their original hype of solving humanity's problems, OpenAI continues its strategy of throwing things against the wall, hoping that something will stick and turn into a sustainable business model.
 