What does AI development mean anymore? So vague.
Let’s even take the word “apple”: if you search for that word, you don’t even get the fruit anymore. Corporations try to predetermine what we think, so that simple answers get overlooked.
It's reasonable to believe Apple will use some forms of AI to boost some limited functionality of their OS vs. apps. But not like so many others that are using AI to data-mine tidbits about people's activities. So more of a knowledge navigator than what MS and Google are after. Perhaps a first-tier problem solver for various AppleCare troubleshooting is one direction.
 
I’m very curious to see how Apple handle their AI implementation given its potential to take work away from creative professionals, who have traditionally been their biggest customers. You can imagine how it could cannibalise Mac sales. I note that Adobe have jumped in with both feet though, they know it’s no use ignoring it if they want to stay at the top of the market.

End of the day there’s no hiding from AI, the genie is out of the bottle. Everyone needs to embrace it and work out how best to use it in their field, otherwise they’ll get left behind.
 
An AI-driven Siri bot or standalone app would only interest me if its dataset is updated in real time. ChatGPT is fun to play with, but since its dataset was frozen in September 2021, it's useless for anything contemporary. It doesn't even know that Donald Trump has been indicted on 91 criminal counts in multiple jurisdictions, or that Russia invaded Ukraine, or that there's a war in the Middle East. Among other deficiencies in medical, academic, political, technical and other arenas. Two years is a long time in the 21st century.

But if you want to learn about ancient Greek & Roman philosophers, ChatGPT is your best friend.
 
Exactly. People forget about the dirty little secrets that get this stuff started. Cryptocurrency had its value inflated through ransomware payments. Without ransomware bitcoin would be worthless.

ChatGPT plagiarized for free the work of millions of people and trillions of hours. But it can only do that once. Now the AI companies want to pull the ladder up and not let anyone get high quality training data.

So yes you’re exactly right. If we just let AI generate everything it will quickly degrade and our culture will stagnate even more than it has the last fifteen years.
Yeah, this is a good point - ‘competent’ AI/ML depends quite strongly on the quality of the data being ingested. The ‘wow, look at this’ flood of OpenAI, ChatGPT and the like were all, AFAIK, done on ‘public internet data’, which includes, for example, generative output from Midjourney and the like as well as textual data.

Even before the LLM ‘explosion’ it was already problematic to get reasonably good data for specific use cases - you need a large pool of speech data for speech-to-text to accommodate different accents and pronunciations, you need large amounts of data for various automotive or smart-city types of applications, etc., or your baseline ‘general’ AI/ML model/application will perform fairly poorly, more so as environmentals change (e.g. training a model for vehicle model recognition in Central America, then deploying it in China).

Places like Kaggle have a random collection ranging from ‘total trash’ to ‘moderately useful for a proof of concept’, but rarely something qualitatively complete or thorough enough to make something ‘v1-like’/beyond a proof of concept. Synthetic data has been used and is still in use, although in general producing it is not all that fast (fun fact: engines like Unity and others are in use for generating synthetic data for AI/ML).
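The synthetic-data idea can be sketched in a few lines. This is a hypothetical toy example - the `synthesize` helper, the noise levels, and the sample values are all invented for illustration - showing how a tiny set of real labeled readings can be padded out with jittered/scaled variants:

```python
import random

# Toy sketch (all names and numbers invented): augment a tiny set of real
# labeled sensor readings with synthetic variants to enlarge a training set.

def synthesize(samples, n_variants=10, noise=0.05, seed=42):
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    synthetic = []
    for value, label in samples:
        for _ in range(n_variants):
            jitter = rng.uniform(-noise, noise)        # small additive noise
            scale = rng.uniform(1 - noise, 1 + noise)  # simulated gain drift
            synthetic.append((value * scale + jitter, label))
    return synthetic

# Two real samples become a 22-sample training set (2 real + 20 synthetic).
real = [(1.0, "car"), (4.0, "truck")]
augmented = real + synthesize(real)
```

Real pipelines do the same thing with images or point clouds rendered in an engine like Unity, but the principle - vary the parameters you expect to vary in the field, keep the label - is the same.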
Two inherent rules apply:
1. The best data will always be specific to the use case at hand (location, resolution, camera perspective, sensor data types and positioning, spoken enunciation and local colloquialisms, etc.), and the wider the audience you’re attempting to serve, the more coverage of <all of these variations> is needed. ‘Best data’ is ideally provided directly by the ‘customer’ of the use case. When we get to LLMs, well, they’re trying to do an awful lot in most people’s heads, but even handling input in N languages requires quite a bit of data for effective training.
2. Even in the cases where ‘very good’ data is used for training, variables will change over time. Visually, for example, vision models trained for car model recognition may have leveraged a ‘mostly sunny’ environment for training data - guess what? Weather changes and will impact things. Likewise if camera positioning changes, or zoom levels/perspective, etc. There are a LOT of variables (and data) needed in general to go from ‘good for a proof of concept’ to ‘very good in many permutations.’ You could also consider this part of ‘fine-tuning over time’, where <some amount of data> going back to <whomever manages the core model/app(s)> is almost required, e.g. some data making it back to Apple, or all data to Amazon, Google, etc.

There are clever mechanisms and evolving model architectures that do reduce the amount of data needed for some purposes, but the need for more and more data for training and fine-tuning never goes away completely.
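One of those data-reducing mechanisms is keeping a large pretrained ‘backbone’ frozen and fine-tuning only a tiny piece on the new domain. Here’s a deliberately toy sketch - the fake `backbone` weights, the threshold-fitting rule, and the six ‘new domain’ samples are all invented for illustration, not any real model:

```python
def backbone(x):
    # Frozen "pretrained" feature extractor: pretend these weights were
    # learned on a huge generic dataset and are never updated here.
    return 2.0 * x + 1.0

def fit_threshold(data):
    # Fine-tune ONLY a decision threshold on a handful of new-domain
    # examples: place it midway between the two class means of the
    # frozen features. The backbone itself never changes.
    pos = [backbone(x) for x, y in data if y == 1]
    neg = [backbone(x) for x, y in data if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0

def predict(x, threshold):
    return 1 if backbone(x) > threshold else 0

# "New domain": only six labeled examples, positives are values above 3.
domain_data = [(1, 0), (2, 0), (3, 0), (4, 1), (5, 1), (6, 1)]
t = fit_threshold(domain_data)
acc = sum(predict(x, t) == y for x, y in domain_data) / len(domain_data)
```

Six labeled examples are enough here precisely because almost all the ‘knowledge’ sits in the frozen backbone - which is the same reason fine-tuning a real pretrained model needs orders of magnitude less data than training from scratch.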

There are companies out there trying to service the above via synthetic data, and things like videos are a growing part of places like Shutterstock and the like, all in the name of ‘monetizing their data,’ much of which is for AI/ML training purposes.

Midjourney and the like put a real focus on it, as obviously various types of ‘styles’ become apparent, as does the question of how it even knows what trademarked properties (Batman, for example, or any movie/game/etc.) are, and whether the data ingestion was ‘pull in everything possible’ or followed fair-use practices. It takes another step forward - this one, IMO, a completely legitimate complaint - where various actors’ likenesses or voices are replicated by AI for ‘for-profit use’ (such as in advertising) without paying the source of that data (the actor), and the same arguments are being made for the generative visual models, where you can, for example, give a prompt ‘in the style of Van Gogh’ - or, even when you don’t, various artists’ styles were used in the training.

All of the above is just the start of the ‘full mess’ of AI/ML: the data needed just to train a given type of model, and in general the need for ongoing data <from somewhere> in order to either improve its general outcomes/inferences or to make them baseline-reasonable from the start. All before we get to ‘what can it do from there?’ It’s a fun field, but separating the hype from reality can get complicated quickly.
 
"Could launch next year"? COULD?? Needs to! Siri needs improvement in iOS 18, or Apple's involvement in voice assistant tech is toast. Siri is SO BAD now, it can't go another year with what we're seeing from ChatGPT and Google. And it isn't even the breadth of what generative AI can do, it really is merely the terrible "prompt" handling that is going to be Siri's death knell. At this point I'd just settle for some on-device intelligence that could carry over information from the exchange directly prior! How many times I've asked Siri to send a message to the person that JUST SENT ME A MESSAGE, and Siri responds with a "Which Susan do you mean?" THE ONE YOU JUST READ ME A MESSAGE FROM, IDIOT! And it doesn't get better the more you try to do with it. At this point, I can't even expect Apple to get Siri answering queries like ChatGPT within a year… but I'd sure like Siri to just be able to understand well-phrased, "structured" commands.
(Another "need" is for when Siri misinterprets ONE word in, say, a message I'm crafting to send… e.g., I say "I'm headed to work" and for some reason Siri hears "I'm headed to word" (which makes NO sense); why can't I say "change word to work"? Nope… need to say "change it", and then restate the ENTIRE phrase… whereby Siri will invariably mishear something different… "I'm head did to work". 🤦‍♂️ Also, the ability to 'chain' requests: "turn on the kitchen light and the dining room light." It's 2023, and Siri still can't do what Alexa does. And why do I have to wait for Siri to finish speaking before I can issue a follow-up command, i.e. "Do you ["Send it."] want to change it ["SEND IT!"] or send it?" ["SEND IT!"] "OK, I'll send it." This is why we have duplex communication channels, Apple! Worse, I paid $200 for AirPods Pro that have duplex communication channels and isolation, and Siri doesn't use them. Sheesh.)
Upvoting because all of this matches nearly all of our daily Siri experience (when we even bother attempting it). A whole lot of common sense was ignored in the most basic of interactions. Even now, spell check on my iPad literally changed ‘nteractions’ (missing an i up front - sticky i key on my $$ MK) to ‘niter actions’ - I have no clue WTF a ‘niter action’ is, but thanks, Apple. So many simple quality-of-life improvements to be had, but let’s shoot for the moon, which I doubt they’ll reach any time soon.
 
I don’t want to say that LLM chatbots are not interesting, but there’s definitely not a “massive shift to a conversational UI” LOL. In fact, there’s been a massive shift to visual UIs for decades. There’s a lot of AI (although I don’t like the name, because it’s a generic buzzword that can cover basically everything) everywhere, chatbots are just another part of it. Of course, if LLMs help to improve Siri (while maintaining privacy, etc.), even better. But maybe ask yourself: why hasn’t Apple put in some effort to catch up with Google or Alexa? I guess it’s because, other than for basic stuff (old voice control was awful even for those tasks), people don’t really care about voice assistants. If you want to see how Apple invests in “AI”, look at the Vision Pro.

When attempting to rebut a point, try to get the quote right:

Jobs' last big move was acquiring Siri and quickly implementing it into the iPhone. When he died, Tim Cook let it sit, like Xerox did with the mouse, unable to see the massive importance it would play in a shift to conversational UI in a decade.

“Massive importance”, not “massive shift”, to conversational UI. That shift is still ongoing. For context, Siri is a primary user interface for the last 3 of Apple’s new segments, with increasingly exclusive reliance on voice:

  • Watch
  • AirPods
  • HomePod

Augmented reality is developing alongside as the visual component (VisionOS) but we will be increasingly interacting with computers in natural conversation as we do with humans. Transformers and LLMs are only accelerating that trend.

The “massive importance” part is how much of an existential threat to the company it would be if Apple doesn’t get up to par with what will be devices from Google and Microsoft (and OpenAI itself) that can have full natural conversations and go out and process tasks and fetch information. How would an iPhone running the terrible mess that is Siri right now look to consumers when compared to human-like assistants in Android/Microsoft/OpenAI-powered phones/watches/pins? Apple needs to protect its golden goose, and it’s been caught flat-footed by a trend Steve Jobs anticipated but Tim Cook ignored.

You don’t have to take my word for it, the news reported in this article shows that Apple understands that importance and is scrambling to avoid that existential risk.
 
Eddy Cue, Apple's senior vice president of services, is also involved in the push
Oh god... How can a guy be in charge of the Apple Maps release, which was a disaster, then be put in charge of Siri, which is still a tire fire, and still have a job? He must have so much dirt on everyone that he is untouchable.
 
I honestly think they've missed the AI boat now. Even Siri was mostly awful from the outset. They've got the money, of course, to just buy some startups, but they do seem strangely weak in some areas these days. Of course, they can't be expected to excel at everything - they've already got a lot going on.

Quite honestly, I think it’s the result of a COO becoming a CEO. Looking back, Apple has become the Microsoft of 20 years ago, with boring annual hardware updates and annual evolutionary software updates. It’s not to say there aren’t great announcements, but compare it to the reality distortion field Jobs generated, with things that wowed people at WWDC - almost magical, revolutionary new announcements. Available the same day.

In true COO fashion, it’s like efficiency has taken the front seat over innovation and creativity. The move to Apple silicon has been a big focus, and I replaced my MBP, but it’s boring and doesn’t change the world.

Siri has been around for more than a decade. But Siri didn’t seem to get much improvement until Amazon released the Echo midway through last decade.

Apple seems to be reactive in the last 10 years with the exception of the Apple Watch. As much as I love Apple and have been supportive since the late 90s, there are few cases where they are first anymore with new ideas. HomePod was a reaction to Echo. Vision Pro a reaction to Quest.

iOS is close to 20 years old with no revolutionary changes to the way it works or how we interact with it.

The Apple we have today just gave us 3 different types of Apple Pencil.

It’s time for some of the execs to go.
 
Where was Apple 10 years ago? Siri is completely useless compared with Google Assistant and has been since the beginning.
I have both in my house and I honestly have no clue what you're talking about. Google is certainly better at answering questions from the web, but that's about it
 
Are there any options for deleting Siri entirely? I have to say that the accidental triggering of Siri disrupting everyday operations makes me want it gone. I want the ability to completely disable Siri so that it doesn't accidentally trigger ever again, because I get nothing from it in return.
 
I swear, one of these days the tech billionaires will turn The Terminator into a documentary.
 
Good. I’ve been waiting impatiently on a massive overhaul since the exceptionally innovative “movies and sports” results years ago, and it hasn’t improved much in what feels like a decade. Furthermore, aside from more “AI”-like implementation, overhaul the dumbass rules for basic commands. I can swipe down and turn on Bluetooth while the phone is locked, but asking Siri to do it hands-free requires unlocking your phone. That’s just one example of many frustrations. I use Siri and Alexa numerous times daily. Alexa and Siri are like apples and oranges for day-to-day around-the-house requests. Siri is a joke, really.
 
It's about time. Siri is dumb as a post.

It's sad that Apple is playing catch-up, but they've been playing catch-up for years without making any significant progress. Several years ago now, Google's assistant was already so much better than Siri. That gap has widened substantially, and that's before Google and Bing jumped on ChatGPT 3/4 and other related AI startups. Microsoft in September released their Copilot AI in Windows 11, Office 365, Bing, and Edge, and already had it in Visual Studio for developers.

Apple, Apple, wherefore art thou?
 
Expecting to see an improved Siri with iOS 18. A lot of catching up to do for Siri.
 
Society and the culture are going to rue the day AI was unleashed upon the world. Any technology can be used for both good and evil. Unregulated, unbridled AI is ripe for abuse. If you don't believe that, then just look at what mobile phones and social media have done to our youth. Every social scientist, psychologist, anthropologist, and mental health expert is in agreement that these ‘tools’ have caused havoc. Imagine what AI will do.
Scared of this happening?
 
They should just give up on Siri and give us Alexa. Siri is really, really dumb - that’s why every update they’re always telling you Siri is getting smarter. No, no it’s not.
 
I am all in for a smarter Siri. For now it’s a joke-telling machine that my 10-year-old son enjoys asking questions it cannot answer correctly.

If I had put millions of dollars into building that I would have been disappointed
 
Once in a while I try something like “Hey Siri, activate the alarm on my house” and she will answer “OK, I have set an alarm for 7:00 tomorrow”.

Maybe it would be better in English. I’m talking Danish to her.

And if I’m in a conversation with someone and raise my hand, the Apple Watch will think I’m talking to Siri and she will interrupt us. I know that can probably be deactivated, but it’s still stupid as s***.
 