If you observe Siri on a tvOS device, it's limited in what resources it can draw upon compared to iOS/iPadOS/macOS. It doesn't acquire any kind of history from you, nor can it use anything more than some app history. Ask any simple question about an actor or event, and because it's not linked to a browser, it can't reach the same resources Siri has on other OS devices.

I picked this example because the Siri used on an Apple TV has nothing to help a user with fact-finding media details while watching or choosing content. It's funny, because this would be an outstanding reason to buy Apple's device, yet it's the weakest of all Apple's Siri implementations.

Looking at Apple Music: if you say "open Apple Music" and suggest an artist for it to bring up, Siri only guesses at more common words instead of what you said. By comparison, you can mistype a word in a web browser's search engine and it will usually suggest what you're looking for, a function missing from most interactions with Siri.
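Even a simple "did you mean" pass would go a long way here. A toy sketch of the idea (the catalog, artist names, and function names are all made up for illustration; a real assistant would do something far more involved):

```swift
// Toy "did you mean" matcher: pick the catalog entry closest to what was heard.
func editDistance(_ a: String, _ b: String) -> Int {
    let a = Array(a.lowercased()), b = Array(b.lowercased())
    if a.isEmpty { return b.count }
    if b.isEmpty { return a.count }
    var row = Array(0...b.count)
    for i in 1...a.count {
        var previousDiagonal = row[0]
        row[0] = i
        for j in 1...b.count {
            let currentCell = row[j]
            row[j] = min(row[j] + 1,                                        // deletion
                         row[j - 1] + 1,                                    // insertion
                         previousDiagonal + (a[i - 1] == b[j - 1] ? 0 : 1)) // substitution
            previousDiagonal = currentCell
        }
    }
    return row[b.count]
}

func suggestArtist(for heard: String, in catalog: [String]) -> String? {
    catalog.min { editDistance(heard, $0) < editDistance(heard, $1) }
}

// "Talor Swift" is one edit away from "Taylor Swift", so that entry wins.
print(suggestArtist(for: "Talor Swift", in: ["Taylor Swift", "Tame Impala", "The Weeknd"]) ?? "no match")
```
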
Yes, Siri could be improved by bringing in a larger concept of context and by extending the interaction to clarify when there is ambiguity. Ask us a question, Siri!

Some Siri implementations, like the TV, HomePod, and Watch, are inherently limited. It can be frustrating when those devices respond to an invocation instead of your phone or iPad. I wish they would either work together or let us target a specific device.
 
Considering the amount of money Apple has sitting in the bank, I've always wondered why, instead of blowing it on cars that will never arrive or AR/VR that will be half-assed when it does arrive, they don't just go all in on trying to create a real AI that is sentient. Just imagine all the research that Apple could fund to make that a reality.
 
Siri is so awful, I gave away my HomePod a few months after I got it. With a phone and HP in the same room, "Hey Siri, show me the weather" would always get a spoken answer on the HP, rather than showing me the weather on the screen. Other similar confusions made it completely unusable. So this utter stupidity has cost them the sales of several HPs for my home, and more for the various homes of people who ask me for advice.

Siri was brilliant when it came out, a total game-changer, but now it's a game-loser. Apple's falling further and further behind every year on this, and it's a significant threat to their future business. I wonder how long it will take them to figure that out and get serious about it? Because their behavior shows that, so far, they definitely are not.

Oh, and that line about how a short call phrase is difficult engineering? What a bunch of crap. The garbage Echo my wife insisted on installing two years ago came set up to answer to "Alexa" by default, but we immediately changed it to answer to "Echo." It works fine.
 
Siri triggers when I’m talking in a Zoom meeting if I say something that even sounds remotely like “Hey Siri.” I fear changing to just “Siri” will make the number of false triggers even higher.
 
I can only imagine some kind of disaster occurring with Siri being used to initiate tasks for home automation devices before Apple at least proves out a better digital assistant technology marketed as Siri.

If one looks at the various capabilities of digital assistants, where does Siri fall? If you are thinking Apple has only just barely touched the first one, I think that's about all they've gotten to so far.
  • Voice recognition: This helps the digital assistant to establish the authenticity of the user.
  • Voice-enabled “Natural Language Processing” (NLP): This AI capability helps these digital assistants understand voice requests from users. It also helps them respond to users after gathering the required information and completing the task.
  • “Machine Learning” (ML): AI-powered digital assistants use the input from the user and other information they gather to complete tasks. These intelligent assistants also use the users’ personal data and their usage history for this purpose. ML helps them to process this data and derive useful patterns out of it.
 


Apple is working on an updated Siri experience that moves away from the trigger phrase "Hey Siri" currently required to invoke the digital voice assistant hands-free, Bloomberg's Mark Gurman reports.


In his latest Power On newsletter, Gurman says that Apple is working on a way for Siri to understand phrases and commands without the "Hey Siri" trigger phrase, responding instead to simply "Siri." Gurman says the change is expected to roll out sometime next year or in 2024.
Gurman also reports today that Apple is working to further integrate Siri into third-party services and apps to provide more context and assistance to users.

Article Link: Gurman: Apple Working On Revamped Siri Experience That Doesn't Require 'Hey Siri' Trigger Phrase
When my Swedish niece is in the room, I won't be able to call her by her name. It's hard enough now to ask her a question with "Hey Siri" as the trigger …
 
To be honest this sounds like it'll increase accidental activations.

Indeed. I had to turn off “Hey Siri” for this very reason, as every time I would speak French and use the word “si”, Siri would get triggered and just scare my conversation partners with her “Sorry, I did not quite catch it!” in English! It was a bit of fun, except when it was happening at work.
 
I thought people already stopped pretending like they enjoy using Siri.

On a more serious note, conceptually, Siri and, I suppose, a lot of other voice assistants are really trapped in the past. They are mostly speech recognition plus some more traditional NLP to map whatever was spoken onto a set of tasks they can perform. All of that is (at least the last time I checked) still extremely constrained; it lacks actual understanding of context beyond a few "tricks" it can do, and it can't really improvise.

The more modern massive NLP models I've seen can perform much, much better than this. Models like GPT-3 can actually remember fairly long context, have an excellent understanding of prompts, and can be taught to code. I always thought the future of voice/text assistants would be exactly that: you give them fairly unconstrained (not recklessly, of course) access to APIs and a scripting language and basically tell them what you need done. I imagine things like "Can you check out the staff page of company X and give me all the names of the people working with AI?" and it will access a browser in the background, navigate the company staff page, read it, and spit out the names of those people. To my understanding, such a model can even learn all kinds of APIs to drive applications, say Microsoft Office. "Can you replace every date in this spreadsheet with tomorrow's date?" and it will do that as well.
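To make the idea concrete, here is a toy sketch of that "model plans, tools execute" loop. The tool names, the URL, and the planning function are all made up; a real system would hand the request to a large model and parse its reply rather than keyword-match like this:

```swift
import Foundation

// The two "tools" the assistant is allowed to call. Both are hypothetical.
enum AssistantTool: String {
    case fetchWebPage      // e.g. read a company's staff page
    case editSpreadsheet   // e.g. rewrite every date in a spreadsheet
}

struct ToolCall {
    let tool: AssistantTool
    let argument: String
}

// Stand-in for the language model: maps a free-form request to a structured
// tool call. Here it's crude keyword matching; the real thing would be a large
// model that reads the request and emits the call (and could chain several).
func planAction(for request: String) -> ToolCall? {
    let lowered = request.lowercased()
    if lowered.contains("staff page") {
        return ToolCall(tool: .fetchWebPage, argument: "https://example.com/staff")
    }
    if lowered.contains("spreadsheet") {
        return ToolCall(tool: .editSpreadsheet, argument: "set every date to tomorrow")
    }
    return nil // no plan: better to ask a clarifying question than to guess
}

let call = planAction(for: "Can you check out the staff page of company X and list the AI people?")
print(call?.tool.rawValue ?? "no plan") // prints "fetchWebPage"
```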

I take it all the big players must have understood this massive opportunity. In light of that, I strongly doubt that Apple will invest many resources into this outdated concept for a voice assistant. I believe that some time in the next couple of years, voice assistants will emerge that make today's attempts look like a joke (granted, they already look like a joke).
 
Actual exchange I had last night:

Me: “Hey Siri, turn off the fan in 15 minutes”

HomePod: “I’m sorry, you must set automations at least one minute in advance”

...

Me: “Hey Siri, just turn off the fan”

Siri: “okay”
Well, on the bright side, if she gave you the runaround for fifteen minutes, then your command could happen at the originally desired time anyway.

I've run into similar problems numerous times with asking "what time is sunset today", and getting told about tomorrow's sunset as if it was the correct answer to my question. Or asking which of my lights are on, and getting told how many of my lights are on. And woe be unto you if you want to add a reminder where the text of the reminder seems like it contains any reference to time whatsoever - Siri will either try to use that bit as the time for the reminder (even if you have very clearly stated a time for the reminder), or Siri will simply remove the bit she doesn't know how to handle, rather than copying it through verbatim.

The fundamental problem is trying to pass off Siri as a fully sentient assistant ("just ask her things in normal language"), rather than imposing (and publishing) a stricter syntax, and then trying to build out the backend as needed. The result is often like some bad '80s sitcom plot where the main character has hired on as an assistant for someone by pretending to speak their language, and the "hilarity" is that they keep getting things ridiculously wrong because they don't really know the language. I would have much rather had them start Siri out with a strict and limited syntax, work on Siri always getting things right within that syntactical framework, and then slowly expand outwards a bit at a time, towards more natural language.
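For what it's worth, here's roughly what I mean by a strict, published syntax: a tiny made-up grammar where anything it accepts has exactly one meaning, and everything else is rejected instead of guessed at. The commands and device names are purely illustrative.

```swift
// Toy strict-grammar parser: the whole language is "<verb> <device>", nothing more.
enum HomeCommand {
    case turnOn(device: String)
    case turnOff(device: String)
    case query(device: String)
}

func parse(_ utterance: String) -> HomeCommand? {
    let words = utterance.lowercased().split(separator: " ").map(String.init)
    guard words.count == 2 else { return nil }     // grammar: exactly <verb> <device>
    switch words[0] {
    case "on":    return .turnOn(device: words[1])
    case "off":   return .turnOff(device: words[1])
    case "state": return .query(device: words[1])
    default:      return nil                       // outside the grammar: reject, never guess
    }
}

// "off fan" parses to exactly one command; "play something" is simply refused,
// so it can never turn into random music.
print(String(describing: parse("off fan")))
print(String(describing: parse("play something")))
```

The grammar then grows over time, and anything outside it gets a clarifying question instead of a wrong action.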

As it is, I wish there was a way to tell my HomePod minis, "NEVER interpret any request I utter as a command to play music" - I never ask them for music, yet Siri misinterprets occasionally and starts playing random songs. If I want music, I'll just say, "Hey Sonos, play (whatever)", which basically always works (on the rare occasions when the Sonos speaker doesn't hear me clearly, it says it doesn't understand, and waits for me to repeat the request, rather than, say, trying to dial a phone call or control a light or play random music).

Needless to say, if they remove "Hey" from "Hey Siri", they had damn well better have a toggle in the menus for reenabling it. Otherwise it's going to be difficult to have any conversation that mentions "Siri" without her trying to "help".
 
I have been using Siri on my HomePod a lot recently. It's really handy for checking the weather and listening to music, and I've also been asking it random questions about everything; it's like Google without the effort of having to type on your keyboard. Being able to trigger Siri by just saying its name would be a welcome feature.
 
Honestly, I'm gonna be satisfied with Siri only when it reaches Google LaMDA's level of intelligence.
My standard used to be Star Trek TOS's computer. It's now Jarvis, circa the first Iron Man movie. Give me Jarvis, and I'll be completely satisfied with voice recognition and AI. Until then, Siri is good for some things (timers, alarms, reminders and calendar events, and turning lights on and off), while being occasionally infuriating.

Actually, another use that I find helpful occasionally - adding up columns of numbers or other simple but tedious math, when you're in the middle of something else.
 
Amazon does it with "Alexa". I have a few Echo Dot speakers and I must say Alexa is so much more advanced than Siri. Using Siri can still be a very frustrating experience.
 
So if I decide to say, play the Witcher 3, I won't be able to have my phone in the room, or everyone talking to Ciri will wake Siri. ;)
 
I have all 3 (Siri, Google, Alexa). I rate Alexa as number 1, Google a close 2nd, and Siri a nowhere-near-close last place. The other 2 are so far ahead of Siri it’s ridiculous.
 