Our family already has a code word. Got to, in this day and age.
My dad died from ALS, and the robot-voice speak-and-spell machine he had was already pretty special to him. This would have been an absolute delight for him. He would have had a lot of fun with it.
Apple this week previewed new iPhone, iPad, and Mac accessibility features coming later this year. One feature that has received a lot of attention in particular is Personal Voice, which will allow those at risk of losing their ability to speak to "create a voice that sounds like them" for communicating with family, friends, and others.
Those with an iPhone, iPad, or newer Mac will be able to create a Personal Voice by reading a randomized set of text prompts aloud until 15 minutes of audio has been recorded on the device. Apple said the feature will be available in English only at launch, and uses on-device machine learning to ensure privacy and security.
Personal Voice will be integrated with another new accessibility feature called Live Speech, which will let iPhone, iPad, and Mac users type what they want to say to have it be spoken out loud during phone calls, FaceTime calls, and in-person conversations.
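For developers wondering what a Live Speech-style flow might look like in code, here is a minimal Swift sketch built on the long-standing AVSpeechSynthesizer API. The `TypedSpeechController` name and the Personal Voice lookup are illustrative assumptions: at the time of this preview Apple had not documented whether third-party apps can use a Personal Voice at all, so the trait check and authorization call should be read as a guess at how it could be surfaced, not confirmed behavior.

```swift
import AVFoundation

/// A minimal sketch of a Live Speech-style flow: the user types, the device speaks.
/// `TypedSpeechController` and `preferredVoice()` are illustrative names; the
/// Personal Voice lookup assumes the voice shows up in the regular
/// speech-synthesis voice list once the user grants access.
final class TypedSpeechController {
    private let synthesizer = AVSpeechSynthesizer()

    /// Ask once for permission to use the user's Personal Voice (iOS 17 / macOS 14 API).
    func requestPersonalVoiceAccessIfAvailable() {
        if #available(iOS 17.0, macOS 14.0, *) {
            AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
                // Personal voices only appear in speechVoices() after authorization.
                print("Personal Voice authorization: \(status)")
            }
        }
    }

    /// Prefer a Personal Voice when one is exposed to the app; otherwise fall back
    /// to a stock system voice.
    private func preferredVoice() -> AVSpeechSynthesisVoice? {
        if #available(iOS 17.0, macOS 14.0, *) {
            if let personal = AVSpeechSynthesisVoice.speechVoices()
                .first(where: { $0.voiceTraits.contains(.isPersonalVoice) }) {
                return personal
            }
        }
        return AVSpeechSynthesisVoice(language: "en-US")
    }

    /// Speak whatever the user typed, e.g. during a call or in-person conversation.
    func speak(_ typedText: String) {
        let utterance = AVSpeechUtterance(string: typedText)
        utterance.voice = preferredVoice()
        synthesizer.speak(utterance)
    }
}
```

In a real app the `speak(_:)` call would be driven by a text field, and routing the audio into a phone or FaceTime call is handled by the system rather than by app code.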
Apple said Personal Voice is designed for users at risk of losing their ability to speak, such as those with a recent diagnosis of ALS (amyotrophic lateral sclerosis) or other conditions that can progressively impact speaking ability. Like other accessibility features, however, Personal Voice will be available to all users. The feature will likely be added to the iPhone with iOS 17, which should be unveiled next month and released in September.
"At the end of the day, the most important thing is being able to communicate with friends and family," said Philip Green, who was diagnosed with ALS in 2018 and is a member of the ALS advocacy organization Team Gleason. "If you can tell them you love them, in a voice that sounds like you, it makes all the difference in the world — and being able to create your synthetic voice on your iPhone in just 15 minutes is extraordinary."
Article Link: iOS 17 Will Let You Create a Voice Sounding Like You in Just 15 Minutes
> But people who do happen to be Apple customers will get to use it. Isn't that a win? Must the glass always be half-empty?

You're coloring my valid criticism as mindless negativity, and that's not good debate.
> Scammers will absolutely love this... "Hey Mom, I need you to send me some money ASAP, please!"

I imagine Mom might wonder why you had changed your name and banking details before sending you the 50 quid you asked for.
> I am with the person who wants someone who has read the article and understands the tech at play to write out exactly how this could be used for nefarious purposes as described. I mean, provided the EU or some other governing body doesn’t “consumer safety” their way into completely blowing open the SE and forcing Apple to open source everything.

That's right!
First, bad actors would have to get 15 minutes of you reciting the exact text from a randomized list in the exact order.
Think about that for just a moment. How would they do that? Coercion? Drugs? If they are in a situation to do that, why not just have you read a specific message? How would they get the ML to wash out those stress elements from the sampling? Oh! I know, hire an impersonator, right? Again, if they already have someone who sounds “close enough”, that's a much easier tool to access, so why bother with this for nefarious means? Oh! Maybe you can use movie clips to sound like a famous actor. Good luck getting 15 minutes of those specific words in that specific order without some awful background audio.
> “My voice is my password. Please authenticate me” … what you have to say to most banking institutions when you call these days in the US.

I didn’t know US banks use voice authentication over phone calls. Sounds very unreliable.
> Anyone saying this can be used for scams should write out all the steps needed for that to work.

Recently there has been a lot of discussion about how, if someone steals your phone after having watched you enter your passcode (i.e. they know your passcode), they can then also take over your iCloud account (and do so in actual practice). If you created a clone of your voice, now they'll also have your voice. Not a very common occurrence for sure, but something to think about.
> Commenter completely ignores the accessibility part.

Commenter made a self-deprecating joke about his own voice.
> A great use of this technology. Sadly, though, this technology is being used to exploit and scam people as well, and probably worse is to come. Human nature remains the same: tools can be used for good or ill, and usually both. /waxesphilosophical

This technology is already available; see Descript, for example. It is used for dubbing videos.
> Scammers will absolutely love this... "Hey Mom, I need you to send me some money ASAP, please!"

You can do that over text messaging already.
> I really don’t get how it’s going to create a voice sounding like me in 15 minutes. You’d think it would make more sense to create a voice sounding like how I sound now.

Interesting point. Apple could gradually apply an aging algorithm to your synthetic voice, so it gets slower and raspier over the years.
And it’s potentially a dangerous paradox. If I try it and it tells me that it CAN’T create my voice… is that because my voice will no longer exist in 15 minutes? Is it because I’m going to die or have my larynx ripped out by a sky leopard? Then, would I be able to take steps to prevent my own demise, thus upending the space/time continuum?
Going to enter a bug report imploring them to make it sound like me NOW… just too dangerous otherwise.
> This is a feature with narrow but deep impact on those who need it. If people could not be cynical for five minutes, that'd be great.

You won't say that when you get hacked and the scammers ring your bank sounding like you and empty your accounts!