Personal Voice lets folks with ALS train an iPhone, iPad or Mac to simulate their voice, which is more affordable than the time-consuming voice banking process.
Use Personal Voice to make text-to-speech sound just like you | Screenshot: Apple

Personal Voice enables you to replicate your own voice after just 15 minutes of audio training on your iPhone, iPad or Mac.
Reading aloud a randomized set of text prompts is all it takes to create a computer voice that sounds eerily similar to your own.
You can use this voice with a feature called Live Speech to have anything you type spoken aloud in your synthesized voice instead of a Siri voice.
What is Personal Voice? How does it work?
Personal Voice is a godsend for nonspeaking users | Screenshot: Apple

Personal Voice is a new accessibility feature available on the iPhone, iPad and Mac with iOS 17, iPadOS 17 and macOS 14. By leveraging machine learning, it can create a computer voice that sounds just like your own, without relying on specialized equipment or a time-consuming process known as voice banking.
You could also use the feature to preserve a loved one's voice as a way of memorializing them after they're gone. By following randomized prompts that guide you to speak specific sentences aloud, your iPhone, iPad or Mac can replicate your voice with just 15 minutes of audio training.
Introducing Live Speech
Using canned phrases for quick responses | Screenshot: Apple

But it goes beyond that. Another new accessibility feature in iOS 17, dubbed Live Speech, can use your synthesized voice during cellular and FaceTime calls, as well as in-person conversations. Your Personal Voice model preserves vocal fidelity by imitating your accent, tone, inflection and cadence.
For those who no longer speak with the clarity they once did, Live Speech is a great way to type out a greeting, an order or whatever you’d like and have the device speak it out loud in a way that sounds like you. To save time, you can pre-write phrases and remarks to play aloud with a touch.
Who should use Personal Voice and Live Speech?
Apple designed Personal Voice and Live Speech to “support millions of people globally who are unable to speak or who have lost their speech over time,” including those diagnosed with amyotrophic lateral sclerosis (ALS) or other conditions that can progressively impact speaking ability.
Personal Voice and Live Speech requirements
Here’s what you need to use the Personal Voice feature:
iPhone with iOS 17.0 or later
iPad with iPadOS 17.0 or later
Apple silicon Mac with macOS 14.0 or later
Personal Voice is available in English but will expand to other languages over time.
Personal Voice and your privacy
But does Personal Voice protect your privacy? According to Apple, it does: all processing is done directly on your iPhone, iPad or Mac.
Apple can’t listen to your voice recordings. Furthermore, samples aren’t shared with other companies or uploaded to Apple’s servers. You can, however, give explicit permission for secure syncing of your Personal Voice model via iCloud.
How to create a Personal Voice model to clone your voice
Training Personal Voice by reading out loud a set of prompts | Screenshot: Apple

You’ll need to read aloud a set of randomly chosen voice prompts to create a sound-alike voice. The feature needs about 15 minutes of audio training to reliably make a synthesized voice for text-to-speech that sounds just like you. You don’t have to sit through the whole 15 minutes at once.
If you don’t have time to finish the process in one sitting, you can pick up where you left off later. Your iPhone, iPad or Mac will then analyze voice recordings to create your Personal Voice model, but this will take time.
Even though Apple’s devices are equipped with the Neural Engine, a dedicated coprocessor that runs deep neural networks in a battery-friendly way, you may need to leave your device plugged in overnight to chew through all those samples.
Your Personal Voice model is saved to the device you created it on, meaning you’ll need to repeat the training process to create similar voice profiles on your other devices. However, you can give explicit permission for your voice profile to be synchronized and shared between devices with end-to-end encryption.
ALS and voice banking
AssistiveWare’s Proloquo will soon support Personal Voice | Screenshot: Apple

ALS impacts one’s speaking ability as muscles in the throat and mouth progressively weaken. People with ALS often undergo a time-consuming process called voice banking to create a digitized version of their own voice. With Personal Voice, anyone can clone their voice just by reading through pre-crafted prompts.
To learn more about this condition, visit the ALS website.
AAC apps embracing Personal Voice
“At the end of the day, the most important thing is being able to communicate with friends and family,” Philip Green, board member and ALS advocate at the Team Gleason nonprofit, said in Apple’s press release.
“If you can tell them you love them, in a voice that sounds like you, it makes all the difference in the world—and being able to create your synthetic voice on your iPhone in just 15 minutes is extraordinary,” Green added.
Your Personal Voice model is available not only in Apple’s apps but also in specialized augmentative and alternative communication (AAC) apps from third-party companies like AssistiveWare.
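For developers curious how an AAC app might tap into this, iOS 17 exposes Personal Voice through the existing AVFoundation speech APIs. Below is a minimal sketch: the authorization request, the voice-trait check and the speech call are real iOS 17 APIs, but error handling and app lifecycle concerns are simplified for illustration.

```swift
import AVFoundation

// Personal Voice requires the user's explicit, per-app authorization
// before a third-party app can see or use it.
AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    guard status == .authorized else { return }

    // Find the user's Personal Voice among the installed system voices.
    let personalVoice = AVSpeechSynthesisVoice.speechVoices()
        .first { $0.voiceTraits.contains(.isPersonalVoice) }

    // Speak typed text in the user's own synthesized voice.
    let utterance = AVSpeechUtterance(string: "Hello, this is my Personal Voice.")
    utterance.voice = personalVoice

    // In a real app, keep a strong reference to the synthesizer so it
    // isn't deallocated before the utterance finishes playing.
    let synthesizer = AVSpeechSynthesizer()
    synthesizer.speak(utterance)
}
```

If the user hasn't created a Personal Voice or denies access, the lookup simply returns no matching voice, so apps should fall back to a standard system voice.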
iOS 17 is launching this fall
Apple will preview iOS 17 and other updates at WWDC, with the first developer-only beta of iOS 17 arriving after the June 5 keynote. A few weeks later, the general public will be able to install a public beta of iOS 17 and take Personal Voice for a spin.
The company will continue releasing developer and public betas of iOS 17 and other operating system updates throughout the summer. iOS and iPadOS 17 should release publicly this fall before the next iPhone arrives.
Personal Voice turns typed text into speech using your synthesized voice.