
Apple is introducing a new way to train its AI models, aiming to boost performance without compromising user privacy. Previously, Apple relied solely on synthetic data, such as artificially generated messages; the new method lets the company privately compare that synthetic data with real user content, without accessing or storing any actual user emails. This privacy-first approach also gives Apple a distinct position in the AI race.
Apple admits that when it relied on synthetic data alone to train AI features such as writing tools and email summaries, it struggled to capture how people actually write and summarise content.
How does it work?
Apple generates thousands of fake emails covering everyday topics. These are converted into "embeddings", numerical representations of the content, and sent to a small number of devices that have opted into Apple's Device Analytics programme.
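To make the embedding step concrete, here is a minimal sketch. Apple has not published its embedding model, so this uses a simple hashed bag-of-words vector purely for illustration; the dimension and hashing scheme are assumptions, not Apple's actual pipeline.

```python
import hashlib
import math

DIM = 16  # hypothetical embedding dimension, chosen for illustration


def embed(text: str) -> list[float]:
    """Map an email's text to a unit-length DIM-dimensional vector
    via feature hashing, so content can be compared numerically
    without sharing the raw text."""
    vec = [0.0] * DIM
    for word in text.lower().split():
        h = int(hashlib.sha256(word.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]


# Synthetic emails are embedded before being sent to opted-in devices.
synthetic_emails = [
    "Want to play tennis tomorrow morning?",
    "Lunch meeting moved to Friday at noon",
]
synthetic_embeddings = [embed(e) for e in synthetic_emails]
```

In a real system the embedding would come from a trained language model, but the principle is the same: only fixed-length vectors, not message text, travel to the device.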
On each device, a small sample of recent user emails is privately compared with the synthetic messages to find the closest match. The results never leave the device. Using differential privacy, only anonymised signals about the most frequently selected synthetic messages are sent back to Apple.
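The on-device step described above can be sketched as two parts: a nearest-match search over the synthetic embeddings, and a locally differentially private report of the winning index. Apple has not disclosed its exact mechanism or privacy parameters; this sketch uses cosine similarity and classic randomized response with an assumed epsilon, purely to show the shape of the protocol.

```python
import math
import random


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)


def closest_match(user_embedding: list[float],
                  synthetic_embeddings: list[list[float]]) -> int:
    """Index of the synthetic message most similar to the user's email.
    This comparison happens entirely on-device."""
    return max(range(len(synthetic_embeddings)),
               key=lambda i: cosine(user_embedding, synthetic_embeddings[i]))


def randomized_response(true_index: int, k: int, epsilon: float) -> int:
    """Report the true index with probability e^eps / (e^eps + k - 1),
    otherwise a uniformly random other index. Any single report is
    deniable; only aggregate frequencies are meaningful to the server."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return true_index
    other = random.randrange(k - 1)  # pick uniformly among the other k-1
    return other if other < true_index else other + 1


# Illustrative usage: the device finds its match, then sends a noisy vote.
synthetic = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
user = [0.9, 0.1]
idx = closest_match(user, synthetic)          # 0 for this toy data
report = randomized_response(idx, k=3, epsilon=4.0)
```

Because each device adds its own noise before reporting, Apple can only learn which synthetic messages are frequently matched across many devices; no individual report reveals anything reliable about one user's email.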
These popular messages are then used to refine Apple’s AI models, helping improve the accuracy of outputs like email summaries, while maintaining user anonymity.