Here’s How Apple Is Planning to Secure Your AI Data


It’s no secret that Apple is working on AI features that will roll out with iOS 18 and macOS 15. When you update your iPhone, iPad, or Mac later this year, you may find a more natural-sounding Siri, or be able to generate emojis based on whatever you’re talking about in Messages. Pretty cool, but how will Apple protect your data while the AI powering all these nifty new features processes it?

While reports suggest Apple will run many of these features on-device, at least on its newer products, rumors also say the company plans to outsource much of the processing to the cloud. That’s in line with the rest of the industry: Most AI processing right now happens in the cloud, simply because it’s so computationally demanding. It’s why companies continue to push the capabilities of their NPUs (neural processing units), specialized processors that exclusively handle AI workloads. Apple has used NPUs for years, but made a big show of touting the beefy NPU in its new M4 chip earlier this year, while Microsoft kicked off a new AI-PC standard with its Copilot+ PC line.

Running AI on-device is more secure

Of course, whether your AI features run on your phone or in the cloud probably doesn’t matter to you, so long as they work as they should. The catch, however, is that running these features on-device is inherently more secure. By pushing the processing to the cloud, companies risk exposing user data to anyone with access to their servers, especially when the service doing the processing needs to decrypt that data first. The exposure risk covers employees of the company in question, but also bad actors who may break into the company’s cloud servers and scrape whatever customer information they can find.
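To make that exposure concrete, here’s a minimal Swift sketch (the names and the model call are hypothetical, not any real service’s code) of what a conventional cloud AI handler looks like: the prompt is encrypted in transit, but the server has to decrypt it before the model can do anything with it.

```swift
import CryptoKit
import Foundation

// Illustrative stand-in for the actual model inference step.
func runModel(on prompt: Data) -> Data {
    Data("model output for: \(String(decoding: prompt, as: UTF8.self))".utf8)
}

// Schematic server handler: the session key protects the prompt in transit,
// but the server must decrypt it before inference, so the plaintext prompt
// sits in server memory in the clear.
func handleRequest(ciphertext: Data, sessionKey: SymmetricKey) throws -> Data {
    // 1. Decryption happens server-side; the plaintext exists here.
    let box = try AES.GCM.SealedBox(combined: ciphertext)
    let plaintextPrompt = try AES.GCM.open(box, using: sessionKey)
    // 2. From this point on, anyone with access to the process (employees,
    //    logging pipelines, or an intruder) could read the user's prompt.
    return runModel(on: plaintextPrompt)
}
```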

This is already an issue with services like ChatGPT, and it’s why I advise against sharing any personal information with most cloud-based AI services: Your conversations are not private, and they’re all fed to these servers, both for storage and to train the AI model. Companies invested in user privacy, like Apple, prefer on-device solutions whenever possible, since they can demonstrate that data kept isolated on your phone, tablet, or computer stays out of anyone else’s hands.

How Apple will use ‘Secure Enclave’ to protect AI data

While newer Apple hardware should be powerful enough to run the AI features the company is cooking up, older devices, or features that are too power-intensive, may force Apple to turn to cloud-based servers in order to offer those features at all. However, if a report from The Information, cited by Android Authority, is accurate, the company may have found a solution: the Secure Enclave.

The Secure Enclave is already part of the hardware in most Apple products in use today. It’s a part of the SoC (System on a Chip) that is kept separate from the main processor, and its job is to store your most sensitive information, like encryption keys and biometric data. That way, even if the main processor is compromised, bad actors can’t access the data the Secure Enclave holds.
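Developers can already see this model in action: apps can ask the Secure Enclave to generate a private key that never leaves the chip. Here’s a minimal sketch in Swift using the Security framework’s SecKeyCreateRandomKey API (the application tag is a placeholder, and error handling is trimmed for brevity):

```swift
import Security
import Foundation

// Keys protected this way are only usable while the device is unlocked.
let access = SecAccessControlCreateWithFlags(
    kCFAllocatorDefault,
    kSecAttrAccessibleWhenUnlockedThisDeviceOnly,
    .privateKeyUsage,
    nil
)!

// The Secure Enclave only works with 256-bit elliptic-curve keys.
let attributes: [String: Any] = [
    kSecAttrKeyType as String: kSecAttrKeyTypeECSECPrimeRandom,
    kSecAttrKeySizeInBits as String: 256,
    // This attribute is what routes key creation into the Secure Enclave.
    kSecAttrTokenID as String: kSecAttrTokenIDSecureEnclave,
    kSecPrivateKeyAttrs as String: [
        kSecAttrIsPermanent as String: true,
        // Placeholder tag for looking the key up later.
        kSecAttrApplicationTag as String: Data("com.example.ai-key".utf8),
        kSecAttrAccessControl as String: access
    ] as [String: Any]
]

var error: Unmanaged<CFError>?
// The returned value is only a reference: the key material itself stays
// inside the Secure Enclave and can never be exported.
let privateKey = SecKeyCreateRandomKey(attributes as CFDictionary, &error)
```

The point of the design is that the operating system only ever holds a handle to the key, never the key bits themselves, which is why a compromised main processor doesn’t translate into compromised keys.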

According to The Information, Apple is working on an AI-cloud solution that would send all AI user data to the Secure Enclaves of M2 Ultra and M4 Macs running in its server farms. There, those server Macs could process the request while preserving encryption, then send the results back to the user. In theory, this process would keep user data safe while also giving older devices access to Apple’s latest AI features.
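Apple hasn’t published any details of this protocol, but conceptually, the client side could look something like the hedged Swift sketch below: the device encrypts the request to a public key whose private half lives inside a server-side Secure Enclave, so nothing outside that enclave can read it. Every name and step here is an assumption for illustration, not Apple’s actual design.

```swift
import CryptoKit
import Foundation

// Hypothetical client for an enclave-backed AI service. How Apple would
// attest that serverEnclaveKey really belongs to a Secure Enclave on its
// servers hasn't been disclosed; this sketch just assumes it has been.
struct EnclaveAIClient {
    let serverEnclaveKey: P256.KeyAgreement.PublicKey
    let deviceKey = P256.KeyAgreement.PrivateKey()

    func encryptRequest(_ prompt: String) throws -> Data {
        // Derive a shared symmetric key via ECDH: only the server enclave,
        // which holds the matching private key, can derive the same key.
        let shared = try deviceKey.sharedSecretFromKeyAgreement(with: serverEnclaveKey)
        let symmetricKey = shared.hkdfDerivedSymmetricKey(
            using: SHA256.self,
            salt: Data(),
            sharedInfo: Data("ai-request".utf8),
            outputByteCount: 32
        )
        // The request travels as ciphertext end to end; servers outside the
        // enclave would only ever see encrypted bytes.
        let sealed = try AES.GCM.seal(Data(prompt.utf8), using: symmetricKey)
        return sealed.combined!
    }
}
```

The response would presumably come back the same way, encrypted under a key only the requesting device can derive.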

We won’t know for sure whether this is Apple’s plan until the company reveals what it’s working on at WWDC, if it does so at all. If Apple stays hush-hush about how it will protect AI user data, we may never know exactly how it works. But seeing as Apple touts itself as a company that cares about user privacy, this approach (or any approach that ensures cloud-based data is end-to-end encrypted) would make a lot of sense.
