Last weekend, a report from the British newspaper the Guardian detailed that Apple hires contractors tasked with listening to some Siri audio, prompting critics to take the story as proof that Apple’s commitment to privacy is nothing more than marketing talk.

An excerpt from the Guardian’s report:

Apple contractors regularly hear confidential medical information, drug deals and recordings of couples having sex, as part of their job providing quality control, or ‘grading’, the company’s Siri voice assistant, the Guardian has learned.

Although Apple does not explicitly disclose it in its consumer-facing privacy documentation, a small proportion of Siri recordings are passed on to contractors working for the company around the world.

Are you positively, a hundred percent sure about that? Because I’ve just reviewed Apple’s privacy screens scattered throughout its operating systems, and they clearly spell out that some Siri recordings may be used to improve the Siri service. On top of that, I was able to find the following excerpt by quickly combing through Apple’s iOS software license agreement:

By using Siri or Dictation, you agree and consent to Apple’s and its subsidiaries’ and agents’ transmission, collection, maintenance, processing and use of this information, including your voice input and User Data, to provide and improve Siri, Dictation and dictation functionality in other Apple products and services.

That whole passage is in bold and I think it’s pretty unambiguous about Apple’s intentions.

The “revelation” that the company shares voice data collected by the Siri personal assistant with third parties—contractors tasked with training the Siri algorithm—as part of its effort to improve the Siri service is nothing new if you’ve been following technology.

Even the Guardian acknowledges as much:

They grade the responses on a variety of factors, including whether the activation of the voice assistant was deliberate or accidental, whether the query was something Siri could be expected to help with and whether Siri’s response was appropriate.

This isn’t the first time that people have “discovered” that some Siri audio snippets are being passed on to a third party. Back in 2015, an anonymous employee of a company called Walk N’Talk Technologies wrote on Reddit that the voice data being analyzed by the company was coming from personal assistants like Siri and Cortana.

Apple released the following statement to the Guardian and others:

A small portion of Siri requests are analyzed to improve Siri and dictation. User requests are not associated with the user’s Apple ID. Siri responses are analyzed in secure facilities and all reviewers are under the obligation to adhere to Apple’s strict confidentiality requirements.

The crux of the Guardian’s story comes from a “whistleblower”—essentially a contractor working for Apple who reported hearing private conversations during accidental activations.

Here’s what the whistleblower told the newspaper:

There have been countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on. These recordings are accompanied by user data showing location, contact details and app data.

Audio recordings of user queries are paired with metadata like location to better understand context. The worrisome part is that contractors are able to hear private conversations in the first place. As the report itself acknowledges, that’s possible because a user may accidentally invoke Siri without realizing it, prompting the audio of whatever is uttered after the wake-up phrase “Hey Siri” to be uploaded to Apple’s servers—and certainly not because Siri is secretly recording conversations without user consent.

My feelings on this are certainly not the same as the rest of the Internet.

No digital assistant can be expected to improve over time based on the power of artificial intelligence and machine learning alone. That’s because the machine learning model used by an assistant has to be trained first by human editors—there are just no two ways about it.

Google Photos would have been unable to recognize faces with uncanny precision if the company hadn’t trained its algorithm on real people’s photos. Apple’s Face ID wouldn’t have been possible had its machine learning not been trained on more than a million photographs of diverse human faces. Even something as “ordinary” as speech recognition can be drastically improved by training a machine learning algorithm on real-world data.

The hard truth is, machines cannot (yet) train other machines’ machine learning models with satisfactory results—that’s a job we humans excel at.

Everyone in the industry who is serious about artificial intelligence employs human editors to train their machine learning algorithms with the goal of improving the service. These employees are exposed to whatever content is at the heart of their efforts, be it short audio snippets captured by smart speakers during voice interactions, photos people upload to a photo-sharing service or flagged items in a social media feed—you get the idea.

Some companies take privacy more seriously than others. Some may not be as transparent about how their human editors approach the task at hand. And, ultimately, companies without a strong track record of protecting user privacy will probably face more scrutiny from the media and the general public than the rest.

But I don’t believe for a second that it’s in any company’s interest to have its employees listen in on our conversations with a personal digital assistant just because they can—or because they might be hoping to somehow derive actionable information from those private conversations—and get away with it for many, many years without anyone noticing or complaining. That’s just not possible in today’s networked world.

Don’t get me wrong, I do appreciate the hard investigative work done on the part of the journalists who report on these things, because it gives privacy crusaders something to chew on. But I’m not buying for a second their thinly veiled implication that Big Tech is not just employing human editors to train Siri, Alexa and Google Assistant, but also eavesdropping on our conversations for some yet-to-be-revealed but certainly nefarious purpose.

Yes, some Amazon staff listening to Alexa requests do have access to users’ home addresses because some spoken requests include location. And yes, some Google Assistant recordings reviewed by humans can include private conversations because customers are free to say whatever the hell they want to their digital assistant.

It’s what these employees are asked to do with those recordings that counts at the end of the day. It’s the possibility that some rogue human editor might misuse private customer information that should have us worried, not the fact that human beings are listening to select audio recordings in a far-flung secure facility with the explicit goal of improving the service.

Thoughts?