- cross-posted to:
- theandrocollection@lemm.ee
Our mobile devices listen to and collect a significant amount of data on us, even without using our microphones.
If manufacturers are to be believed, the only thing our devices are always listening for is the trigger word. iPhones have a dedicated piece of hardware, a small low-power circuit, that listens only for "Hey Siri" and doesn't start keeping a recording until it has heard it. After that, it sends what you say to the cloud to work out what you said.
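A minimal sketch of what that architecture could look like (all names, types, and the threshold here are hypothetical, not Apple's actual code): a tiny always-on detector gates everything, and audio only leaves the device after the detector fires.

```python
from typing import Iterable, List


def wake_word_confidence(frame: bytes) -> float:
    """Stand-in for the low-power on-device detector: scores how likely a
    short audio frame is to contain the wake phrase. It keeps no history
    and records nothing."""
    return 0.0  # placeholder score


def stream_to_cloud(frames: List[bytes]) -> None:
    """Stand-in for the cloud speech-recognition request that happens
    only after a wake-word hit."""
    print(f"uploading {len(frames)} frames for transcription")


def listen(mic: Iterable[bytes], threshold: float = 0.9) -> None:
    buffered: List[bytes] = []
    triggered = False
    for frame in mic:
        if not triggered:
            # Before the trigger, each frame is scored and then discarded.
            triggered = wake_word_confidence(frame) >= threshold
        else:
            # Only after the trigger is audio kept and sent off-device.
            buffered.append(frame)
    if buffered:
        stream_to_cloud(buffered)
```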
Yes, exactly. And in order to improve wake-word recognition, they occasionally need to send data to the cloud when there's some indication of a misunderstanding. Sometimes humans also need to listen when the computer has low confidence.
And of course everything after the wake word goes to the cloud. Sometimes it also thinks it hears the wake word when it didn't; that audio goes to the cloud too, and a human may need to interpret it.
So yes, some things your phone hears will go to the cloud without the wake word, and humans sometimes listen to them. Is this malicious or nefarious? Probably not. But it's complex and hard for unsophisticated end users to understand. And the reality is your phone absolutely does spy on you, just not by listening to you. It's easy to see why so many people refuse to believe their voice assistants aren't spying on them.
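The comments above describe a triage: confident hits get transcribed, ambiguous ones may be escalated to human review, and clear misses are dropped. A minimal sketch of that routing logic (the thresholds and labels are invented for illustration, not any vendor's documented behavior):

```python
def route_detection(confidence: float) -> str:
    """Triage a possible wake-word hit by detector confidence."""
    if confidence >= 0.95:
        return "transcribe"    # clearly the wake word; handle the request
    elif confidence >= 0.60:
        return "human_review"  # ambiguous clip; a person may listen
    else:
        return "discard"       # clearly not the wake word


print(route_detection(0.72))  # -> 'human_review'
```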
This is generally wrong. Disconnect your device from the internet and most assistants (Siri and Alexa for sure) will still activate if they hear the wake word, and won't activate if they don't. Both companies have basically said that wake-word detection is gated in hardware, and that hasn't been disproven.
Second, not all assistants/companies are created equal. For example, Apple has made human review opt-in. Apple also has no incentive to use this data for anything other than improving Siri. They're not an advertising company, and if anything are fairly hostile to others using Apple customers' data for that sort of thing without explicit consent. Contrast that with Alexa/Google, which have an incentive to use your voice recordings to advertise to you, e.g. you ask your voice assistant for the symptoms of foodborne illness and it shows an ad or suggests a search for Pepto.
This part is mostly correct. Again, in Apple's case the phone isn't spying on you, but all of the shit you put on it is. All of those apps are collecting data and collating it in ways people don't understand. Even though I have a burner Facebook account, it's tied to my number or email (can't remember which), and I'm sure most of my social graph shares contacts with everything that asks, so as soon as I created that account FB suggested a whole lot of people I actually know, even though I gave it no other real data. People also don't realize this data is often brokered through lots of services, so when you slow down buying tampons or something, another shopping app starts suggesting prenatal vitamins. This is a large part of the reason so many major retailers have club cards.
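A rough, purely illustrative sketch of how that contact-graph matching can work: if other people's uploaded address books contain your number, you can be linked to them even though you never shared anything yourself. The data and names below are made up, and real systems hash the numbers rather than store them in the clear.

```python
# Hypothetical uploaded address books: uploader -> numbers their app shared.
contact_uploads = {
    "alice": {"+15550001", "+15550002"},
    "bob":   {"+15550001", "+15550003"},
}


def suggest_connections(new_user_number: str) -> list:
    """Everyone whose uploaded address book already contains the new
    user's number -- no data from the new user needed."""
    return [who for who, nums in contact_uploads.items()
            if new_user_number in nums]


print(suggest_connections("+15550001"))  # -> ['alice', 'bob']
```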
My opinion here: sure, keep the valid stuff, provided the user agreed to it, but have an opt-out where data is analysed for whatever purpose and then deleted. I don't know why they can't keep data for a day, run analysis, and delete it on a rolling basis. The benefit of keeping old data around for improved analysis later is negligible when you're getting as much new data daily as they do.
But regardless, the excerpts it sends when it thinks you might have said the wake word, and which turn out to be false positives, should be deleted. Do any short analysis of the why right away, then delete, because that audio really wasn't meant for the phone/personal assistant.
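A minimal sketch of the rolling analyse-then-delete policy suggested above, assuming a simple recordings table and a one-day window (the schema and retention period are hypothetical, just to show the shape of the idea):

```python
import sqlite3
import time

RETENTION_SECONDS = 24 * 60 * 60  # one-day window, per the suggestion above

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE recordings (id INTEGER PRIMARY KEY,"
    " captured_at REAL, analysed INTEGER)"
)


def purge_old_recordings(conn: sqlite3.Connection) -> int:
    """Delete every recording that is past the retention window and has
    already been analysed; only derived results would survive."""
    cutoff = time.time() - RETENTION_SECONDS
    cur = conn.execute(
        "DELETE FROM recordings WHERE captured_at < ? AND analysed = 1",
        (cutoff,),
    )
    conn.commit()
    return cur.rowcount


# Run daily (e.g. from a scheduler) so nothing older than the window survives.
print(purge_old_recordings(db), "recordings purged")
```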