This reminds me of when Apple first introduced, with great fanfare, their pivot to privacy-first and “Differential Privacy.”
However, when privacy experts later examined Apple's implementation, they found that the promised privacy was largely an illusion. The parameters Apple had chosen for their Differential Privacy were so weak that only a few data exchanges would be enough to de-anonymize individual users.
I don't know whether they have improved it since, but at the time it was less about true privacy than the appearance of privacy: an unfortunate example of marketing (privacy as a core differentiator and premium justification) taking precedence over meaningful protection.
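To make the parameter issue concrete, here's a minimal sketch of randomized response, the textbook local-DP primitive (Apple's actual pipeline uses more elaborate mechanisms, and the specific epsilon below is illustrative). The point is that the privacy guarantee lives entirely in epsilon: with a small epsilon the reported bit is close to a coin flip, while with a large epsilon the report almost always equals the truth, and repeated reports about the same value compose to an even weaker guarantee.

```python
import math
import random

def truth_probability(epsilon: float) -> float:
    """Probability that randomized response reports the true bit."""
    return math.exp(epsilon) / (math.exp(epsilon) + 1)

def randomized_response(true_bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps/(e^eps+1), else flip it."""
    return true_bit if random.random() < truth_probability(epsilon) else 1 - true_bit

# Small epsilon: the report is barely better than a coin flip.
print(truth_probability(0.5))   # ~0.62
# Large epsilon (researchers estimated Apple's daily budgets in the
# single-to-double digits): essentially no plausible deniability.
print(truth_probability(8.0))   # ~0.9997
# And epsilons add up under composition: k reports about the same
# value cost roughly k * epsilon of privacy budget in the worst case.
```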
People interested in this will probably also enjoy this friendly introduction to differential privacy: https://desfontain.es/blog/friendly-intro-to-differential-pr..., which stays approachable while covering a lot of detail and technique across a long series of posts.
> Imagine you’re a data analyst at a global company who’s been asked to provide employee statistics for a survey on remote working and distributed teams
I'm going to go out on a limb and assume the analysis is just for show, and that the decision was already made by a strategic team of people who never go to the office themselves or aren't really affected by it.
The move from zero to three office days that many companies made was not supported by data in any of the cases I've seen; it had more to do with government pushes, saving downtowns, or encouraging self-layoffs.
The article's treatment of the technique itself is fine, although I'd also make sure to report the non-response rate: on a contentious topic, trust in the survey's anonymity will definitely affect who answers and what they answer.
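For the count-style statistics in the article's scenario, the standard textbook approach (not necessarily the article's exact recipe) is the Laplace mechanism: a counting query changes by at most 1 when one person is added or removed, so adding Laplace noise with scale 1/epsilon gives epsilon-DP. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng()

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-DP via the Laplace mechanism.
    Counting queries have sensitivity 1 (one person changes the
    count by at most 1), so Laplace(1/epsilon) noise suffices."""
    return true_count + rng.laplace(scale=1.0 / epsilon)

# Hypothetical query: "how many employees want fully-remote work?"
# with a true count of 42 and a moderate privacy budget.
print(dp_count(42, epsilon=0.5))  # noisy answer, typically within a few units
```

Note the trade-off the other comments are circling: a smaller epsilon means more noise and better privacy; a large epsilon makes the released count nearly exact and the guarantee nearly meaningless.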
Also great as well as entertaining: https://gwern.net/death-note-anonymity
Yes, that is a great resource. And if you're looking for a connection to ML, the search term is "DP-SGD".
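For the curious: the core of DP-SGD is just two extra steps per minibatch, namely clipping each per-example gradient to a norm bound and adding Gaussian noise before averaging. A minimal NumPy sketch of just those steps (illustrative constants; real implementations such as TensorFlow Privacy or Opacus also track the cumulative privacy budget with an accountant):

```python
import numpy as np

def dp_sgd_step(per_example_grads: np.ndarray,
                clip_norm: float = 1.0,
                noise_multiplier: float = 1.1,
                rng: np.random.Generator = np.random.default_rng(0)) -> np.ndarray:
    """One DP-SGD gradient aggregation: clip each example's gradient
    to clip_norm, sum, add Gaussian noise, then average."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its norm exceeds clip_norm,
        # so each example's influence on the sum is bounded.
        clipped.append(g / max(1.0, norm / clip_norm))
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm,
        size=per_example_grads.shape[1])
    return noisy_sum / len(per_example_grads)

# Toy usage: 4 examples, 3-dimensional gradients.
grads = np.array([[3.0, 0.0, 0.0],
                  [0.1, 0.2, 0.0],
                  [0.0, 0.5, 0.5],
                  [1.0, 1.0, 1.0]])
print(dp_sgd_step(grads))  # noisy average with bounded per-example influence
```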
No mention of ARX, but it is also a tool that lets you calculate those metrics: https://arx.deidentifier.org