That was probably the best system I'd seen, but I can't remember what site it was.
They did away with that.
1) Opt-in, Opt-survey, Opt-out is the only ternary that builds trust. The survey option is an active validator of trust and assists low-bandwidth communication. The question should be presented to the end user the first time they use the application, or on the next start after this feature is added.
2) Provide the exact analytical information you want to collect to the end user so they can parse it too. Giving users the means to self-evaluate what information they allow to be shared, along with the reports or views built on it, improves trust.
3) A known privilege tied to trust leads to more consent. Priority support for features and bugs could be aligned with those who opt in. Analytical history and performance data may assist in solving the bug that was just reported.
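A minimal sketch of what that first-run ternary prompt could look like in a console app (the function name and option labels here are purely illustrative, not any real project's API):

    def ask_telemetry_choice(already_asked: bool) -> str:
        """Ask once: on first run, or on the first start after the feature ships."""
        if already_asked:
            return "unchanged"  # never re-ask once the user has answered
        print("How should this app handle usage analytics?")
        print("  [1] Opt-in  - send anonymized usage data on an ongoing basis")
        print("  [2] Survey  - send a one-off snapshot you can review first")
        print("  [3] Opt-out - send nothing")
        choice = input("Choose 1/2/3 [default 3]: ").strip()
        return {"1": "opt-in", "2": "survey"}.get(choice, "opt-out")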
Apple, Microsoft, Google, and the rest keep their analytics sharing ambiguous, without details on what they collect, how they use it, or how they can abuse it. Most don't even provide an opt-out. I don't trust these organizations, but I must engage with them throughout my life. I don't have to use Facebook or Twitter, and I don't. I accept the Steam survey.
An RFC with an agreed-upon analytics standard could be a step toward solving the lack of analytical information the open source community would benefit from. Both parties consenting to agreed-upon communication.
*My point of view: metadata is still personal data. Without the user, neither the data nor the metadata would exist. Since the end user is the source of entropy for the metadata, they own both the metadata and the data.
The only way I see of moving forward would be a community-driven effort to build trust through these means and/or other ideas. This is not an easy problem to solve and it would take time.
*Even US agencies like the CDC and FBI must rely on biased data for decision-making, since not all states and organizations self-report.
Not sure what bias it adds
Like
"hey, we make this app, and we care about privacy, here is the information we have gathered over your usage for the past month, can we send this to ourselves, so that we can use it to improve the app?"
And then show a human-readable form of what data was collected.
Seems to me like a great implementation.
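A rough sketch of that flow (field names and wording invented for illustration): gather the month's data locally, show it in plain language, and only send it if the user says yes.

    import json

    collected = {
        "app_version": "2.3.1",
        "os": "Linux",
        "sessions_last_30_days": 14,
        "crashes_last_30_days": 1,
        "features_used": ["export_pdf", "dark_mode"],
    }

    def ask_to_send(data: dict) -> bool:
        # Show the human-readable report first, then ask for consent.
        print("Over the past month we recorded the following about your usage:")
        for key, value in data.items():
            print(f"  {key.replace('_', ' ')}: {value}")
        answer = input("May we send this to ourselves to improve the app? [y/N] ")
        return answer.strip().lower() == "y"

    if ask_to_send(collected):
        print("Thanks! Sending:", json.dumps(collected))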
It’s very easy to confuse ‘loud protest from a small minority’ with the majority opinion. If a plurality of users choose to participate in an analytics program when asked and don’t care to protest phone-home activities when they’re discovered, then that’s where the majority opinion likely lies.
You could unbias the data by including a metric for how long it took them to click "Ok" and whether they actually reviewed the data before agreeing.
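Something like this sketch (names made up for illustration) could capture both signals alongside the answer itself:

    import time

    def consent_with_review_metrics(show_data) -> dict:
        # Record how long the dialog stays open and whether the user actually
        # opened the "view data" screen before deciding.
        opened_at = time.monotonic()
        reviewed = False
        while True:
            cmd = input("[v]iew data, [o]k, [c]ancel: ").strip().lower()
            if cmd == "v":
                show_data()
                reviewed = True
            elif cmd in ("o", "c"):
                return {
                    "agreed": cmd == "o",
                    "seconds_to_decide": round(time.monotonic() - opened_at, 1),
                    "reviewed_data_first": reviewed,
                }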
I have an incentive to see if the Linux desktop share has increased, so I usually run the survey for them to get my data point in. I also suppose the "gamer" crowd likes to show off how powerful their "rig" is, so I would imagine they commonly run the survey for that reason as well.
Why? I don't think that's obvious. It may also be related to the way the opt-in is presented. In general, I would expect this to be a workable solution. Even if the opt-in group deviates from the "typical user", it's the best data you can get in an honest and ethically sound way. This should certainly be better than no data at all?
For any website/app that presents an opt-in cookie consent banner this is implicitly already the case.
Hardly. It just has some issues, as you also pointed out - bias, for one. But it still provides valuable insight into usage patterns and systemic issues, and enables tracking the effects of developments over time. Correcting the bias is not a bigger task than it is now - I'm sure you already have an idea about feedback on different features from reviews, user reports, discussions, and so on. Opt-in is the same, just much better.
I provide telemetry data to KDE, because they default to collecting none, and KDE is an open-source and transparent project that I'd like to help if I can. If I used your app, I would be likely to click yes, since it's open-source. Part of the problem I have with projects collecting user data is the dark patterns used, or illegal opt-out mechanisms, which make me decline sending telemetry every time, or even ditch the app for an alternative. An app that asks:
Can we collect some anonymized data in order to improve the app?
[Yes] [No]
...with equal weight given to both options, is much more likely to have me click Yes than one where a single button is big and blue whilst the other choice is in a smaller font and "tucked away" underneath it (or worse, in a corner or hidden behind a sub-menu). Plus, I would think that SOME data would be better than NO data, even if there's an inherent bias leaning towards privacy-minded/power users.
The GDPR only applies to personal data. You can collect things like performance data without opt-in (or even an opt-out option) as long as you are careful to not collect any data that can be used to identify an individual, so no unique device IDs or anything like that. Of course, you should be transparent about what you collect. You also have to be careful about combinations of data points that may be innocuous on their own but can be used to identify a person when combined with other data points.
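For illustration only (not legal advice), a performance payload along those lines might deliberately stick to coarse, bucketed values and omit anything identifier-like; all field names here are invented:

    perf_report = {
        "app_version": "2.3",             # not a full build hash
        "os_family": "Windows",           # not the exact build number
        "startup_ms_bucket": "500-1000",  # bucketed rather than exact timings
        "ram_gb_bucket": "8-16",
        "crashes_this_session": 0,
    }
    # Deliberately absent: device IDs, serial numbers, IP addresses, and
    # high-cardinality combinations (locale + timezone + screen size) that
    # could single out an individual when joined with other data points.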
Opt-in makes the data useless - not just because of the huge drop in quantity, but because it introduces a huge bias in the data selected - the people who would opt in are probably not a good sample of "typical users".
Opt-out - no matter what safeguards or assurances I could provide - is unacceptable to a subset of users, and they will forcefully communicate this to you.
Don't get me wrong - I understand both the ease at which bad actors abuse telemetry and the ease in which "anonymous data" can prove to be nothing of the kind in a multitude of surprising ways.
But it's hard not to feel a little sad in a "this is why we can't have nice things" kind of way.