> Sorry, I don't follow. Am I misreading something? To me the quoted text says the opposite.
Yeah, me too. But how would the provider report CSAM content if they are not obliged to break encryption? I don't really follow the Regulation on that part.
It wouldn't.
It's a broad framework. Based on my cursory reading:
- providers have to set up a counter-abuse team and fund it
- authorities and industry cooperate on coming up with guidelines and tech
- the counter-abuse team has to interpret the guidelines and do "due diligence"
- the provider needs monitoring to at least have an idea of the abuse risks
- if there are risks, it has to work on addressing them, where possible without breaking privacy
As far as I understand, the point is to have more services like "YouTube for Kids", where you can give your kid an account and they can only see stuff tagged "kid appropriate" (and YT, to make sure there are no bad comments, simply removed the comment section from these videos, which hurts engagement, which hurts profitability).

There's a section about penalties and fines, up to 6% of global revenue, if the provider doesn't take abuse seriously. And, again based on my understanding, this is exactly to prod big services into offering these "safer, but less profitable" options.
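To make the "allow-list, no comments" idea concrete, here's a toy sketch (the `Video` type, the `kid_appropriate` tag and `kid_feed` function are all made up for illustration, not how YouTube actually does it): only explicitly tagged content gets served, and comments are dropped outright instead of moderated.

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    title: str
    tags: set[str] = field(default_factory=set)
    comments: list[str] = field(default_factory=list)

def kid_feed(videos: list[Video]) -> list[Video]:
    """Allow-list, not block-list: serve only videos explicitly tagged as
    kid-appropriate, and strip the comment section entirely instead of
    trying to moderate it."""
    return [
        Video(title=v.title, tags=v.tags, comments=[])
        for v in videos
        if "kid_appropriate" in v.tags
    ]

catalog = [
    Video("Counting with blocks", {"kid_appropriate"}, ["nice!"]),
    Video("Unrated upload", set(), ["spam"]),
]
print([v.title for v in kid_feed(catalog)])  # ['Counting with blocks']
```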
Sorry, I don't follow. Am I misreading something? To me the quoted text says the opposite.
"Providers should remain free to [...] and should not be obliged by this Regulation to [...] create access to end-to-end encrypted data"
> prevent against spam reporting, where someone could basically DoS the reporting service with false positives
Yep, there's probably no way to do this. (Likely this whole thing will turn out to be a lot of money spent just to realize that.)
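For illustration, here's a minimal sketch of the obvious mitigation: per-reporter rate limiting on a hypothetical report endpoint (the names and numbers are made up). It also shows why it falls short: it throttles a single noisy account, but a flood of false positives spread across many accounts passes right through, and the provider can't verify the reports against end-to-end encrypted content anyway.

```python
import time
from collections import defaultdict

# Hypothetical sketch: a per-reporter sliding-window rate limit on the
# abuse-report endpoint. It caps one noisy account, but false positives
# spread across many accounts sail through, and the provider cannot check
# the reports against end-to-end encrypted content anyway.
MAX_REPORTS = 5     # reports accepted per reporter per window
WINDOW_SECS = 3600  # window length in seconds

_recent_reports: dict[str, list[float]] = defaultdict(list)

def accept_report(reporter_id: str, now: float | None = None) -> bool:
    """Return True if the report is taken, False if this reporter is throttled."""
    now = time.time() if now is None else now
    timestamps = [t for t in _recent_reports[reporter_id] if now - t < WINDOW_SECS]
    if len(timestamps) >= MAX_REPORTS:
        _recent_reports[reporter_id] = timestamps
        return False
    timestamps.append(now)
    _recent_reports[reporter_id] = timestamps
    return True
```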