- I've often had the mental image of Galileo trying to order a pizza and being very disappointed at the garlic bread that turned up.
- They've apparently had a corporate philosophy of obfuscating the underlying system from the end user and deliberately inhibiting their ability to learn how it fits together since at least the early 2000s.
I feel like the current ignorance of the average computer user is a deliberate outcome they've been working towards for more than 20 years. As someone who has been using computers since the late 80s, I find their current offerings harder to use than ever.
- If they ignore a properly configured robots.txt and the licence also explicitly denies them use, then I'd guess they have a viable civil action to extract compensation. But that isn't the case here at all, and while there are reports of them doing so, they certainly claim to respect the convention.
As for bending over: if you serve files, they request files, and you send them files, what exactly is the problem? That you didn't implement any kind of rate limiting? It's a web-based company, and these things are just the basics.
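To illustrate how basic that is, here's a hypothetical sketch of per-client rate limiting as an in-process token bucket; the class, names, and limits are made up for illustration, and in practice this usually lives in the web server or a reverse proxy rather than in application code:

```python
import time
from collections import defaultdict

# Illustrative only: a per-client token bucket allowing `rate` requests/second
# with short bursts up to `capacity`. Real deployments normally enforce this
# at the reverse proxy or CDN layer instead.
class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)    # remaining tokens per client
        self.last_seen = defaultdict(time.monotonic)   # last refill time per client

    def allow(self, client_ip: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_seen[client_ip]
        self.last_seen[client_ip] = now
        # Refill tokens earned since the last request, capped at the bucket size.
        self.tokens[client_ip] = min(self.capacity, self.tokens[client_ip] + elapsed * self.rate)
        if self.tokens[client_ip] >= 1:
            self.tokens[client_ip] -= 1
            return True
        return False

limiter = TokenBucket(rate=5, capacity=20)  # made-up limits
# In a request handler: if not limiter.allow(client_ip), respond with HTTP 429.
```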
- Greedy and relentless as OpenAI's scraping may be, the fact that his web-based startup didn't have a rudimentary robots.txt in place seems inexcusably naive. Correctly configuring that file has been one of the most basic steps of web design for living memory, and its absence doesn't speak highly of the company's technical acumen.
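For anyone unfamiliar, a minimal robots.txt at the site root that refuses OpenAI's crawler (OpenAI documents GPTBot as its user agent) looks something like this; honouring it is still voluntary on the crawler's side, which is the "convention" being discussed above:

```
User-agent: GPTBot
Disallow: /
```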
>“We’re in a business where the rights are kind of a serious issue, because we scan actual people,” he said. With laws like Europe’s GDPR, “they cannot just take a photo of anyone on the web and use it.”
Yes, and protecting that data was your responsibility, Tomchuck. You dropped the ball and are now trying to blame the other players.