However, he made the very interesting point that having an app tell you whether something is edible is entirely the wrong starting point for foraging, because the main thing you get from using a reference book is that you are forced to notice that there are many plants that look similar but are different (meaning you become more aware that your task is about differentiation rather than identification, since botany is a bit fuzzy around the edges) and may differ seriously in their toxicity. None of these apps have a feature to say "it looks like this edible plant, but if the angle you took the photo from was slightly off, it's definitely poisonous, because the group of plants that look like this has only one or two edible members and everything else will definitely kill you". Some list confidence percentages, but there were examples where the percentages were far too pessimistic or optimistic for a given match, making them not particularly useful.
My impression is that these apps are developed using the same "move fast and break things" model as most other things these days, which definitely seems like the best model to use when making an app that may or may not lead to people's deaths (not your fault of course, because you waived all liability in your ToS). /s
While the article cites some persons who ate poisonous mushrooms because an app said they weren't poisonous, ultimately it will come down to "Did the app sell itself as telling you what's safe to eat and what's not?"
From the video, I'm particularly harsh towards Google on this, as it doesn't seem to show the confidence level or analysis process and instead focuses on showing a result and even trying to sell you something based on it. That does seem like the wrong approach, the kind that will get people killed, with thoughts like "well, Google thinks it's this, and it's even trying to sell me some, so it must be okay."
For the others, you can see warnings in the apps about eating identified stuff, you can see how confident it is in the analysis, and they at least appear to be considering things like the region you're in when making an analysis.
I think it would be fairly defensible, given the warnings and presentation, that a reasonable person would not be expected to take an identification as a go-ahead to eat something. That unreasonable people take unreasonable actions as a result of these apps wouldn't really change the legal interpretation, though of course it might be persuasive in other ways.
At first I shared your opinion that these are dangerous apps, but watching what the apps do in the linked video, the non-Google ones are actually pretty acceptable in my opinion. I would prefer they include a few more warnings, though, when a match has low confidence or when matches conflict, just to remind the person using the app: "hey, btw, a lot of plants can kill you. Don't eat this, I'm just an app."
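A minimal sketch of what that kind of warning gate could look like (the thresholds, labels, and function name here are hypothetical illustrations, not taken from any of the apps discussed):

```python
# Hypothetical sketch: add an extra warning when a match has low
# confidence or when the top candidates are too close to call.

LOW_CONFIDENCE = 0.80   # arbitrary threshold, for illustration only
CONFLICT_MARGIN = 0.10  # top two matches count as "conflicting" below this gap

def present_result(matches):
    """matches: list of (species_name, confidence), sorted best-first."""
    best_name, best_conf = matches[0]
    runner_up_conf = matches[1][1] if len(matches) > 1 else 0.0

    lines = [f"Best match: {best_name} ({best_conf:.0%} confidence)"]
    if best_conf < LOW_CONFIDENCE or (best_conf - runner_up_conf) < CONFLICT_MARGIN:
        lines.append("WARNING: uncertain match. Lookalike species may be toxic. "
                     "Do not eat anything based on this result.")
    return "\n".join(lines)

# Example: two plausible matches with similar confidence triggers the warning.
print(present_result([("Agaricus campestris", 0.62), ("Amanita virosa", 0.55)]))
```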
They're mostly intended as entertainment, but some of them use the data to analyze where species are growing in real time.
Did the developers of Google Maps fail their moral obligation if they know some users will follow its directions despite their GPS being broken, and go down wrong streets, walk into walls, get lost without water, etc.?
Did the developers of the Bird scooter app, which tells you to wear a helmet (though they know that warning will be ignored), fail their moral obligation since they know some riders won't actually know how to ride and will fall and be injured?
Do the developers of competitive sports apps, like Strava, fail a moral obligation since they know some people will injure themselves trying to get on a leaderboard?
Like, I agree that there's a moral obligation for developers. But on the other hand, I feel like you can expect some baseline of "bad users who misuse the app horribly", and it feels like if that's enough to obligate you to not build said app, you just can't build anything. Just about anything can be misused, and at the scale of most apps, it's reasonable to expect it will be.
Is there something about plant identification that makes it more special than the other apps above?
I think you're missing how many steps are necessary.
Those apps for identifying your local poplars and evergreens.
This person pointed it at a mushroom, picked the mushroom, used it as a topping, and ate it.
I would be for an additional warning label when mushrooms are identified. But we need more people on the same page about the apps themselves; if you haven't used them, I think you might be out of the loop.
There was a time when you could publish a book on how to make a pipe bomb and nothing would happen to you. Now the wrath of the government descends on you as if you broke the law by having knowledge and telling it to someone.
I do like mushrooms but I think it's just a step too far into 'Russian Roulette' territory to risk trying to forage for those. Eating something which might make you sick for a day or two is one thing. But risking eating something which might kill you or require a liver transplant, if you get it wrong, is a bit too scary for me. The only mushrooms I'm confident in my ability to ID are psilocybin ones.
There’s a lot of room between a one-in-a-billion mistake that hurts somebody and a fifty-fifty chance a mushroom is poisonous. There are plenty of edible and deadly mushrooms separated by differences so subtle that you should trust nobody but a human expert to tell them apart.
Effectively what I’m saying is a disclaimer shouldn’t be able to get you out of murder charges for a game of Russian roulette.
By your logic, knife makers would all be praying that their knives aren't used for killing or harming people, or they'd be charged with murder.
These identification apps are selling incomplete information where the need for full information about toxins is obvious.
Nobody was ever surprised that a knife hurt somebody, nor had cause to accuse a knife maker of hiding that their product could cut things in a way a consumer wouldn’t expect.
The issue isn’t that a tool can be harmful, but that a danger the producer of a product would be obligated to know and share isn’t obvious to the consumer without prior knowledge. If you’re selling information, you must know your audience.
Any human teaching you to identify mushrooms will teach you about the risks of poisoning right away. “I didn’t tell you what you needed to know, but you should have known better” only works when it’s reasonable for you to know better. The general audience of a mushroom identifier shouldn’t be expected to know how easily a misidentification could kill them, and warning them just isn’t difficult to do.
This doesn’t match the situation with knives unless you can find me someone who really does need to be warned about knives and isn’t, say, 4 years old.
How is it okay to make fundamentally broken software and then release/sell it as if it were functional? It shouldn't be, just like it isn't okay for architects and engineers (real ones) to design and build buildings that fall down or cars that explode.
Or a hundred people who have their intellectual curiosity satisfied. I recently used Google Lens to ID (with like 75% confidence) a mushroom I had zero intention of eating either way.
Mushroom identifying books have all of the same issues. I guess one could make an argument that it's unconscionable to publish those, too.
Every responsible resource will be full of this kind of caution, and neglecting it should be criminal. It isn’t about covering your ass or putting an obligatory warning label on everything; toxicity is a primary point of interest in mushroom identification. If you don’t address it, you kill people, not maybe but certainly.
You also don't know that the app doesn't warn users that it shouldn't be used in life-and-death situations, but I'll maintain that it shouldn't have to in order to avoid "criminal negligence" accusations.
If I have a rock identification app does it need to say not to eat rocks to avoid liability for dental damages? If I sell bicycles do I have to tell you that rolling downhill fast can cause death or injury?
It would be more comparable if you built an app to identify cancerous moles, except it produced false negatives significantly more often than an actual doctor, leading to people dying. This isn’t a “roll the dice” thing.
What about an app that tells you how to pump gas while smoking a cigarette? That seems like pretty obvious negligent design. In California at least that opens you up to a lawsuit. [0]
[0] https://www.shouselaw.com/ca/personal-injury/product-liabili...
I'll just stick to distinguishing a '67 from a '68 Mustang.
Now there's an app that'd have to be designed very conscientiously.