With some of these I'm left debating what counts as a traffic light. Is the pole itself part of the traffic light? I really don't know.

Also, when it asks to select all the cars: is a bus a car? Is a truck a car? I really don't know what it's expecting, and I must be picking wrong, because I often fail.


Perhaps that's Google's intention. They are using your human intelligence to make educated guesses, rather than using AI. Your choices will be added to their dataset.
Yes, but my guess would be much more educated if they stated the question in sufficient detail.
The worst part is the self-doubt afterwards when you get presented with ANOTHER reCAPTCHA, leaving you questioning whether you got something wrong.
It's very frustrating that there's no "your AI is an idiot, there isn't one" button. I saw an example of one that said "click the tiles containing the bus" where it was clear what the AI thought was a bus, but it definitely wasn't.
This reminds me of Janelle C. Shane's work on AI[1]. Computer vision has been learning from the photos that people take, not the scenes that people see, and this is giving AI a preference for the photogenic over the real.

People generally don't take pictures of rolling green hillsides. But they very often take pictures of rolling green hillsides with sheep on them. So if you ask the robot to draw a picture of rolling green hillsides, it will include sheep. Or, if you ask it to draw a picture of the savanna, it will want to include giraffes.

Now you're being asked to find a bus in a photo without a bus, because it's a street scene, and as far as the model is concerned every street scene has a bus in it.

I haven't read her book yet, but her Twitter[2] is often full of amusing anecdotes like this.

[1] https://aiweirdness.com/

[2] https://twitter.com/JanelleCShane

> People generally don't take pictures of rolling green hillsides. But they very often take pictures of rolling green hillsides with sheep on them. So if you ask the robot to draw a picture of rolling green hillsides, it will include sheep. Or, if you ask it to draw a picture of the savanna, it will want to include giraffes.

Don't humans do this too, though? If I asked someone to draw a picture of rolling green hills, they might well add sheep as an extra detail.

Well, personally, I had the "Bliss" image from the Windows wallpaper collection in my head while I was writing this. But I'm cognizant that most of the photos of hillsides I took in Ireland had sheep in them.
I wonder if that would still be true, though, if "Bliss" weren't a default Windows wallpaper. In other words, you're still drawing on common pictures; you're just biased toward one in particular that you've seen a lot.

I haven't done the experiment, but I'd posit that if you walked up to a group of 8-year-old children, gave them crayons, and asked them to draw pictures of "rolling hills", a significant portion would add sheep, cows, flowers, or some other details—even though a majority of rolling hills in the world don't have any of these features.

Thank you, I ordered her book on the strength of your comment!
I wonder if you ran into something I deliberately mislabeled as a bus.

Your goal shouldn't be to answer the question earnestly, but to confirm the machine's biases. Going with the flow is expedient, and it also gives Google less help.

In this case, it was obvious what it wanted someone to do.

It gets more frustrating when it's less clear. Is "click the hills" with a picture of a mountain a mistake, or a trick? Should I click all the tiles that contain a bus, even if it's only one pixel, or should I only click the ones that mostly contain a bus? etc.

The most frustrating one is “select all pictures with bridges” when not one picture contains more than one bridge.
Are you joking?
No, it happens pretty regularly. It will even tell you to "try again" if you correctly don't select anything.
They probably assume that the vast majority of the population are not massive pedants and will do what they expect.
They probably don't assume anything, and it's just a corner case in their automated system.
Right, but it's why fixing it is not a priority.
That is, they assume the vast majority of the population is indistinguishable from their AI. I believe that view also informs their customer service procedures.
I think it's pretty clear what it's asking for, and you are misinterpreting the request.
I thought I experienced that this morning. But when I looked closely, there were photos containing sections of what I was left to assume were bridges. So I think you're technically wrong, but the spirit of your comment is correct!
I always see complaints like this about reCAPTCHA, but I've never experienced this struggle, certainly not to such a degree as to be outright frustrating. It's always seemed pretty obvious to me what the right answers are. My personal take on your examples is that a traffic light's pole is not a traffic light, but any part of the box that contains the lamps/LEDs is (even if it's facing away from the camera), and that neither large trucks nor buses are cars, but vans, pickups, and SUVs are.

Hopefully these rules of thumb will help someone reading this find these captchas less frustrating.
