Ever wonder why these captchas are always cars, bicycles, motorcycles, traffic lights and crosswalks? Because YOU are doing the work of teaching the next generation of AI for self-driving cars.
It’s common courtesy to link to the xkcd you took the image from. This is one of them.
Plus it’s illegal not to under the Creative Commons license!
AI — anonymous Indians
Can’t wait until we get trolley problem CAPTCHAs and we have to choose the square with the most expendable human lives
I don’t believe it, at least not anymore.
Google has had more than enough data to train AI models from reCAPTCHA for many years. In 2010 it displayed 100 million captchas per day. You simply do not need hundreds of billions of solved captchas in your data set.
I feel like its only purpose nowadays is stopping basic bots and annoying people who don’t let themselves be tracked as much as advertisers would like.
Yeah, the most recent version of reCAPTCHA is completely seamless for the end user because there’s no more value to be had gathering this kind of data. Instead it runs in the background of the website, looking at your mouse movements, clicks, and keystrokes, and decides whether or not you’re a bot based on that information.
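For anyone curious what that looks like from a site’s side: the page asks Google for a token, and the server trades that token for a score between 0 and 1. Here’s a rough sketch in TypeScript; the function name, the interface, and the 0.5 cutoff are my own placeholders rather than anything official, though the siteverify endpoint and its success/score fields are the documented ones.

```typescript
// Server-side check of a reCAPTCHA v3 token (Node 18+, built-in fetch).
// The 0.5 threshold is an arbitrary example, not an official recommendation.
interface SiteVerifyResponse {
  success: boolean;
  score?: number; // v3 only: closer to 1.0 = more likely human
  action?: string;
  "error-codes"?: string[];
}

async function isProbablyHuman(token: string, secret: string): Promise<boolean> {
  const body = new URLSearchParams({ secret, response: token });
  const res = await fetch("https://www.google.com/recaptcha/api/siteverify", {
    method: "POST",
    body,
  });
  const data = (await res.json()) as SiteVerifyResponse;
  return data.success && (data.score ?? 0) >= 0.5;
}
```

The site never shows you anything; it just gets a number back and decides what to do with you.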
The problem is a lot of websites still use the old version, or their own hacked-together CAPTCHA alternative, which decent bots have been able to beat for a while now.
My favorite is when it asks me to identify stairs. I just imagine a self-driving car mistaking a set of stairs as more road and deciding to try and climb the steps.
Actually, it’s training a self-driving humanoid robot that’s supposed to climb stairs in order to terminate any potential John Connor that’s inside a house upstairs.
deleted by creator
How does it know when it’s right if you’re the one teaching it?
You and many other humans are doing verification work
It’s pretty sure it’s already right, but if enough people get the same image and answer it wrong in the same way, then something’s up and it gets flagged.
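Nobody outside Google knows the actual pipeline, but a toy version of that cross-check could look something like this (every name and threshold below is made up for illustration): take the grid squares each solver clicked for one image, build a majority-vote consensus, and flag the image when the model’s answer mostly disagrees with it.

```typescript
// Toy consensus check: flag a captcha image when human solvers, taken together,
// disagree with the model's current prediction. Purely illustrative.
function shouldFlagImage(
  modelSelection: Set<number>,      // grid squares the model thinks contain the object
  humanSelections: Set<number>[],   // one set of clicked squares per solver
  minSolvers = 20,
  disagreementThreshold = 0.8,
): boolean {
  if (humanSelections.length < minSolvers) return false;

  // Majority vote: a square counts as human-labelled if more than half the solvers clicked it.
  const votes = new Map<number, number>();
  for (const selection of humanSelections) {
    for (const square of selection) {
      votes.set(square, (votes.get(square) ?? 0) + 1);
    }
  }
  const consensus = new Set(
    [...votes]
      .filter(([, count]) => count > humanSelections.length / 2)
      .map(([square]) => square),
  );

  // Compare model vs. consensus with a simple overlap ratio (Jaccard similarity).
  const union = new Set([...modelSelection, ...consensus]);
  const overlap = [...modelSelection].filter((square) => consensus.has(square)).length;
  const agreement = union.size === 0 ? 1 : overlap / union.size;
  return 1 - agreement >= disagreementThreshold;
}
```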
You know this for a fact?
I took some compsci classes years ago when this tech was new, and that’s exactly how it was described as being handled.
Once image recognition software got good enough to be right most of the time, they started this shit to help get it the rest of the way to being right all of the time.
Do it any other way and you have to pay those people
There’s a CGP Grey video that describes old techniques. It’s not quite up to date on some of its predictions, but it is how some machine learning works. Of course, it doesn’t discuss current proprietary techniques, because those are company secrets. Still, it’s as good a guess as we’re likely to get, unless something radically different has been invented:
https://youtu.be/R9OHn5ZF4Uo
There is also a second video about more modern stuff, but it’s more a footnote:
https://youtu.be/wvWpdrfoEv0
I can’t believe I never put that 2 and 2 together.
It stresses how stupid AI is, then. If it were a human, the question would just be “is this a stop sign?” So it’s not even asking us to validate data. To me that means AI is still far from being intelligent. It requires our input to learn. That’s not how we operate. My kids don’t require me to show them images of a stop sign for them to know what one is.
deleted by creator