I didn't say otherwise. In fact, I clarified that some are terrible - my point was that they're a necessary mechanism for many online systems, so the focus should be on improvement, not removal.
The real problem the human lock builders are up against is that the automated locksmiths built by other, equally skilled humans can do more --and more difficult-- things much, much faster than humans can. Hence the interest in biometrics, and then the interest in spoofing those with photos, etc.
It's an interesting trend, the escalation of tactics and countertactics. The negative spinoff is that for less than vital activities, jumping through the hoops to log into your own accounts securely and still retain some privacy does begin to lose value.
When I decide that a max of two rounds of picking out images of fire hydrants and storefronts is how much effort a newspaper is really worth to me on the average occasion, what does my abandoning the login process tell that newspaper? That I'm a subscriber on the verge of bailing, or that I'm a bot that gave up easily, and so "See, the system works great"? Time will tell, eh?
Sure, I want my online setups to be secure enough to deter the average bot from managing to log in with my credentials. I just don't think asking humans to pick out images of buses and fire hydrants is the best way to go. We could probably write improvements to password managers and have those apps keep track of even one-off barriers that simulate random pairs of security questions and nonsense answers to them. The effect would be more or less like that of a hardware dongle of the sort some companies issue to employees to access intracompany servers. Somewhere back at HQ there's a device that knows the passcode my device just transmitted is the only one that will open certain data doors right now, but not 45 seconds from now...
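That "right now, but not 45 seconds from now" behavior is essentially a time-based one-time password: both the dongle and HQ hold the same secret and derive a short code from the current time window, so the code expires on its own. Here's a minimal sketch of the idea along the lines of the standard TOTP scheme (RFC 6238); the secret and time values are illustrative, not from any real system.

```python
import hmac
import struct
import time


def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Derive a one-time code from a shared secret and a time window."""
    counter = int(at // step)                # which 30-second window we're in
    msg = struct.pack(">Q", counter)         # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F               # dynamic truncation (RFC 4226 style)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


secret = b"shared-with-HQ"                   # illustrative shared secret
now = time.time()
print(totp(secret, now))                     # valid in this window
print(totp(secret, now + 45))                # 45 seconds on: a different window
```

Since both sides compute the code independently from the clock and the shared secret, nothing reusable ever crosses the wire: a sniffed code is worthless once its window closes.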
It's not so difficult to secure accounts without inconveniencing legit users; the harder part is getting corporations to sign up for security practices that cost money or slow down the conduct of business. The CEO likes to know there's a back door for when his command boils down to "I don't care, just do it."
The use of permission barriers, including things like reCAPTCHA, is sometimes just a sop to the person in the C-suite who's charged with corporate security. All these CEOs have moments of ultimate authoritarianism in dealing with some business problem, yet the company can't afford to look like it does nothing to secure its data, including users' login credentials.
"Security" itself is pretty ephemeral in the age of good online lock builders and better locksmiths. So... "fake security" is the next best thing, and that's being able to say to auditors (and Senate committees) well ok we got hacked but look, we do have a care for our data, we do this and this and this... and have the receipts to prove we spent the money on "real security".