POW Captcha: a lightweight, self-hosted proof-of-work captcha (sequentialread.com)
116 points by wchar_t on Sept 10, 2021 | 108 comments


Reminds me of Adam Back's hashcash[1], which was originally devised for similar purposes and was cited in Satoshi's Bitcoin paper[2]. Bitcoin's PoW scheme is a slightly embellished version of hashcash. I wish this work cited it too.

[1]: http://www.hashcash.org/papers/hashcash.pdf

[2]: https://bitcoin.org/bitcoin.pdf


This is using the hashcash PoW. The use of scrypt as the underlying hash function is a rather poor choice though, as scrypt's memory hardness makes PoW verification unnecessarily expensive. To limit the damage, a rather small memory footprint is used for scrypt.
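For concreteness, here is a minimal hashcash-style solve/verify loop over scrypt in Python. The parameters are purely illustrative, not the project's actual settings; the sketch just makes the asymmetry complaint visible: verification costs the server one full memory-hard scrypt call, the same as a single client attempt.

```python
import hashlib
import os

# Illustrative parameters only -- not the project's actual settings.
N, R, P = 1024, 8, 1        # small scrypt memory footprint (~1 MB)
DIFFICULTY_BITS = 8         # require 8 leading zero bits (~256 attempts)

def scrypt_hash(challenge: bytes, nonce: int) -> bytes:
    return hashlib.scrypt(nonce.to_bytes(8, "big"), salt=challenge,
                          n=N, r=R, p=P, dklen=32)

def leading_bits_zero(digest: bytes) -> bool:
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

def solve(challenge: bytes) -> int:
    """Client side: grind nonces until the scrypt hash meets the difficulty."""
    nonce = 0
    while not leading_bits_zero(scrypt_hash(challenge, nonce)):
        nonce += 1
    return nonce

def verify(challenge: bytes, nonce: int) -> bool:
    """Server side: a single scrypt call -- cheap relative to solving,
    but still a full memory-hard hash, which is the objection above."""
    return leading_bits_zero(scrypt_hash(challenge, nonce))

challenge = os.urandom(16)
assert verify(challenge, solve(challenge))
```

With an asymmetric scheme like Cuckoo Cycle or Equihash, `verify` would instead be a nearly free structural check, which is the alternative the rest of this comment argues for.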

It's perfectly possible to make a memory-hard PoW that's instantly verifiable, by using something other than hashcash. Examples include Cuckoo Cycle [1] and Equihash [2]. These can easily be made to use hundreds of MB in solving, while verification is memoryless.

[1] https://github.com/tromp/cuckoo

[2] https://en.wikipedia.org/wiki/Equihash


I would prefer to see 2 options in browsers:

1. LSAT[1] support for micropayments (recently mentioned on HN[2])

2. RandomX[3], mining XMR for the site owner

Both provide something useful, replacing advertising and/or subscriptions for the site owner, rather than solely wasting energy. Let's eliminate captchas and advertising together.

[1]: https://lsat.tech/

[2]: https://news.ycombinator.com/item?id=28459713

[3]: https://xmrig.com/docs/miner


LSAT looks really cool! Hopefully it can significantly displace ads as a revenue source. I’m happy to pay 1 cent to read a recommended blog post or something, and I’m not the sort of person who would pay for an online news outlet subscription.

Any mining-based payment will inherently be worse and less efficient than a money-based payment, especially for mobile.


I would love to have the option of 1 instead of solving a captcha. Charge me 0.1 - 10 cents worth of bitcoin depending on the action, and I'd happily pay.


There is a bot that does micropayments on Telegram via LN.

Excellent for preventing spam: https://twitter.com/lntxbot


If in the future all bitcoin-like currency transactions have to be reported for tax purposes, and there's all these micropayments, wouldn't that effectively make your browser history part of a tax audit?


Mining also fits the definition of "wasting energy"...


POW: Proof Of Waste


Correct me if I'm wrong, but wouldn't this keep the endpoint accessible for any bot/script that is willing to "invest the work"? E.g. if I only plan to query the endpoint a few times per day, the captcha won't be an obstacle.

I mean, if that's an intentional exception for personal scripts, that's awesome, but it doesn't really seem to serve the expectations of a CAPTCHA then.

Also, while I like the idea, I fear this could stop working in the long term.

With cryptocurrencies, PoW works because the "good guys" (miners) and the "bad guys" (double spenders) have equal access to computing power: If the difficulty increases, both can simply add more mining hardware and stay in the game. If the "bad guys" threaten to get an advantage, the system can always increase the difficulty without risking to lock out the "good guys".

With CAPTCHA, the situation is different: Here, the "bad guys" (spammers) still have as much computing power available as they can buy and stuff in their data center. However, the "good guys" (regular users) have a hard constraint: They have to use whatever hardware the browser runs on (which might just be a smartphone), and they can't spend more than a few minutes to solve the puzzle - otherwise, the user will probably grow impatient and give up.

This means you can't easily increase the difficulty of the puzzle without locking out regular users. If the captcha grows popular, there can easily be a situation where you'd make the captcha unsolvable for all regular users long before it would become unsolvable for spammers.


>With cryptocurrencies, PoW works because the "good guys" (miners) and the "bad guys" (double spenders) have equal access to computing power: If the difficulty increases, both can simply add more mining hardware and stay in the game. If the "bad guys" threaten to get an advantage, the system can always increase the difficulty without risking to lock out the "good guys".

No, Bitcoin PoW works because of economics and game theory: if a group of people has invested a lot of resources into mining and building consensus, it is rational for them to stick to that consensus in order to preserve their wealth.

Read what Satoshi said in the Incentive section of the Bitcoin Whitepaper: "If a greedy attacker is able to assemble more CPU power than all the honest nodes, he would have to choose between using it to defraud people by stealing back his payments, or using it to generate new coins. He ought to find it more profitable to play by the rules, such rules that favour him with more new coins than everyone else combined, than to undermine the system and the validity of his own wealth."


That's pure dogma, nothing more. Unfortunately, humans aren't rational in the homo oeconomicus sense. There are all kinds of motivations a person or an organisation could have to manipulate the network that have nothing to do with making money.

I mean, even in the regular, non-crypto economy there are lots of products that are sold at a loss for strategic reasons.

But you're right, that explanation wasn't quite correct. My point was that cryptocurrencies rely on the assumption that all legitimate users taken together have more computing power than a typical malicious user. That assumption is supported by game theory and made use of with dynamic difficulty adjustment.

However for a PoW captcha, the assumption does not hold: The captcha has to be solvable by WASM on a smartphone, otherwise it would lock out legitimate users. And that is a pretty low bar for an attacker to meet, computation-wise.


Unless the attacker's whole purpose is to destroy the value of Bitcoin because of some other non-economic reason, e.g. ideological reasons, national security, state self-preservation, ...


> Correct me if I'm wrong, but wouldn't this keep the endpoint accessible for any bot/script that is willing to "invest the work"? E.g. if I only plan to query the endpoint a few times per day, the captcha won't be an obstacle.

Isn't this already the case with other captchas where you can pay people to solve them for you? You could easily build a programmatic solution for that. If you're willing to "invest the work", nothing can really stop people automatically.


If nothing can really stop people anyway, then what's the point of all this?

But I think the problem is that a PoW captcha can be cracked significantly more easily than a regular captcha:

For a regular captcha, a spammer would have to deal with brittle image analysis software or find people willing to do extremely boring, borderline illegal work for pennies.

For a PoW captcha, they have to load the page in Selenium and... that's it. All that's left is a slight bump in the power bill.


> If nothing can really stop people anyway, then what's the point of all this?

It works; the thing is that "working" in this case means increasing the cost of doing this. Solving 1000 captchas costs $2: https://anti-captcha.com/.

The difference here is that people who have no money can automate Selenium on their own computer and defeat a PoW captcha. But for people who have to pay for either servers or a captcha-solving service, there is no difference.


> If nothing can really stop people anyway, then what's the point of all this?

Barriers to entry. Different services will require different levels of security. This might be enough for a simple poll app.


But a barrier for whom?

If you're a script kiddie who knows how Selenium works, you can crack this.

What this does for resource-poor attackers is implement some wasteful form of rate-limiting. But then, why not just use actual rate-limiting?


surely if you "query the endpoint a few times per day" the captcha indeed shouldn't be an obstacle to that. I thought the idea is to prevent ddos and spam, and not to distinguish between an actual human and bot. like it shouldn't prevent non-harmful scripts like web scraping etc. as long as you don't overtax the servers.


No, the purpose of captchas has always been to distinguish bot users, no matter what kind of bot. If you don't mind bots but just want to protect against overuse, you can simply set up rate-limiting and be done.


I could be wrong, but wouldn't this system "front-load" rather than "back-load" the rate limiting? That is to say: if you rate limit requests globally, it affects all users if one user attempts to spam. On the other hand, this system slows each individual user down without affecting others directly.


Good point. I'm no expert, but I believe rate limiting systems typically use buckets that are partitioned by IP address or network segment or something similar. If there were just a single global bucket, an attacker could exploit the rate-limiter itself to cause a DoS, which is clearly not what the site wants.

But yeah, this seems like a way to achieve something similar with potentially less complexity. (If you're willing to tolerate the wastefulness, which is still not cool in the age of climate change.)

One challenge would be correctly invalidating nonces. You don't want an attacker to reuse a previously solved puzzle for multiple requests; on the other hand, it's difficult to set a good "time to live" for a nonce, as you can't know in advance how much time a user will need to solve it.

So I guess some global state to track recently "spent" nonces would be necessary.
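A sketch of that global spent-nonce state (Python; an in-memory dict stands in for whatever shared store, e.g. Redis, a real deployment would use). Note the TTL here bounds the challenge's validity window, not the solve time, which as noted can't be known in advance:

```python
import time

class SpentChallengeStore:
    """Minimal sketch: remember recently redeemed challenges so a solved
    puzzle can't be replayed. Names and TTL are illustrative."""
    def __init__(self, ttl_seconds: float = 3600):
        self.ttl = ttl_seconds
        self.spent = {}  # challenge -> expiry timestamp

    def redeem(self, challenge: str) -> bool:
        now = time.monotonic()
        # prune expired entries so the store doesn't grow without bound
        self.spent = {c: exp for c, exp in self.spent.items() if exp > now}
        if challenge in self.spent:
            return False          # replay attempt: reject
        self.spent[challenge] = now + self.ttl
        return True

store = SpentChallengeStore(ttl_seconds=60)
assert store.redeem("abc123") is True    # first redemption succeeds
assert store.redeem("abc123") is False   # replay is rejected
```

Entries only need to live as long as the challenge itself is accepted server-side, so the store stays small if challenges expire.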


So now instead of annoying users with image or audio challenges, websites can annoy users by running up their electricity bills (CPU work ain't cheap) and/or denying them access if they [selectively] disable JavaScript and/or block web workers in their browser.


I think it's the lesser evil in terms of privacy and self-hosting if you need anti-spam protection for something. In today's world spam/bot protection is becoming more and more necessary, and even with JavaScript this is less intrusive than a service from Google or Cloudflare or something.

Suboptimal and an inefficient use of resources, yes, but possibly the only way to combat bots without privacy intrusive services. I'm open to hearing alternative ideas, though!


I agree.

Bots will trend towards resembling real users exactly.

All you can really do is make it expensive for a bot to spam requests. Everything else will be identical to humans one day, and in the meantime it's annoying to block legit Tor users or legit scraper bots.


I actually much prefer this. It's only a small amount of CPU, negligible on your electricity bill, and it doesn't involve clicking every traffic light.


This is practically useless, since desktop computers doing some work can be easily eclipsed by specialized hardware doing it for spammers and sybil attackers.


It fails to be an automated test to tell computers and humans apart, as computers are more than capable of solving proofs of work without human intervention.


This is true but so are the existing captchas. Existing captchas are just harvesting training data for Google's self driving vehicles at this point.

With a PoW captcha, it doesn't matter how smart you make your algorithm, it's still going to be slow. With existing systems I'd argue it's probably a lot slower for people than for machines, especially since it's people guessing what a machine thinks people would classify an image as.

This is an easy solution for rate limiting low trust/high risk connections and better software isn't going to magically make it any faster. This has always been what captchas aim to accomplish.


Because we're accepting failure on that front. Instead, it's providing rate-limiting.

Hell, charge me one penny per refresh and a dime per tweet and login attempt. Then let the bots run freely if they're willing to pay that rate.


I'd be curious how well this performs on an older mobile device vs new CPU.

Seems like this might exclude users with lower-end electronics, who might be low-income.


Hi, developer here. There is a table showing hashes per second on various devices at the bottom of the readme. My laptop (Thinkpad T480s) = 70 h/s, my phone (Motorola G7) = 12 h/s. It's not so bad on the phone. The site owner can tweak the difficulty for whatever lowest common denominator they want.


If it automatically scales based on current traffic, that might not matter.

You can have it turn itself off during a normal "1 request per minute" day on a small blog and then crank up to "A new CPU needs 2 seconds" during a DDOS.

Use token bucket or leaky bucket or whatever so a few normal users clicking around for 10 minutes won't trigger it, but after a while the server runs out of patience if they keep making requests faster.
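The token-bucket idea above might look something like this (Python sketch; the capacity and refill rate are made-up numbers, and "require PoW" stands in for whatever escalation the server does):

```python
import time

class TokenBucket:
    """Per-client token bucket: a few normal requests pass freely; a
    sustained burst drains the bucket, at which point the server can
    demand a PoW solve or raise the difficulty."""
    def __init__(self, capacity: float = 10, refill_per_sec: float = 0.5):
        self.capacity = capacity
        self.refill = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False    # bucket empty: time to require proof of work

bucket = TokenBucket(capacity=3, refill_per_sec=0)
assert [bucket.allow() for _ in range(4)] == [True, True, True, False]
```

One such bucket per IP or network prefix gives the partitioning mentioned upthread, so one abuser draining their bucket doesn't affect anyone else.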


I’m not sure what the point really is; they pay pennies to people in Bali to sit around and solve these. Anyone who really wants to get in is going to get in. At best it keeps honest people honest.

Check out this guy on YouTube, he can pretty much open any lock in thirty seconds without causing any physical damage, will change your whole perspective on security.

https://youtube.com/c/lockpickinglawyer

It’s better to plan for people getting in than to depend on preventing it.


The point is to raise the price of the attack.

If someone wants to make 10,000 accounts, I'd rather it cost them 5 cents per captcha solve, $500, than for it to be free.

Some attackers can make it pay off, but many can't, so they don't try. That makes my life easier, as I'm the one being paged during an attack.


But that's why this proof of work scheme doesn't make sense.

I assume the attacker doesn't need the accounts immediately. I also assume that a real user will wait at most 10 seconds when creating an account on their old underpowered phone.

So the attacker could either wait 27 hours (10 × 10,000 seconds) to do the attack, which for most attacks won't matter much, or they could use some high-powered AWS instance that's 100x as powerful as the phone and wait a few minutes (AWS pricing isn't that bad if you just need 5 minutes of compute time).

Yes, it increases "costs", but not by very much and not in a way that scales.
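The arithmetic checks out back-of-envelope (all inputs here are the assumptions above, not measurements):

```python
# Assumed: 10 s of phone-grade work per account, 10,000 accounts wanted.
seconds_per_solve_phone = 10
accounts = 10_000

serial_hours = accounts * seconds_per_solve_phone / 3600
print(f"{serial_hours:.1f} h serial on one phone")      # ~27.8 h

# Hypothetical cloud machine 100x faster than the phone:
speedup = 100
parallel_minutes = accounts * seconds_per_solve_phone / speedup / 60
print(f"{parallel_minutes:.1f} min on a 100x machine")  # ~16.7 min
```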


PoW should probably be built into the browser as a standard at some point if it is going to be in widespread use. If a website is trying to stop bots, the bots are at an advantage if they can compute the PoW using optimized C while legitimate customers are computing it in JavaScript.


Webasm will help with this. If the browser's JIT is good enough, it'll be close to optimized C.

Then you just need to make sure your algorithm is also space-hard and resists parallelization so GPUs and ASICs can't get it.

Basically it's a password hash, like Argon2. I think libsodium already has an official WebAsm build, so there you go.

Web browsers also have "crypto.subtle" but it's not allowed on file:// (making testing on local difficult) and I don't know if it has password hashing.


There is no way to prevent people from optimizing PoW for spamming.

Generating 1MM units of PoW centrally will always be more efficient than 1MM people each generating 1 unit of PoW.

Optimization always works better at scale. Therefore an attacker always has the upper hand.

PoW is absolutely useless as a CAPTCHA and doesn't even do what C.A.P.T.C.H.A. says.


exactly. just ask for sats via LN.


We can already stream money with https://webmonetization.org/docs/explainer/ - it doesn't matter if the underlying "wallet" is a blockchain or some other ledger-like system.


so a 402, but in reverse? the user-agent gets paid, instead of server?


It's an HTML meta tag that contains an address to send/stream money to, similar to an email address but for value rather than text. The website's backend of course receives data about that payment in real time and can change the content of the website based on it.


And then you're either a licensed/regulated business, or a money launderer.


That last part (about denying access) at least can be fixed: if scripts are disabled (or the needed features are unimplemented in the browser), or if scripts are enabled but an error occurs when trying to activate the web workers or whatever other features it uses, then it can display a link to the documentation and let you enter the response manually, perhaps copying it from an external program, which can even run on a different computer and, being native code, might be faster than the web page. (If there were some sort of protocol identification attribute, this substitution could even be done automatically.)

It is true that the extra CPU work can still be annoying (and may waste energy), and anyone who has disabled scripts/workers and won't or can't use the manual fallback will still be denied access.


As someone who’s failed to identify trains on Craigslist 11 times in a row, this might be useful to me.

Turns out when you zoom a picture in far enough, large bus windows, train windows, and building windows all look very similar.


I wish I could be given the option to just pay instead of solving a captcha. At the end of the day that's what bots end up doing (pay a human to solve the captcha for them), so why not just cut out the middleman, and let me pay the website.


Payment forms need captchas too, otherwise they'll get millions of transactions from card testers making the costs unsustainable.


Cryptocurrencies would be a good solution for this. Specifically, a layer 2 network on top of a cryptocurrency, like Bitcoin's Lightning Network or payment channels on Ethereum, both of which allow for subcent transactions with subcent fees.

There are obviously UX challenges to making it easy to acquire the crypto, but I could imagine this starting as an optional alternative to captchas.


>which allow for subcent transactions with subcent fees.

Yeah, micropayments were Satoshi's vision. For example, you could pay something like 1/100 of a cent to unlock and bypass a captcha puzzle.


And draining your phone battery.


CAPTCHA are meant to exclude computers. PoW does not do this at all. This is completely missing the point.

An attacker can easily and cheaply generate far more PoW than a legitimate user by optimizing their system.

This is just an "unskippable" delay timer not a CAPTCHA!


That was never the true goal. The goal was to prevent spam, brute force attacks and similar. This approach can work by making spam cost more. The user is not as affected by the cost (they probably have a mostly unused computer sitting there anyways) but it may be enough to stop attackers. (At least some that buy their own hardware.)


>The goal was to prevent spam

It does not. It's broken the day someone who can code wants to break it.

>The user is not as affected by the cost...

It's exactly the opposite: it affects the user, not an attacker who can generate millions of PoW units on a toaster for a few bucks, or even use another system's idle time. No human needed == it's super cheap. Unlike real CAPTCHAs, where you need to pay real people to solve them.


In short: the scrypt hash function was designed for this. With SHA256, the "toaster" you were referring to is called an ASIC; you can buy one for $200 that plugs into a USB port, and it would hash faster than 2 million CPUs.

However, that's not possible with scrypt, especially with the relatively large memory cost and block size parameters that this software uses. Even GPUs choke on scrypt at these levels. See: https://www.mobsec.ruhr-uni-bochum.de/media/mobsec/arbeiten/...


You miss the point. It does not matter what kind of technical implementation or algorithm is used. If average user hardware can solve the "captcha" in a reasonable time, then an attacker with optimized hardware at scale can always solve millions of these "captchas" relatively cheaply. If you increase the hardware demand to slow down an attacker, you just exclude more and more legitimate people. Sure, the attacker can maybe only spam 500k messages instead of 1MM in the same time, but you've also locked out half the legitimate users.

Even in the absolute worst case, where no optimization is possible at all, the attacker can still run a device 24/7. So if a normal user has to wait 20 seconds on a smartphone, an attacker can spam at least 4320 messages per day with the same device. And it scales at least linearly: two such devices double the spam capacity. And if the difficulty is increased to slow the attacker down, it slows the real user down by exactly as much. But the real user actually cares and gets annoyed; the attacker doesn't, and keeps the same spam/legit message ratio.


If that were true, then why doesn't that person sit down, use their coding talents to optimize Bitcoin mining, and profit?


Not sure if that is a joke, but Bitcoin mining is already highly optimized and done at scale.

Beating the average smartphone's in-browser hash power does absolutely nothing. You aren't competing against them; you're competing against large-scale mining farms with special hardware.


Captcha is proof of human work.

Because of improved interfaces to exploit the poor, a spammer can already pay to have humans solve captchas using an API, just as well as they could pay for computing time to solve hashes.

As such, if you were to tune the difficulty of a computing proof of work to be more expensive to compute than to pay the lowest bidding human farm to solve a captcha, it should be better at decreasing spam.


But humans are expensive and don't scale well, even when exploiting poor people.

If you raised the PoW cost above that, the average user would simply be unable to solve it in a reasonable time on their hardware.



Not sure what your point or argument is; no one questioned the existence of human captcha solvers.


I don't get this. People can still create 1000s of fake users on my website just by using CPU time?


It's effectively a rate limiter: where the Bad Person/People could make 10,000 users per unit of time before, now they may only make 100. It won't fix the problem entirely, but it's better than nothing.


But people can still run several sessions of this.

So if I have 100 cores available, I can run 100 sessions in parallel.


See my other comment about auto-scaling. Adjusting difficulty for PoW is trivial. Have the server crank up the challenge if it's getting more traffic than normal.


Or base the challenge difficulty based on other parameters. For example if your IP has had a lot of failed login attempts recently the difficulty can be increased.
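That escalation policy could be as simple as this sketch (Python; the function name, thresholds, and cap are arbitrary illustrations, not anything from the project):

```python
def difficulty_bits(base_bits: int, recent_failures: int,
                    max_bits: int = 24) -> int:
    """Hypothetical policy: one extra leading-zero bit (i.e. double the
    expected work) for every few recent failed logins from the same
    IP/prefix, capped so the puzzle stays solvable for real users."""
    return min(max_bits, base_bits + recent_failures // 3)

assert difficulty_bits(12, 0) == 12      # normal user: baseline work
assert difficulty_bits(12, 9) == 15      # 9 failures: 8x expected work
assert difficulty_bits(12, 10_000) == 24 # heavy abuse: capped
```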


Right, but they still will be slower than if there was no protection at all. 100 slower cores vs 100 cores churning out requests ASAP.

For a determined or resourceful attacker, this alone won't be good enough defense, but I can see it being a layer of defense in depth.


I don't get this. People can still create 1000s of fake users on my website just by _buying captcha solves_?

Type of spent resource is rather irrelevant, isn't it?


yes, but they can't create 1000s of fake users on EVERY website, unless they want to shell out millions of dollars per year for the compute power required.


It puts a price on doing so, a price which you could increase based on demand.


The issue with browser based PoW is that browsers are still fairly slow execution environments.

Any waiting period for calculation that won't annoy users is not long enough for an attacker to not still be able to spam, given that they will be solving them 2-100x faster with an optimized native implementation vs in a browser.

It also doesn't work as a turing test, because by their nature computers are good at batch solving proofs of work.

I once started an anonymous email service with browser-based PoW for antispam. It didn't work.

You'd need users to do like, several hours of in-browser PoW to make it viable as an anti-abuse measure. Anything less means a bot farm is posting spam dozens of times per hour.

Frictionless micropayments are still a pipe dream today, as any useful technology for doing so has basically been outlawed in the USA without a multimillion-dollar license, a KYC department, etc. It's a real shame, because we have all of the technology for cash-based anti-abuse bonds and the like. It's just illegal to deploy unless you go full MSB.


According to the README, the implementation is already multi-threaded and uses WASM, which is not too far off from native performance.


How many browsers can run at this speed? How far off is "not too far" - 20-50%?

Spammers aren't sitting there at an interactive session, waiting to create an account while staring at a spinner.


Don't move the goalposts.

Native code is not "2-100x faster" than WebAssembly. That's what I wanted to address.


If it's 50% slower, then native code is 2x faster.

How much slower is it?


Msb?


Money services business, a heavily regulated industry in the USA due to the USG’s insistence upon total identity-linked financial surveillance for all end users of all financial service providers in the country.

Not only is it a total privacy invasion, all the burden is borne by the service providers for implementing the government’s universal financial surveillance.


Maybe not technically a CAPTCHA if it can't "Tell Computers and Humans Apart".


Why do we need work? Since no valuable work product is being made, proof of work is really just a proxy for proof of elapsed time.

The animated demo shows this perfectly. The bar which is showing the progress in the proof of work could just be a simple timer, and it would look exactly the same.

The back end generates the page, and makes a note of the current time. Then it doesn't accept the submission until N seconds have passed since that time. The animated bar on the front end is just for show; the browser isn't what is enforcing it.

Proof of elapsed time requires nothing from the other party. If I want proof that you spent at least 30 seconds waiting from the moment I gave you some starting signal, the only evidence I need to trust are the readings of my own stopwatch.


Captchas are there to prevent bots which are making thousands or millions of submissions. With a timer the bot just needs to make all of the captcha requests first and then wait 30 seconds for all of the timers to expire in parallel. But with proof of work the spammer actually needs to compute all of the work for every submission, which would require a significant amount of computational power, rendering some types of bots uneconomical to run.


Excellent point! It's not hard for a single attacking context to have the resources to generate vast numbers of requests in parallel. If a botnet is involved, it can use different originating IP addresses.

And so, that's why we need proof of work; thank you for bringing my derailed narrative back on the proper technical track.


You're leaving the implementation of your uncheatable POET as an exercise for the reader, it seems.


> makes a note of the current time

Makes a note where exactly? In a data store? That means I'm allowing an untrusted entity to trigger an action that requires me to store and later query my data store. That's a bad idea. The whole reason to have a captcha at all is to stop bots from overloading your system, and the data store is a major bottleneck in most systems.

Proof of work is stateless. It's fast to verify. If you sign the challenge input before giving it to the user, you can also statelessly verify it's a legit challenge. No data store needed until after verification is complete.

Edit: also what ahsima said! The point of a captcha is to make it more expensive to use a bot net against you. Timers don't do that.
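A minimal sketch of that stateless signed-challenge idea (Python, using HMAC; all names are illustrative). The server stores nothing when issuing a challenge; it only needs state later, to stop an already-redeemed challenge being replayed within its validity window:

```python
import hashlib
import hmac
import os
import time

SERVER_KEY = os.urandom(32)   # secret kept server-side only

def issue_challenge() -> str:
    """Random nonce + issue timestamp, MAC'd so the server can later
    confirm it issued this challenge without storing anything."""
    payload = f"{os.urandom(8).hex()}:{int(time.time())}"
    tag = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{tag}"

def is_valid_challenge(challenge: str, max_age: int = 600) -> bool:
    payload, _, tag = challenge.rpartition(":")
    expected = hmac.new(SERVER_KEY, payload.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False              # not issued by us, or tampered with
    issued_at = int(payload.split(":")[1])
    return time.time() - issued_at <= max_age

c = issue_challenge()
assert is_valid_challenge(c)
# Flipping one character of the tag invalidates the challenge:
assert not is_valid_challenge(c[:-1] + ("0" if c[-1] != "0" else "1"))
```

The PoW solution would be checked against the signed challenge string, and only after both checks pass does the server touch any persistent state.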


Another excellent point! A correct implementation of proof of work means that we can hand off some (relatively) cheaply generated cookie and then maintain no state in relation to that while the answer is being calculated on the other end.

This stateless principle is implemented in TCP SYN cookies for warding off SYN flood attacks, for instance.

Thank you, also.


An 8-byte timestamp is not a lot of state. You could have a million concurrent connections and use only 8 MB for a file descriptor -> timestamp lookup table. The TCP send/receive buffers will exhaust your system memory long before the timestamps do.


You need a datastore anyway to prevent replay attacks.


I like this - the hash function is memory-based rather than CPU-based so it's easier on your CPU while being more costly for attackers to spoof en masse.

Good thinking!


It mentions on the widget itself that it's accessible. That makes sense at a high level, since it doesn't require interaction.

But I'm curious if it might need more work in the 'accessible' area. Like, for example, is the progress bar percentage-done exposed in an accessible way? I don't see anything obvious here: https://git.sequentialread.com/forest/pow-captcha/src/branch... , seems like it just changes width via css styling, but I could be missing it. I'm not sure it presents an easily understandable reason why the submit button is disabled, that you need to wait, etc, either.


Yes, unfortunately I don't know anyone who uses a screen reader personally and I've never spent the time to learn how to use one myself.

So I don't really have a great way to make it accessible to blind users at the moment, but it's only a couple code changes away, while most other "Captcha" solutions might require a redesign before they could be considered accessible.


I think proof of work makes for bad captchas. CPU power is pretty cheap. It's really hard to make it expensive enough to deter bad actors while being cheap enough not to deter real users.


Solving captchas is pretty cheap too and will get increasingly cheaper with tech and AI improvements. ReCaptcha in particular is getting ridiculous to the point where people spend minutes solving it - I'd rather let my device sit idle for 20 seconds mining some coin rather than being part of these absurd picture matching games.


> It uses a multi-threaded WASM (Web Assembly) WebWorker running the Scrypt hash function instead of SHA256. Because of this, it's less susceptible to hash-farming attacks.

That's a problem; captchas need a fallback mechanism for situations when JS is disabled.

(I think that could be arranged; e.g. in the no-JS case, the web application just spits out some token, which the user must copy and paste into some program that does the work, and then passes the answer back into the web application.)


Yeah I would love to find a way to support non-JS browsers. For now I considered it out of scope. I could very easily make a browser extension or companion app for it though!


My suggestion is that if the script fails (for whatever reason, including non-JS browsers), then in addition to doing what they said (spits out some token), should also link to documentation about how to compute the response. If the user has a program to do it, they can use that one. If not, they might be able to write their own (by following the documentation).


Very nice. Wish there was a demo.

This project is also cool: https://git.sequentialread.com/forest/greenhouse

A reverse proxy that lets you split the "public-visible focal point" part of a web server from the "Holds a lot of private data and runs code" part. So the latter can run in someone's living room.


If you wish to see it in action you can click here: https://picopublish.sequentialread.com/files/aniguns.png

It will redirect you to a unique link tied to your IP/User Agent string so if you want to see it again you will have to click the original link again.


> It is impossible to predict how long a given Proof of Work will take to calculate.

This seems like a very significant limitation. Is there a way around this?

My first thought is: what if, instead of one problem 100x as hard, you solved 100 easier problems? That would at least give you a somewhat accurate loading bar, but I'm not sure whether it would actually reduce the variance.
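A quick simulation suggests splitting does help: total attempts become a sum of 100 independent geometric draws, so the mean stays the same while the relative spread shrinks by roughly a factor of √100 (Python; the numbers are illustrative, not taken from the project):

```python
import random
import statistics

random.seed(1)

def attempts(p: float) -> int:
    """Attempts until the first success, success probability p per try."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

p = 1 / 1000                       # one hard puzzle: ~1000 expected attempts
hard = [attempts(p) for _ in range(1000)]
easy = [sum(attempts(100 * p) for _ in range(100)) for _ in range(1000)]

# Same expected total work, much tighter spread for the split version.
print(statistics.mean(hard), statistics.stdev(hard))   # ~1000, ~1000
print(statistics.mean(easy), statistics.stdev(easy))   # ~1000, ~100
```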


It sounds really scary at first, but once you start using it in practice, it's not so bad. As the developer, you simply locate your lowest common denominator device (older cell phone) and test it out a few times, adjusting the difficulty as you go until it generally happens fast enough for the UX to be unaffected. Usually after 10 or so tries you get a good idea for the feel of it.

There are tons of things in nature that are like this, for example how long it takes a spinning quarter to topple over on the table and land heads or tails. It's theoretically possible it could balance perfectly and never fall, but how many times have you seen that IRL???


Solving many lower difficulty problems instead would give you a progress bar that actually shows progress, but the total time to complete would still not be predictable.


It is GPL licensed, which means you can’t integrate it into any non-GPL licensed application (Apache, MIT or commercial).


Sure you can! You just can’t distribute the application without also distributing its entire source code under the GPL license.

It is completely fine to use it to build an application for a client. Such an application cannot become a product without becoming open source, however.


So you need to change your application into a GPL licensed application. That’s what I meant.

GPL is a great license, but for libraries (or „building blocks“) it greatly limits their usage. Not everybody wants to license their application as GPL. Just like not everybody likes bananas. Some prefer peaches or oranges.


That’s fair, but it is a part of the license. To me, it makes sense that collaborative open source projects are licensed under the GPL: it forbids taking fruit from the community garden for free and selling it for profit.


I know, I also use GPL on some on my projects.

But for a library project, GPL is quite an unusual choice. I don't know any commonly used library that has such a restrictive license. Most of them use LGPL, Apache, MIT, BSD or something similar. Otherwise your library is usually doomed from the start.

Ghostscript is one example, and a lot of users ran into legal trouble using it.


The end user experience isn't too terrible, a big improvement over other captchas I've had to use. Though I imagine it might get frustrating for things like logging in, where you might get your password wrong and have to start over. Or maybe it supports caching the fact that you've already proved yourself?


I usually handle remembering that they've solved a captcha outside of the captcha solution itself, e.g. with session state, a cookie, etc.


I can see how this could pay for scalable hosting of several popular websites.



