Agreed, they could have done this much more gracefully. Same as the Reddit API. Average user? Who cares. Sending millions of requests? Okay, we’re going to clamp down pretty hard on you.
Comment on Exciting news! The free API you were using is no more free!
ricecake@sh.itjust.works 11 months ago
If they’re storing them in something like Amazon S3, there is a cost (extremely low, but not free) associated with retrieving data, regardless of size.
Even if they were an entirely free service, it’d make sense to put hard rate limits on unauthenticated users and more generous rate limits on authenticated ones.
Leaving out rate limits is a good way to discover that you have users who will use your API real dumb.
Their pricing model seems fucked, but that’s aside from the rate limits.
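By rate limits I just mean something like a per-client token bucket, stricter for anonymous callers than for ones with an API key. Rough sketch below; the rates, burst sizes, and the allow_request helper are all invented for illustration, not anything this API actually does:

```python
# Toy per-client token bucket; the numbers here are made up.
import time


class TokenBucket:
    """Refill at `rate` tokens per second, up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start full so a new client can burst
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


buckets: dict[str, TokenBucket] = {}


def allow_request(client_key: str, authenticated: bool) -> bool:
    # Anonymous callers get a tight limit; API-key callers get a generous one.
    if client_key not in buckets:
        buckets[client_key] = (TokenBucket(rate=50.0, capacity=100)
                               if authenticated
                               else TokenBucket(rate=1.0, capacity=5))
    return buckets[client_key].allow()


if __name__ == "__main__":
    # Sixth rapid anonymous request is rejected (burst of 5, 1 req/sec refill).
    results = [allow_request("203.0.113.7", authenticated=False) for _ in range(6)]
    print(results)  # [True, True, True, True, True, False]
```

A real deployment would put this in the API gateway or reverse proxy rather than in-process memory, but the shape is the same.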
scrubbles@poptalk.scrubbles.tech 11 months ago
tgxn@lemmy.tgxn.net 11 months ago
Yeah, this is absolutely not an insignificant fee, especially if they have millions of requests… There are plenty of caching solutions to save on this, though, especially since the data wouldn’t change often.
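Even a dumb read-through cache with a TTL would soak up most of the repeat hits. Rough sketch, where fetch_from_s3 is just a stand-in for whatever billed GET they actually do and the one-hour TTL is an arbitrary pick:

```python
# Toy read-through cache in front of per-request-billed object storage.
import time

TTL_SECONDS = 3600
_cache: dict[str, tuple[float, bytes]] = {}


def fetch_from_s3(key: str) -> bytes:
    # Stand-in for the real object-storage GET (the part billed per request).
    return f"object body for {key}".encode()


def get_object(key: str) -> bytes:
    now = time.time()
    hit = _cache.get(key)
    if hit and hit[0] > now:
        return hit[1]                     # cache hit: no billed request
    body = fetch_from_s3(key)             # cache miss: one billed request
    _cache[key] = (now + TTL_SECONDS, body)
    return body


if __name__ == "__main__":
    get_object("stories/2024/01/index.json")   # miss, goes to storage
    get_object("stories/2024/01/index.json")   # hit, served from memory
```

Anything along those lines (or just a CDN in front) means most requests never reach the billed storage layer.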
ricecake@sh.itjust.works 11 months ago
Oh, I’m pretty sure it’s close to trivial. $0.0004 per thousand requests is $400 per billion, or $0.40 per million.
That’s about as close to insignificant as you can get while still being worth paying attention to. Caching solutions are probably going to end up costing you more in the long run: an HA setup that can handle a billion requests a year is going to cost you at least $100 a month, and still provide less availability than S3.
You don’t want unmetered access, but their pricing is unlikely to be based on access rates, and more likely on salary costs and other infrastructure costs, like indexing and search.
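For reference, the napkin math behind those per-request numbers, using the $0.0004-per-thousand figure quoted above:

```python
# Napkin math for S3-class GET pricing (assumed $0.0004 per 1,000 requests).
PRICE_PER_THOUSAND = 0.0004


def request_cost(requests: int) -> float:
    return requests / 1_000 * PRICE_PER_THOUSAND


print(f"${request_cost(1_000_000):,.2f} per million requests")      # $0.40 per million requests
print(f"${request_cost(1_000_000_000):,.2f} per billion requests")  # $400.00 per billion requests
```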