At that point, why should I use serverless at all, if I have to think about the lifetime of the servers running my serverless functions?

Serverless only makes sense if the lifetime doesn't matter to your application, so if you find that you do need to think about that lifetime, then serverless is simply not the right technology for your use case.

Because it is still less management effort than taking full control of the whole infrastructure.

Usually it comes down to a decision between more serverless or more DevOps salaries.

I doubt that this is categorically true. In most cases, serverless inherently makes the whole architecture more complex, with more moving parts, compared to classical web applications.
> Serverless inherently makes the whole architecture more complex with more moving parts

Why's that? Serverless is just the generic name for CGI-like technologies, and CGI is exactly how classical web applications were typically deployed historically. That held until Rails became such a large beast that it was too slow to keep using CGI; running your application as a server to work around that problem in Rails then pushed that model to become the norm across the industry, at least until serverless became cool again.

Making your application the server is what is more complex with more moving parts. CGI was so much simpler, albeit with the performance tradeoff.
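
To make the comparison concrete, here is a minimal sketch of the CGI model being referred to: the web server spawns a fresh process per request, hands it the request via environment variables, and reads the response from stdout, so there is no long-running application server to operate. Python is used purely for illustration.

```python
#!/usr/bin/env python3
# Minimal CGI-style handler (illustrative sketch). The web server starts this
# process once per request, passes request data through environment variables,
# and sends whatever is written to stdout back to the client.
import os
import sys

def main() -> None:
    method = os.environ.get("REQUEST_METHOD", "GET")
    query = os.environ.get("QUERY_STRING", "")

    # A CGI response is just headers, a blank line, then the body.
    sys.stdout.write("Content-Type: text/plain\r\n\r\n")
    sys.stdout.write(f"Handled a {method} request with query string: {query}\n")

if __name__ == "__main__":
    main()
```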

Perhaps certain implementations make things needlessly complex, but it is not clear why you think serverless must fundamentally be that way.

It depends pretty much on where those classical web applications are hosted, how big the infrastructure taking care of security, backups, scalability, and failovers is, and how many salaries are being paid, including on-call bonuses.

Serverless is not a panacea. And the alternative isn't always "multiple DevOps salaries" - unless the only two options you see are serverless vs. an outrageously overcomplicated Kubernetes cluster to host a website.

There's a huge gap between serverless and full infra management. Also, IMO, serverless still requires engineers to manage it: your concerns shift, but then you need platform experts.

A smaller team, though, and from a business point of view others take care of the SLAs, which matters in cost-center budgets.

Pay one DevOps engineer 10% more and you'll get more than twice the benefit of two average engineers.
It can be good for connecting AWS stuff to AWS stuff: "on s3 update, sync the change to dynamo" or something. But even then, you've now got a separate coding, testing, deployment, monitoring, alerting, and debugging pipeline from your main codebase, so is it actually worth it?
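
For reference, that "on s3 update, sync change to dynamo" glue typically boils down to a Lambda handler along these lines; a rough sketch, with the table name and item attributes made up for illustration:

```python
# Hypothetical Lambda for the "on s3 update, sync change to dynamo" glue
# described above. The table name and item attributes are illustrative only.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("file-index")  # hypothetical table name

def handler(event, context):
    # S3 notifications deliver a list of records, each naming a bucket and key.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Mirror the object's location into DynamoDB.
        table.put_item(Item={"pk": f"{bucket}/{key}", "bucket": bucket, "key": key})
```

Even something this small still carries its own deployment, monitoring, and alerting story, which is exactly the overhead being questioned above.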

But no, I'd not put any API services/entrypoints on a lambda, ever. Maybe you could manufacture a scenario where the API gets hit by one huge spike at a random time once per year, and you need to handle the scale immediately, so it's much cheaper to use lambda than to keep EC2 available year-round for the one random event. But even then, you'd have to ensure all the API's dependencies can also scale. If one of those is a different API server, you may as well just put this API onto that server; and if one of them is a database, then the EC2 instance probably isn't going to be a large percentage of the cost anyway.

Actually, I don't even think connecting AWS services to each other is a good reason in most cases. I've seen too many cases where things like this start off as a simple solution, but eventually you get a use case where some s3 updates should not sync to dynamo. Then you've got to figure out a way to thread some "hints" through to the lambda, either as metadata on the s3 blob, or in a redis instance that the lambda can query, etc., and it all gets convoluted. In those kinds of scenarios, it's almost always better just to have the logic that writes to s3 also update dynamo. That way it's all in one place, can be stepped through in a debugger, gets deployed together, etc.
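
The in-process alternative argued for here could look roughly like the sketch below (all names are hypothetical); the "should this sync?" decision becomes a plain argument in the same code path as the s3 write instead of a hint threaded through to a lambda.

```python
# Sketch of keeping the S3 write and the DynamoDB update together in the main
# application instead of in a separate Lambda. All names here are illustrative.
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("file-index")  # hypothetical table

def save_document(bucket: str, key: str, body: bytes, sync_to_index: bool = True) -> None:
    s3.put_object(Bucket=bucket, Key=key, Body=body)

    # The "some s3 updates should not sync" exception is an ordinary flag,
    # visible next to the write and easy to step through in a debugger.
    if sync_to_index:
        table.put_item(Item={"pk": f"{bucket}/{key}", "bucket": bucket, "key": key})
```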

There are probably exceptions, but I can't think of a single case where doing this kind of thing in a lambda didn't cause problems at some point, whereas I can't really think of an instance where putting this kind of logic directly into my main app has caused any regrets.

For a thing that permanently has load, it makes little sense.

It can make sense if you have very uneven load with a few notable spikes, or if you are all in on managed services, where the serverless pieces act as event collectors for other services ("new file in object store" triggers a function to update some index).
