I doubt that this is categorically true. In most cases serverless inherently makes the whole architecture more complex, with more moving parts than a classical web application.
> Serverless inherently makes the whole architecture more complex with more moving parts
Why's that? Serverless is just the generic name for CGI-like technologies, and CGI is exactly how classical web applications were typically deployed historically. That only changed when Rails became such a large beast that it was too slow to keep running under CGI; running your application as a server to work around that Rails problem then became the norm across the industry, at least until serverless became cool again.
Making your application the server is what adds the complexity and the moving parts. CGI was so much simpler, albeit with a performance tradeoff.
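To make the comparison concrete, here's a minimal sketch of the two models in Python. The function names and the event shape are illustrative assumptions, not any particular platform's exact API:

```python
#!/usr/bin/env python3
# CGI model: the web server spawns this script once per request.
# Request data arrives via environment variables and stdin; the response
# is written to stdout. There is no long-running application process.
import os
import sys

def cgi_main():
    path = os.environ.get("PATH_INFO", "/")
    body = f"Hello from {path}\n"
    sys.stdout.write("Content-Type: text/plain\r\n")
    sys.stdout.write(f"Content-Length: {len(body)}\r\n\r\n")
    sys.stdout.write(body)

# Serverless model, same idea: the platform invokes a function per request
# and owns the process lifecycle, scaling, and routing. The event fields
# ("path", "statusCode", "body") follow a common API-gateway-style
# convention and are an assumption, not a specific provider's contract.
def serverless_handler(event, context):
    path = event.get("path", "/")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/plain"},
        "body": f"Hello from {path}\n",
    }

if __name__ == "__main__":
    cgi_main()
```

Either way your code is just "one request in, one response out". The extra moving parts show up once your application is the thing that binds the socket, keeps worker pools alive, and has to be restarted on every deploy.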
Perhaps certain implementations make things needlessly complex, but it is not clear why you think serverless must fundamentally be that way.
It depends a lot on where those classical web applications are hosted, how big the infrastructure is that takes care of security, backups, scalability, and failovers, and how many salaries are being paid, including on-call bonuses.
Serverless is not a panacea. And the alternative isn't always "multiple devops salaries", unless the only two options you see are serverless vs. an outrageously over-complicated Kubernetes cluster just to host a website.
There's a huge gap between serverless and full infra management. Also, IMO, serverless still requires engineers just to manage it: your concerns shift, but then you need platform experts.
A smaller team, though, and from a business point of view someone else takes care of the SLAs, which matters in cost-center budgets.
Pay one DevOps engineer 10% more and you'll get more than twice the benefit of two average engineers.
Usually it's a decision between more serverless or more DevOps salaries.