The vast majority of the modern internet falls into one of those two buckets though, no?
I mostly scrape government data, so the sites are a little 'behind' on that trend, but no. Even JS-heavy sites are almost always pulling from a JSON or GraphQL source under the hood.
At scale, dropping the heavier dependencies and network traffic of a browser is meaningful.
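A minimal sketch of what that looks like in practice: open the browser's network tab, find the XHR/fetch request the page's JS makes, and hit that endpoint directly with requests. The URL and response fields here are hypothetical stand-ins.

```python
import requests

# Hypothetical JSON endpoint discovered via the browser's network tab --
# the source the page's JS is actually pulling from.
API_URL = "https://example.gov/api/v1/records"

session = requests.Session()
session.headers.update({"User-Agent": "research-scraper/1.0"})

resp = session.get(API_URL, params={"page": 1, "per_page": 100}, timeout=30)
resp.raise_for_status()

for record in resp.json()["results"]:  # field names are assumptions
    print(record["id"], record["title"])
```

No browser, no headless Chromium binary, and one small HTTP request per page instead of dozens of asset fetches.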
IF you can use crawlers, definitely do.
They aren't enough for anything that's login-protected or that requires stepping through wizards (e.g. JS-driven flows, downloading files, etc.). A rough sketch of that kind of case follows below.
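For those cases, something like Playwright's sync API can drive the login and download steps. The URL and selectors here are placeholders, not any particular site:

```python
from playwright.sync_api import sync_playwright

# Placeholder URL and selectors -- adjust for the actual site.
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.gov/login")
    page.fill("#username", "user")
    page.fill("#password", "secret")
    page.click("button[type=submit]")

    # Wait for the post-login page, then grab a file behind the wizard.
    page.wait_for_url("**/dashboard")
    with page.expect_download() as dl_info:
        page.click("text=Export CSV")
    dl_info.value.save_as("export.csv")

    browser.close()
```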
If a website isn't behind Cloudflare or built JS-only, it's generally better to skip Playwright. All the major AIs understand BeautifulSoup pretty well, and they're likely to write you a faster, less brittle scraper.
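For comparison, the plain requests + BeautifulSoup version of a scraper is just a few lines (hypothetical URL and selectors, assuming a simple HTML table):

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical listing page; selectors depend on the actual markup.
resp = requests.get("https://example.gov/notices", timeout=30)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
for row in soup.select("table.notices tr"):
    cells = [td.get_text(strip=True) for td in row.find_all("td")]
    if cells:
        print(cells)
```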