
rightbyte
What is old is new again.

My employer is so conservative and slow that they are forerunning this Local Cloud Edge Our Basement thing by just not doing anything.


radu_floricica
> What is old is new again.

Over the years I tried occasionally to look into cloud, but it never made sense. A lot of complexity and significantly higher cost, for very low performance and a promise of "scalability". You virtually never need scalability so fast that you don't have time to add another server - and at baremetal costs, you're usually about a year ahead of the curve anyways.

hibikir
A nimble enough company doesn't need it, but I've had 6 months of lead time to request one extra server in an in-house data center due to sheer organizational failure. The big selling point of the cloud really was that one didn't have to deal with the division lording over the data center, or have any and all access gated by their priesthood, who knew less unix than the programmers.

I've been in multiple cloud migrations, and it was always solving political problems that were completely self-inflicted. The decision was always reasonable if you looked just at the people in the org having to decide between the internal process and the cloud bill. But I have little doubt that if there was any goal alignment between the people managing the servers and those using them, most of those migrations would not have happened.

mgkimsal
I've been on projects where they're 'on the cloud' to be 'scalable', but I had to estimate my CPU needs up front for a year to get that in the budget, and there wasn't any defined process for "hey, we're growing more than we assumed - we need a second server, or more space, or faster CPUs, etc." Everything that 'cloud' is supposed to allow for - but... that's not budgeted for, so we'll need days of meetings to determine where the money for this 'upgrade' is coming from. And our meetings are interrupted by notices from teams that "things are really slow/broken"...
pdimitar
About sums up my last job. Desperation leads to micromanaging the wrong indicators. The results are rage-inducing. I am glad I got let go by the micromanagers because if not I would have quit come New Year.
AtlasBarfed
Yeah, clouds are such a huge improvement over what was basically an industry standard practice of saying: oh, you want a server? Fill out this 20-page form and we'll get you your server in 6 to 12 months.

But we don't really need one-minute response times from the cloud. So something like Hetzner may be just all right: "we'll get it to you within an hour" is still light years ahead of where we used to be.

And if it simplifies the entire management and cost side, with bare-metal or close-to-bare-metal performance on the provider side, then that is all good.

And this doesn't even address the fact that yeah, AWS has a lot of hidden costs, but a lot of those managed data center outsourcing contracts where you were subjected to those lead times for new servers... really weren't much cheaper than AWS back in the day.

bstsb
in my experience i can rescale Hetzner servers and they'll be ready in a minute or two
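(For the curious, that rescale is a short sequence with the hcloud CLI - a sketch, assuming a configured CLI and a server named app-1; the target type is illustrative:)

    # Stop, switch to a bigger server type, start again.
    # --upgrade-disk also grows the disk; omit it to keep the option
    # of scaling back down later.
    hcloud server poweroff app-1
    hcloud server change-type --upgrade-disk app-1 cx42
    hcloud server poweron app-1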
AtlasBarfed
Yes, sorry, I didn't mean to impugn Hetzner by saying they were an hour delay, just that there could be providers that are cheaper that didn't need to offer AWS-level scaling.

Like a company should be able to offer 1 day service, or heck 1 week with their internal datacenters. Just have a scheduled buffer of machines to power up and adapt the next week/month supply order based on requests.

0cf8612b2e1e
The management overhead has now arrived for requesting new cloud resources, too. Multiple rounds of discussion and TPS reports to spin up new services that could be a one-click deploy.

The bureaucracy will always find a way.

tracker1
The worst is when one of those dysfunctional orgs that does the IT systems administration tries to create its own internal cloud offering instead of using a cloud provider. It's often worse than hosted clouds or bare metal.

But I definitely agree, it's usually a self-inflicted problem and a big gamble to attempt to work around infrastructure teams. I've had similar issues with security teams when their out-of-the-box testing scripts show a fail, and they just don't comprehend that their test itself is invalid for the architecture of your system.

jiggawatts
Running away from internal IT works until they inevitably catch up to the escapees. At $dayjob the time required to spin up a single cloud VM is now measured in years. I’ve seen projects take so long that the cloud vendor started sending deprecation notices half way through for their tech stacks but they forged ahead anyway because it’s “too hard to steer that ship”.

The current “runners” are heading towards SaaS platforms like Salesforce, which is like the cloud but with ten times worse lock in.

bluedino
> At $dayjob the time required to spin up a single cloud VM is now measured in years.

We have a ServiceNow ticket that you can fill out that spins the server up on completion. Kind of an easy way to do it.

jiggawatts
Then you end up with too-large servers all over the place with no rhyme or reason, burning through your opex budget.

Also, what network does the VM land in? With what firewall rules? What software will it be running? Exposed to the Internet? Updated regularly? Backed up? Scanned for malware or vulnerabilities? Etc…

Do you expect every Tom, Dick, and Harry to know the answers to these questions when they “just” want a server?

This is why IT teams invariably have to insert themselves into these processes, because the alternative is an expensive chaos that gets the org hacked by nation states.

The problem is that when interests aren’t forced to align — a failure of senior management — then the IT teams become an untenable overhead instead of a necessary and tolerable one.

The cloud is a technology often misapplied to solve a “people problem”, which is why it won’t ever work when misused in this way.

odie5533
Complexity? I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks and I don't have to worry about OS upgrades and patches. Or a highly available load balancer with infinite scale.
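(For scale, the "few clicks" come out to roughly this much AWS CLI - a sketch; identifiers, instance sizes, and the inline password are illustrative, not production practice:)

    # Multi-AZ Postgres: a standby in another availability zone takes
    # over automatically on failure; backups are retained for 7 days.
    aws rds create-db-instance \
        --db-instance-identifier app-db \
        --engine postgres \
        --db-instance-class db.m6g.large \
        --allocated-storage 100 \
        --multi-az \
        --backup-retention-period 7 \
        --master-username appadmin \
        --master-user-password 'change-me'

    # Managed Redis: a replication group with automatic failover.
    aws elasticache create-replication-group \
        --replication-group-id app-cache \
        --replication-group-description "app cache" \
        --engine redis \
        --cache-node-type cache.t3.medium \
        --num-cache-clusters 2 \
        --automatic-failover-enabled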
codegeek
This is how the cloud companies keep you hooked. I am not against them of course, but the notion that no one can self-host in production because "it is too complex" is something we have been fed over the last 10-15 years. Deploying a production db on a dedicated server is not that hard. It is about the fact that people now think that unless they do cloud, they are amateurs. It is sad.
speleding
I agree that running servers onprem does not need to be hard in general, but I disagree when it comes to doing production databases.

I've done onprem highly available MySQL for years, and getting the whole master/slave thing to go just right during server upgrades was really challenging. On AWS, upgrading MySQL ("Aurora") really is just a few clicks. It can even do blue/green deployment for you, where you temporarily get the whole setup replicated and in sync so you can verify that everything went OK before switching over. Disaster recovery (regular backups off site & the ability to restore quickly) is also hard to get right if you have to do it yourself.

If you are running k8s on prem, the "easy" way is to use a mature operator, taking care of all of that.

https://github.com/percona/percona-xtradb-cluster-operator, https://github.com/mariadb-operator/mariadb-operator, or CNPG for Postgres needs. They all work reasonably well, and cover all the basics (HA, replication, backups, recovery, etc).
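(As a flavor of the operator route, a minimal CNPG sketch - the pinned version and the names are illustrative:)

    # Install the CloudNativePG operator, then declare a 3-instance
    # cluster; the operator wires up streaming replication and failover.
    kubectl apply --server-side -f \
        https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.24/releases/cnpg-1.24.1.yaml

    kubectl apply -f - <<EOF
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: pg-main
    spec:
      instances: 3
      storage:
        size: 20Gi
    EOF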

klooney
It's really hard to do blue/green on prem with giant expensive database servers. Maybe if you're super big and you can amortize them over multiple teams, but most shops aren't and can't. The cloud is great.
cameronh90
Doing stuff on-prem or in a data centre _is_ hard though.

It's easy to look at a one-off deployment of a single server and remark on how much cheaper it is than RDS, and that's fine if that's all you need. But it completely skips past the reality of a real life resilient database server deployment: handling upgrades, disk failures, backups, hot standbys, encryption key management, keeping deployment scripts up to date, hardware support contracts and vendor management, the disaster recovery testing for the multi-site SAN fabric with fibre channel switches and redundant dedicated fibre, etc. Before the cloud, we actually had a staff member who was entirely dedicated to managing the database servers.

Plus as a bonus, not ever having to get up at 2AM and drive down to a data centre because there was a power failure due to a generator not kicking in, and it turns out the data centre hadn't adequately planned for the amount of remote hands techs they'd need in that scenario...

RDS is expensive on paper, but to get the same level of guarantees either yourself or through another provider always seems to end up costing about the same as RDS.

matt-p
I have done all of this also, today I outsource the DB server and do everything else myself, including a local read replica and pg_dump backups as a hail mary.
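(That hail-mary layer can be as small as one cron entry - a sketch; paths, db name, and retention are illustrative:)

    # /etc/cron.d/pg-dump -- nightly logical dump at 03:00, 14-day retention.
    # -Fc is the custom format, so pg_restore can restore tables selectively.
    0 3 * * * postgres pg_dump -Fc appdb > /backups/appdb-$(date +\%F).dump && find /backups -name 'appdb-*.dump' -mtime +14 -delete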

Essentially all that pain of yonder years was storage: it was a F**ing nightmare running HA network storage before the days of SSDs. It was slower than RAID, 5X more expensive than RAID, and generally involved an extreme amount of pain and/or expense (usually both). But these days you only actually need a SAN (or, as we call it today, block storage) when you have data you care about; and again, you only have to care about backups when you have data you care about.

For absolutely all of us, the side effect of moving away from monolithic 'pets' is that we have made the app layer not require any long-term state itself. So today all you need is N x any random thing that might lose data or fail at any moment as your app servers, plus an external DB service (Neon, PlanetScale, RDS), and perhaps S3 for objects.

The database is one of those places where it's justified, I think. Application containers do not need the same level of care, hence they are easy to run yourself.
fridder
I guess that is the kicker right? "same level of guarantees".
AtlasBarfed
I'd much rather deploy cassandra, admittedly a complex but failure resistant database, on internal hardware than on AWS. So much less hassle with forced restarts of retired instances, noisy nonperformant networking and disk I/O, heavy neighbors, black box throttling, etc.

But with Postgres, even with HA, you can't do geographic/multi-DC distribution of data nearly as well as something like Cassandra.

lelanthran
> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks

It's "only a few clicks" after you have spent a significant amount of time learning AWS.

AznHisoka
As a self hosting fan, i cant even fathom how hard it would be to even get started running a Postgres or redis cluster on AWS.

Like, where do I go? Do i search for Postgres? If so where? Does the IP of my cluster change? If so how to make it static? Also can non-aws servers connect to it? No? Then how to open up the firewall and allow it? And what happens if it uses too much resources? Does it shutdown by itself? What if i wanna fine tune a config parameter? Do I ssh into it? Can i edit it in the UI?

Meanwhile, in all that time spent finding out, I could ssh into a server, code and run a simple bash script to download, compile, run. Then another script to replicate. And I can check the logs, change any config parameter, restart etc. No black box to debug if shit hits the fan.
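(And to be fair, those scripts really are short - a rough Debian-flavoured sketch; the version, addresses, and replicator role are illustrative, and pg_hba.conf/firewall still deserve care:)

    # On the primary: install, create a replication role, and allow the
    # standby in pg_hba.conf (host replication replicator 10.0.0.2/32 scram-sha-256).
    apt-get install -y postgresql
    sudo -u postgres psql -c "CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'secret'"

    # On the standby: wipe the freshly-installed cluster and clone the primary.
    # -R writes standby.signal plus the streaming-replication connection settings.
    systemctl stop postgresql
    rm -rf /var/lib/postgresql/16/main
    sudo -u postgres pg_basebackup -h 10.0.0.1 -U replicator \
        -D /var/lib/postgresql/16/main -R -P
    systemctl start postgresql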

nkozyra
Having lived in both worlds, there are services wherein, yeah, host it yourself. But having done DB on-prem/on-metal, dedicated hosting, and cloud, databases are the one thing I'm happy to overpay for.

The things you describe involve a small learning curve, different for each cloud environment, but then you never have to think about it again. You don't have to worry about downtime (if you set it up right), about running a bash script... literally nothing else has to be done.

Am I overpaying for Postgres compared to the alternatives? Hell yeah. Has it paid off? 100%, would never want to go back.

Volundr
> Do i search for Postgres?

Yes. In your AWS console right after logging in. And pretty much all of your other setup and config questions are answered by just filling out the web form right there. No sshing to change the parameters; they are all available right there.

> And what happens if it uses too much resources?

It can't. You've chosen how much resources (CPU/Memory/Disk) to give it. Runaway cloud costs are the bill-by-usage stuff like Redshift, S3, Lambda, etc.

I'm a strong advocate for self (for some value of self) hosting over cloud, but you're making cloud out to be far more difficult than it is.

mschuster91
Actually... for Postgres specifically, it's less than 5 minutes to do so in AWS and you get replication, disaster recovery and basic monitoring all included.

I hated having to deal with PostgreSQL on bare metal.

To answer your questions, should someone else ask these as well and want answers:

> Does the IP of my cluster change? If so how to make it static?

Use the DNS entry that AWS gives you as the "endpoint", done. I think you can pin a stable Elastic IP to RDS as well if you wish to expose your RDS DB to the Internet, although I really have no idea why one would want that given the potential security issues.

> Also can non-aws servers connect to it? No?

You can expose it to the Internet in the creation web UI. I think the default the assistant uses is to open it to 0.0.0.0/0, but the last time I did that was many years ago, so I hope that AWS asks you about what you want these days.

>Then how to open up the firewall and allow it?

If the above does not, create a Security Group, assign the RDS server to that Security Group and create an Ingress rule that either only allows specific CIDRs or a blanket 0.0.0.0/0.
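(In CLI terms - a sketch; the VPC ID, group ID, and CIDR are illustrative:)

    # Create a security group, then allow Postgres (5432) from one CIDR only.
    aws ec2 create-security-group \
        --group-name rds-clients \
        --description "Postgres access" \
        --vpc-id vpc-0abc123

    aws ec2 authorize-security-group-ingress \
        --group-id sg-0def456 \
        --protocol tcp --port 5432 \
        --cidr 203.0.113.0/24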

> And what happens if it uses too much resources? Does it shutdown by itself?

It just gets dog slow if your I/O quota is exhausted, and it goes into an error state when the disk runs full. Expand your disk quota and the RDS database becomes accessible again.

> What if i wanna fine tune a config parameter? Do I ssh into it? Can i edit it in the UI?

No SSH at all, not even for manually unfucking something; for that you need the assistance of AWS support - but in about six years I never had a database FUBAR itself.

As for config parameters, there's a UI for this called "parameter/option groups"; you can set almost all config parameters there, and you can use these as templates for other servers you need as well.
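(That part is scriptable too - a sketch with an illustrative group name and parameter; note RDS expects work_mem in kB:)

    # Parameter groups are the RDS replacement for editing postgresql.conf.
    aws rds create-db-parameter-group \
        --db-parameter-group-name app-pg16 \
        --db-parameter-group-family postgres16 \
        --description "tuned params"

    # 65536 kB = 64MB; ApplyMethod=immediate works because work_mem is dynamic.
    aws rds modify-db-parameter-group \
        --db-parameter-group-name app-pg16 \
        --parameters "ParameterName=work_mem,ParameterValue=65536,ApplyMethod=immediate"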

infecto
This smells like "Dropbox is just rsync". No skin in the game: I think there are pros and cons to each, but a Postgres cluster can be as easy as a couple clicks or an entry in a provisioning script. I don't believe you would be able to architect the same setup with a simple single-server ssh and a simple bash script. Unless you already wrote a bash script that magically provisions the cluster across various machines.
pavel_lishin
> As a self hosting fan, i cant even fathom how hard it would be to even get started running a Postgres or redis cluster on AWS. Like, where do I go? Do i search for Postgres? If so where?

Anything you don't know how to do - or haven't even searched for - either sounds incredibly complex, or incredibly simple.

wahnfrieden
It is not as simple as you describe to set up HA multi-region Postgres

If you don't care about HA, then sure everything becomes easy! Until you have a disaster to recover and realize that maybe you do care about HA. Or until you have an enterprise customer or compliance requirement that needs to understand your DR and continuity plans.

Yugabyte is the closest I've seen to achieving that simplicity with self-hosted multi-region, HA Postgres, and it is still quite a bit more involved than the steps you describe, and definitely more work than paying for their AWS service. (I mention it instead of Aurora because there's no self-host process to compare directly there, as Aurora is proprietary.)

cortesoft
Your comment seems much more in the vein of "I already learned how to do it this way, and I would have to learn something new to do it the other way".

Which is of course true, but it is true for all things. Provisioning a cluster in AWS takes a bit of research and learning, but so did learning how to set it up locally. I think most people who know how to do both will agree it is simpler to learn how to use the AWS version than learning how to self host it.

trenchpilgrim
A fun one in the cloud is "when I upgrade to a new version of Postgres, how long is the downtime and what happens to my indexes?"
mschuster91
For AWS RDS, no big deal. Bare metal or Docker? Oh now THAT is a world of pain.

Seriously, I despise PostgreSQL in particular for how fucking annoying it is to upgrade.
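(For reference, the bare-metal dance in its shortest generic form - a sketch with Debian-style paths and illustrative versions; it assumes the new cluster is already initdb'ed, and Debian/Ubuntu users would more likely reach for pg_upgradecluster:)

    # In-place major-version upgrade with hard links: fast, but one-way.
    systemctl stop postgresql
    sudo -u postgres /usr/lib/postgresql/16/bin/pg_upgrade \
        --old-datadir /var/lib/postgresql/15/main \
        --new-datadir /var/lib/postgresql/16/main \
        --old-bindir  /usr/lib/postgresql/15/bin \
        --new-bindir  /usr/lib/postgresql/16/bin \
        --link
    systemctl start postgresql
    # Then rebuild planner statistics:
    sudo -u postgres vacuumdb --all --analyze-in-stages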

icedchai
Yep. I know folks running their own clusters on AWS EC2 instead of RDS. They're still 3 or 4 versions back, because upgrading Postgres is a PITA.
icedchai
If you can self host postgres, you'll find "managing" RDS to be a walk in the park.
AtlasBarfed
Did you try ChatGPT for step by step directions for an EC2 deployed database? It would be a great litmus test to see if it does proper security and lockdown in the process, and what options it suggests aside from the AWS-managed stuff.

It would be so useful to have an EC2/S3/etc compatible API that maps to a homelab. Again, something that Claude should allegedly be able to vibecode given the breadth of documentation, examples, and discussions on the AWS API.

whstl
If you are talking about RDS and ElastiCache, it's definitely NOT a few clicks if you want it secure and production-ready, according to AWS itself in their docs and training.

And before someone says Lightsail: it is not meant for high availability/infinite scale.

naasking
> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks and I don't have to worry about OS upgrades and patches

Last I checked, stack overflow and all of the stack exchange sites are hosted on a single server. The people who actually need to handle more traffic than that are in the 0.1% category, so I question your implicit assumption that you actually need a Postgres and Redis cluster, or that this represents any kind of typical need.

trenchpilgrim
SO was hosted on a single rack last I checked, not a single box. At the time they had an MS SQL cluster.

Also, databases can easily see a ton of internal traffic. Think internal logistics/operations/analytics. Even a medium size company can have a huge amount of data, such as tracking every item purchased and sold for a retail chain.

naasking
They use multiple servers for redundancy, but they are using only 5-10% capacity per [1], so they say they could run on a single server given these numbers. Seems like they've since moved to the cloud though [2].

[1] https://www.datacenterdynamics.com/en/news/stack-overflow-st...

[2] https://stackoverflow.blog/2025/08/28/moving-the-public-stac...

binary132
If you don’t find AWS complicated you really haven’t used AWS.
trenchpilgrim
If you were personally paying the bill, you'd probably choose self-hosting on cost alone. Deploying a DB with HA and offsite backups is not hard at all.
fun444555
I have done many postgres deploys on bare metal. The IOPS and storage space saved (zfs compression, because psql is meh) are huge. I regularly use hosted DBs, but largely for toy DBs in GBs, not TBs.

Anyway, it is not hard, and controlling upgrades saves so much time. Having a client's db force-upgraded when there is no budget for it sucks.

Anyway, I encourage you to learn/try it when you have opportunity
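(The ZFS side of that is a couple of commands - a sketch with illustrative pool/dataset names:)

    # Dataset tuned for Postgres: smaller records to match page-sized I/O,
    # zstd compression typically cuts on-disk size by half or better.
    zfs create -o recordsize=16K -o compression=zstd -o atime=off tank/pgdata
    zfs get compressratio tank/pgdata    # check the actual savings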

benjiro
> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks

I have never set up AWS Postgres and Redis, but I know it's more than a few clicks. There is simply basic information that you need to link between services, and it does not matter if it's cloud or hardware: you still need to do the same steps, be it from the CLI or a web interface.

And frankly, these days with LLMs, there's no excuse anymore. You can literally ask an LLM to do the steps, explain them to you, and you're off to the races.

> I don't have to worry about OS upgrades and patches

Single command and reboot...
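(On Debian/Ubuntu even that single command can be automated away - a sketch:)

    # Let the OS patch itself on a schedule instead of by hand.
    apt-get install -y unattended-upgrades
    dpkg-reconfigure -plow unattended-upgrades
    # Optionally allow automatic reboots when a kernel update needs one,
    # in /etc/apt/apt.conf.d/50unattended-upgrades:
    #   Unattended-Upgrade::Automatic-Reboot "true";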

> Or a highly available load balancer with infinite scale.

Unless you're Google, overrated...

You can literally rent a load balancer from places like Hetzner for 10 bucks, and if you're old-fashioned, you can even do DNS balancing.

Or you simply rent a server with 10x the performance of what Amazon gives (for the same price or less), and you do not need a load balancer. I mean, for 200 bucks you rent a 48-core, 96-thread server at Hetzner... Who needs a load balancer again... You will do millions of requests on a single machine.

icedchai
For anything "serious", you'll want a load balancer for high availability, even if there's no performance need. What happens when your large server needs an OS upgrade or the power supply melts down?
prmoustache
Well you can have managed resources on premises.

It costs people and automation.

People are usually the biggest cost in any organisation. If you can run all your systems without the sysadmins & netadmins required to keep it all upright (especially at expensive times like weekends or run up to Black Friday/Xmas), you can save yourself a lot more than the extra it'll cost to get a cloud provider to do it all for you.
ecshafer
Every large organization that is all in on cloud I have worked at has several teams doing cloud work exclusively (CICD, Devops, SRE, etc), but every individual team is spending significant amounts of their time doing cloud development on top of that work.
rcxdude
This. There's a lot of talk of "oh, you will spend so much time managing your own hardware", when I've found in practice it's much less time than wrangling the cloud infrastructure. (Especially since the alternatives are usually still a hosting provider, meaning you don't have to physically touch the hardware at all - though frankly even that is often an overblown amount of time. The building/internet/cooling is what costs money, but there's already a wide array of co-location companies set up to provide exactly that.)
epistasis
I think you are very right, and to be specific: IAM roles, connecting security groups, terraform plan/apply cycles, running Atlantis through GitHub - all of that takes tremendous amounts of time and requires understanding a very large set of technologies on top of the basic networking/security/Postgres knowledge.
ecshafer
For a large company that is past the co-location phase, I am not sure where the calculations on running your own data centers come out. But yeah, in my experience, running even a fairly large number of bare metal *nix servers in colocation facilities is really not that time consuming.
chatmasta
I can’t believe this cloud propaganda remains so pervasive. You’re just paying DevOps and “cloud architects” instead.
codegeek
Exactly. It's sad that we have been brainwashed by the cloud propaganda long enough now. Everyone and their mother thinks that to set up anything in production you need cloud, otherwise it is amateurish. Sad.
spatley
Exactly. For the narrowly defined case of running k8s on DigitalOcean with a managed control plane compared to Hetzner bare metal:

AWS and DigitalOcean = $559.36 monthly vs. Hetzner = $132.96 monthly. The cost of an engineer to set up and maintain a bare-metal k8s cluster is going to far exceed the roughly $400 of monthly savings.

If you run things yourself and can invest sweat equity, this makes some sense. But for any company with a payroll this does not math out.

mjr00
Yeah I always just kinda laugh at these comparisons, because it's usually coming from tech people who don't appreciate how much more valuable people's time is than raw opex. It's like saying, you know it's really dumb that we spend $4000 on Macbooks for everyone, we could just make everyone use Linux desktops and save a ton of money.
wredcoll
If "cloud" took zero time, then sure.

It actually takes a lot of time.

mjr00
"It's actually really easy to set up Postgres with high availability and multi-region backups and pump logs to a central log source (which is also self-hosted)" is more or less equivalent to "it's actually really easy to set up Linux and use it as a desktop"

In fact I'd wager a lot more people have used Linux than set up a proper redundant SQL database

grim_io
Honestly, I don't see a big difference between learning the arcane non-standard, non-portable incantations needed to configure and use various forks of standard utilities running on $CLOUD_PROVIDER, and learning to configure and run the actual service, which is portable and completely standard.

Okay, I lied. The latter seems much more useful and sane.

KronisLV
> It's like saying, you know it's really dumb that we spend $4000 on Macbooks for everyone, we could just make everyone use Linux desktops and save a ton of money.

Ohh idk if this is the best comparison, due to just how much nuance bubbles up.

If you have to manage those devices, Windows with Active Directory and especially Group Policy works well. If you just have to use the devices, then it depends on what you do: for some dev work, Linux distros are the best, hands down. Often, Windows will have the largest ecosystem and the widest software support (while also being a bit of a mess). In all of the time I've had my MacBook, I really haven't found what it excels at, aside from great build quality and battery life. It feels like one of those Linux distros that do things differently just for the sake of it: the keyboard layout, the mouse acceleration feeling the most sluggish (Linux distros feel the best, Windows is okay) even if the trackpad is fine, and needing DiscreteScroll and Rectangle and some other stuff to make generic hardware feel okay (or even make multi-display work). Maybe creative software is great there.

It’s the kind of comparison that derails itself in the mind of your average nerd.

But I get the point, the correct tool for the job and all that.

grim_io
What is this?!

You are self-managing expensive dedicated hardware in form of MacBooks, instead of renting Azure Windows VM's?!

Shame!

Don't be silly - the MacBook Pros are just used to RDP to the Azure Windows VMs ;)
Ekaros
Wouldn't you want someone watching over cloud infra at those times too? So maybe slightly fewer people, but you still need some people ready.
Arch-TK
What is more likely to fail? The hardware managed by Hetzner or your product?

I'm not saying that you won't experience hardware failures, I am just saying that you also need to remember that if you want your product to keep working over the weekend then you must have someone ready to fix it over the weekend.

grim_io
Cloud providers and even cloudflare go down regularly. Relax.
Sure - but when AWS goes down, Amazon fixes it, even on the weekends. If you self-host, you need to pay a person to be on call to fix it.
wredcoll
I mean, yes, but also I get "3 nines" uptime by running a website on a box connected to my isp in my house. (it would easily be 4 or 5 nines if I also had a stable power grid...)

There's a lot, a lot of websites where downtime just... doesn't matter. Yes it adds up eventually but if you go to twitter and its down again you just come back later.

icedchai
"3 nines" is around 8 hours of downtime a year. If you can get that without a UPS or generator, you already have a stable power grid.
HPsquared
That's how they can get away with such seemingly high prices.
exe34
except you now have your developers chasing their own tails figuring out how to insert the square peg in the round hole without bankrupting the company. cloud didn't save time, it just replaced the wheels for the hamsters.
icedchai
Right, because cloud providers take care of it all. /s Cloud engineers are more expensive than traditional sysadmins.
Lalabadie
I'm a designer with enough front-end knowledge to lead front-end dev when needed.

To someone like me, especially on solo projects, using infra that effectively isolates me from the concerns (and risks) of lower-level devops absolutely makes sense. But I welcome the choice because of my level of competence.

The trap is scaling an org by using that same shortcut until you're bound to it by built-up complexity or a persistent lack of skill/concern in the team. Then you're never really equipped to reevaluate the decision.

ep103
The benefit of cloud has always been that it allows the company to trade capex for opex. From an engineering perspective it buys scalability at the cost of complexity, but that is a secondary effect compared to the former tradeoff.
PeterStuer
"trade capex for opex"

This has nothing to do with cloud. Businesses have forever turned IT expenses from capex to opex. We called this "operating leases".

et1337
I’ve heard this a lot, but… doesn’t Hetzner do the same?
radiator
Hetzner is also a cloud. You avoid buying hardware, you rent it instead. You can rent either VMs or dedicated servers, but in both cases you own nothing.
f1shy
If everything is properly done, it should be next to trivial to add a server. When I was working on that, we had a written procedure which, when followed strictly, would take less than an hour.
throwaway894345
If you’re just running some CRUD web service, then you could certainly find significantly cheaper hosting in a data center or similar, but also if that’s the case your hosting bill is probably a very small cost either way (relative to other business expenses).

> You virtually never need scalability so fast that you don't have time to add another server

What do you mean by “time to add another server?” Are you thinking about a minute or two to spin up some on-demand server using an API? Or are you talking about multiple business days to physically procure and install another server?

The former is fine, but I don’t know of any provider that gives me bare metal machines with beefy GPUs in a matter of minutes for low cost.

radu_floricica
Weeks. I'm talking about multiple business weeks to spin up a new server. Sure, in a pinch I can do it in a weekend, but adding up all the stakeholders, talking it over and doing things right, it takes weeks. That's a normal timespan for a significant chunk of extra capacity - a modern-day server from Hetzner comes with over 1TB of RAM and around 100 cores. This is also where all the reserve capacity comes from: you actually do have this kind of time to prepare.

Sure, there are scenarios where you need capacity faster and it's not your fault. Can't think of any offhand, but I imagine there are. It's perfectly fine for them to use cloud.

binary132
It’s kinda good if your requirements might quadruple or disappear tonight or tomorrow, but you should always have a plan to port to reserved / purchased capacity.
Aissen
As an infrastructure engineer (amongst other things), hard disagree here. I realize you might be joking, but a bit of context here: a big chunk of the success of Cloud in more traditional organizations is the agility that comes with it: (almost) no need to ask permission to anyone, ownership of your resources, etc. There is no reason that baremetal shouldn't provide the same customer-oriented service, at least for the low-level IaaS, give-me-a-VM-now needs. I'd even argue this type of self-service (and accounting!) should be done by any team providing internal software services.
abujazar
The permissions and ownership part has little to do with the infrastructure – in fact I've often found it more difficult to get permissions and access to resources in cloud-heavy orgs.
joshuaissac
This could be due to the bureaucratic parts of the company being too slow initially to gain influence over cloud administration, which results in teams and projects that use the cloud being less hindered by bureaucracy. As cloud is more widely adopted, this advantage starts to disappear. However, there are still certain things like automatic scaling where it still holds the advantage (compared to requesting the deployment of additional hardware resources on premises).
rcxdude
I think also this was only a temporary situation caused by the IT departments in these organisations being essentially bypassed. Once it became a big important thing, they basically started to take control of it and you get the same problems (in fact potentially more so, because the expense means there's more pressure to cut down resources).
michaelt
"No need to ask permission" and "You get the same bill every month" kinda work against one another here.
Aissen
I should have been more precise… Many sub-orgs have budget freedom to do their job, and not having to go through a central authority to get hardware is often a feature. Hence why Cloud works so well in non-regulatory heavy traditional orgs: budget owner can just accept the risks and let the people do the work. My comment was more of a warning to would-be infrastructure people: they absolutely need to be customer-focused, and build automation from the start.
ambicapter
I'm at a startup and I don't have access to the terraform repo :( and console is locked down ofc.
blibble
don't underestimate the ability of traditional organisations to build that process around cloud

you keep the usual BS to get hardware, plus now it's 10x more expensive and requires 5x the engineering!

datadrivenangel
This is my experience, though the lead time for 'new hardware' on cloud is only 6-12 weeks of political knife fighting instead of 6-18 months of that plus waiting.
kccqzy
That's a cultural issue. Initially at my workplace people needed to ask permission to deploy their code. The team approving the deployments got sick of it and built a self-service deployment tool with security controls built in, and now deployment is easy. All that matters is a culture of trusting other fellow employees, a culture of automating, and a culture of valuing internal users.
Aissen
Agreed, that's exactly what I was aiming at. I'm not saying that it's the only advantage of Cloud, but that orgs with a dysfunctional resource-access culture were a fertile ground for cloud deployments.

Basically: some manager gets fed up with weeks/months of delays for baremetal or VM access -> takes risks and gets cloud services -> successful projects in less time -> gets promoted -> more cloud in the org.

alexchantavy
> no need to ask permission to anyone, ownership of your resources, etc

In a large enough org that experience doesn’t happen though - you have to go through and understand how the org’s infra-as-code repo works, where to make your change, and get approval for that.

misiek08
You also need to get budget, a few months earlier, and sometimes even legal approval. Then you have security rules, "preferred" services, and the list goes on...
rightbyte OP
Well, yeah - I frame it as a joke, but I do mean it.

I don't argue there aren't special cases for using fancy cloud vendors, though. But classical datacentre rentals almost always get you there for less.

Personally I like being able to touch and hear the computers I use.

darkwater
> What is old is new again.

I think there is a generational part as well. The ones of us that are now deep in our 40s or 50s grew up professionally in a self-hosted world, and some of us are now in decision-making positions, so we don't necessarily have to take the cloud pill anymore :)

Half-joking, half-serious.

olavgg
I'm in my 40s and run my own company. We deliver a data platform; our customers can choose between our self-hosted solution and running it on AWS/Azure at 10x higher cost.
Damogran6
As a career security guy, I've lost count of the battles I've lost in the race to the cloud...now it's 'we have to up the budget $250k a year to cover costs' and you just shrug.

The cost for your first on-prem datacenter server is pretty steep...the cost for the second one? Not so much.

marcosdumay
> What is old is new again.

It's not, really. It just happens that when there is a huge bullshit hype out there, the people who fall for it regret it and come back to normal after a while.

Better things are still better. And this one was clearly better only for a few use-cases that most people shouldn't have cared about from the beginning.

kccqzy
My employer also resisted using cloud compute and sent staff explanations of why building our own data centers is a good thing.
HPsquared
"Do nothing, Win"
