- Good to see that hands are still not solved...
- > It also strikes me as a uniquely difficult challenge to track down the decision maker who is willing to take the risk on revamping these systems (AI or not).
here, that person is a manager who got demoted from ~500 reports to ~40 and then convinced his new boss to reuse his team for his personal AI strategy, which will make him great again.
- I work at a shop (a specialized provider for finance, in your eyes) which still has the "transaction" workload on IBM z/OS (IMS/DB2). The parts we manage (in OpenShift) interface with that (as well as other systems), and I have heard of people moving PL/I to COBOL and seen the commits. In 2021. Given COBOL's nature, those apps easily exceed 1k LoC.
We also sublease our mainframes to at least 3 other ventures, one of which is very outspoken about having left the mainframe behind. I guess that's true if you view outsourcing as (literally) leaving it behind with the competitor of your new system... It seems to be the same for most banks, none of which publicly have mainframes anymore, but for weird reasons they still hire people for them offshore.
Given that our (and IBM's!) services are not cheap, I think either a) our customers are horribly dysfunctional at anything but earning money slowly and steadily (...) or b) they actually might depend on those mainframe jobs. So if you are IBM, or a startup adding AI to IBM, I guess the numbers might add up to the claims.
- we have on-prem with heavy spikes (our batch workload can easily utilize the 20TB of memory in the cluster) and we just don't care much, adding 10% to the requested hardware every year. Compared to employing people or paying other vendors (relational databases with many TB-sized tables...), this is just irrelevant.
Sadly, devs are incentivized by that, and moving towards the cloud might be a fun story. Given the environment, I hope they scrap the effort sooner rather than later, buy some Oxide systems for the people who need to iterate faster than the usual process of getting a VM, and replace/reuse the 10% of the company occupied with the cloud (mind you: no real workload runs there yet...) to actually improve local processes...
- And incidentally, all documentation recommends not extending your LPARs beyond what is available on a single CPC "node" (see [0], 2-23 for a nice (and honest...) block diagram). If you extend your LPAR across all CPCs, I doubt that many of the HA and hot-swap features continue to work (also, there are bugs...). E.g., you won't hot-swap memory when it's all utilized:
> Removing a CPC drawer often results in removing active memory. With the flexible memory option, removing the affected memory and reallocating its use elsewhere in the system is possible.
So while you can have single-system images on a relatively large multi-node setup, I doubt many people are doing that (at the place I know, no LPARs have TBs of memory...). Also, in the given price range you can easily get SSI images for Linux too: https://www.servethehome.com/inventec-96-dimm-cxl-expansion-...
If you don't need single-system images, VMware and Xen advertise literally the same features on a blade chassis, minus the redundant hardware per blade, which isn't really necessary when you can just migrate the whole VM...
Also, if you define the whole chassis as having 120% capacity, running it at 100% capacity becomes trivial too. And this is exactly what IBM is doing, keeping spare CPUs and memory around in all correctly spec'ed setups: https://en.wikipedia.org/wiki/Redundant_array_of_independent_...
You are right, though, that the hardware was and is pretty cool, and that kind of building for reliability has largely died out. Also, until ARM/Epyc arrived, maximum capacity was above average, but that is gone too. Together with the market segment likely not buying for performance, I doubt many people today are running workloads which "require" a mainframe...
- I guess for IMS/CICS/TPF/... the IBM mainframe is a perfectly fine appliance compared to the alternatives. While not exactly transaction processors, SAP HANA, Oracle Exadata and co. all market themselves to the same customer groups; SAP even sells full banking systems to medium-sized banks.
Your point that TCO is lower than a well-executed alternative seems very dubious to me, though. Maybe lower than cloud, and certainly lower than whatever crap F100 consultants sold you, but running database unloads with basic ETL for a few dozen terabytes per month and producing an MSU bill in the millions is just ridiculous. The thing which probably lowers the TCO is that EVERY mainframe dev/ops person in existence is essentially a fin-ops expert, formed by decades of cloud-style billing. Also, experience on a platform where transaction processing historically has KB-range size limits, data set qualifiers max out at 44 characters, files (which you allocate by cylinders) don't expand by default, and whatever else you miss from your 80s computing experience naturally leads to people writing relatively efficient software.
In general, even large customers seem to agree with me on that (see Amadeus throwing out TPF years ago), with even banks mostly outrunning the milking machine called IBM. What is and will be left is governments: captured by inertia and corruption (at the top) and kept alive by underpaid lifelong experts (at the bottom) who have never seen anything else.
> during the AWS outage this week.
Also, the reliability promises around mainframes are "interesting" from what I've seen so far. The (IBM) mainframe today is a distributed system (many LPARs/VMs and software making use of them) which people are encouraged to run at maximum load. Now, when one LPAR goes down (and might pull down your distributed storage subsystem) and you don't act fast to drop the load, you end up in a situation not at all unlike what AWS experienced this week: critical systems are limping on, while the remaining workload has random latency spikes which your customers (mostly Unix systems...) are definitely going to notice...
The non-IBM way of running VMs on a Linux box and calling it a mainframe just seems like a scam if sold as anything but a decommissioning path. So I guess those vendors are left with governments at this point.
- Quite some time ago I implemented NFS for a small HPC cluster on a 40GbE network. A colleague set up RDMA later, since at the start it didn't work with the available Ubuntu kernel. Full NVMe on the file server too. While raw performance using ZFS was kind of underwhelming (mdadm+XFS was about 2x faster), network performance was fine, I'd argue: serial transfers easily hit ~4GB/s on a single node, and 4K benchmarking with fio was comparable to a good SATA SSD (IOPS + throughput) on multiple clients in parallel!
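The 4K runs I mean were along these lines, as an fio job file; a sketch only, since mount point, sizes and runtimes here are placeholders, not our actual config:

```ini
; hypothetical fio job: 4K random reads against the NFS mount, run on each client
[global]
directory=/mnt/nfs/bench   ; placeholder mount point
size=4G
runtime=60
time_based=1
ioengine=libaio
direct=1
group_reporting=1

[rand-4k]
rw=randread
bs=4k
iodepth=32
numjobs=4
```

Launching it simultaneously on several clients and comparing aggregate IOPS against a local SATA SSD baseline is what I'd call "comparable" above.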
- and there have been continuous ports since then: https://github.com/Godzil/ftape/tree/master - note the caveats, which apparently have all disappeared here...
- no. In the short time I've worked at a z/OS shop, they've had to IPL twice. And an IPL takes ages...
Now, if you can live with the weird environment and your people know how to program what is essentially a distributed system described in terms no one else uses: I guess it's still OK, given the competition is all executing IBM's playbook too.
- and sadly not too many words on how they made sure those shelves don't vibrate and squeak horribly - which they will if not placed on perfectly smooth surfaces... I could somehow picture something like this working out nicely with metal structural framing - but then the price point probably comes close to quite nice carpentry, if you add some bells and whistles.
- at the banking place where I work running things in k8s, the z/OS people are actually the ones running custom git clients written in Go on z/OS. Bonus: they have no nosy Java devs (recurringly producing threading bugs...) saying "but we all use Spring Boot!!!" and likely no manager asking "is this cloud-ready???"
- and then most of the places I know happily allow employees admin access for "just that piece of software they need" while simultaneously pushing for "zero-trust". There's no point in it at all, and you could just as well use Saltstack to roll out AppArmor policies on your locked-down Linux (and suddenly the same people wanting GPOs tell you that Linux is untenable because of usage restrictions)
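To sketch the Saltstack/AppArmor bit: a minimal state file, assuming a Debian-ish box; the package list is real, but the profile name and target binary are made up for illustration:

```yaml
# install AppArmor and the CLI tooling (aa-enforce lives in apparmor-utils)
apparmor-pkgs:
  pkg.installed:
    - pkgs:
      - apparmor
      - apparmor-utils

# ship a profile for a hypothetical locked-down app from the Salt fileserver
/etc/apparmor.d/usr.bin.someapp:
  file.managed:
    - source: salt://apparmor/usr.bin.someapp
    - require:
      - pkg: apparmor-pkgs

# (re)enforce the profile whenever the shipped file changes
enforce-someapp:
  cmd.run:
    - name: aa-enforce /etc/apparmor.d/usr.bin.someapp
    - onchanges:
      - file: /etc/apparmor.d/usr.bin.someapp
```

That's the whole trick: same centrally-managed policy story as GPOs, just with states instead of clicking.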
- yesterday: I saw the weird CVE for M365 which "exploits" some LLM through messages embedded in emails.
today: got a very long email, wanted to search for our department in it. Outlook: "Search is a deprecated feature".
Despite all the "but you can't extrapolate to a large org from personal experience" FUD around, I think for most orgs (especially governments, which are generally far behind on processes) it would be easy to switch from a feature perspective. The problem is the army of employees and contractors who are very happy to defend Microsoft to keep their non-automated fiefdoms (such as non-cloud AD administration at most places, I extrapolate...). There is hardly anyone there to implement the necessary processes, and rather than sending their underlings to FOSDEM, leadership is happy to get an invite from Microsoft (or a cloud provider...) to an "innovation summit" instead.
- oh, they are available to startups. Startups whose sole purpose is skimming funds by being the technology partner to some academic.
That's the best case. Then there is outright fraud:
https://cordis.europa.eu/project/id/101092295 - European Dynamics provides some project management and a WordPress page for the lump sum of 800k€, and of course there is always "SOCIAL OPEN AND INCLUSIVE INNOVATION ASTIKI MI KERDOSKOPIKI ETAIREIA", headquartered here: https://inclusinn.com/. Probably still in stealth mode, using the 4M€ to "promote innovation".
- So for workflows it's like Airflow, Brigade, Hatchet, or ...? How do workflows integrate with k8s (resources, ...)? Camunda can also deploy natively on k8s. However, you still develop apps for Camunda, and it seems like Dapr is no different there? Why is it in the CNCF if it doesn't provide a way to build a workflow out of k8s-native artifacts (PVs, Deployments, Jobs, ...)?
- While I don't play badminton (and so can't test with a racket at hand), this seems very cool! I also thought about something similar for judging bike-wheel spoke tension - I guess I have to research this a bit more now.
As for monetization: I personally don't have a problem with static ads served from your domain. Find some celebrity or brand and ask them if they want you to serve their banner.
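For the spoke-tension idea above, the back-of-the-envelope physics is the ideal-string formula f = (1/2L)·sqrt(T/μ), so tension falls out of the pluck frequency. A rough Python sketch; all the numbers are assumptions (2.0mm round steel spoke, ~19cm free length), and a real spoke is stiffer than an ideal string, so treat it as an estimate:

```python
import math

def spoke_tension(freq_hz: float, free_length_m: float, mass_per_m: float) -> float:
    """Tension (N) of an ideal string from its fundamental frequency.

    From f = (1/2L) * sqrt(T/mu)  =>  T = mu * (2*L*f)**2.
    """
    return mass_per_m * (2.0 * free_length_m * freq_hz) ** 2

# assumed numbers: 2.0 mm plain-gauge steel spoke, ~19 cm free length
radius_m = 0.001
steel_density = 7800.0  # kg/m^3
mu = steel_density * math.pi * radius_m ** 2  # mass per meter, ~0.0245 kg/m

# a pluck around 530 Hz would correspond to roughly 1 kN,
# which is in the ballpark of a well-tensioned wheel
print(round(spoke_tension(530.0, 0.19, mu)))
```

The nice property is that T scales with f², so even a phone-mic FFT with mediocre frequency resolution gives a usable relative comparison between spokes.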
- as others have mentioned:
  - institutional inertia
  - some weird consultant-style people in key roles (this happens around cloudy stuff too)
  - the DBA team
  - "we can't move everything!"
  - "we just migrated off Solaris!"

  However, every new project with sane leadership seems to decide against Oracle.