- This is called "digital sovereignty", and it has been a major topic for the OpenInfra Foundation and other open source cloud foundations. Open source, and open cloud software, is the way to ensure your data can stay inside your own borders and be governed by your local laws. https://www.youtube.com/watch?v=Lvz2PcHq0yY is one example of folks talking about this, but realistically you can find talks from OpenStack/OpenInfra going back 4-5 years on this topic.
- In OpenStack, we explicitly document what our log levels mean; I think this is valuable from both an Operator and Developer perspective. If you're a new developer, without a sense of what log levels are for, it's very prescriptive and helpful. For an operator, it sets expectations.
https://docs.openstack.org/oslo.log/latest/user/guidelines.h...
FWIW, "ERROR: An error has occurred and an administrator should research the event." (vs WARNING: Indicates that there might be a systemic issue; potential predictive failure notice.)
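Those guideline distinctions map cleanly onto stdlib log levels. A minimal sketch using plain Python logging (not oslo.log itself; the service name, messages, and thresholds are made up for illustration):

```python
import logging

# Capture log records into a list so the distinction is easy to see.
records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append((record.levelname, record.getMessage()))

log = logging.getLogger("myservice")
log.setLevel(logging.DEBUG)
log.addHandler(ListHandler())

# WARNING: a potential systemic issue / predictive failure notice.
disk_free_pct = 8
if disk_free_pct < 10:
    log.warning("disk below 10%% free (%d%%)", disk_free_pct)

# ERROR: a concrete failure an administrator should research.
try:
    open("/nonexistent/overrides.conf")
except OSError:
    log.error("failed to read overrides.conf; using defaults")
```

The point the guidelines make is that the levels encode *who acts and when*: a WARNING is something to watch, an ERROR is something to investigate.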
- It is. Most distros have a verify operation built into their packaging systems. For example: https://docs.redhat.com/en/documentation/red_hat_enterprise_...
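For illustration, the core of what a package verify does (e.g. `rpm -V` on RHEL) can be sketched in a few lines of Python: compare installed files against hashes recorded at install time. The manifest format and file names here are invented for the example; real tools also check permissions, ownership, sizes, and timestamps:

```python
import hashlib
import os
import tempfile

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify(manifest):
    """Return files whose on-disk hash no longer matches the manifest."""
    return [p for p, digest in manifest.items() if sha256(p) != digest]

# Simulate an "installed" file, record its hash, then tamper with it.
d = tempfile.mkdtemp()
conf = os.path.join(d, "app.conf")
with open(conf, "w") as f:
    f.write("color = blue\n")
manifest = {conf: sha256(conf)}

assert verify(manifest) == []   # a pristine install passes verification
with open(conf, "a") as f:
    f.write("color = red\n")    # a local modification
changed = verify(manifest)      # verify flags the changed file
```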
- I spent a lot of time in my career, honestly some of the most impactful stuff I've done, mentoring college students and junior developers. I think you are dead on about the skills being very similar. Being verbose, not making assumptions about existing context, and generalized warnings against pitfalls when doing the sort of thing you're asking it to do goes a long long way.
Just make sure you talk to Claude in addition to the humans and not instead of.
- This is a radical misunderstanding of how things work.
They might (I'm assuming based on usual foundation policies) own or enforce the trademark, but Linux is owned collectively by everyone who ever contributed to it at all -- there's no copyright assignment in the project whatsoever.
Additionally, Linux was a large, successful commercial project LONG before LF existed.
- I don't see how it's worse?
https://github.com/microsoft/garnet/blob/main/LICENSE
It's MIT licensed?
- Doing 'the cloud' right at scale has to involve running your own cloud at some point. We should not pollute the good ideas around API-delivered infrastructure with the more questionable idea of outsourcing your infrastructure.
OpenStack has been around 15 years powering this idea at scale for huge organizations, including Wal-Mart, Verizon, Blizzard and more.
- Don't just give Dell the credit here :)
OpenStack Ironic (ironicbaremetal.org) now ships a redfish driver that we have tested working on:
- Dell machines
- HPE machines (iLO 5 and 6)
- SuperMicro
- Most generic redfish endpoints as deployed by bulk "off label" server places
A lot of folks in the provisioning automation space worked hard to try and enhance the state of the art (including some folks at Dell who were great :D) and get redfish accepted.
It's a bit weird seeing this post; from our perspective, IPMI is a dangerous, attractive nuisance: it's nearly impossible to properly secure and has a lot more bad failure scenarios than a protocol built on http, that most of the time can support TLS with custom CAs and similar.
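To make the contrast concrete: because Redfish is just HTTPS plus JSON, a power-cycle request is an ordinary POST you can build with any HTTP library. A sketch (the BMC address is a placeholder, the system ID and exact action path vary by hardware, and a real request needs auth, e.g. a session token in an `X-Auth-Token` header):

```python
import json
import urllib.request

# Placeholder BMC endpoint -- in practice this is your management NIC,
# ideally serving TLS from a CA you control.
BMC = "https://bmc.example.com"

# A power-cycle is a POST to the system's ComputerSystem.Reset action.
payload = {"ResetType": "ForceRestart"}
req = urllib.request.Request(
    BMC + "/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Not executed here: sending it would need a real BMC plus credentials.
```

All the usual HTTP machinery (TLS with custom CAs, proxies, auditing) applies for free, which is much of what makes it easier to secure than IPMI's bespoke UDP protocol.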
- I find the Gentoo ebuild repository, mirrored at http://github.com/gentoo/gentoo, is often a stress test for git clients due to the sheer number of commits.
It's interesting to see the definition of scale stretched in all kinds of different directions.
- This ignores the primary reason why I, and many other frequent travelers I know, avoid checking bags: reducing the amount of time spent traveling.
Checking a bag is another line you have to account for when arriving at the airport. It also means you have to wait an extra 30 minutes on the far side before you can head home.
I would happily check my bag, charge or no charge, if they found some way to get bags back to me and the other passengers more quickly.
- I was originally not going to respond to this, as I don't think it was written in a tone that encourages good discourse; however, I wrote something relevant on one of my socials on Friday, which I'll paraphrase here.
It's been an extremely long time since I've seen an extreme scale Kubernetes environment which didn't have some OpenStack components, and it's been a long time since I've seen an OpenStack installation which didn't include (at least one) Kubernetes cluster.
And even if this wasn't true... would it matter? We're all working towards a vision of open infrastructure, trying to serve the infrastructure needs of developers who are counting on us. Why do we have to be so tribal about it? It doesn't matter if you're using OpenStack or Kubernetes to automate your infrastructure; it matters that your infrastructure is automated and that you automated it with open source software.
In the end, it's all better when we cooperate. That's happening all over the place! Metal3.io is bringing the power of OpenStack Ironic to an API design more familiar to the CNCF/K8s community. The OpenStack Magnum community continues to make installing Kubernetes clusters on top of OpenStack easier.
Tribalism is just another form of taking your eye off the ball. Open infrastructure is winning; we can't lose focus now!
- There's an interesting thing about complicated tools: they often exist to serve complicated requirements.
OpenStack is a large, complex piece of infrastructure software used at scale that many people, even on HN, will never experience directly. Why don't you hear about us much? Because people running at that scale frequently don't enjoy talking about the details of their infrastructure.
Ask the question next time you see something in a software tool, e.g. gerrit, that seems over-complicated: What can this software do because of this complication that I'm missing?
I don't think the tooling we use is what everyone should use, by any stretch, but don't assume things are being done in an antiquated way just because they're different from your previous experiences or because they've been done that way for a long time.
- So time for a history lesson.
First, who am I? I'm JayF -- I've been working on OpenStack for about a decade, and am the currently serving TC Chair. This is not an official history, but my attempt to give you the story behind this from my perspective.
OpenDev collaboratory originally started as part of OpenStack itself -- as the team that enabled us to test the giant software suite. When it became clear that the tooling created and used by that team had general interest, it became time for them to spin off into their own group. I'll note that this maps similarly to what happened with Zuul -- a tool created around OpenStack that was generally useful and split into its own project. OpenDev was infrastructure services created around and for OpenStack that proved generally useful and split into their own project.
This context is important as to where it puts things on the timeline: OpenDev, and the associated technologies, were not created as an alternative to some incumbent power like Github or Gitlab -- when it was created, there were few, if any, other tools that could operate at that level of scale. (It's unclear to me if Github, as we think of it today, even existed or was at any level of popularity at this point in time.)
So as this thread hopefully picks up steam, let's avoid terms like "alternative to" or things that miss the context. OpenDev has been around, and open, for a long time. It will continue to be around, and open, for a long time. They aren't going to feed your code into an AI to train it, and they aren't going to try and turn your collaboration toolset into a social media platform. It's a tool to get work done, and I've been lucky to work in it for the last ten years.
- Stop traveling internationally with that phone: yes, very possible.
Stop keeping that information on their phone: not always possible. Many highly-secure places require you to coordinate with an application on your phone when authenticating. In my experience, there is rarely a non-mobile option presented to users.
- I think there were some internal docs too, but all Okta's HR stuff is on a public website: https://rewards.okta.com/
- I did a video, https://www.youtube.com/watch?v=cAkMVIBTFbQ, converting an OpenStack project to use nox as a proof of concept. I liked it, and any time I went "oh no, it doesn't do X," I found it does, and I learned how.
It's always hard to evaluate a young project vs an older one; but nox seems good and I would likely use it for a greenfield project in the future.
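For a sense of what the conversion target looks like, here's a minimal `noxfile.py` sketch of a typical test-plus-lint setup (session names, Python versions, and paths are illustrative, not taken from the video):

```python
# noxfile.py -- sessions are plain Python functions, which is nox's
# main selling point over an ini-style config.
import nox

@nox.session(python=["3.11", "3.12"])
def tests(session):
    # Install the project and its test runner into a fresh virtualenv.
    session.install("-e", ".", "pytest")
    session.run("pytest", "tests/")

@nox.session
def lint(session):
    session.install("flake8")
    session.run("flake8", "mypkg/")
```

Running `nox -s tests` then builds the environments and runs each session in turn; because the file is ordinary Python, conditional logic that would be painful in declarative configs is straightforward.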