https://omenos.dev/
Online Accounts:
Git{Hub,Lab}, Codeberg, SourceHut: @omenos
Bluesky: @omenos.dev
Matrix: @mroche:fedora.im
Mastodon: @omenos@fosstodon.org
- That's the purpose of the forge platform: to provide a way to prevent changes to these files from being accepted into the source repository. For example:
https://docs.github.com/en/repositories/configuring-branches...
https://docs.gitlab.com/user/project/protected_tags/
https://forgejo.org/docs/latest/user/protection/#protected-t...
- I 100% agree on the latter (tag != release is more of a project management issue), and the same concept applies to containers and their digest hashes. The main issue at the end of the day is the human one: most people don't like looking at hashes, and hashes don't convey any sense of progression. I would say "give both" and have end users verify they match, but tags are the most common way (open source) software releases are denoted.
- The purpose of the forge is to be able to prevent this. Protected tags are a feature for marking tags as untouchable, so removing one requires a minimum trust level on the repository within the platform. Otherwise, attempts to push tag deletions or changes for tags matching the protected pattern are rejected/ignored.
Of course, the repository owner has unlimited privilege here, hence the last part of my prior comment.
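To make that concrete, here's a rough sketch of setting up such a rule through GitLab's protected tags API (the endpoint is the one from the docs linked above, if I'm remembering the shape right; the project ID, token, pattern, and access level below are placeholders/examples):

    import requests

    GITLAB_API = "https://gitlab.com/api/v4"
    PROJECT_ID = "12345"      # placeholder project ID
    TOKEN = "glpat-..."       # placeholder personal access token

    # Protect every tag matching v*: only maintainers (access level 40) may
    # create them, and pushes that delete or move a protected tag are
    # rejected server-side.
    resp = requests.post(
        f"{GITLAB_API}/projects/{PROJECT_ID}/protected_tags",
        headers={"PRIVATE-TOKEN": TOKEN},
        data={"name": "v*", "create_access_level": 40},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())

GitHub and Forgejo expose equivalent knobs through their own settings and APIs.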
- Luckily, commonly used forges for collaboration have the ability to make tags immutable. Any repository where multiple people collaborate on a project should have that feature enabled by default. I'm still waiting for the day when tags are immutable by default with no option exposed to change it.
I'm sure that would cause problems for some, but movable labels already exist in Git: branches.
- This is obviously kicking the can down the road, but I "solve" this problem by storing passkeys in a third-party credential manager that supports them. That way I can use them on any device that I've installed the client app or browser extension on. I have this working on Fedora, macOS, Windows, and iOS.
But again, kicking the can down the road.
- I gave Notes/Plume a try a year or so ago; it was an interesting experience. I ended up falling back to Joplin, as I could use it on macOS, iOS, and Fedora with synchronization via Dropbox.
I've always been curious about productizing apps like these. From a financial/business perspective, have you found Daino worthwhile or enough of a success (by your standards) to continue developing it as a proprietary application?
- > I found gitea's interface to be so unusably bad that i switched to full-fat GitLab.
Was this Gitea pre-UI redesign or after? 1.23 introduced some major UI overhauls, with additional changes in the following releases. Forgejo currently reflects the Gitea 1.22 UI, reminiscent of earlier GitHub design.
- eBPF is restricted when booted in a Secure Boot environment, but it's not nonfunctional. The default config puts the kernel into the "integrity" mode of Kernel Lockdown, which reduces the scope of access and enforces read-only usage.
Whether or not the specific functions needed to replicate this tool are impacted is beyond my knowledge.
- The Python API is limited by Python itself. You're restricted to a GIL environment, so your ability to maximize throughput and reduce latency will be limited. For small/average scenes this may not matter for your addon; larger scenes, however, will suffer. There are a few popular options for developing Blender functionality:
1. Extend Blender itself. This will net you the maximum performance, but you essentially need to maintain your own custom fork of Blender. Generally not recommended outside of large pipeline environments with dedicated support engineers.
2. Native Python addon. This is what 99% of addons are, just accessing scene data via Blender's Python interface. Drawbacks mentioned above, though there are some helper utilities to batch process information to regain some performance.
3. Hybrid Python addon. You use the Python API as a glue layer to pass information between Blender and a natively compiled library via Python's C extension API. With the exception of extracting scene data, this will give you back the compute performance and host resource scalability you'd get from building on Blender directly. Being able to escape the GIL opens a lot of doors for parallel computation (see the sketch after this list).
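To make option 3 concrete, here's a minimal sketch. The foreach_get/foreach_set calls are the batching helpers alluded to in option 2; the compiled library (libmy_addon_core.so) and its process_points symbol are made up for illustration:

    import ctypes
    import numpy as np
    import bpy

    def run_native_pass(obj):
        mesh = obj.data
        count = len(mesh.vertices)

        # One C-level copy into a flat buffer instead of a Python loop over
        # every vertex.
        coords = np.empty(count * 3, dtype=np.float32)
        mesh.vertices.foreach_get("co", coords)

        # Hypothetical compiled library doing the heavy lifting; once you're
        # inside it you can release the GIL and parallelize freely.
        lib = ctypes.CDLL("libmy_addon_core.so")
        lib.process_points.argtypes = [ctypes.POINTER(ctypes.c_float),
                                       ctypes.c_size_t]
        lib.process_points(
            coords.ctypes.data_as(ctypes.POINTER(ctypes.c_float)), count)

        # Write results back in one batched call.
        mesh.vertices.foreach_set("co", coords)
        mesh.update()

The same structure works with a proper C extension or pybind11 module instead of ctypes; the important part is that per-element work never happens in Python.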
- > If you had multiple shows in production, I would expect that standards be set to use the same platforms and versions across the board.
Considering productions span years, not months, artists would never get to use newer tools if studios operated that way. And it really only works if shows share similar end dates, which is not the reality we live in. Productions can start and end at any point in another show's schedule, and newer tools can offer features that upcoming productions can take advantage of. Each show will freeze its stack, of course, but a studio could be juggling multiple stacks simultaneously, each with its own dependency variants (see the VFX Reference Platform).
> Also, if the company is more than a boutique shop, I would hope it would be at a level and budget that the Python performance bottlenecks would be well addressed with competent internal pipeline and production engineering teams.
That would be the ideal, something that can be difficult to achieve in practice. You'll find small teams of quality engineers overwhelmed with the sheer volume of work, and other larger teams with less experience who don't have enough senior folks to guide them. The industry is far from perfect, but it does generally work.
> But then again, if the company is more than a boutique shop, they would just pay for the Maya licensing. :-)
And back to reality XD
That being said, a number of studios have been reducing their Autodesk spend over the past few years because the way the M&E division is run is honestly a sick joke. It's effectively free revenue of several hundred million a year, but the CAD business's operations get foisted onto it and the products suffer. Houdini's getting really close, and if another all-in-one package can cover effectively everything in a way each team agrees is better, you'll start to see migrations ramp up. Realistically this comes down to the rigging and animation departments more than any other. But Maya will never go away completely, as it'll still be needed for referring to and opening older projects from productions that used it, beyond just converting assets to a different format. USD is pretty much that intermediary anyway; it's the training and migration effort that becomes the final roadblock.
- > on the most demanding real-world production workloads (think Pixar/Weta), which for now it hasn't been.
Super small nit (or info tidbit), but it doesn't take away from your overall message regarding production and scene scale.
Pixar does not, and has not, used Maya as its primary studio application; it's really only used for asset modeling and some minor shading tasks like UV generation and some Ptex painting. The actual studio app is Presto, an in-house tool Pixar has developed over the years since its earliest productions. All other DCCs are team/task specific.
Dreamworks is similar with their in-house tool, Premo, IIRC. Walt Disney Animation Studios (WDAS) does use Maya as the core app last I saw, but I don't know if they've made any headway with evaluating Presto since 2019...
- > And these "most people" who are scared of a Python API? Weak! It should have been a low level C API! ;-)
I wouldn't frame it as "scared". The issue is that at a certain scene scale Python becomes the performance bottleneck if that's all you can use.
> You pick a (stable) version, and use that API. It doesn't change if you don't. If it truly is a _major_ project, then constantly "upgrading" to the latest release is a big no-no (or should be)!
This is fine if you only ever have one show in production. Most non-boutique studios have multiple shows being worked on in tandem, be it internal productions or contract bids that require interfacing with other studios. These separate productions can have any given permutation of DCC and plugin versions, all of which the internal pipeline and production engineering teams have to support simultaneously. Apps that provide a stable C/C++ SDK and Python interface across versions are significantly more amenable to these kinds of environments as the core studio hub app, rather than being ancillary, task specific tools.
- Blender gives you two paths for extension: a) fork it and layer your changes directly onto the app, or b) create a plugin via the Blender Python API.
For vendors, the former is obviously a no-go. The latter has the issue of being throttled by Python, so you effectively have to create a shim that communicates with an external library or application that actually performs the compute-intensive tasks.
Most (if not all) industry DCCs provide a dedicated C++ SDK with Python bindings available if desired.
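For reference, the plugin path (b) is just a registered Python module; a minimal skeleton looks roughly like the following, with the addon metadata and operator names being placeholders and the hand-off to an external library happening inside execute():

    import bpy

    bl_info = {
        "name": "Example Shim Addon",   # placeholder addon metadata
        "blender": (4, 0, 0),
        "category": "Object",
    }

    class OBJECT_OT_example_pass(bpy.types.Operator):
        """Run a (hypothetical) external compute pass on the active object."""
        bl_idname = "object.example_pass"
        bl_label = "Example Pass"

        def execute(self, context):
            obj = context.object
            if obj is None:
                return {'CANCELLED'}
            # This is where a production addon would hand obj.data off to a
            # compiled library or external process instead of crunching it
            # in Python.
            self.report({'INFO'}, f"Would process {obj.name}")
            return {'FINISHED'}

    def register():
        bpy.utils.register_class(OBJECT_OT_example_pass)

    def unregister():
        bpy.utils.unregister_class(OBJECT_OT_example_pass)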
- > That said, if pressed, I’d recommend AsciiDoc over any Markup flavor for a greenfield project _today_.
Likewise for me, and I am a massive Material for MkDocs fan. Markdown is certainly simple to use and gets the job done, but AsciiDoc provides so much out of the box without hurting my eyes the way reStructuredText (used by Sphinx) does. It also helps that there's effectively one flavor of AsciiDoc that I'm aware of, whereas there are a number of Markdown flavors atop CommonMark to be cognizant of. I will concede, however, that its learning curve is not as gentle as Markdown's...
A powerful framework for working with AsciiDoc for documentation purposes is Antora[0]. The Red Hat ecosystem (the Fedora and CentOS projects) uses it for their public-facing docs. That being said, it is a beast to understand if you're starting from scratch rather than contributing to a project's existing docs. It's designed to consolidate large projects, with multiple component repositories and multiple versions per component, into a single docs site. The typical balance: more capabilities, more up-front cost of adoption.
The AsciiDoc WG also maintains an Awesome AsciiDoc[1] page of projects within the ecosystem.
[1] https://gitlab.eclipse.org/eclipse-wg/asciidoc-wg/asciidoc.o...
- I do this as well, but there are a number of service providers that just do not handle subaddressing at all. For example, creating an account can result in never receiving a confirmation or verification code because the system failed to parse the address.
I've started using grouped aliases instead for a bunch of things.
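If I had to guess at the usual failure mode (an assumption on my part, not something I've confirmed with any particular provider), it's an overly strict address validator that simply doesn't allow '+' in the local part:

    import re

    # Overly strict pattern of the sort many signup forms use: no '+' allowed
    # in the local part, so subaddresses are treated as invalid.
    STRICT = re.compile(r"^[A-Za-z0-9._-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

    print(bool(STRICT.match("user@example.com")))       # True
    print(bool(STRICT.match("user+shop@example.com")))  # False -> rejected

so the address either gets rejected at signup or mangled before any mail is sent.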
- > But a somewhat high-flying (albeit hardware) company was recruiting me for a CA job and they basically admitted it would be a lifestyle downgrade in terms of salary.
I've heard secondhand accounts of similar situations. One was a team consolidation where the business offered Boston-area engineers positions in San Jose. One of the folks who moved with his family was back in MA within 5 years; his salary had not been adjusted as much as it should have been for the cost-of-living difference.
- I migrated from a 13 Mini to a 17 Pro last week. Updated the Mini to 26 beforehand to mitigate any potential 18->26 issues with data transfers/backups.
I'm still getting accustomed to the device size; the Mini was such a perfect device. If only app and web developers would actually preview their work at its dimensions, I probably would have just replaced the battery (76%).
Reduced Transparency is a hard requirement for iOS 26.
- > Outlook is the lone exception where that team decided to have Outlook for the web, Windows Outlook, and Mac Outlook be identical, so those are getting their rewrites with removal of Win32-specific features where applicable.
I wish they didn't. Outlook on macOS is abysmal nowadays and I still find myself resorting to the legacy view just to change some settings that both iterations can read but only one exposes.
I significantly prefer using Thunderbird or the web views for Gmail and Zoho Mail over any version of Outlook. Is the integration across O365 apps nice? Sure, but the platforms themselves are miserable to use.
In a similar vein, I was cautiously optimistic about Teams V2 for unifying the client. But then they completely dropped the Linux client for their PWA which does not have feature parity with the "native" platforms and has a significantly worse UX.
- Nearly every widely used commercial or in-house tool in the VFX and animation sector of M&E is Qt based. The main difference compared to traditional desktop development is the general attitude toward design: the industry takes the stance of providing the same application experience across platforms, rather than trying to adhere to each platform's UI/UX guidelines.
Examples: Autodesk Maya, 3DS Max, Mudbox; Foundry Nuke, Mari, Katana; SideFX Houdini; Substance Painter, Designer.
- Though from 2017, check out this blog post explaining Flatpak's architecture.
https://blogs.gnome.org/alexl/2017/10/02/on-application-size...
I can't speak for the others, but Flatpak is a layered solution, so files are deduped and shared across the layers (runtimes, applications) that need them.
- My solution to this is:
1) Subscribe to the GitHub repo for tag/release updates.
2) When I get a notification of a new version, I run a shell function (meup-uv and meup-ruff) which grabs the latest tag via a GET request and runs an install. I don't remember the semantics off the top of my head, but it's something like:
    cargo install --jobs $(( $(nproc) / 2 )) --tag ${TAG} --git ${REPO} [uv|ruff]

Of course this implies I'm willing to wait the ~5-10 minutes for these apps to compile, along with the storage costs of the registry and source caches. Build times for ruff aren't terrible, but uv is a straight up "kick off and take a coffee break" experience on my system (it gets 6-8 threads out of my 12 total depending on my mood).
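For the curious, a rough Python equivalent of that shell function (assuming GitHub's releases API and the astral-sh repos; the function name and structure are mine, not the actual script) would be:

    import json
    import os
    import subprocess
    import urllib.request

    def meup(repo: str, crate: str) -> None:
        """Fetch the latest release tag from GitHub, then build from source."""
        url = f"https://api.github.com/repos/{repo}/releases/latest"
        with urllib.request.urlopen(url) as resp:
            tag = json.load(resp)["tag_name"]
        subprocess.run(
            ["cargo", "install",
             "--jobs", str((os.cpu_count() or 2) // 2),
             "--git", f"https://github.com/{repo}",
             "--tag", tag, crate],
            check=True,
        )

    meup("astral-sh/uv", "uv")
    meup("astral-sh/ruff", "ruff")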
- I know my phrasing may come off wrong; I apologize for that. But I'm asking genuinely: I've only ever seen Zuul in the wild in the Red Hat and OpenStack ecosystems.