- > Every endpoint returns 3–5 different HTML fragments. Frontend and backend must agree on every scenario — success, validation errors, system errors, partial updates, full reloads.
And why would that differ from React?
When I was building a website with React, I needed to model an "apply coupon" endpoint with different states (coupon applied, coupon does not exist, coupon exists but has reached its max usage limit), and it was so annoying because you needed:
1. The backend route that returns JSON with a different model depending on the coupon state
2. The JSON models for each response type
3. And then, on the frontend, you need to load the data, parse the JSON, figure out which "response state" it is (HTTP status code? A "type" field on the JSON?), convert the JSON to HTML and then display it to the user (roughly like the sketch below)
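Something like this, roughly (the endpoint path, field names and response shape are all made up for this example):

```ts
// Hypothetical response shape for the "apply coupon" endpoint (all names are made up)
type ApplyCouponResponse =
  | { type: "applied"; discountPercentage: number }
  | { type: "not_found" }
  | { type: "max_usage_reached" };

async function applyCoupon(code: string): Promise<void> {
  const response = await fetch("/api/coupons/apply", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ code }),
  });

  // Every state needs its own branch, and every branch needs its own HTML
  const data: ApplyCouponResponse = await response.json();
  const status = document.querySelector("#coupon-status")!;
  switch (data.type) {
    case "applied":
      status.textContent = `Coupon applied! ${data.discountPercentage}% off`;
      break;
    case "not_found":
      status.textContent = "This coupon does not exist";
      break;
    case "max_usage_reached":
      status.textContent = "This coupon has reached its usage limit";
      break;
  }
}
```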
In my experience it added a lot of extra "mental overhead". Something that should be extremely simple ends up being unnecessarily complex, especially when you need to do it for every new feature you want to add.
When using htmx, a simple implementation of that would be (see the sketch after this list):
1. A backend route that returns HTML depending on the coupon state
2. Some htmx attributes (hx-post, hx-swap) on the frontend to make the magic happen
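A rough sketch of what I mean (the endpoint path and element IDs are placeholders):

```html
<!-- The form posts to the backend and the returned HTML fragment replaces #coupon-status -->
<form hx-post="/coupons/apply" hx-target="#coupon-status" hx-swap="innerHTML">
  <input type="text" name="code" placeholder="Coupon code">
  <button type="submit">Apply</button>
</form>
<div id="coupon-status"></div>
```

And the backend just returns the final HTML for whatever state the coupon is in, like `<p>Coupon applied! 10% off</p>` or `<p>This coupon does not exist</p>` — no JSON models, no client-side branching.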
Don't get me wrong, there are places where you wouldn't want to use htmx (heavily interactive components), but that's why htmx recommends the "islands of interactivity" pattern. This way you can build the boring things (the ones that would add unnecessary complexity in React) with htmx, and then spend the saved "mental overhead" on the interactive components. (which, IMO, makes it a more enjoyable experience)
At the end of the day it is just choices: some people may prefer the React approach, some people may prefer the htmx approach. Both have their own upsides and downsides, and there isn't a definitive answer to which is better.
But for my use case, htmx (truth be told: I use my own custom library that's heavily inspired by htmx on my website, but everything that I did could be done with htmx + some htmx extensions) worked wonderfully, and I don't plan on "ripping it all out" anytime soon.
- I haven't checked Hetzner's prices in a while, but OVHcloud has dedicated servers, including in the US and in Canada (I've been using their dedicated servers for years already and they are pretty dang good)
- While true, at least with open source you can actually go into the code and try to fix it yourself if you really want to.
With a closed source business you are at their mercy, waiting for them to decide if they really want to fix your issue, even if you are a paying customer.
- If I had to guess, they were talking about the DeepSeek iOS app: https://apps.apple.com/br/app/deepseek-assistente-de-ia/id67...
- True, but I don't need to run applications that require the GPU under WSL, while I do need to run applications that require the GPU under my current host OS. (and those applications do not run under Linux)
- Sadly I'm not one of those people, because I have a desktop with an AMD Ryzen 7 5800X3D, which does not have an integrated GPU.
However, now that AMD is including integrated GPUs on every AM5 consumer CPU (if I'm not mistaken?), maybe VMs with passthrough will become more common, without requiring people to spend a lot of money on a secondary GPU.
- Except that if you require anything GPU-related (like gaming, Adobe suite apps, etc) you'll need a secondary GPU to pass through to the VM, which is not something that everyone has.
So, if you don't have a secondary GPU, you'll need to live without graphics acceleration in the VM... which means that for a lot of people the "oh you just need to use a VM!" solution is not feasible, because most of the software people want to use that does not run under WINE does require graphics acceleration.
I tried running Photoshop under a VM, but the performance of the QEMU QXL driver is bad, and VirGL does not support Windows guests yet.
VMware and VirtualBox have better graphics drivers that do support Windows guests. I tried VMware and the performance was "ok", but still nowhere near the performance of Photoshop on "bare metal".
- If it is how I think it is, then yes, it is a proof that you attended the event.
I'm not sure how it is in other countries, but in some countries (example: Brazil) some courses (like Computer Science) require you to have "additional hours", which can be filled with courses, lectures, etc. related to your degree.
To prove to the university that you did these activities, you need a certificate "proving" that you participated. Most of the time it is a PDF file with the name of the event, the date and your name on it.
- > One thing I (in general) miss from those days, was how easy it was to get into modding. Whether that be to make your own maps, or more involved game mods.
Another game from that time that was also easy to mod was The Sims 1.
For a bit of context, EA/Maxis released modding tools BEFORE the game itself was released, to let players create custom content (like walls and floors) ahead of launch!
And installing custom content was also easy: just drag and drop the files into the folder related to what you downloaded, and that's it.
Imagine any game doing that nowadays? Most games today don't support modding out of the box. Of course, there are exceptions, like Minecraft resource packs/data packs. I don't think Fortnite and Roblox fit the "modding a game" description because you aren't really modding a game, you are creating your own game inside of Fortnite/Roblox! Sometimes you don't want to play a new game inside of your game, you just want to add new mods to enhance your experience or make it more fun. There isn't a "base game Roblox", and while there is a "base game Fortnite" (Battle Royale... or any of the other game modes like Fortnite Festival or LEGO Fortnite), Epic does not let you create mods for the Battle Royale game. You can create your own Battle Royale map, but you can't create "the insert season here Battle Royale map & gameplay but with a twist!".
Of course, sadly EA/Maxis didn't release all of the modding tools they could have (there isn't an official custom object making tool, for example, or an official way of editing the behavior of custom objects), but they still released way more modding tools than what current games release.
I think that most modern games don't support that ease of modding because the games themselves are more complex. As an example: The Sims 1's walls are, like, just three sprites, so you can generate a wall easily with a bit of programming skill; the skin format is plain text, in a format similar to ".obj"; so on and so forth.
Lately I've been trying to create my own modding tools for The Sims 1, and it is funny when you are reading a page talking about the technical aspects of the game file formats and the author writes "well this field is used for xyzabc because Don Hopkins said so".
- > If I'm reading a blog or a discussion forum, it's because I want to see writing by humans. I don't want to read a wall of copy+pasted LLM slop posted under a human's name.
This reminds me of the time around ChatGPT 3's release, when Hacker News comment sections were filled with users saying "Here's what ChatGPT has to say about this"
- > Resource packs can change the music played by discs. The duration of the music disc stays fixed even if the audio is replaced
You can change the music disc duration with data packs since Minecraft: Java Edition 1.21, and you can even add new music disc definitions without replacing any of the vanilla music discs.
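From what I remember, a new disc definition is just a small JSON file at data/<namespace>/jukebox_song/<name>.json in your data pack (the values below are placeholders; double-check the exact fields on the wiki):

```json
{
  "sound_event": "mypack:music_disc.my_song",
  "description": {
    "translate": "jukebox_song.mypack.my_song"
  },
  "length_in_seconds": 95.0,
  "comparator_output": 7
}
```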
I know that one of the rules was "no data packs", but hey, it is a cool thing if someone doesn't know about it. (also, in my opinion this wouldn't break the "no data packs" rule, because the "no data packs" rule seems mostly related to not using data packs to set blocks in the world)
- My use case was a bit different: I was trying to use Chromium Headless in Playwright as a simple way to render an element on a page, and I experienced tons of random "Page crashed" and "Timed out after 30s" errors from Playwright.
I switched to Firefox Headless and these issues stopped happening. In fact, switching to Firefox made the renderer ~3x FASTER than Chromium Headless!
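I drive it from Java, but the idea, sketched here with Playwright's Node API (the URL, selector and output path are placeholders), is basically:

```ts
import { firefox } from "playwright";

// Render a single element of a page to an image using headless Firefox
async function renderElement(url: string, selector: string, outputPath: string) {
  const browser = await firefox.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.goto(url);
    // Screenshot only the element we care about, not the whole page
    await page.locator(selector).screenshot({ path: outputPath });
  } finally {
    await browser.close();
  }
}
```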
The Blitz project seems very interesting and is actually what I needed: I'm only using a headless browser because rendering everything manually using Java Graphics2D would be a pain, since the thing I'm rendering has a fairly complex layout and I really didn't want to reinvent the wheel by creating my own layout engine.
- While it isn't a "leak", the reason it has popped up recently is that people found out that there are beta versions of various apps and games in the dump.
As far as I know, at the time, no one had made a list of what apps were included, or whether the included apps are prototype versions of famous apps (Angry Birds, Cut the Rope, etc), which is what people are doing right now. So now people are scraping and tracking which apps are in the dumps and whether any of them are special (prototype, unreleased, etc).
Dismissing the project as "straight stupid" is dumb. If you think about it, is the Archive Team also "straight stupid" because they didn't extract the dump and check whether it had prototype and unreleased versions of apps? I don't think so.
- I may be wrong about the "TS_USERSPACE" environment variable, but I think that you don't need to disable it.
If you were using userspace networking, you wouldn't be able to connect to other services in your tailnet without setting up an HTTP/SOCKS5 proxy https://tailscale.com/kb/1112/userspace-networking/
- For those who want to run Tailscale on their Docker containers, but don't want to switch to images based off linuxserver.io, you can still run Tailscale as a sidecar container, and use "network_mode: service:tailscale"
I do that for my containers and it is incredibly useful for cross-container communication, especially for containers that are hosted on different dedicated servers.
https://mrpowergamerbr.com/us/blog/2023-03-20-untangling-you...
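A minimal compose sketch of the idea, assuming the official tailscale/tailscale image (the service names, the placeholder app image and the auth key are made up; you may need to tweak TS_USERSPACE and the capabilities depending on your setup):

```yaml
services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: my-app                       # machine name on the tailnet
    environment:
      - TS_AUTHKEY=tskey-auth-REPLACE-ME   # placeholder auth key
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ./tailscale-state:/var/lib/tailscale
    devices:
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN

  my-app:
    image: nginx:latest                    # placeholder for your actual service
    network_mode: service:tailscale        # share the tailscale container's network
    depends_on:
      - tailscale
```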
- One reason that makes me dislike NOTIFY/LISTEN is that issues with it are hard to diagnose.
Recently I had to stop using it because after a while all NOTIFY/LISTENS would stop working, and only a database restart would fix the issue https://dba.stackexchange.com/questions/325104/error-could-n...
- Are you sure? I created a JavaScript Polyglot context in Java with GraalJS CE 21 and...
> Exception in thread "main" Polyglot sandbox limits can only be used with runtimes that support enterprise extensions. The runtime 'GraalVM CE' does not support sandbox extensions.
- They didn't call it out as a concern; they said at the beginning of the section, "Halloween Problem is a database error that a database system developer needs to be aware of."
To be honest, just like you I also thought that it was related to PostgreSQL or other popular SQL databases somehow, until I re-read the section multiple times.
- I think that LXD as a Proxmox alternative is a very underrated idea to be honest.
Don't get me wrong, Proxmox is a pretty good piece of software, but if your workload isn't tailored to "VMs + private links between your clusters + shared storage", you end up adding way too much complexity to your stack even though you aren't using the really useful features that Proxmox provides.
So if you are in the "I just want to run my services" crowd, an Ubuntu Server (or any other distro, really) running Docker on bare metal + LXD for anything that can't run on Docker is way simpler to manage. Especially because running Docker on Proxmox is not fun (too cumbersome to run it within an LXC container + ZFS, running a big fat VM with Docker defeats the point since you can't back up individual containers with Proxmox anymore, and running Docker on the hypervisor is a big no-no)
At the end of the day, nothing in Proxmox has a special magic sauce that makes it tick, and sometimes that complexity may be super cumbersome when you just want to run some dang Docker containers for your swifty new app. https://mrpowergamerbr.com/us/blog/2022-11-20-proxmox-isnt-f...
- > Tell that to people that run databases with docker and kubernetes
To be honest my original message wasn't clear enough: what I meant is that I like using Docker when I have a container image ready to run, "stateless" in the sense that the image itself is unmodifiable and any changes to it will be lost after a restart. (This doesn't include bind mounts and stuff like that.)
- > Docker / kubernetes is eating it up
Is LXD's purpose the same as Docker/Kubernetes? I use both LXD and Docker, and to me they are tools that use the same technology (containerization) but for different purposes. Docker is for stateless containers, used to containerize services. LXD is for stateful containers, used to containerize operating systems. LXD can also run VMs while Docker can't.
I think of LXD as a sort of Proxmox alternative (because you can manage LXC containers and VMs with it) instead of a Docker/Kubernetes alternative. In fact, I have actually migrated my systems from Proxmox to Ubuntu Server with Docker, for stateless applications, and LXD, for when I need to containerize an entire OS or run VMs for my friends.
- While it is obfuscated, newer versions provide deobfuscation mappings to make modders' lives easier
- While I haven't found an "issue" tracking this (sorry, I don't know how PostgreSQL tracks open bugs), they do know about it, since I've already seen the issue being discussed on PostgreSQL's mailing list.
Here's my issue on StackExchange, for anyone that wants to delve deeper into my issue: https://dba.stackexchange.com/questions/325104/error-could-n...
Here's a thread talking about the issue; while the OP's issue doesn't seem to match exactly what I was experiencing, one of the replies describes my exact issue: https://postgrespro.com/list/thread-id/2546853
- If you are using NOTIFY/LISTEN, keep an eye out for long-running queries in your database. If PostgreSQL ends up vacuum freezing your tables while a long-running query is active, it will delete files from the pg_xact folder, and that will bork out any LISTEN query until you fully restart the database.
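A quick way to spot them via pg_stat_activity (the 5 minute threshold is arbitrary):

```sql
-- List non-idle queries that have been running for a while
SELECT pid, now() - query_start AS runtime, state, query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;
```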
- This is weird. Currently I'm on the Free plan, but I always wanted to upgrade to financially support Tailscale; however, now that the Starter plan doesn't have SSH and Funnel, it makes more sense to stay on the Free plan instead.
It doesn't even make any sense: if it is available on the Free plan, why not give it to the Starter plan too?
Also, I may be misunderstanding the billing page, but it looks like Tailscale removed soft limits? On my billing page, it shows "Your tailnet has 3 more users than you are paying for. That’s fine, we have soft limits. Play around and upgrade your plan before April 30th 2024."
- I think a better way to approach it is, instead of adopting the title outright, ask yourself "what would an athlete/writer/artist/etc. do in this situation?"
- You can tame Docker with iptables by using the DOCKER-USER chain
There are a bunch of tutorials about how to tame Docker, but this is the solution that I use, and it was the simplest one I've found: https://unrouted.io/2017/08/15/docker-firewall/
This way, even if you mistakenly expose your container ports to 0.0.0.0, they won't be reachable until you explicitly allow them with iptables.
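A minimal sketch of the idea (eth0 and the allowed subnet are placeholders; the linked tutorial goes a bit further than this):

```sh
# Drop forwarded traffic coming in on the public interface unless it comes
# from a trusted subnet; Docker evaluates DOCKER-USER before its own rules
iptables -I DOCKER-USER -i eth0 ! -s 203.0.113.0/24 -j DROP
```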
- While yes, EA did let them continue the project, after EA's request the project slowed down considerably.
And about FreeSO: FreeSO is a The Sims Online reimplementation. The Sims Online used an improved version of The Sims 1's engine, and due to the similarities, riperiperi decided to make a The Sims 1 reimplementation based off FreeSO's engine, with support for mobile devices.
And besides, in theory EA couldn't even make that request: the engine is a clean room implementation, and it doesn't use any of the original game assets.
- One thing that always fascinated me is how Maxis liked to push tools to create custom content for the game, and how the content creation tools came out even before the game itself, so users could create content to use once the game officially launched.
When I was younger I spent a lot of time using your The Sims Transmogrifier tool, at the time I didn't know that the tool was created by one of The Sims 1's developers.
Will Wright even talked about the The Sims 1 modding community, and how it shaped the future of The Sims and its expansion packs: https://youtu.be/hLHnmRtqNno
- Allow the server to return a modal/toast in the response and, in your frontend, create a "global" listener that listens to `htmx:afterRequest` and checks if the response contains a modal/toast. If it does, show the modal/toast. (or, if you want to keep it simple, show the content in an alert just like you already do)
This way you create a generic solution that you can reuse for other endpoints too, instead of having to create a custom event listener on the client for each endpoint that may require special handling.
If you are on htmx's Discord server, I talked more about it on this message: https://discord.com/channels/725789699527933952/909436816388...
At the time I used headers to indicate whether the body should be processed as a trigger, due to nginx header size limits and header compression limitations. Nowadays what I would do is serialize the toast/modal as JSON inside the HTML response itself and then, on `htmx:afterRequest`, parse any modals/toasts in the response and display them to the user.
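A rough sketch of that approach (the `data-toast` attribute and the Toast shape are made up for this example):

```ts
// One global listener handles toasts/modals for every htmx request
interface Toast {
  type: "success" | "error";
  message: string;
}

document.body.addEventListener("htmx:afterRequest", (event) => {
  const xhr = (event as CustomEvent).detail.xhr as XMLHttpRequest;

  // Parse the returned fragment and look for an embedded toast payload
  const fragment = new DOMParser().parseFromString(xhr.responseText, "text/html");
  const toastElement = fragment.querySelector("template[data-toast]");
  if (toastElement === null) return;

  const toast: Toast = JSON.parse(toastElement.getAttribute("data-toast")!);
  // Keeping it simple: an alert(), but this is where you would show a proper modal/toast
  alert(`[${toast.type}] ${toast.message}`);
});
```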