https://djharper.dev
my public key: https://keybase.io/djhworld; my proof: https://keybase.io/djhworld/sigs/sEHKtywvFxREf5CuJZS2pkZQ4u_1XktewiSAE764kbM
- I think there's definitely something to the point about there being a huge learning curve.
Double-entry bookkeeping isn't that difficult, but that's easy to say once you've been doing it for a while.
I've been doing PTA since around 2018 and there are definitely lessons I've learned along the way, along with plenty of mistakes.
I think the main benefit for me is that the system gives you a complete picture of your finances. The commercial services you can pay for just give you a view into a certain slice (e.g. open banking in the UK/Europe to see your current account(s)). I think mint.com did something similar in the US but it never came over here; I don't know if it still exists. Maybe that's enough for most people, but I want everything: investments, liabilities, assets, etc. None of these commercial offerings have that because it's so complex and niche - e.g. your open banking provider won't tell you how your pension is doing.
It's also just nice to have the provenance of transactions. For example, if you receive some shares from work, sell them, and the money ends up in your bank account, the incoming transaction will just be the net proceeds - it won't tell you whether you paid any tax before that point. PTA gives you more of a complete picture, tracking the whole chain of events that led up to that transaction landing in your bank account. Overkill for most people? Probably.
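To make that concrete, here's a minimal sketch of that kind of chain using beancount's Python loader (beancount is the tool I use; the dates, accounts, ticker and amounts here are all invented for illustration):

```python
from beancount import loader

# Hypothetical ledger fragment: an RSU vest, then a sale where tax is withheld
# at source and only the net proceeds reach the bank account.
LEDGER = """
2024-01-01 open Assets:Broker:Shares
2024-01-01 open Assets:Bank:Current
2024-01-01 open Income:Employer:RSU
2024-01-01 open Expenses:Tax:Income

2024-06-01 * "Employer" "RSU vest"
  Assets:Broker:Shares       100 CORP {50.00 GBP}
  Income:Employer:RSU   -5000.00 GBP

2024-06-03 * "Broker" "Same-day sale, tax withheld at source"
  Assets:Broker:Shares      -100 CORP {50.00 GBP}
  Expenses:Tax:Income    2350.00 GBP
  Assets:Bank:Current    2650.00 GBP
"""

# load_string parses and validates; unbalanced transactions would show up in errors.
entries, errors, options_map = loader.load_string(LEDGER)
print(len(entries), "entries,", len(errors), "errors")
```

The bank statement only ever shows the 2650.00 arriving, but the ledger keeps the vest, the withheld tax and the net proceeds linked together.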
- I've been beancount'ing for years now
As we've crossed into the new year I've switched to a similar directory setup to the OP's, with one file per year. Previously I just had a single file going back to 2022, which ended up being something like 2 million lines of text and was starting to bog down the Emacs plugin.
What I appreciate most about this approach to personal finances is that it just tracks everything: investments, pensions, RSUs, bank accounts. You could even go as far as accounting for any resource that's modellable, e.g. energy usage in kWh vs. bills. I probably wouldn't go that far though :D
Also, you can build a bunch of tooling around it; with the advent of LLMs my toolset for beancount management has expanded quite significantly. Most recently I got Claude to rewrite my transaction rules engine https://djharper.dev/post/2025/08/19/using-llms-to-turn-scri... into something nicer with a UI. This would have taken days to build in the before times, and I probably wouldn't have bothered because it's overkill for one user (me).
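For context, a "transaction rules engine" here is nothing exotic - the sketch below is a hypothetical, stripped-down version of the idea (not the actual tool from the post): regex rules that map imported transaction descriptions to ledger accounts.

```python
import re

# Hypothetical rules: first regex that matches the imported description wins.
RULES = [
    (re.compile(r"TESCO|SAINSBURY", re.I), "Expenses:Food:Groceries"),
    (re.compile(r"TFL|TRAINLINE", re.I), "Expenses:Transport"),
    (re.compile(r"NETFLIX|SPOTIFY", re.I), "Expenses:Subscriptions"),
]

def categorise(description: str, default: str = "Expenses:Uncategorised") -> str:
    """Return the ledger account for an imported transaction description."""
    for pattern, account in RULES:
        if pattern.search(description):
            return account
    return default

print(categorise("CARD PAYMENT TO TESCO STORES 1234"))  # Expenses:Food:Groceries
```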
- My gaming PC sits next to the TV in my living room and I use it like a console. I have one of those cheap Bluetooth wireless keyboards with a trackpad for the really basic interactions, and then I just use a game controller for playing games.
Windows 11 has been fine for me; I don't interact with it much other than seeing it for a bit when launching games.
I honestly wouldn't mind giving Linux a go; the only downside is I made the mistake of buying an Nvidia graphics card. I'm not sure how much of a pain it is these days, but last time I tried it was a bit of a nightmare - the general wisdom at the time was to go with an AMD card.
- There's something quite pleasing about writing a message and living, at least for a while, with the thought of it causing some physical action (printing) in the real world. I mean, for all we know Andrew probably ran out of printer paper hours ago and the message has gone into the ether, but it's nice to think it happened!
- Interesting and fun
> Workers download, decompress, and materialize their shards into DuckDB databases built from Parquet files.
I'm interested to know whether the 5s query time includes this materialization step of downloading the files etc., or whether that result is from workers that have been "pre-warmed". Also, is the data in DuckDB held in memory or on disk?
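To make the question concrete, this is roughly the flow I'm picturing (my own sketch with made-up paths and table names, not the article's code) - whether the 5s covers the CREATE TABLE materialization or only the final SELECT, and whether the connection is file-backed or in-memory, is what I'm curious about:

```python
import duckdb

# A file-backed database persists between queries; duckdb.connect() with no
# argument would instead keep everything in memory.
con = duckdb.connect("shard_0042.duckdb")

# Materialization step: turn the downloaded Parquet files into a native table.
con.execute("""
    CREATE TABLE IF NOT EXISTS events AS
    SELECT * FROM read_parquet('shard_0042/*.parquet')
""")

# The query a "pre-warmed" worker would actually serve.
print(con.execute("SELECT count(*) FROM events").fetchone())
```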
- I watched the video and enjoyed it. I think the most interesting part to me was running the distributed Llama.cpp; Jeff mentioned it seems to work in a linear fashion, where processing hops between nodes.
Which got me thinking: how do these frontier AI models work when you (as a user) run a query? Does your query just go to one big box with lots of GPUs attached and run in a similar way, but much faster? Do these AI companies write about how their infra works?
- Over the past year or two I've just been paying for the API access and using open source frontends like LibreChat to access these models.
This has been working great for occasional use; I'd probably top up my account by $10 every few months. I figured the number of tokens I use is vastly smaller than what the packaged plans assume, so it made sense to go with the cheaper, pay-as-you-go approach.
But since I've started dabbling in tooling like Claude Code - hoo boy, those tokens burn _fast_, like really fast. Yesterday I somehow burned through $5 of tokens in the space of about 15 minutes. Sure, the Code tool is vastly different from asking an LLM about a certain topic, but I wasn't expecting such a huge leap. A lot of the token usage is masked from you, I guess, wrapped up in the ever-growing context plus the back-and-forth tool orchestration, but still.
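A rough back-of-envelope sketch of why it adds up so quickly - the prices and token counts below are assumptions I've picked purely for illustration, not any provider's real rates - is that the whole accumulated context gets resent on every tool-call round trip, so input tokens grow roughly quadratically with the number of turns:

```python
# Illustrative only: assumed prices and context sizes.
INPUT_PRICE = 3.00 / 1_000_000    # $ per input token (assumed)
OUTPUT_PRICE = 15.00 / 1_000_000  # $ per output token (assumed)

base_context = 20_000  # system prompt + project context resent every turn (assumed)
growth = 2_000         # tool output / diffs appended each turn (assumed)
output = 1_000         # model output per turn (assumed)
turns = 30             # tool-call round trips in a short session

cost = 0.0
for turn in range(1, turns + 1):
    # each turn pays for the entire accumulated context again
    cost += (base_context + growth * turn) * INPUT_PRICE + output * OUTPUT_PRICE
print(f"~${cost:.2f} across {turns} turns")  # lands in the ~$5 ballpark
```

A one-off chat question, by contrast, is only a few thousand tokens in total, which is why the pay-as-you-go approach felt almost free until now.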
- I've been tempted to use NixOS for my self-hosted setup but I just can't bring myself to do it.
My setup is quite simple: it's just a few VMs with one docker compose file for each. I have an Ansible playbook that copies the docker compose files across, and that's it. There's really nothing more to it than that, and maintenance is just upgrading the OS (Fedora Server) once the version reaches EOL. I tend to stay one version behind the release cycle, so I upgrade whenever that gets bumped.
I do use nix-darwin on my Macs, so I do _see_ the value of a Nix configuration, but I find it difficult to tell whether the effort of porting my setup to Nix is worth it in the long run - configuration files don't get written in a short time. Maybe LLMs could speed this up, but I just don't have it in me right now to make that leap.
- I've been reading the FAQ on the Stop Killing Games website - they are not arguing for refunds, they're arguing for an EOL plan to be put in place for games:
> No, we are not asking that at all. We are in favor of publishers ending support for a game whenever they choose. What we are asking for is that they implement an end-of-life plan to modify or patch the game so that it can run on customer systems with no further support from the company being necessary. We agree that it is unrealistic to expect companies to support games indefinitely and do not advocate for that in any way.
- With system builds like this I always feel the VRAM is the limiting factor in what models you can run, and consumer-grade stuff tends to max out at 16GB, or (sometimes) 24GB on the more expensive cards.
It does make me wonder whether we'll start to see more and more computers with a unified memory architecture (like the Mac) - I know Nvidia has the Digits thing, which has been renamed to something else.
- I would say I was sad to see it go, but I moved to karakeep (formerly known as Hoarder) a few months ago and it's been a perfect replacement. Most importantly, you can self-host it - which is great.
One thing that stood out to me in the article was this justification for the shutdown:
> But the way people use the web has evolved, so we’re channeling our resources into projects that better match their browsing habits and online needs.
I'd be really interested to hear what exactly they mean by this - are people visiting fewer websites? Walled gardens like Facebook etc. make it useless for bookmarking, so I can see how Pocket would be a bad fit there.
- I still listen to the radio to discover new music - not live shows, but catch-up episodes. It's definitely worth it; yes, some of the songs might not be to my taste, but at least you get the chance to make that determination yourself, and you get exposed to different stuff.
In my experience the algorithmic recommendation systems don't do this. They might throw you a wildcard here or there, but I tend to find they overfit on some niche and it just becomes tiresome. And you don't get the commentary from the DJ, who might describe who the artist is and what the song is called, and maybe add some flavour about their interactions with the artist over time.
- I lost faith in iCloud custom domains a few months ago. I was receiving the usual marketing emails etc. fine, but actual person-to-person emails? Sometimes replies would come through, other times nothing.
I thought at first that people were just ignoring me, but when a company reached out over SMS to respond to a complaint I had, they said their email reply had bounced, so they were contacting me by SMS instead.
Switched to fastmail at that point.
- I get the impression that running LLMs is a pain in general - it always seems to need the right incantation of Nvidia drivers, Linux kernel and a boatload of VRAM, along with making sure the Python ecosystem (or whatever you are running for inference) has the right set of libraries. And if you want multi-tenant processing across VMs, forget it, or pay $$$ to Nvidia.
The whole cloud computing world was built on hypervisors and CPU virtualization; I wonder if we'll see a similar set of innovations for GPUs at commodity-level pricing. Maybe a completely different hardware platform will emerge to replace the GPU for these inference workloads. I remember reading about Google's TPU hardware and thinking that would be the thing - but I've never seen anyone other than Google talk about it.
- https://plaintextaccounting.org/
Quite a bit of a ramp-up to get used to it, but no other tool outside of spreadsheets comes close in flexibility.
- I wrote this post earlier on the seemingly never-ending journey of getting a homelab together, along with a bunch of networky things, and the mistakes and lessons along the way. It's been somewhat cathartic to write, and excruciating to think about the money spent, but I'm pleased with the final result!
- I just followed the documentation here: https://beancount.github.io/docs/the_double_entry_counting_m... - it gives you the general principles to follow and you can pick it up from there.