
CraigJPerry
Joined 5,368 karma
I'm @CraigJPerry2 on twitter.

  1. >> barely says anything about private sector money creation

    It opens with the words "Money is created in the Canadian economy in two main ways: through private commercial bank loans..."

    the introduction continues "... It also discusses how private commercial banks create money..."

    And then it goes on to do exactly that in detail. I'm struggling to understand why you wrote that.

    >> Your view of fractional reserve banking is rather.. unorthodox

    Unorthodox is not the word for what has, at this point, been published by most central banks in western economies with a sovereign currency: from the Fed to the Bank of Canada, from the Bank of England to the Bundesbank, and so on.

    We're unfortunately living in a Copernican moment. We now better understand how money works, but we're not permitted to say the earth orbits the sun just yet.

    It's utterly ludicrous that the idea of fractional reserve banking is propagated in today's world as having any relevance to how banking works in these economies.

    >> why do banks bother with deposits, then

    Very simply: cost of funds.

  2. >> I need 1 agent that successfully solves the most important problem

    In most of these kinds of posts, that's still you. I don't believe I've come across a pro-faster-keyboard post yet that claims AGI. Despite the name, LLMs have no agency; it's still all on you.

    Once you've defined the next most important problem, you have a smaller problem: translate those requirements into code which accurately meets them. That's the bit where these models can successfully take over. I think of them as a faster keyboard, and I've not seen a reason to change my mind yet despite using them heavily.

  3. Canada is not an exception and operates via the same mechanism https://lop.parl.ca/sites/PublicWebsite/default/en_CA/Resear...

    Fractional reserve is a model only for textbooks; it is not an accurate model of how the banking system works in most western economies with a central bank and sovereign currency today.

    >> The more useful limitation in economic terms and in legal terms is on the amount of capital banks need to hold

    Well, this is usually the biggest of several limitations affecting whether a loan would be profitable for a bank to make, so I don't entirely disagree. But it's a legislative control; there are no "economic terms" here, because in general no school of economics understands, or has anything to say about, this control, which you correctly point out exists and is central to loan decision-making. People can argue about the degree of centrality, since it's not the only factor, so let me put it this way: it's central in a way that any notion of "fractional reserve" simply is not.

  4. > fractional reserve banking and do the math

    All models are wrong, but some are still useful. This model isn't useful at all, since the fraction was legislated to zero years ago.

  5. The only thing I'd add is mas, for Mac App Store apps you want to ensure are installed, but otherwise I run pretty much the same setup.

    When I install a fresh macOS I have two commands to run: install Nix using the Determinate Systems installer, then apply my nix config.

    It's not quite as streamlined as NixOS, but good enough.

    My biggest remaining pain point is dev envs. I've been leaning into adding a flake to each project; for example, I have a single project written in Scala 2.13, and when I cd into that project dir, the correct JVM version, sbt, IntelliJ etc. are installed, along with some useful env vars, shell aliases and so on. That's all great (I haven't felt the need to adopt devenv.sh or flox yet), but I do find myself wanting a devcontainer sandbox workflow more often these days (blame CLI coding "agents"); I lean on VS Code for that rather than Nix so far. In Python (where I spend a lot more time), uv loses a lot of value under Nix, and I don't like that.
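    A minimal sketch of such a per-project flake. The package names (jdk11, sbt), the flake-utils input, and the env var and alias are illustrative assumptions, not my actual config:

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  inputs.flake-utils.url = "github:numtide/flake-utils";

  outputs = { self, nixpkgs, flake-utils }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = nixpkgs.legacyPackages.${system};
      in {
        # One shell per project: the right JVM and sbt, plus env vars/aliases.
        devShells.default = pkgs.mkShell {
          packages = [ pkgs.jdk11 pkgs.sbt ];
          shellHook = ''
            export PROJECT_ENV=dev   # illustrative env var
            alias sbtc="sbt compile" # illustrative alias
          '';
        };
      });
}
```

    Paired with direnv's nix integration (a "use flake" line in .envrc), the shell activates automatically on cd into the project dir.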

  6. > Entire categories of illegal states and transitions can be eliminated.

    I have an over-developed, unhealthy interest in the utility of types for LLM generated code.

    When an LLM is predicting the next token to generate, my current level of understanding says it makes sense that the LLM's attention mechanism would use the surrounding type signatures (in an explicitly typed language) or the compiler error messages (where a language leans on implicit typing) to better predict that next token.

    However, that does not seem to be the behaviour I observe. What I see is more akin to tokens in the type-signature position often being generated without any apparent relationship to the instructions being written. It's common to generate code that the compiler rejects.

    That problem is easily hidden and worked around: just wrap your LLM invocation in a loop, feed the compiler errors back in each time, and you now have an "agent" that can stochastic-gradient-descent its way to a solution.
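    A toy sketch of that loop. The fake_llm stub is entirely made up for illustration (it stands in for a real model call, returning broken code until it sees feedback), and Python's compile() stands in for the compiler:

```python
def fake_llm(prompt, feedback=None):
    # Stand-in for a model call: first attempt has a syntax error,
    # with error feedback it returns a corrected candidate.
    if feedback is None:
        return "def add(a, b) return a + b"   # missing colon
    return "def add(a, b): return a + b"

def generate_until_compiles(prompt, max_tries=5):
    """Wrap generation in a loop, feeding compiler errors back in."""
    feedback = None
    for _ in range(max_tries):
        code = fake_llm(prompt, feedback)
        try:
            compile(code, "<generated>", "exec")  # the "compiler" gate
            return code
        except SyntaxError as e:
            feedback = str(e)  # feed the error back, as described above
    raise RuntimeError("no compiling candidate found")
```

    The loop terminates on the first candidate the compiler accepts, whether or not the model "understood" the types involved.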

    Given this, you could say: what does it matter? Even if an LLM doesn't meaningfully "understand" the relationship between types and instructions, there's already a feedback loop and therefore a solution available, so why care whether an LLM treats types as a tool for accurately modelling the valid solution space?

    Well, I can't help thinking this is really the crux of software development. Either you're writing code to solve a defined problem (valuable), or you're doing something that may mimic that to some degree but is not accurate (bugs).

    All that said, pragmatically speaking, software with bugs is often still valuable.

    TL;DR: I'm currently thinking humans should always define the type signatures and test cases; these are too important to let an LLM "mid" its way through.

  7. >> Coding AIs design software better than me

    Absolutely flat out not true.

    I'm extremely pro-faster-keyboard: I use the faster keyboards at almost every opportunity I can. I've been amazed by debugging skills (in fairness, I've also been very disappointed many times), I've been bowled over by my faster keyboard's ability to whip out HTML UIs in record time, and I've been genuinely impressed by its ability to flag flaws in PRs I'm reviewing.

    All this to say: I see lots of value in faster keyboards, but add all the prompts, skills and hooks you like, explain modularisation in as much detail as you like, and still "agents" cannot design software as well as a human.

    Whatever the underlying mechanism of an LLM (to call it a next-token predictor is dismissively underselling its capabilities), it does not have a mechanism to decompose a problem into independently solvable pieces. While that remains true (and I've seen zero precursor of a coming change here; the state of the art today is equivalent to having the agent employ a todo list), LLMs cannot design better than humans.

    There are many simple CRUD line-of-business apps where they design well enough (more accurately, the problem is small/simple enough) that this lack of design skill in LLMs or agents doesn't matter. But don't confuse that with being able to design software in the general case.

  8. > The statute says "or" and an a) b) c) bullet point

    There is no c) bullet point; the part you misinterpreted as an "or" is an AND:

    "with intent to cause that person to believe that immediate unlawful violence will be used against him..."

    >> A plasterer who admitted to stirring up racial hatred...

    Admitted?

  9. >> What you cannot do is calling for violence against them.

    > This is blatantly disingenuous. The Public Order Act 1986 ... <snip>... criminalize "insulting" and "abusive" words ...

    Do you know what I find disingenuous here? You hooked me with the words I quoted above, so I went to the legislation:

    https://www.legislation.gov.uk/ukpga/1986/64

    And the thing that stood out was the change of meaning when the full quote is provided:

    ____

    Fear or provocation of violence. (1)A person is guilty of an offence if he—

    (a)uses towards another person threatening, abusive or insulting words or behaviour, or

    (b)distributes or displays to another person any writing, sign or other visible representation which is threatening, abusive or insulting,

    with intent to cause that person to believe that immediate unlawful violence will be used against him or another by any person, or to provoke the immediate use of unlawful violence by that person or another, or whereby that person is likely to believe that such violence will be used or it is likely that such violence will be provoked.

    ____

    If you have to rely on this kind of disingenuous trickery to make a point, then you don't have a point.

    The GP is correct in their statement:

    >> What you cannot do is calling for violence against them.

    You are incorrect in yours:

    > This is blatantly disingenuous.

  10. >> You could argue that NixOS hides a lot of complexity

    They both have the same complexity in that scenario. Underneath, the configuration is very comparable for both, but NixOS provides an easy abstraction for that specific case.

    If you can stay on the happy path with NixOS then it's pretty lovely. I've even adopted nix-darwin for my Macs too.

    I'd still deploy Red Hat/Fedora over NixOS on anything revenue-generating, though. The problem is when you have to come off the happy path in NixOS, and now you're debugging some interestingly written C++ code that evaluates a language that has a derivation expressing what you wanted done. Contrast with the Red Hat situation: it's simpler, but less convenient in the general case.

  11. I don't usually, so unfortunately I don't know; I was just curious in this case.
  12. That's quite an impressive amount of functionality for not much code. Tokei says 4.4k SLOC in the ui dir, which contains the editor implementation. I was at over 25k SLOC for a less ambitious editor in TypeScript recently.

    I'm also a bit jealous of how clean the re-frame usage model is; I really liked the dominoes explanation when I first learned about it. https://day8.github.io/re-frame-wip/dominoes-60k/

  13. How does that work? Is that an open-source solution, like the ZCRX stuff with io_uring, or does it require proprietary hardware setups? I'm hopeful that today's open-source solutions are competitive.

    I was familiar with Solarflare and Mellanox zero-copy setups in a previous fintech role, but at that time it all relied on black boxes (specifically out-of-tree kernel modules, delivered as blobs without DKMS or equivalent support; a real headache to live with) that didn't always work perfectly. It was pretty frustrating overall, because the customer paying the bill (rightfully) had less than zero tolerance for performance fluctuations. And fluctuations were annoyingly common despite my best efforts (dedicating a core to IRQ handling, bringing up the kernel masked to another core, then pinning the user-space workloads to specific cores, and so on). It was quite an extreme setup: GPS-disciplined oscillator with millimetre-perfect antenna wiring for the NTP setup, etc. We built two identical setups, one in Hong Kong and one in New York. Ah, very good fun overall, but frustrating because of stack immaturity at that time.

  14. > but LLMs can only learn languages that programmers write a sufficient amount of code in

    I wrote my own language, and LLMs have been able to work with it at a good level for over a year. I don't do anything special to enable that: just front-load some key examples of the syntax before giving the task. I don't need to explain concepts like iteration.

    Also, LLMs can work with languages with unconventional paradigms; kdb comes up fairly often in my world (an array language, and written right to left at that).

  15. You're arguing that accounting is misleading: that we can ignore the balance and count only the assets column. A summation of assets ignoring liabilities is not a measure of wealth.
  16. Refuelling a cargo ship can take over a day. Quite a boring but well-paid job.

    How many kWh are you lifting at a time with a container? How many kWh are you pumping in the same period?
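    To put rough numbers on those two questions, every figure below is my own round assumption: ~1,000 tonnes of heavy fuel oil bunkered, ~11,600 kWh/tonne thermal, ~45% engine efficiency, and ~2 MWh per battery container:

```python
# Back-of-envelope comparison; every constant here is an assumption.
FUEL_TONNES = 1_000      # a plausible bunkering quantity for a large ship
KWH_PER_TONNE = 11_600   # approx thermal energy of heavy fuel oil
ENGINE_EFF = 0.45        # large two-stroke marine diesels are very efficient
CONTAINER_KWH = 2_000    # a containerised battery pack, roughly

thermal_kwh = FUEL_TONNES * KWH_PER_TONNE   # ~11.6 GWh thermal pumped aboard
useful_kwh = thermal_kwh * ENGINE_EFF       # ~5.2 GWh useful at the shaft
containers = useful_kwh / CONTAINER_KWH     # ~2,600 container swaps
print(f"{useful_kwh / 1e6:.1f} GWh useful -> {containers:.0f} containers")
```

    Under these assumptions, matching even the useful (post-efficiency) energy of one day's pumping would take thousands of container lifts.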

  17. 6k a month over the past decade is circa 1.7m today, depending on which index fund you chose.

    Assuming a 4% drawdown (conventionally agreed to be safe), that's over 5.5k a month.
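    A quick check of that arithmetic. The 15% annual return is my own assumption standing in for "which index fund you chose"; it lands in the same ballpark as the 1.7m figure:

```python
# Future value of 6k/month contributions, compounded monthly.
monthly = 6_000
years = 10
annual_rate = 0.15                       # assumed; pick your own index fund
r = annual_rate / 12                     # monthly rate
n = years * 12                           # number of contributions
fv = monthly * ((1 + r) ** n - 1) / r    # ordinary-annuity future value
drawdown = fv * 0.04 / 12                # the 4% rule, expressed monthly
print(f"pot: {fv:,.0f}  monthly drawdown: {drawdown:,.0f}")
```

    At this assumed rate the pot comes out around 1.65m and the 4% rule pays out roughly 5.5k a month.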

  18. €4,000 plus tax to replace the module that contains the fuse. Insane.

    The Ford Transit Custom PHEV costs £4,500 to replace the timing belt. Access issues mean dropping the hybrid battery and parts of the subframe. Compare with the Mk8 Transit: I've done the wet belt myself on that, and it requires no special tools (well, I bought a specific crank pulley puller for £20) and can be done in a day on the driveway. I believe in some markets the replacement schedule is down to 6 years for the new PHEV, due to all the wet-belt failures on older models.

    So far my favourite brand to work on has been Mazda; the engineering is very thoughtfully done, with consideration for repairs.

    I hear a lot of praise for Toyota, but it's from people who haven't worked on a car themselves rather than from mechanics, and they must be talking about Toyotas from a bygone era, because I'm not impressed with the engineering of a 2019 Corolla at all, specifically various parts of the electrical system. I believe that was the most popular car in the world at the time.

    Tesla is remarkably well done. Simplicity is underrated. So much so that I bought one with the intention to keep it for a looooong time.

  19. >> you have a price ceiling by definition

    Price ceiling definition: a government-imposed legal maximum price

    My original comment: First come first served is a better <snipped for brevity>

    This is not a legal maximum price; it is a legal maximum on the derivative of price.

    > I didn't mention or allude to supply at all

    >> get what I need with 100% chance than get what I need cheaply with 5% chance

    How do you square these two statements? One claims 100% supply certainty when no such thing exists in this context. Without making certain assumptions (unstated but ludicrous assumptions are rife in economics discourse), you can't state much of value about which buyer will get the goods in the surge-pricing model; you especially cannot say that the buyer with the larger wallet will always win. Think for a second about what assumptions you've made to this point in the conversation; you're still down the rabbit hole of price ceilings in the comment chain thus far.

    >> The empirical history of price ceilings is there

    Not disputed, but as per your call-out of the definition above, not relevant.

    To make the point further: the name for a limit on the rate of change of price is not "price ceiling", any more than a car's 0-60 time is its top speed.
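    A toy way to see the difference (the function names and numbers below are mine, purely illustrative): a ceiling caps the level itself, while a rate limit only caps the step, so a rate-limited price can still pass any fixed level given enough time.

```python
def ceilinged(price, cap=100):
    """A price ceiling: the level itself can never exceed cap."""
    return min(price, cap)

def rate_limited(prev, proposed, max_rise=10):
    """A cap on the derivative: each step is bounded, the level is not."""
    return min(proposed, prev + max_rise)

price = 100
for _ in range(20):                      # demand keeps pushing the price up
    price = rate_limited(price, price * 2)
# After 20 bounded steps the rate-limited price has reached 300,
# well past where a 100-unit ceiling would have stopped it.
```

    The ceilinged price is pinned at 100 forever; the rate-limited one climbs without bound, just more slowly.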

    >> you thought that I was talking about supply instead of resource allocation

    My challenge to you is to name the assumptions you've identified in your reasoning around resource allocation. I'm confident I can point out the deficiencies in your model, because that is the nature of models.

    >> you should study elementary economics

    That's a great idea. A really good follow-on is to identify the logical fallacies you discover in that process, especially those so widely accepted that it's not a stretch to say they underpin the discipline. A good example would be the conjectural origin theory of money, but I digress.

  20. >> Thinking that retailers could keep RAM prices low and also keep it in stock for you is irrational.

    Not a claim I made.

  21. > This is called a price ceiling

    The act of eliminating surge pricing is not a price ceiling; that's a different thing, requiring more than simply swapping surge pricing for first come, first served. You've created a strawman.

    > I'd rather pay extra and get what I need with 100% chance

    False dichotomy. Neither approach increases supply. Of course, that's not what you'll hear from economists, who can hand-wave away bullwhip effects with simple "this model assumes X" statements that go unquestioned in the conversations citing the given model's findings (but I digress). According to economists, both approaches do increase supply: the theory goes that the price-gouging retailer invests in more factory capacity, or that the factory owner, buoyed by vibrant secondary-market activity, views increased production investment as a safe bet. Maybe there's some truth in the latter...

    > If you're concerned with wealth inequality

    I'm concerned with lazy financial engineering beating hard work. Why should the scrappy but innovative startup be excluded from resources in favour of the sclerotic incumbent with a deeper wallet?

  22. It's speculative price gouging. Calling it "surge pricing" doesn't stop the erosion of consumer trust in the market. Watch now as more people more readily jump to price-fixing conclusions, not helped by the inevitable further increase in speculation through feedback loops and the resultant volatility.

    First come, first served is a better principle than "surge pricing". A lottery is a better principle than "surge pricing". In the case that someone over-purchased, they're free to dispose via the secondary market if the value to them is lower than the out-of-stock price, i.e. decentralised pricing (and profits). Secondary-market sales are just more efficient: they occur at negotiated prices that reflect true individual valuation, not the retailer's speculation.

    I'd rather reward diligence and personal responsibility: if you monitor market trends, anticipate needs, and act quickly (such as buying RAM ahead of a known upcoming supply crunch), you're rewarded with access at the original price, rather than relying passively on wealth to solve problems. First come, first served values effort and foresight. Scarcity is managed through time and effort rather than money.

  23. I think the original post was taken down after a short while, but antirez was similarly nerd-sniped by it and posted this, which I keep a link to for posterity: https://antirez.com/news/150
  24. What would be a good example of the kinds of things a 100-line function would be doing?

    I don't see that in my world, so I'm naively trying to inline functions in codebases I'm familiar with, and not really finding value in the results I can dream up.

    For one, my tests would be quite annoying: large, and with too much setup for my taste. And I don't think I'd like having to scroll a function, especially if I had to make changes to the start and the end of the function in one commit.

    I'm curious about these "long script" flavoured procedures: what are they typically doing?

    I ask because I really strongly agree with some of the other stuff you mentioned, like "Focus on separating pure code from stateful code". That is such an undervalued concept, and it's an absolute game changer for building robust software: can I extract a pure function for this, and separately have a function to coordinate the side effects? But that's incompatible with overly long functions; those side-effectful functions would be so hard to test.
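    A tiny illustration of that split (my own toy example, not from the article): the calculation is a pure function and trivially testable, while a thin coordinator owns all the side effects:

```python
def apply_discount(prices, rate):
    """Pure: same inputs always give same outputs, no I/O anywhere."""
    return [round(p * (1 - rate), 2) for p in prices]

def run(path, rate):
    """Impure coordinator: reads the file, delegates to the pure core, prints."""
    with open(path) as f:
        prices = [float(line) for line in f if line.strip()]
    for p in apply_discount(prices, rate):
        print(p)
```

    The pure core can be tested with plain lists and no setup; only the short coordinator needs any file or output machinery, which is exactly what a single 100-line function makes impossible.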

  25. >> You can see my instructions in the coding session logs

    Such a rare (but valued!) occurrence in these posts. Thanks for sharing.

  26. At 650 TB it's not a memory-bound problem:

    Working memory requirements:

        1. Assume a date is 8 bytes
        2. Assume 64-bit counters
    
    So for each date in the dataset we need 16 bytes to accumulate the result.

    That's roughly 180 years' worth of daily post counts per megabyte of RAM, and the dataset in the post was just one year.
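    Spelling out that arithmetic (16 bytes per date, as above):

```python
BYTES_PER_ENTRY = 8 + 8                   # date key + 64-bit counter
per_mib = (1 << 20) // BYTES_PER_ENTRY    # entries that fit in one MiB: 65,536
years_per_mib = per_mib / 365             # ~180 years of daily counts per MiB
print(per_mib, round(years_per_mib))
```

    So the accumulator for a one-year dataset is a few kilobytes, regardless of how many terabytes are scanned.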

    This problem should be mostly network-limited in the OP's context; decompressing snappy-compressed Parquet should run at circa 1 GB/sec. The "work" of parsing a string into a date and accumulating isn't expensive compared to snappy decompression.

    I don't have a handle on the 33% runtime difference between DuckDB and Polars here.

  27. Yeah, that's the one, and the Grafana one is Tanka.
  28. Imagine thousands of Helm charts. Your only abstraction tools are an umbrella chart or a library chart; there isn't much more in Helm.

    I liked KRO's model a lot, but stringly typed text templating at the scale of thousands of services doesn't work; it's not fun when you need to make a change. I kinda like jsonnet plus the Google CLI whose name I forget right now, and the abstraction the Grafana folks built too, but ultimately I decided to roll my own thing and leaned heavily into type safety for it. It's ideal. With any luck I can open-source it. There are a few similar ideas floating around now; Scala Yaga is one.
