
eis
4,339 karma

  1. I think you can absolutely compare them, and there is no added flexibility; in fact, there is less flexibility. There is added convenience, though.

    Given the huge factor in price difference, you can keep spare spot VMs on GCP idle and warm all the time and still be an order of magnitude cheaper. You have more features and flexibility with these, and you can discard them at will; they are not charged per month. Pricing granularity on GCP is per second (with a one-minute minimum), and you can fire up Firecracker VMs within milliseconds, as another commenter pointed out.

    Cloudflare Sandboxes have less functionality at a significantly higher price. The tradeoff is simplicity: they are focused on a specific use case for which they don't need additional configuration or tooling. The downside is that they can't do everything a proper VM can.

    It's a fair tradeoff, but I'd argue the price difference is very much out of balance. Then again, it seems to be a feature aimed primarily at AI companies, and there is infinite VC money to burn at the moment.

  2. Cloudflare Containers (and therefore Sandbox) pricing is way too expensive. It is also cumbersome to understand: it is inconsistent with the pricing of other Cloudflare products in terms of units, and it is split between memory, CPU and disk instead of being combined per instance. Worst of all, it is quoted in tiny fractions of a dollar per second:

        Memory: $0.0000025 per additional GiB-second
        vCPU: $0.000020 per additional vCPU-second
        Disk: $0.00000007 per additional GB-second

    The smaller instance types have very little processing power because they only get a fraction of a vCPU. But if you work out the monthly cost (the arithmetic is sketched at the end of this comment), it comes to:

        Memory: $6.48 per GB
        vCPU: $51.84 per vCPU (!!!)
        Disk: $0.18 per GB

    These prices are higher than the already expensive prices of the big cloud providers. For example, a t2d-standard-2 on GCP with 2 vCPUs, 8 GB of memory and 16 GB of storage would cost $63.28 per month, while the standard-3 instance on CF would cost a whopping $51.84 + $103.68 + $2.90 = $158.42, about 2.5x the price.

    Cloudflare Containers also don't have persistent storage and are by design intended to shut down when not in use. But then I could just as well go for a spot VM on GCP, which would bring the price down to $9.27, less than 6% of the CF container cost, and I'd get persistent storage plus a ton of other features on top.

    What am I missing?
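
    A minimal sketch of the per-second to per-month conversion above, assuming a 30-day month and an instance billed for every second of it:

        fn main() {
            // Seconds in a 30-day month.
            let secs_per_month: f64 = 30.0 * 24.0 * 3600.0; // 2,592,000

            // Cloudflare Containers unit prices quoted above (per second).
            let mem_per_gib_second = 0.0000025;
            let cpu_per_vcpu_second = 0.000020;
            let disk_per_gb_second = 0.00000007;

            println!("Memory: ${:.2} per GiB-month", mem_per_gib_second * secs_per_month);  // ~$6.48
            println!("vCPU:   ${:.2} per vCPU-month", cpu_per_vcpu_second * secs_per_month); // ~$51.84
            println!("Disk:   ${:.2} per GB-month", disk_per_gb_second * secs_per_month);    // ~$0.18
        }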

  3. Isn't that a bit of a holy grail, though? If your software can fact-check the output of LLMs and prevent hallucinations, then why not use that as the AI to get the answers in the first place?
  4. I don't think they are a hint that spacetime is not fundamental. But I do think spacetime has to be some kind of real physical reality.

    The modifications of spacetime that we see as effects of gravity are relative changes to our immediate surroundings or reference frame.

    Just as you can't tell who is actually stationary and who is moving when two objects are in freefall, and all you can note is the relative speed between the two, it would be equally valid to say the objects inside spacetime are getting distorted relative to spacetime.

  5. Even on my MacBook with Firefox the site feels strange when scrolling. It's not exactly struggling, but it feels unnatural and slightly off/slow/uneven, like it's on the edge of struggling. It's a bit hard to describe. The effect gets worse towards the middle section of the page with the side-scrolling logo circles. I removed that section via dev tools, which helped with performance. When that part of the page is in view I see 80-90% usage of one CPU core. But even after removing it I can saturate a core by scrolling around, especially towards the lower part of the page.

    It is indeed some of the worst-optimized CSS I've seen in a while. Weird for a project that is all about speed.

  6. If every site did that, it would be harder to quickly spot one in a long list of tabs. A neat trick, but I don't think it is a particularly good idea.
  7. The title on HN at the current time [0] says the police chief was raided.

    There is only one person mentioned, and therefore "his" can only refer to that person. "His" cannot refer to the newspaper.

    [0] "Paper investigating police chief prior to the raids on his office and home."

  8. The German site of the source speaks of 0.1 mm, so you were correct:

        > bei Toleranzen von teilweise nur 0,1 Millimeter

    ("with tolerances of, in places, only 0.1 millimetres")
    
    https://www.ipp.mpg.de/de/aktuelles/presse/pi/2020/01_20
  9. There is some meat to the story, I agree, but it's not surprising. The fine-tuning model will of course be small in file size and not take long to train, because by definition it applies changes to a small subset of the main model and is trained only on a small amount of input data. You can't use the small tuning model for "Teddies" with a query that has nothing to do with Teddies. You could see these small tuning models as diff files for the main model, and depending on the user query one can choose an appropriate diff to apply to improve the result for that specific query.

    When you train a model on new inputs to fine-tune it, you can save the weights that changed to a separate file instead of writing them back into the main file.

    In other words, one can see the small tuning models as selectively applied updates/patches, as in the rough sketch below.
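
    A loose illustration of that "diff" idea, with made-up names and a plain weight vector standing in for the model's tensors (not any particular fine-tuning format):

        // Illustrative only: the "main model" is a flat weight vector, and a
        // fine-tune is stored as a small delta that is added on top when chosen.
        fn apply_delta(base: &[f32], delta: &[(usize, f32)]) -> Vec<f32> {
            let mut patched = base.to_vec();
            for &(index, change) in delta {
                patched[index] += change; // only the weights the tuning touched
            }
            patched
        }

        fn main() {
            let base_weights = vec![0.10, -0.32, 0.57, 0.08];       // main model
            let teddy_delta = vec![(1, 0.04), (3, -0.02)];          // small "diff" file
            let patched = apply_delta(&base_weights, &teddy_delta); // picked per query
            println!("{:.2?}", patched); // [0.10, -0.28, 0.57, 0.06]
        }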

  10. It's not a 100 kB model. It's 100 kB of config files for a model that is several GB in size: a small trained layer to stick on top of the real model for fine-tuning.
  11. One difference is that you are aware that you can't do it and say so. Our current LLMs will just give whatever result they think it should be. It might be correct, it might be off by a bit, or it might be completely wrong, and there's no way for the user to tell short of double-checking with some non-LLM source, which kinda defeats the purpose of asking the LLM in the first place.
  12. Q1 and Q2 of 2022 had negative growth. The past four quarters did not, and Q2 of 2022 was only barely negative. So technically there was a brief recession in H1 of 2022. Right now there is no clear sign of a recession by that definition.

    I think recessions are also widely misunderstood as being a binary thing. Like going from "everything is A-OK" to "OMG it's all going to shite". There can be a recession which people barely feel. It's not like an event horizon from which there is no turning back.

  13. I give Orion a try every couple of months because the premise is great, but unfortunately for me it's so buggy that it's unusable. Then again, I rely on a lot of very modern web APIs like WebRTC. Hopefully one day it'll get there, but it's a very long road ahead. I'm not sure where those bugs come from either, because Safari doesn't suffer from the same issues.
  14. I don't think the article claims A). The claim is that YC prefers companies that match a certain pattern, and if yours does not, then chances are not great. It mentions that other accelerators could be a fit, though. It does not claim to have figured out how to not get rejected.

    What would be valuable, though, is if they posted the feedback and rejection reason they got from YC.

  15. Mozilla (the corporation) is owned by the Mozilla Foundation, and that's not Google. If Google stopped the search deal, Bing would take over in a second.
  16. It always comes down to trust. With Worldcoin you have to trust a private company, which in turn trusts random "Orb Operators" to operate the Orb devices, which you again have to trust. For a system that wants to be used for UBI and voting, the incentives are too strong not to abuse that large amount of required trust.

    It doesn't matter if the source code and hardware plans of the Orbs are made public if we can't inspect a given Orb. Who's to say a given Orb doesn't generate 10% more IDs for someone than real ones?

    You think those concerns are theoretical? Well, there was already abuse before the project officially launched: https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

    And then there is the inflationary nature of the coin itself, due to the weekly issuance of coins to each user. This gives the people behind the project, like Sam Altman and Andreessen Horowitz, an incentive to cash out their part. They allocated 20% of the whole supply to themselves. A quarter of a billion dollars was put into the project, and you can bet they'll want a good return on it while trying to portray it as something like a charitable project.

    I remain very sceptical.

  17. Elon thinks X represents a variable. So a "super app" that will do anything.

    I think X represents closing a dialog/window or deleting things.

    We'll see which one is more apt in the near future.

  18. I'm not sure Penrose suggests that time stops once there are only photons left. He says it is equivalent to the situation at the Big Bang, and time certainly didn't stop there. In fact, in his conformal cyclic cosmology spacetime goes on indefinitely but just reboots regularly.
  19. Which does not mean that time stops. If no one hears a tree fall that does not mean the tree isn't falling.
  20. There is an internal page, chrome://topics-internals, where you can see the topics, but I don't think you can fudge them.

    https://developer.chrome.com/docs/privacy-sandbox/topics/#ob...

  21. While it's true that the afterlife was not eternal (Ragnarök), the same goes for fame or legacy, because in their belief the whole world is pretty much rebooted, and with that the legacy vanishes.
  22. Well, he mentioned Norse mythology and particularly the Vikings. The whole of pre-Christian Scandinavia is of course way too wide a time span, with very spotty historical records the further back we go.
  23. But the Vikings did have beliefs about an afterlife. Later records of Valhalla or Folkvangr speak of that very clearly. There is also the fact that items were added to buried bodies, which is usually done in the belief that those items will be useful to the deceased in the afterlife.
  24. The report [0] this graph was based on also has major issues: it does not consider framework versions, so it includes sites using old versions of Next or Nuxt while only counting new sites for Astro, since Astro is a new framework. There have been major changes, for example, between Nuxt 2 (Vue 2) and Nuxt 3 (Vue 3). They at least disclose that at the end of the report, but it still leaves a bad taste, because it should have been possible to differentiate versions without too much work and they were aware of the problem. Another issue is that Astro seems to be used a lot for static sites, whereas the others are mostly used for dynamic ones. I feel like it's really not a fair comparison.

    [0]: https://astro.build/blog/2023-web-framework-performance-repo...

  25.     > Creating a future in Rust does not have any side effects like running the future in background. This is not JS. Creating a future is just creating an object representing future (postponed) computation. There is nothing spawned on the executor. There are no special side effects (unless you code them explicitly). It works exactly as any other function returning a value, hence why should it be syntactically different?
    
    Fair point.

        > Contrary, an `await` is an effectful operation. It can potentialy do a lot - block execution for arbitrary long time, switch threads, do actual computation or I/O... So I really don't understand why you want to hide this one.
    
    I disagree here. Any normal function call can do these things. On the other hand, an async function returning a future does nearly nothing: it sets up an execution context but doesn't execute anything (in Rust). Yet it usually looks like a function call that actually performs the action - not so! An explicit "async" in front of it would make the program flow clearer instead of hiding it.

        > Maybe the naming is confusing - because `await` does not really just `await`. It runs the future till completion. You should think about it more as if it was named `run_until_complete` (although it is still not precise, as some part of that "running" might involve waiting). 
    
    That speaks exactly to my previous point: the program flow is not 100% immediately obvious anymore. One could argue that "await" is fine as is, but maybe adding "async" to the call site, and not just to the function signature, would add clarity. (The laziness point is sketched below.)
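
    A minimal sketch of the "creating a future does nearly nothing" point; the only assumption is the `futures` crate for a `block_on` executor:

        // Calling an async fn only builds a state machine; its body does not run
        // until something drives the future.
        async fn compute() -> i32 {
            println!("body runs");
            42
        }

        fn main() {
            let fut = compute(); // no side effects yet, nothing printed
            println!("future created");
            let n = futures::executor::block_on(fut); // body runs only now
            println!("got {n}");
            // Output order: "future created", then "body runs", then "got 42".
        }
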
  26. You can't change the async/await rules of Rust anymore, I get that. But if it had worked like I described from the beginning, I don't see why that wouldn't work. It's just a question of syntax. Someone adding a blocking call five layers down wouldn't be any different from someone adding an "await foo()" right now. Code would still compile fine, as long as everything follows the same rules. You can't mix them, obviously.
  27.     > foo();          <-- doesn't block
    
    Only if you know that foo is an async function. You can't tell from the function call itself.

        > warning: unused implementer of `futures::Future` that must be used
    
    Interesting, I haven't seen this warning in the Rust codebase I've worked a little with. I'll have to check the compiler settings. Anyway, wouldn't it make sense to actually throw an error instead of just a warning?

        > Additionally there are certain things you are not allowed to keep across await points, e.g. mutex guards or other stuff that's not safe to switch between threads. E.g. using a thread-local data structure across await points might break, because you could be on a different thread after await. If await was hidden, you'd likely be much more surprised when the compiler would reject some code due to "invisible" await. 
    
    Why couldn't the compiler clearly state the reason for the error, though? (The usual workaround for the mutex-guard case is sketched below.)
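
    For reference, a rough sketch of the restriction from the quote, assuming the tokio runtime and a std mutex. Holding the guard across the `.await` in a spawned task is rejected with an error along the lines of "future cannot be sent between threads safely", pointing at the await; scoping the guard so it drops before the await compiles:

        use std::sync::Mutex;

        static COUNTER: Mutex<u64> = Mutex::new(0);

        async fn do_io() { /* stand-in for real async work */ }

        async fn bump_then_io() {
            {
                let mut guard = COUNTER.lock().unwrap();
                *guard += 1;
            } // guard is dropped here, before the await point
            do_io().await; // if the guard were still alive here, spawn would reject the future
        }

        #[tokio::main]
        async fn main() {
            // spawn requires the future to be Send, which is what triggers the check.
            tokio::spawn(bump_then_io()).await.unwrap();
            println!("counter = {}", COUNTER.lock().unwrap());
        }
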
  28. It would just make things more explicit. Whenever you wanted to obtain a future, you'd have to add "async". The execution of async code would work the same, except that instead of having to explicitly "await" things you'd have to explicitly "async" them. Of course, you can't change the way Rust does async/await now without rewriting all the async code, so it's not going to happen.
  29. The issue arises if you don't use the returned value. Let's say there's a function "async fn saveToDisk()". You call this function before you exit the program. Now, if you forget to await it, your program will exit without having saved the data to disk (sketched below).
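
    A minimal sketch of that failure mode (the function is hypothetical, renamed save_to_disk per Rust convention; `futures::executor::block_on` stands in for whatever executor the program uses):

        use std::fs;

        // Stand-in for real async I/O: the body only runs once the future is polled.
        async fn save_to_disk(data: &str) -> std::io::Result<()> {
            fs::write("state.txt", data)
        }

        fn main() {
            // Forgotten await: this only builds the future and immediately drops it,
            // so nothing is written. rustc flags it with the must_use lint quoted
            // further up ("futures do nothing unless you `.await` or poll them").
            save_to_disk("important data");

            // Actually driving the future to completion performs the write.
            futures::executor::block_on(save_to_disk("important data")).unwrap();
        }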

