- Cloudflare Containers (and therefore Sandbox) pricing is way too expensive. The pricing is also cumbersome to understand: it's inconsistent with other Cloudflare products in its units, it's split between memory, CPU and disk instead of combined per instance, and worst of all it's quoted in tiny fractions per second.
Memory: $0.0000025 per additional GiB-second
vCPU: $0.000020 per additional vCPU-second
Disk: $0.00000007 per additional GB-second
The smaller instance types have very low processing power because they only get a fraction of a vCPU. But if you calculate the monthly cost, it comes to:
Memory: $6.48 per GiB
vCPU: $51.84 per vCPU (!!!)
Disk: $0.18 per GB
These prices are higher than the already expensive prices of the big cloud providers. For example, a t2d-standard-2 on GCP with 2 vCPUs, 8 GB of memory and 16 GB of storage costs $63.28 per month, while the standard-3 instance on CF would cost a whopping $51.84 + $103.68 + $2.90 = $158.42, about 2.5x the price.
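To sanity-check the conversion, here's a quick back-of-the-envelope sketch (assuming a flat 30-day month; Cloudflare's actual billing granularity may differ):

```rust
// Convert Cloudflare's per-second prices into monthly figures,
// assuming a 30-day month (2,592,000 seconds).
fn main() {
    let secs_per_month: f64 = 30.0 * 24.0 * 3600.0; // 2,592,000 s

    println!("Memory: ${:.2} per GiB-month", 0.0000025 * secs_per_month); // $6.48
    println!("vCPU:   ${:.2} per vCPU-month", 0.000020 * secs_per_month); // $51.84
    println!("Disk:   ${:.2} per GB-month", 0.00000007 * secs_per_month); // $0.18

    // The instance shape from the comparison above: 2 vCPUs, 8 GiB memory, 16 GB disk.
    // Prints ~$158.40; the $158.42 above comes from rounding the disk figure to $2.90.
    let cf_monthly = 8.0 * 6.48 + 2.0 * 51.84 + 16.0 * 0.18;
    println!("CF total: ${:.2}/month vs. $63.28 for a GCP t2d-standard-2", cf_monthly);
}
```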
Cloudflare Containers also don't have persistent storage and are by design intended to shut down when not in use. But then I could also go for a spot VM on GCP, which would bring the price down to $9.27 (less than 6% of the CF container cost), and I'd get persistent storage plus a ton of other features on top.
What am I missing?
- I don't think they are a hint that spacetime is not fundamental. But I do think spacetime has to be some kind of real physical reality.
The modifications of spacetime that we see as effects of gravity are relative changes to our immediate surroundings or reference frame.
Similar to how you can't tell who is actually stationary and who is moving when two objects are in freefall, where all you can note is the relative speed between the two, it would be equally valid to say the objects inside spacetime are getting distorted relative to spacetime.
- Even on my MacBook with Firefox the site has a strange feel when scrolling. It's not exactly struggling, but it feels unnatural and slightly off/slow/uneven, like it's on the edge of struggling. A bit hard to describe. The effect gets worse towards the mid section of the page with the side-scrolling logo circles. I removed that section via dev tools, which helped with performance. When I have that part of the page in view I get 80-90% CPU usage on one core. But even after removing it I can saturate a core by scrolling around, especially towards the lower part of the page.
It is indeed some of the worst-optimized CSS I've seen in a while. Weird for a project that is all about speed.
- The German site of the source speaks of 0.1 mm, so you were correct:
https://www.ipp.mpg.de/de/aktuelles/presse/pi/2020/01_20

> "bei Toleranzen von teilweise nur 0,1 Millimeter" (with tolerances of, in some cases, only 0.1 millimeters)

- There is some meat to the story, I agree. But it's not surprising. The fine-tuning model will of course be small in file size and not take long to train, because by definition it applies changes to a small subset of the main model and is trained on only a small amount of input data. You can't use the small tuning model for "Teddies" with a query that has nothing to do with Teddies. You can see these small tuning models as a diff file for the main model, and depending on the user query one can choose an appropriate diff to apply to improve the result for that specific query.
When you train a model on new inputs to fine-tune it, you can save the weights that changed to a separate file instead of into the main file.
In other words, one can see the small tuning models as updates/patches that are applied selectively.
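As a toy illustration of that diff/patch view (a sketch only, not any real framework's format; the tensor names and values are made up):

```rust
use std::collections::HashMap;

// The base model's weights and a fine-tune "patch", both as named tensors.
// The patch only stores what the fine-tune changed, hence the small file size.
type Weights = HashMap<String, Vec<f32>>;

// Apply a patch on top of the base weights, touching only the listed tensors.
fn apply_patch(base: &Weights, patch: &Weights) -> Weights {
    let mut patched = base.clone();
    for (tensor_name, delta) in patch {
        if let Some(weights) = patched.get_mut(tensor_name) {
            for (w, d) in weights.iter_mut().zip(delta) {
                *w += d; // element-wise update
            }
        }
    }
    patched
}

fn main() {
    let base = Weights::from([("layer1.attn".to_string(), vec![0.50, -0.20, 0.10])]);
    // Hypothetical "Teddies" fine-tune, chosen only when the query calls for it.
    let teddies_patch = Weights::from([("layer1.attn".to_string(), vec![0.05, 0.00, -0.01])]);

    let model_for_query = apply_patch(&base, &teddies_patch);
    println!("{:?}", model_for_query["layer1.attn"]); // [0.55, -0.2, 0.09]
}
```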
- One difference is that you are aware that you can't do it and state so. Our current LLMs will just give whatever result they think it should be. It might be correct, it might be off by a bit, or it might be completely wrong, and there's no way for the user to tell, apart from double-checking with some non-LLM source, which kinda defeats the purpose of asking the LLM in the first place.
- Q1 and Q2 of 2022 had negative growth, and Q2 only barely. The past 4 quarters did not. So technically there was a brief recession in H1 of 2022. Right now there is no clear sign of a recession per the definition.
I think recessions are also widely misunderstood as being a binary thing. Like going from "everything is A-OK" to "OMG it's all going to shite". There can be a recession which people barely feel. It's not like an event horizon from which there is no turning back.
- I'm giving Orion a try every couple of months because the premise is great, but unfortunately for me it's so buggy that it's unusable. But then again, I rely on a lot of very modern web APIs like WebRTC. Hopefully one day it'll get there, but it's a very long road ahead. Not sure where those bugs come from either, because Safari doesn't suffer from the same issues.
- I don't think the article claims A). The claim is that YC prefers companies that match a certain pattern and if yours does not then chances are not great. It mentions that other accelerators could be a fit though. It does not claim to have figured out how to not get rejected.
What would be valuable though is if they posted the feedback and rejection reason that they got from YC.
- It always comes down to trust. With WorldCoin you have to trust a private company, which in turn trusts random "Orb Operators" to operate the Orb devices, which you again have to trust. For a system that wants to be used for UBI and voting, the incentives to abuse this large amount of required trust are too strong.
It doesn't matter if the source code and hardware plans of the Orbs are made public if we can't inspect a given Orb. Who's to say a given Orb doesn't generate 10% more IDs for someone than there are real people?
You think those concerns are theoretical? Well, there was already abuse before the project officially launched: https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
And then there is the inflationary nature of the coin itself, due to the weekly issuance of coins for each user. This gives the people behind the project, like Sam Altman and Andreessen Horowitz, an incentive to cash out their part. They allocated 20% of the whole supply to themselves. A quarter of a billion dollars was put into the project, and you can bet they'll want a good return on it while trying to portray it as something like a charitable project.
I remain very sceptical.
- There is an internal page, chrome://topics-internals, where you can see the topics, but I don't think you can fudge them:
https://developer.chrome.com/docs/privacy-sandbox/topics/#ob...
- But the Vikings did have beliefs in an afterlife. Later records of Valhalla or Folkvangr speak of that very clearly. There is also the fact that items were added to buried bodies, which is usually done with the belief that these items would be useful to the deceased person in the afterlife.
- The report [0] this graph was based on also has major issues, as it does not consider framework versions: it includes sites using old versions of Next or Nuxt, compared to only new sites for Astro, since Astro is a new framework. There have been major changes, for example, between Nuxt 2 (Vue 2) and Nuxt 3 (Vue 3). They at least disclose that at the end of the report, but it still leaves a bad taste, because it should have been possible to differentiate the versions without too much work and they were aware of the problem. Another issue is that Astro seems to be used a lot for static sites, whereas the others are mostly used for dynamic ones. I feel like it's really not a fair comparison.
[0]: https://astro.build/blog/2023-web-framework-performance-repo...
- Fair point.

> Creating a future in Rust does not have any side effects like running the future in the background. This is not JS. Creating a future is just creating an object representing a future (postponed) computation. There is nothing spawned on the executor. There are no special side effects (unless you code them explicitly). It works exactly like any other function returning a value, hence why should it be syntactically different?
I disagree here. Any normal function call can do these things. On the other hand, an async function returning a future does nearly nothing: it sets up an execution context but doesn't execute (in Rust). Yet it usually looks like a function call that actually performs the action - not so! An explicit "async" in front of it would make the program flow clearer instead of hiding it.

> Contrary, an `await` is an effectful operation. It can potentially do a lot - block execution for an arbitrarily long time, switch threads, do actual computation or I/O... So I really don't understand why you want to hide this one.
That's exactly speaking to my previous point: the program flow is not 100% immediately obvious anymore. One could argue that "await" is fine as is, but maybe adding "async" to the call and not just the function signature would add clarity.

> Maybe the naming is confusing - because `await` does not really just `await`. It runs the future till completion. You should think about it more as if it was named `run_until_complete` (although that is still not precise, as some part of that "running" might involve waiting).

- You can't change the async/await rules of Rust anymore. I get that. But if it had started like I described from the beginning, I don't see why that wouldn't work. It's just a question of syntax. Someone adding a blocking call 5 layers down wouldn't be any different from someone adding an "await foo()" right now. Code would still compile fine, as long as everything follows the same rules. Can't mix them, obviously.
> foo(); <-- doesn't block

Only if you know that foo is an async function. You can't tell from the function call itself.
> warning: unused implementer of `futures::Future` that must be used

Interesting, I haven't seen this warning in the Rust codebase I worked a little with. I'll have to check the compiler settings. Anyway, wouldn't it make sense to actually throw an error instead of just a warning?
Why couldn't the compiler clearly state the reason for the error though?

> Additionally there are certain things you are not allowed to keep across await points, e.g. mutex guards or other stuff that's not safe to switch between threads. E.g. using a thread-local data structure across await points might break, because you could be on a different thread after the await. If await was hidden, you'd likely be much more surprised when the compiler rejected some code due to an "invisible" await.

- It would just make things more explicit. Whenever you want to obtain a future you'd have to add "async". The execution of async stuff would work the same; just instead of having to explicitly "await" things you'd have to explicitly "async" them. Of course you can't change the way Rust does async/await now without rewriting all the async code, so that's not going to happen.
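For anyone following along, here's a minimal sketch of the laziness being debated (assuming the tokio crate as the executor; `foo` is a stand-in function): calling an async fn only constructs the future, and its body runs only at the `.await`.

```rust
// Requires the tokio crate (e.g. features = ["full"]) as the executor.
async fn foo() -> u32 {
    println!("foo is actually running");
    42
}

#[tokio::main]
async fn main() {
    let fut = foo(); // looks like a call, but only builds the future;
                     // dropping `fut` unused triggers the `must_use` warning
    println!("future created, nothing has run yet");

    let answer = fut.await; // only now does foo's body execute
    println!("got {answer}");
}
```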
- For the huge factor in price difference you can keep spare spot VMs on GCP idle and warm all the time and still be an order of magnitude cheaper. You get more features and flexibility with these, and you can discard them at will; they are not charged per month. Pricing granularity on GCP is per second (with a 1-minute minimum), and you can fire up Firecracker VMs within milliseconds, as another commenter pointed out.
Cloudflare Sandboxes have less functionality at a significantly higher price. The tradeoff is simplicity: they are focused on a specific use case for which they don't need additional configuration or tooling. The downside is that they can't do everything a proper VM can.
It's a fair tradeoff, but I'd argue the price difference is very much out of balance. But then again, it seems to be a feature primarily going after AI companies, and there is infinite VC money to burn at the moment.