- It may result in an outsized penalty to bootstrapped companies, but being VC funded doesn't make you immune to this. VC-funded companies with revenue will not be able to offset their revenue by reinvesting in R&D (software development) expenses, so in some cases they may be seen as having a profit when they previously wouldn't have. In those cases they'd have a tax burden.
- Or Cap'n Proto, Protobuf, Cloudflare Workers, Cloudflare Durable Objects. The LAN house is cool too.
- Is that the case? It's just a product you like?
- Yeah -- I see nobody but you mentioning HTAP and I see every comment on your account talking about the same product. That's astroturfing.
- So what's the goal here? Astroturf SingleStore ads in any post about databases?
- Unwrap is fine if used sparingly and, as mentioned, to indicate a bug, but in practice it requires discipline and some wisdom to use properly - and by that I mean not just "oh, this function should return a `Result`, but I'll add that later (never)."
I think relying on discipline alone in a team is usually a recipe for disaster, or at the very least resentment, as the most disciplined must continually educate and correct the least disciplined (or perhaps least skilled).

We have a clippy `deny` rule preventing `panic`s, `expect`s, and `unwrap`s, even though we know they're sometimes acceptable. We don't warn, because warnings are ignored. We don't allow, because that makes it too easy to use. We don't use `forbid` (a `deny` that can't be overridden), because there are still places where these can be helpful.

What this means is that the least disciplined are pushed to correct a mistake by using `Result` and creating meaningful error handling. In cases where that doesn't work, extra effort can be spent to add an inline clippy `allow` instruction. We strongly question all inline clippy overrides at review time, to keep our discipline from collapsing into always accepting `unwrap` + `allow` and to ensure nothing slips by mistakenly. I will concede that reviews themselves are potentially a dangerous "discipline trap" as well, but they're the secondary line of defense for this specific mistake.
- Ideal customer profile
- My understanding was that the "enemy" was McKinsey, a firm with a reputation (to me, at least) of being an expensive consulting firm full of MBA types whom companies frequently hire.

My understanding of that reputation: the hiring often happens to the detriment of either product quality or employee satisfaction. It's debatable whether they actually have a reputation for providing value. Short term? Maybe, albeit expensively. Long term? I'd say no.
- I’m glad you’ve limited your number of concerns to allow yourself to focus on what matters. Just curious: where did replying on Hacker News rank?
- Agreed. It's wild to me how many people think they need arbitrary queries on their transactional database and then go write a CRUD app with no transactional consistency between resources and everything is a projection from a user or org resource -- you can easily model that with Dynamo. You can offload arbitrary analytical queries or searches to a different database and stop conflating that need with your app's core data source.
- My take: it's a mix of brand bundling and a lack of data. They're roughly equivalent, but Shorts is bundled with YouTube, which has its own brand perception, and Reels is bundled with IG/FB, which have their own. Additionally, fewer users means less algorithmic data to keep viewers engaged.

TikTok was allowed to establish its own brand and develop a community, while Shorts and Reels are intrinsically tied to their past. They may be able to escape that history, but I don't think it's helping them be fast movers or win "cool" points.
- > They have people putting their fingers on the scales to decide what content gets promoted.
Is this just your belief, or is there evidence you can point to?
How would you differentiate manual intervention from algorithmic intervention?
- Context: ByteDance is the parent company of TikTok. People frequently talk about "the TikTok algorithm". This is that.
- Allegedly it has an LSP and VS Code support, but I also have never used either.
https://github.com/facebook/buck2/tree/main/starlark-rust/vs...
- At a previous employer we had a pair on call for each of front end, back end, and infra. Weekday on-call ran from Monday midday to Friday midday, handing off to a "weekend on-call" drawn from the same pool of people from Friday midday to Monday midday. Weekend on-call paid 100 per day; weekday on-call paid 50 per day. You were generally expected to take normal time "off" (but still on call) if paged off hours. Many people would still work if it was just a blip (rare).
I thought this was a pretty good system and despite the cycles being shorter, we had enough engineers to fill a rotation pretty well so that at most you were on call once a month, alternating months between weekend and weekday on-call cycles.
I still do not enjoy being forced into on-call and wish I could opt in. We traded weeks a lot, but with smaller rotations or really finicky paging it's awful. I still get a sinking feeling in my gut when I hear the work phone ringtone from somebody else's phone in public, and Murphy's law definitely applies to being on call -- you always get paged the minute after your beer gets delivered at a restaurant.
- This link also did not work for me.
- I thought the "Spotify model" had been debunked as something promoted by a few consultants, and that it was already being phased out internally around the same time it was exposed to the public (nearly 10 years ago, in 2014).
There are dozens of articles about the problems with matrix management that it introduced.
- Membership loyalty cards typically gate "yellow tag" pricing. Some yellow tags are for bulk purchase discounts but some are just "discounted prices". The FTC says that it's deceptive pricing to advertise a sale while raising prices, but since yellow tags are membership pricing, not a discount, it's defensible to raise prices with the original price as a yellow tag rate, available only to members. Yellow tags are not always promotional prices.
- I think the problem is seeing it as an investment. Most people won't say a car is an investment but rather a cost - the real return is transportation, and its value is gained externally to the sell price of the car. The home itself really ought to be considered a cost, while the land and the property as a whole may (but not necessarily) be an investment.

It produces a dividend of shelter for the owner. Assuming labor and material prices are fixed, the asset should be depreciating in value from wear and use, all of which points to flat or declining value.
Assuming that it must make returns, one or more of the following must be true:
* Labor price increases
* Material price increases
* Land value increases
- (I don't agree with this viewpoint) maybe the author views fungibility of employees as a feature because it lessens the risk of a founder sinking an investment over a personal or interpersonal issue.
- They had an ETF that did this called SJIM but just announced it's closing: https://www.etf.com/sections/news/inverse-jim-cramer-etf-shu...
- Use multiple messages.
`git commit -m "Fix actual words that humans read" -m "Fixes: #234353"`
You could probably script this based on branch if you're one of those people who names branches after ticket numbers.
But my actual advice is that `-m` is a bad pattern that leads to bad commit messages; omitting the message argument opens your EDITOR, and you should use that.
- I can't stand the ticket number in the message. It's an immediate waste of 6-10 characters on meaningless metadata that GitHub can pull from the body and link for you. Further, the numbers are sequential, so they're frequently VERY similar and highly prone to typos, which is harmful to anyone even mildly dyslexic.
- 4th link to blog has a link to homepage. Homepage lists country in the footer.
- > This is exactly my point. You bolt on ever more Serverless offerings to accomplish any actual goal of your application. SNS notifications is exactly the kind of thing I don't want to think about, code around, and pay for. I have Phoenix.PubSub.broadcast and I continue shipping features. It's already running on all my nodes and I pay nothing for it because it's already baked into the price of what I'm running – my app.
I think this is fine if and only if you have an application that can subscribe to PubSub.broadcast. The problem is that not everything is Elixir/Erlang or even the same language internally to the org that runs it. The solution (unfortunately) seems to be reinventing everything that made Erlang good but for many general purpose languages at once.
I see this more as a mechanism to signal the runtime (the combination of Fly machines and Erlang nodes running on those machines) that you'd like to scale out for some scoped duration, but I'm not convinced this needs to be initiated from inside the runtime in most cases -- why couldn't something like this be achieved externally, by noticing a high watermark of usage and adding nodes, much like a Kubernetes horizontal pod autoscaler?
Is there something specific about CPU bound tasks that makes this hard for erlang that I'm missing?
Also, not trying to be combative -- I love the Phoenix framework and the work y'all are doing at Fly, especially you, Chris. I'm just wondering if/how this abstraction leaves the walls of Elixir/Erlang, which already has it significantly better than the rest of us for distributed abstractions.
- This is a very neat approach, and I agree with the premise that we need a framework that unifies some of the architecture of the cloud - shuttle.rs has some thoughts here. I do take issue with this framing:

> - Trigger the lambda via HTTP endpoint, S3, or API gateway ($)
> - Write the bespoke lambda to transcode the video ($)
> - Place the thumbnail results into SQS ($)
> - Write the SQS consumer in our app (dev $)
> - Persist to DB and figure out how to get events back to active subscribers that may well be connected to other instances than the SQS consumer (dev $)

Taking those in order:

* Pretending that starting a Fly machine doesn't cost the same as triggering via S3 seems disingenuous.
* In Go this would be about as difficult as FLAME -- you'd have to build a different entrypoint, which would be one line of code, but it could be the same codebase. With Node it would depend on bundling, but in theory you could do the same -- it's just a promise that takes an S3 event; that doesn't seem much different.
* I wouldn't do this at all. There's no reason the results need to be queued. Put them in a deterministically named S3 bucket where they'll live and be served from. Period.
* Again -- this is totally unnecessary. Your application *should forget* it dispatched work. That's the point of dispatching it. If you need subscribers to notice it or do some additional work, I'd do it differently rather than chaining lambdas.
* Your lambda really should be doing the DB work, not your main application. If you've got subscribers waiting to be informed, the lambda can fire an SNS notification and all subscribed applications will see "job 1234 complete".

So really the issue is:

* S3 is our image database
* our app needs to deploy an S3 hook for lambda
* our codebase needs to deploy that lambda
* we might need to listen to SNS

which is still some complexity, but it's not the same, and it's not using the wrong technology like some chain of SQS nonsense.
- Can Mars hold onto atmosphere? I was under the impression that the general consensus was Mars lost its atmosphere when it lost its magnetic field.
- My opinion is there were 2 major issues and a bunch of minor cuts.
The major issues (imo):
* Reliance on extending the runtime via the C-bindings (and that changing)
* A community that had largely gotten accustomed to stability being thrust into an enormous change all at once
I think people tend to be in a camp of either "this was good and needed to happen because Python had unshakable warts" or "this was bad and we should have lived with language mistakes forever". The conflict between those two camps is what made it so painful - breaking changes were held for years, and then once one got in, they all came. I think the reality is that coming up with a migration plan and incrementally working towards it would have given developers time to focus on one upgrade at a time rather than a full rewrite. Node gets a bad reputation for being "chaotic" or "constantly changing", but the changes are comparatively small and manageable. Go, on the other hand, has managed to maintain strong stability, but with an ecosystem that works primarily in the core language rather than the implementation language (C, in Python's case).
I think the Python 3 migration did a ton of damage to the community's willingness to encourage breaking changes that are needed, and it's why packaging and runtime self-hosting have been comparatively weak despite a huge userbase.
Not breaking apis is a great ideal but if you have to, breaking them in planned, bite-sized, frequent bursts is often MUCH MUCH better than once a decade.
- FWIW, Vercel is at least partially backed by cloudflare services under the hood.