- His workload is close to the more typical one I mentioned as scenario B; it will cost them $84/month.
For us, it's about 800,000 build minutes/month, so orchestration alone is going to be $1,600/month. In contrast, the runner host we use (Namespace Labs) costs $0.0015/minute[1], which is less than GitHub's orchestration charge. That is just ridiculous.
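Back-of-the-envelope, using the figures above and the footnote's rates (the ~$0.002/minute orchestration rate is implied by the $1,600 bill, not an official price):

    build_minutes = 800_000
    gh_orchestration = 1_600                                # quoted above, ~ $0.002/minute
    ns_runner = 250 + (build_minutes - 250_000) * 0.0015    # first 250k minutes flat $250 [1]
    print(gh_orchestration, ns_runner)                      # 1600 vs 1075.0: orchestration alone costs more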
---
[1] It is actually even worse: the first 250,000 minutes are a fixed $250, so the base rate is $0.001/minute for the runner.
- > abhorrent and downright abusive move
Is it that egregious? I read it as redistributing costs: they are dropping managed runner prices by a good margin while starting to charge for the orchestration infrastructure. The log storage and real-time streaming infra isn't free for them (perhaps not $84/month/runner expensive, but certainly not cheap).
We don't need to use their orchestration layer at all, even if we want to use the rest of the platform. GitHub's APIs have robust webhooks (not charged extra), and third-party services (and self-hostable projects) already provide runners; after this news they will all add the orchestration layer too.
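That orchestration layer is not magic either; here is a minimal sketch of how third parties build it, reacting to GitHub's documented workflow_job webhook (launch_runner is a hypothetical stand-in for booting and registering a runner):

    # pip install flask
    from flask import Flask, request

    app = Flask(__name__)

    def launch_runner(labels):
        # hypothetical: boot a VM/container and register it as a
        # self-hosted runner carrying these labels
        print("would launch a runner for", labels)

    @app.route("/github-webhook", methods=["POST"])
    def on_webhook():
        # GitHub fires workflow_job with action=queued when a job needs a runner
        if request.headers.get("X-GitHub-Event") == "workflow_job":
            payload = request.get_json()
            if payload.get("action") == "queued":
                launch_runner(payload["workflow_job"]["labels"])
        return "", 204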
--
Competition is good; free[2] kills competition. Microsoft is the master of that: Internet Explorer back then, Teams today.
Nobody was looking at building the orchestration layer because GitHub Actions was good enough for free[1]; now the likes of BuildJet, Namespace Labs etc. will be.
[1] Scheduler issues in GitHub Actions notwithstanding, it was hard to compete against a free product that costs money to build and run.
[2] i.e. bundled into package pricing.
- OP means that he has so many jobs in the merge queue that the runners are always busy 24/7.
This is not uncommon in some orgs: fewer concurrent runners, slow builds, and loads of jobs because of automation or how the hooks for the runners are set up.
In the context of this discussion that doesn't matter; OP's point distills to this: they use a minimum of 720 hours/month of orchestration time, or some multiple of that, on self-hosted runners running 24x7.
GitHub will now charge $84 extra per month for a single self-hosted runner running 24x7, i.e. that is the cost of 43,200 build minutes of their orchestration alone.
In a more typical setup that is equivalent to, say, 5 self-hosted runners running ~4.8 hours a day (i.e. 144 hours/runner/month).
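The arithmetic, as a quick sanity check (all figures from this comment; the per-minute rate is derived, not an official price):

    hours = 24 * 30                 # one runner busy 24x7 -> 720 hours/month
    minutes = hours * 60            # 43,200 build minutes
    per_minute = 84 / minutes       # ~$0.00194/minute of orchestration
    per_runner = minutes / 5 / 60   # spread over 5 runners: 144 hours each, ~4.8 h/day
    print(per_minute, per_runner)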
- More likely the error handling is not well implemented, i.e. either the backend is not throwing the equivalent of 429/402 errors, or the gateway is not handling those errors well and returns this message even when a 429 is being thrown.
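A minimal sketch of the gateway-side mapping that is likely missing (URLs and messages are illustrative, not from any real codebase):

    import requests

    def gateway_handler(upstream_url):
        resp = requests.get(upstream_url)
        if resp.status_code in (402, 429):
            # propagate the real quota/rate-limit condition to the client
            return resp.status_code, resp.text
        if not resp.ok:
            # the failure mode described above: collapsing every upstream
            # error into one generic message hides the real cause
            return 502, "something went wrong"
        return 200, resp.text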
- If anything, LLMs would be poorer at codegen for static languages, because they are more verbose: more tokens to generate, and more of the limited context window spent parsing code.
The advantage for LLMs in strongly typed languages is rather that compilers can catch errors early and give the model automated feedback, so you don't have to.
With weakly typed (and typically interpreted) languages, they need to actually run the code to get that feedback, which may be quite slow or simply not realistic.
Simply put, agentic coding loops prefer stronger static analysis capabilities.
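Roughly this loop (a sketch; generate_patch is a hypothetical model call, and `cargo check` stands in for whatever compiler/type-checker the project uses):

    import subprocess

    def generate_patch(task, feedback):
        ...  # hypothetical: prompt the model, write files, return the code

    def agentic_loop(task, max_iters=5):
        code = generate_patch(task, feedback=None)
        for _ in range(max_iters):
            # the compiler acts as a fast automated critic, no execution needed
            result = subprocess.run(["cargo", "check"], capture_output=True, text=True)
            if result.returncode == 0:
                return code                 # passes static analysis
            code = generate_patch(task, feedback=result.stderr)
        return code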
- It does not.
Power purchase agreements are priced differently and are usually written to guarantee power at a predictable price; think of it like reserved instances versus spot on the cloud. The bulk of workloads don't care about or benefit from spot pricing.
Also, modern neoclouds have captive non-grid sources like gas or diesel plants, for which grid demand has no impact on cost. These sources are not cheap, but DC operators don't have much choice, as getting grid capacity takes years. Even gas turbines are difficult to procure these days, which is why we hear of funky sources like jet engines.
- IaC is one of those use cases where just throwing an LLM at a refactor is painful.
In my experience, getting claude/codex to wrangle CDK constructs can be complicated; they frequently hallucinate constructs that simply do not exist, options that are not supported, etc.
While they can generate IaC components mostly okay, and these problems can be managed, iterations can take a lot of time: each checkpoint goes through deploy/rollback cycles in CloudFormation, which is not particularly fast, and other IaC frameworks are not that different.
Running an agent to iterate until it gets it right is just more difficult on IaC refactor projects. Hallucinations, stuck loops and other issues can quickly run up the infra bill, not to mention the security implications.
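One cheap guardrail (a sketch, assuming a CDK project): have the loop validate with `cdk synth` locally, where hallucinated constructs and unsupported options fail immediately, before any slow deploy/rollback cycle:

    import subprocess

    def synth_ok():
        # `cdk synth` renders the CloudFormation template locally;
        # nonexistent constructs and bad options error out here,
        # with no deploy, no rollback, and no infra bill
        result = subprocess.run(["npx", "cdk", "synth"],
                                capture_output=True, text=True)
        return result.returncode == 0, result.stderr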
- Yes, but for experienced engineers that is still a huge, huge change.
Even 12 months ago, simplifying tasks alone was insufficient; you still needed a large group of engineers to actually write, review and maintain a typical product for a solid startup offering. That came with the associated overhead of hiring and running mid-sized teams.
A lot of skilled people of (y)our age/experience were forced into people management roles because there was no other way to deliver a product that scales (in team size and complexity, not DAU).
A CTO of a mid-stage startup had to be a good architect and a decent engineering manager, be deeply involved in product, and also communicate effectively with internal and external customers.
Now, startups setting up fresh can defer the engineering-manager and people complexity a lot later than before. You can have a very senior but small team that is truly 10x-level and more productive, without the overhead of communication, alignment and management that comes with large teams.
----
tldr; Skilled engineers can generate outsized returns for orgs that set them up to be successful (far more than before). I can't say whether compensation reflects this yet; if not, it soon will.
- The parent poster means prove it is more economical, not that it is doable.
It is hard to compute the economics of small nuclear reactors that use highly enriched fuel; a lot of it is funded by defense needs.
Mixed use is largely about keeping defense manufacturing active, not because it is economically effective.
If civilian nuclear ships were cheaper, there would be efforts to build a lot of them (in Russia and China, if not other countries).
- Less demand these days. In 2015 when it launched there was a lot more drive to move to the cloud; lift-and-shift was the industry mantra. By 2024 the kind of orgs with >100 PB had either already moved to the cloud or had no plans to do so.
The current solution is that you bring your own devices and reserve ports at an AWS Data Transfer Terminal. That costs $300-500 USD/hour for 100 GbE of bandwidth, so not exactly cheap.
While AWS is getting out of shipping devices for migration (not economical at low volumes these days), they still support physical transfers, so customers can, so to speak, pack their own planes with hard disks and bring them to the AWS terminal.
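Rough math on that port price (assuming you can actually saturate the link, which real transfers rarely do):

    gbps = 100
    tb_per_hour = gbps / 8 * 3600 / 1000      # 45 TB/hour at line rate
    for price in (300, 500):
        print(price / tb_per_hour)            # ~$6.7-11 per TB moved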
- There are nuclear icebreakers.
Nuclear-powered non-military ships do exist; they are just not economically feasible.
- AWS Snowmobile exists; I wouldn't call filling a truck (or a plane) with hard disks insane.
- No need to charge in situ; ships (and ports) already transfer several times the battery's volume and weight quickly while berthed. The battery systems could be designed to leverage that.
Fire hazards exist for any fuel, and safety systems evolve to handle them. The environmental impact would be more localized than an oil spill.
- Positions don't have to be a single transaction; they can be multi-step trades.
For global currency risk (meaning on USD), you will have to hedge your shorts with non-currency long positions that have historically held value during defaults/runs etc.: assets like gold (ETFs/gold bars), real estate (REITs or physical land holdings), or rights to commodity revenue like oil, copper etc.[1]
If the currency risk is not on USD, then a mix of other currencies, particularly USD, would work well as a hedge.
Currency risk is independent of shorting, i.e. it is a risk in long positions as well; the currency may inflate faster than your position appreciates, etc.
---
[1] Commodities come with additional shorter-term market volatility and risks, due to their own supply/demand swings and their dependence on the performance of the economy.
However, after assets like gold, they have the highest correlation of returns with inflation, as long as the economy doesn't completely crash, because the demand for them is foundational.
- As another poster noted, you can disable it by limiting all interactions (6 months at a time). It is not ideal, but it does work for PRs. You should also close all current PRs when you do that, so users cannot push to those branches either.
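If you would rather script it, GitHub exposes this as the interaction-limits REST endpoint (a sketch; OWNER/REPO and the token are placeholders, and collaborators_only is the strictest of the documented limit values):

    import requests

    resp = requests.put(
        "https://api.github.com/repos/OWNER/REPO/interaction-limits",
        headers={
            "Authorization": "Bearer <token>",
            "Accept": "application/vnd.github+json",
        },
        # restrict to collaborators only, auto-expiring after six months
        json={"limit": "collaborators_only", "expiry": "six_months"},
    )
    resp.raise_for_status()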
- Issues and pull requests are optional features. Open source projects can always use GitHub as just a git host/mirror, like how torvalds/linux is set up.
- It is a little bit wild that 3 of the 5 all came from the same country. Without the Partition of '47, India would have by far the largest group, about 600M, a full third of the world's Muslims, and at the same time they would still be only a minority in that hypothetical country of 1.1B Hindus.
- A full migration is not always required these days.
It is possible to write adapters to API interfaces. Many proprietary APIs become de facto standards when competitors start shipping those compatibility layers out of the box to convince you their product is a drop-in replacement. The S3 API is a good example: every major (and most minor) provider, with the glaring exception of Azure, supports the S3 API out of the box now. The Postgres wire protocol is another such example; so many databases support it these days.
In the LLM inference world, the OpenAI API spec is becoming that kind of de facto standard.
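In practice the switch is often a one-line endpoint change (a sketch; the endpoints and key are placeholders):

    # pip install openai boto3
    from openai import OpenAI
    import boto3

    # any OpenAI-compatible inference provider: swap base_url and the key
    llm = OpenAI(base_url="https://inference.example.com/v1", api_key="<key>")

    # any S3-compatible object store: swap endpoint_url (and credentials)
    s3 = boto3.client("s3", endpoint_url="https://storage.example.com")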
There are always caveats of course, and switches rarely go without bumps. It depends on what you are using: a few popular, widely and fully supported features, or some niche feature of the API that is probably not properly implemented by some provider; in the latter case you will get some bugs.
In most cases, bugs in the API-interface world are relatively easy to solve, as they can be replicated and logged as exceptions.
In the LLM world there are few "right" answers for inference outputs, so it is a lot harder to catch and replicate bugs, let alone fix them without breaking something else. You end up retuning all your workflows for the new model.
- The net revenue would be lower than $200M. There are substantial costs associated with ticket revenue collection: the percentage payment gateways charge, the maintenance and replacement costs of the devices and turnstile hardware, and all the software and people needed to manage and enforce the system.
The issue with SF (unlike Iowa City) is that free-for-everybody is going to be a harder sell to voters when there is a large amount of out-of-city traffic: travelers and greater Bay Area residents who do not pay city taxes.
What is more realistic is extending subsidies to all residents of the city, beyond the current programs for youth/seniors/homeless/low-income riders etc.
- Much respect for what you have achieved in a short time with Graphite.
A lot of B2B SaaS is about tons of integrations with poorly designed and documented enterprise apps, or security theatre, compliance, fine-grained permissions, a11y, i18n, air-gapped deployments, or useless features to keep the largest customers happy, and so on and on.
Graphite (as yet) has none of these problems: GitHub, Slack and Linear are easy as integrations go, and there are limited enterprise features in Graphite.
Enterprise SaaS is hard, just with a different type of complexity.