- I worked in the Swedish office of a multinational for a couple of years, and the one experience I had where Swedes were selling a complex multi-million euro project to Germans was one of the most bureaucracy-laden initiatives I’ve ever experienced in my life. Not sure if the project ever really took off, but I’m thankful I was able to avoid it beyond the initial week of discussions.
- This is not uncommon even between allies: https://www.dw.com/en/german-intelligence-spied-on-white-hou...
The issue has less to do with intelligence silliness and more to do with the fact that the overall geopolitical objectives of the US cannot be trusted, and that rift has grown to a point where self-reliance on critical infrastructure may be in Europe’s best interest.
- I’m a tad late to the party, but it’s worth providing a little context to the technical conversation.
Of the many things trading platforms are attempting to do, the two most relevant here are overall latency and, more importantly, where serialization occurs in the system.
Latency itself is only relevant as it applies to the “uncertainty” period where capital is tied up before the result of the instruction is acknowledged. Firms can only carry so much capital risk, so these moments end up being little dead periods. So long as the latency is reasonably deterministic, though, it’s mostly inconsequential whether a platform takes 25us or 25ms to return an order acknowledgement (this is slightly more relevant in environments where there are potentially multiple venues to trade a product on, but in terms of global financial systems those environments are the exception, not the norm). Latency really only matters when factored alongside some metric indicating a failure of business logic (failures to execute on aggressive orders or failures to cancel in time are two typical metrics).
The more important of the two to many participants is where serialization occurs at the trading venue (what the initial portion of this blog is about: determining who was “first”). Usually this is resolved to the tune of 1-2ns (in some cases lower). There are diminishing returns, however, to making this absolute in physical terms. A small handful of venues have attempted to address serialization at the very edge of their systems, but the net result is just a change in how firms that are extremely sensitive to being first apply technical expertise to the problem.
Most “good” venues permit an amount of slop in their systems (usually to the tune of 5-10% of the overall latency), which reduces the benefit of playing the sorts of ridiculous games required to be “first”. There ends up being a hard limit to the economic benefit of throwing man-hours and infrastructure at the problem.
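To make that slop point concrete, here’s a minimal simulation sketch (the 25us base latency, 7.5% jitter figure, and firm names are all hypothetical, not any real venue’s numbers): once the path carries 5-10% of non-deterministic slop, sending a few hundred nanoseconds earlier no longer guarantees being serialized first.

```python
import random

def serialization_order(send_times_ns, base_latency_ns=25_000, jitter_frac=0.075):
    """Return participants in the order the venue serializes them.

    Each message arrives after base_latency_ns plus a uniformly random
    slop of up to jitter_frac * base_latency_ns, standing in for the
    5-10% of non-determinism described above. All figures hypothetical.
    """
    arrivals = {}
    for name, sent_ns in send_times_ns.items():
        slop_ns = random.uniform(0, jitter_frac * base_latency_ns)
        arrivals[name] = sent_ns + base_latency_ns + slop_ns
    # The venue serializes strictly by arrival time at its ingress point.
    return sorted(arrivals, key=arrivals.get)

# Firm "A" sends 500ns before firm "B"; with ~7.5% slop on a 25us path
# that head start only wins some of the time.
trials = 10_000
a_first = sum(serialization_order({"A": 0, "B": 500})[0] == "A" for _ in range(trials))
print(f"A serialized first in {a_first / trials:.1%} of trials")
```

Run enough trials and the earlier sender still wins more often than not, but nowhere near always, which is exactly what blunts the arms race.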
- The issue here for me has always been about the difference between treating a symptom and treating the illness.
Excessive surveillance becomes necessary when you cannot convince people of the merits of your politics or morals on their own, and you instead need the power of the State to intimidate and control their access.
For the issue of minors: if you have a child (guilty here) you are obligated to actively raise and educate them about the nature of the world. For access to online interactions this doesn’t necessarily mean only active limits (as one might judge appropriate for the child), but also teaching them that people do not always have positive intent, and that anonymity leads to a lack of consequences, and consequently to potentially antisocial behavior.
A person’s exposure to these issues is not limited to interactions online. We are taught to be suspicious of strangers offering candy from the back of panel vans. We are taught to look both ways when entering a roadway.
The people demanding the right to limit what people can say and who they can talk to do so under the guise of protecting children, but these tools are too prone to abuse. In the marketplace of ideas it’s better (and arguably safer, albeit significantly more challenging) to simply outcompete with your own.
- For what it’s worth, I went through the upgrade last weekend. There is a compatibility check script and, frankly, the whole process Proxmox describes on their site worked precisely as advertised.
5-host cluster; rebooted them all at completion and all of the guests came back up without issue (a combination of VMs and LXC containers).
- In fact, it won’t be. Which is why NYSE was so quick to rebrand NYSE Chicago as NYSE Texas when TXSE announced they were launching in Equinix NY4 in Secaucus. The only real differentiator these guys would have had (outside of listings rules) would have been location, but they opted for the path of least resistance and located alongside all the other markets.
- I completely agree with you. 21 years ago when it was released it was simply “yet another competitor” to the sort of overlay systems that GameSpy and the like were trying to implement. You installed it because Half-Life 2 (and the litany of mods that became empires unto themselves) required it, but it took years for it to develop in a direction that pointed to where we are now.
The first time I did a rebuild and no longer needed the installation media for games, or the license keys in the manual/game jacket, I was fully sold.
I don’t fully grasp the hatred, because almost every aspect of it is a vast improvement over what existed 20 years ago. But fortunately there are alternatives.
- A long time ago I had a colleague turn me on to Sidney Dekker’s “Drift Into Failure”, which in many ways covers system design that takes the “human” element into account. You could think of it as the “realist’s” approach to system safety.
At the time we operated some industry-specific, but national-scale, critical systems and were discussing how to balance the crucial business importance of agility and rapid release cycles (in our industry) against system fragility and reliability.
Turns out (and I take no credit for the underlying architecture of this specific system, though I’ve been a strong advocate for this model of operating) that if you design systems around humans who can rapidly identify and diagnose what has failed and what the upstream and downstream impacts are, and you make those failures predictable in their scope and nature and the recovery method simple, then with a solid technical operations group you can limit the mean time to resolution of incidents to <60s without having to invest significant development effort in software that provides automated system recovery.
The issue with both methods (human or technical recovery) is that both are dependent on maintaining an organizational culture that fosters a deep understanding of how the system fails, and what the various predictable upstream and downstream impacts are. The more you permit the culture to decay the more you increase the likelihood that an outage will go from benign and “normal” to absolutely catastrophic and potentially company ending.
In my experience, companies that operate under this model eventually sacrifice the flexibility of rapid deployment for an environment where no failure is acceptable, largely because of a lack of appreciation for how much of the system’s design depends on fostering the “appropriate” human element.
(Which leads to further discussion about absolutely critical systems like aviation or nuclear where you absolutely cannot accept catastrophic failure because it results in loss of life)
Extremely long story short, I completely agree. Aviation (more accurately aerospace) disasters, nuclear disasters, medical failures (typically emergency care or surgical), power generation, and the military (especially aircraft carrier flight decks) are all phenomenal areas to look for examples of how systems can be designed to account for where people may fail in the critical path.
- > We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
This is the bit that really gets me fired up. People (read: system “architects”) were so desperate to “prove their worth” and leave a mark that many of these systems have been overcomplicated, unleashing a litany of new issues. The original design would still satisfy 99% of use cases, and these days, given local compute capacity, you could run an entire market on a single device.
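As a rough illustration of how little machinery the core actually needs, here’s a toy single-threaded price-time priority matcher (a sketch of the general technique, not any particular exchange’s engine): the entire hot path is a loop over two heaps.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

_arrival = count()  # arrival sequence provides time priority

@dataclass
class Order:
    side: str   # "buy" or "sell"
    price: int  # integer ticks, avoids float comparisons
    qty: int
    seq: int = field(default_factory=lambda: next(_arrival))

class Book:
    """Single-threaded price-time priority book: two heaps, one loop."""

    def __init__(self):
        self.bids = []  # max-heap via negated price
        self.asks = []  # min-heap

    def submit(self, order):
        fills = []
        if order.side == "buy":
            # Cross against resting asks while they sit at or below our limit.
            while order.qty and self.asks and self.asks[0][0] <= order.price:
                price, _, resting = self.asks[0]
                traded = min(order.qty, resting.qty)
                fills.append((price, traded))
                order.qty -= traded
                resting.qty -= traded
                if resting.qty == 0:
                    heapq.heappop(self.asks)
            if order.qty:
                heapq.heappush(self.bids, (-order.price, order.seq, order))
        else:
            # Cross against resting bids while they sit at or above our limit.
            while order.qty and self.bids and -self.bids[0][0] >= order.price:
                neg_price, _, resting = self.bids[0]
                traded = min(order.qty, resting.qty)
                fills.append((-neg_price, traded))
                order.qty -= traded
                resting.qty -= traded
                if resting.qty == 0:
                    heapq.heappop(self.bids)
            if order.qty:
                heapq.heappush(self.asks, (order.price, order.seq, order))
        return fills

book = Book()
book.submit(Order("sell", 101, 5))
book.submit(Order("sell", 100, 5))
print(book.submit(Order("buy", 101, 7)))  # [(100, 5), (101, 2)]
```

A real engine layers on order types, risk checks, market data publication, and persistence, but none of that requires abandoning the single-threaded core.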
- I’m not sure I follow the logic here. Let’s say a person owns a Tesla outright, and purchased it ignorant of Elon’s behavior (esp. if prior to the last year or two). How does selling it benefit some cause? Tesla already has that person’s money. It’s purely a performative action?
- I can’t remember who first said it, but watching crypto evolve is like speedrunning the reasons 150 years of securities laws, practices, and regulations exist.
Counterparty risk (including custodianship) is monstrous in crypto. It’s sort of amusingly ridiculous in the same way most tech trends that are trying to break the status quo stumble into the reasons certain rules and regulations exist.
- I’d posit that closer proximity to drinking establishments would mean increased foot traffic with a less discerning clientele.
Every kebob is a good kebob when you’re a few drinks in.
- I used to run a fluid ops group managing a complicated and (relatively) unstable system.
I always approached this as the difference between incident management and problem management, the latter being the “what actually happened” phase, with lots of bureaucracy and post-mortems.
I always taught people in my group to manage out during incidents if they understood what was happening. In the vast majority of failure modes you don’t need the most technical people working the keyboard performing, for example, a failover. Most of those processes are well documented and well understood. Very technical/operationally minded people tend to want to solve the problem as quickly as possible, but I always found them far more valuable discussing the issue with stakeholders, and playing a blocking move for the more junior guys/gals on the keyboards. This also helps the juniors get the experience necessary to eventually be able to help develop future staff.
- The issue boils down more to consolidation of interest, and 'historical' thinking by people who can't be assed to understand how the market actually works.
Nasdaq and NYSE have significant volumes because people think they have significant volumes (as circular as that reasoning is). There are entire entities on the fund/investment management side of the industry that are content to do their entire risk adjustment (meaning, trading) in the closing auction, and the dominant closing auction is on the primary listing exchange (just because...well, as I highlighted above).
There was a brief period in the mid-aughts when Nasdaq (through the INET and BRUT acquisitions) and BATS were able to compete with the more dominant NYSE due to monstrous discrepancies in system performance, but as all of the markets evolved over the last 15-20 years they have become (at least as far as the vast majority of market participants are concerned) effectively identical in pure technical performance.
- No no, wasn't meant to correct you. Just highlighting the insanity of the various exchange nicknames/acronyms/self-definitions.
- I am actually on the direct consumer side, but I feel for you. Occasionally we'll take intermediate data though from third parties, and good god the quality is atrocious.
This is especially true for datasets that don't have a single authoritative source, like corporate calendar actions, where you have to consume data from 3-4 vendors and reconcile for a 'shared consensus' on simple shit like..."When is a company going to release quarterly earnings?"
I'm ever grateful that I do not work in asset classes that are not centrally traded and centrally cleared. I recognize there's more money to be made in the uncertainty, but holy crap it drives me insane.
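For what it's worth, the 'shared consensus' step usually reduces to something embarrassingly simple, along these lines (a toy sketch; the vendor names, dates, and agreement threshold are invented):

```python
from collections import Counter
from datetime import date

def consensus(vendor_values, min_agree=2):
    """Accept a value only when enough vendors agree on it."""
    reported = [v for v in vendor_values.values() if v is not None]
    if not reported:
        return None, "no data from any vendor"
    value, votes = Counter(reported).most_common(1)[0]
    if votes >= min_agree:
        return value, f"{votes}/{len(vendor_values)} vendors agree"
    return None, "vendors disagree, flag for manual review"

# Hypothetical vendor feeds for a single company's next earnings date.
earnings_date = {
    "vendor_a": date(2025, 2, 4),
    "vendor_b": date(2025, 2, 4),
    "vendor_c": date(2025, 2, 5),  # off-by-one, depressingly common
    "vendor_d": None,              # missing entirely
}
print(consensus(earnings_date))  # (datetime.date(2025, 2, 4), '2/4 vendors agree')
```

The hard part is never the vote; it's the manual review queue for everything that falls out of it.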
- It is, sadly. Through the entire chain of custody.
Of the many things I despise in the industry, market data and connectivity costs are near the top. It’s a fully captured market, and customers don’t have a choice when the producer decides to raise prices.
- Bingo. The transactions business has largely been commoditized (esp because of the vast array of ATSes that are available to be traded on), so the only way to force business and guarantee month-over-month revenue is in market data and connectivity (it also explains why NYSE, Nasdaq, and Cboe have leaned very heavily on data/connectivity business lines for generating new revenue over the last ten years; or in Nasdaq's case especially focusing on 'peripheral' businesses). Listings as well generates some revenue, but is a much harder business to get into.
- Not in any way to diminish the Swedish markets, but that's partially because Nasdaq and NYSE know they can sell 'status' in listing on their US markets.
- > So, are the exchanges connected?
Yes, and no. Not in the sense that it's one uniform trading system, but there are various interlinkages (market data via the SIP for example; mostly dictated by RegNMS) and most of the exchanges operate a brokerage running an order routing business.
> Would there be any difference for me buying and selling stocks on “normal NYSE” vs “NYSE Texas”?
Assuming you have the ability to dictate to your broker where they perform the trade, there would be absolutely no difference between trading on NYSE vs NYSE TX (minus maybe some currently undecided fee differences). Functionally they are identical (they even run on the same technology stack).
> What benefits would companies see from listing on NYSE Texas vs the other NYSE?
The cultural issue TXSE (and by extension this NYSE TX move) is trying to capture is the 'anti-DEI' / 'the exchange tells us what the composition of our board must be to meet listings standards' type of thing. There's a subset of the corporate world who see value in capitalizing on these issues. There's also the potential for different financial requirements or incorporation requirements, but those haven't been disclosed yet (and wouldn't be too divergent from the existing differences between listing on the various Nasdaq or NYSE exchanges).