See: https://security.apple.com/blog/memory-integrity-enforcement...
And some interesting excerpts:
Both approaches revealed the same conclusion: Memory Integrity Enforcement vastly reduces the exploitation strategies available to attackers. Though memory corruption bugs are usually interchangeable, MIE cut off so many exploit steps at a fundamental level that it was not possible to restore the chains by swapping in new bugs. Even with substantial effort, we could not rebuild any of these chains to work around MIE. The few memory corruption effects that remained are unreliable and don’t give attackers sufficient momentum to successfully exploit these bugs.
Notably, attackers confront Memory Integrity Enforcement early in the exploitation process. Although some issues are able to survive MIE — for example, intra-allocation buffer overflows — such issues are extremely rare, and even fewer will lend themselves to a full end-to-end exploit. Inevitably, attackers must face MIE at a stage where their capabilities are still very limited, leaving few viable avenues for exploitation. This leads to fragile chains where breaking just one step is often enough to invalidate the entire exploit strategy. When that happens, most of the chain’s components can’t be reused, and the attackers have to restart exploit development with entirely new bugs.
If it costs you millions of dollars for an exploit that gets patched a week after it's deployed, you can't use that for mass surveillance. If it costs you hundreds of millions, you can hardly use it for targeted attacks either. The cost of exploiting phones is constantly going up. It used to be within the ability of a single hobbyist developing a jailbreak. Now it's only within reach of the most well-funded hacking groups for highly targeted attacks.
Fail is an overstatement. Apple is part of PRISM and the functionality is working as intended. When a hole becomes public, it is quickly patched.
PRISM was semi-voluntary. And the legal immunities it operated under expired in 2017.
Irrelevant to the inaccuracy of the statement “Apple is part of PRISM.” Present tense. (Emphasis mine.)
It’s important in these discussions to separate the nihilists who are convinced all is always lost from those who know what they’re talking about.
Which is important to identify as it separates the eternally hopeful from those who've seen this cycle before.
You say from unfalsifiable supposition.
That’s fine. You may not be wrong. But if the only evidence is mis-citing a shuttered programme, that’s important to note, too.
Enlighten us?
Edit: Looks like for multithreaded code they suggest you use Thread Sanitizer, so in multithreaded code Swift doesn't enforce memory safety. At the same time, I don't see a history of memory safety issues with Swift compared to C and C++, so I don't see this being a big deal in practice, particularly if you adopt strict concurrency checking.
- Swift has a feature where you can force-unwrap an Optional, which is basically unusable: it crashes the entire program if it fails, with no way to gracefully handle the failure or present a message to the user. And it's a massive footgun, since the syntax is so convenient it makes it seem like it should be used. But no, you have to avoid it like the plague.
- There are some Apple APIs that disregard their own types and pass nil to your callback where the type says it's non-optional. This means if you access the value at all: crash.
- Concurrent access to a Dictionary: crash. And very hard to track down, since it can be very intermittent; in our case we were using an asynchronous dispatch queue instead of a synchronous one, so the bug came down to a single keyword. Oops!
- Stack overflow, crash.
- This isn't really Swift's fault, but in general every single macOS API is riddled with bugs and undocumented behavior. As a matter of fact, I would venture to say that almost every macOS API is virtually undocumented, either because there is literally no documentation or because the existing documentation is just function names and occasionally an extremely out-of-date sample app.
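For illustration, the force-unwrap footgun from the first bullet can be sketched like this (a minimal sketch; `User` and `greet` are made-up names, not any real API):

```swift
struct User {
    let name: String
}

func greet(_ user: User?) -> String {
    // The footgun: `user!.name` would trap and kill the whole process if
    // `user` is nil, with no way to catch it or show the user a message.
    // The safe alternative is guard-let, which handles nil gracefully:
    guard let user = user else {
        return "Hello, stranger"
    }
    return "Hello, \(user.name)"
}
```

The trap is that `user!.name` is shorter than the guard, so the crashing form reads like the idiomatic one.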
So IMO it's about as memory safe as C. We're floating the idea of just porting everything to Rust and moving on, though we haven't researched or committed to it yet.
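For the Dictionary race in particular, the standard fix is to serialize all access through a serial dispatch queue. A minimal sketch (the `CountStore` type and names are illustrative, not from any real codebase):

```swift
import Dispatch

// Swift's Dictionary is not thread-safe: unsynchronized concurrent
// mutation can corrupt its internal storage and crash intermittently.
final class CountStore {
    private var counts: [String: Int] = [:]
    // DispatchQueue(label:) is serial by default, so all closures
    // submitted to it run one at a time.
    private let queue = DispatchQueue(label: "count-store")

    func increment(_ key: String) {
        // Routing every mutation through the serial queue removes the race.
        // The original bug described above was dispatching to a concurrent
        // queue instead of a serial/sync one -- a one-keyword difference.
        queue.sync { counts[key, default: 0] += 1 }
    }

    func value(for key: String) -> Int {
        queue.sync { counts[key, default: 0] }
    }
}
```

An actor would be the modern equivalent under strict concurrency checking, with the compiler enforcing the isolation instead of a queue.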
From that article I linked:
If you have conflicting access to memory from within a single thread, Swift guarantees that you’ll get an error at either compile time or runtime.
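The classic single-thread example of that guarantee (adapted from the Memory Safety chapter of the Swift language guide) is an `inout` parameter that aliases a global:

```swift
var stepSize = 1

func increment(_ number: inout Int) {
    // Reads the global `stepSize` while the caller may hold
    // exclusive (inout) write access to the same variable.
    number += stepSize
}

// increment(&stepSize)
// ^ conflicting access: Swift rejects this at compile time or traps at
//   runtime ("Simultaneous accesses ... but modification requires
//   exclusive access"), depending on what the compiler can prove.

// The safe version operates on an explicit copy, then writes back.
var copy = stepSize
increment(&copy)
stepSize = copy
```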
Does any of what you said lead to a vulnerability that can be exploited?
Do you really think that, with all the years of iPhone device and account takeovers from a text message requiring no reading or interaction, Apple with their maximally controlled walled garden aren't facilitating? Apple spent billions moving factories because the US government told them to. They are the keymaker.
Apple could do a lot of things, such as preventing the black market for stolen phones from existing. A single city, London, had 80,000 phones stolen in 2024.
"...Onwurah argued that "robust technical measures" such as blocking stolen phones taken overseas from accessing cloud services could make devices "far less valuable".
"She also pointed to comments by Mobile UK, the trade association of the UK's mobile network operators, who said blocking IMEI in other countries was a "necessary step to dismantle the business model of organised crime".
"However, she said when giving evidence, Apple, Google and Samsung had avoided saying why they would not implement the technology." <--**
Doesn't iCloud Activation Lock already make a stolen iPhone basically unusable? What more do you want?
The solution strikes me as being to make repair easier and cheaper by flooding the market with parts/components. Someone may say that Apple prefers selling new Apple products, but the repairs are still happening on the black market anyway, and as things stand Apple isn't getting a cut of them. Am I missing something?
If you make the mistake of not notifying the carrier immediately, which you won't think to do because everyone thinks the phone was stolen for the phone itself, you're on the hook for the charges.
Carriers know that no legitimate users use (or even know of) shortcodes, yet they have them enabled by default on all plans, exactly because they take a cut from this theft and they can turn a blind eye to it by pretending the charges are consensual.
Any chance you'd have article links?
"46 people were arrested, including two men who were detained in London last month on suspicion of handling stolen goods after 2,000 phones were found in their car and addresses linked to them."
These aren't local street thugs. This is a massive, global criminal enterprise:
"London Metropolitan Police, which had initially assumed that "small-time thieves" were behind the city's wave of phone thefts, got their first major lead on Christmas Eve last year. A woman using "Find My iPhone" had tracked her stolen device to a warehouse near Heathrow Airport."
"We discovered street thieves were being paid up to 300 pounds ($403) per handset and uncovered evidence of devices being sold for up to $5,000 in China."
https://www.timesunion.com/news/world/article/uk-police-unco...
https://timesofindia.indiatimes.com/world/uk/industrial-scal...
TL;DR: if the device is stolen from you by a stranger, this is possible. If the device is stolen by someone you permitted to use it, it is not.
I suspect these kinds of thefts are a small fraction of the "80,000 phones stolen in 2024" that OP was talking about. Moreover, the only plausible case I can think of for this happening is corporate devices, which can be MDM-enrolled and locked to a particular organization.