Enlighten me.
How does enabling unknown code to run not create an attack vector? Most of the work of a hacker or pentester is simply getting code running on a machine, then getting it running in an elevated context, and/or finding a way to inject or extract data from memory so they can monitor or modify other running code.
So sure, you might be running in a sandbox, but that doesn’t change how the hardware works. You may be limited in which sectors/blocks/registers you can read or write, but not everything has to be done directly: you can use something like a buffer overflow to push protected data into unprotected memory (or vice versa), you can manipulate threading and context switching to keep buffers from being cleared before you read them, or you can find trusted code that will move the data for you.
Software is not infallible, and when it comes to secure computing, unless you are air-gapped you always carry some degree of risk (and even air-gapped devices can be compromised, though that is certainly an order of magnitude or more harder). Any time you add a new way to run code on a machine, you add an avenue for exploitation. Why do you think so many exploits targeted JavaScript, or Flash back in the day? Because that is the easiest code to get running on the target’s machine.
Could they add protections to scenarios like using a QR code to sideload an app? Absolutely, but that’s just more engineering cost being pushed onto Apple for no good reason.
All it takes is a single way to push or pull data across the boundary, or to elevate your access, just one exploit, and until it’s caught by security researchers and patched, every single device is at risk. There is a reason things like shielded VMs exist. But expecting a phone to carry the hardware and performance overhead of running shielded VMs seems a bit excessive.
Regardless, there has been more than one occasion where hypervisors, ones fully implementing hardware virtualization features like SR-IOV and VT-x/VT-d, were exploitable. VMware, Hyper-V, Xen, etc. have all had security issues at one point or another. We have even seen straight hardware-level exploits like Spectre and Meltdown, which could read protected data right out of the CPU cache. And being honest, I’m pretty confident that any full hypervisor is more secure than your average app sandbox, especially if that sandbox needs deep access to your device. Running an app store is not some lightweight activity, since you are also the installer. And don’t forget: the store app itself could be the compromised thing, in which case every app it installs would potentially be compromised as well.
Ironically, Forbes of all outlets posted a pretty good set of 5 Laws of Cybersecurity. There are lots of versions and variations, but I think this covers the important bits:
- If there is a vulnerability, it WILL be exploited.
- There is always a vulnerability. Everything is always vulnerable in some way.
- Humans trust even when they shouldn’t.
- With innovation (and change) comes opportunity for exploitation.
- If you believe your app/os/device is completely secure, see 1, 2, 3 and 4.
Anything that lets you run untrusted code on an iOS device is a brand-new avenue for exploits. Yes, right now you as a developer can trust apps you publish yourself, but it’s against the terms of use to use those mechanisms to distribute apps openly, and Apple does actually shut down developers who openly share their profiles to distribute beta apps or whatever. So no one has ever really done active pen-testing through apps that would never pass the approval process, because who cares about exploits that can’t be used? It doesn’t matter if you can make an app that exploits the iPhone if Apple won’t (or at least so far hasn’t) publish it in the App Store. But sideloading changes the game. Suddenly you are no longer bound by what Apple will publish, only by what you can trick people into installing, and for that, refer back to law 3.
Also, to be clear, I am not saying every device is magically going to get cracked or compromised, just that this absolutely increases the attack surface, which means it IS going to increase the number of compromised devices. All the new potential exploits would require either physical access or tricking the user by embedding the payload in something they will download. And how many people read security prompts? How many actually heed them?
Instead of calling this a strawman, let’s hear the knowledge drop. Please, enlighten me.