Nope, those are just facts.
No, that is your opinion. I presume when you say "irrelevant" you mean inconsequential, which I would agree with, except that Google also said they are informing other OEMs to check for the software. Why would they do that if it was limited to Pixels? Also, saying that the vulnerability is irrelevant (or inconsequential), given that Google was dragging their feet on this, sends a bad message to customers that they are not taking security seriously enough. So ultimately, whether a vulnerability is serious or inconsequential is a matter of opinion, not fact. If even one customer's phone had been compromised, and I understand that none have been, that would have been enough. So they got lucky that didn't happen.
 
No, that is your opinion. I presume when you say "irrelevant" you mean inconsequential, which I would agree with, except that Google also said they are informing other OEMs to check for the software. Why would they do that if it was limited to Pixels?
No, it's actually just your opinion.

Your assumption that "other Android OEMs might be affected" doesn't trump the very simple and obvious fact that the only devices confirmed by anybody to actually be affected are the Pixels. This is what I've said, and it's 100% accurate, even if you don't like it.
Also, we don't know what protocols Google has with other Android OEMs, or why they informed them of this situation (if they actually informed them) in the first place. Maybe this is just what they do: standard procedure.
Another obvious fact: if this had been an Android problem (as you are implying), Google would have issued an Android update, not a Pixel update.

Also, saying that the vulnerability is irrelevant (or inconsequential), given that Google was dragging their feet on this, sends a bad message to customers that they are not taking security seriously enough. So ultimately, whether a vulnerability is serious or inconsequential is a matter of opinion, not fact.
Well, it kind of is irrelevant, and the reasoning behind why is simple and easy to understand.
Google even stated that they will remove the app "Out of an abundance of precaution". This makes it clear that they don't actually consider it an issue.
And Google not taking security seriously? This is a funny one. Did you hear about Project Zero? Google discovered more iOS vulnerabilities than anybody else (including a lot of really major ones).
Google has the largest user base in the world; few companies factually take security more seriously than Google.

If even one customer's phone had been compromised, and I understand that none have been, that would have been enough. So they got lucky that didn't happen.
You do understand that in order to use the app to do anything, you need to completely compromise the device first? It's a deactivated component buried in the OS code; it's not an open attack path, and you can't just use it to compromise a device.
 
You do understand that in order to use the app to do anything, you need to completely compromise the device first? It's a deactivated component buried in the OS code; it's not an open attack path, and you can't just use it to compromise a device.
Your entire argument boils down to no harm, no foul. Which is spin in my book because apparently others in the industry have taken this more seriously than you seem to be.
 
Your entire argument boils down to no harm, no foul. Which is spin in my book
Nope, not at all, my entire argument boils down to the following simple fact:

In order to use the app to do anything, you need to completely compromise the device first. It's a deactivated component buried in the OS code; it's not an open attack path, and you can't just use it to compromise a device.

because apparently others in the industry have taken this more seriously than you seem to be.

Security experts confirmed what I wrote above.
Now, if by "others" you mean iVerify and Palantir, considering they've been trying to portray this as much worse than it actually is just to promote themselves, I don't see why anybody would trust them.

Here are a few more quotes from GrapheneOS maintainers:

"Physical access isn't enough", "They would also need the user's password. This app does not expose any attack surface to a physical attacker for that kind of threat model. It exposes no actual attack surface that's relevant."

"In order to enable and set up this app, you already need to have more control over the device than this app is able to provide by exploiting the insecure way it fetches a configuration file."
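The "insecure way it fetches a configuration file" point lends itself to a small sketch. What follows is a hedged, illustrative toy in Python, not the actual Showcase code: the config bytes, the pinned digest, and every name here are invented. It only shows why a client that accepts an unauthenticated fetch can be fed a substituted file by whoever controls the transport, while a client that checks integrity cannot:

```python
import hashlib

# Illustrative toy only -- NOT the actual Showcase code. The config
# bytes, digest pinning, and function names are all invented. It shows
# why accepting an unauthenticated fetch lets anyone who controls the
# transport (e.g. plain HTTP on a hostile network) substitute their own
# file, while verifying integrity against a pinned digest does not.

TRUSTED_CONFIG = b'{"demo_mode": false}'
# Digest the vendor could pin in the client if the fetch were verified.
TRUSTED_DIGEST = hashlib.sha256(TRUSTED_CONFIG).hexdigest()

def fetch_config(tampered=None):
    """Simulates an unauthenticated fetch: the client simply receives
    whatever the network delivers, attacker-modified or not."""
    return tampered if tampered is not None else TRUSTED_CONFIG

def naive_client(payload):
    # Accepts any bytes it receives -- the flawed pattern.
    return payload

def verifying_client(payload):
    # Rejects payloads whose digest doesn't match the pinned value.
    if hashlib.sha256(payload).hexdigest() != TRUSTED_DIGEST:
        raise ValueError("config failed integrity check")
    return payload

evil = b'{"demo_mode": true}'
assert naive_client(fetch_config(evil)) == evil   # hijack accepted
try:
    verifying_client(fetch_config(evil))          # hijack rejected
except ValueError:
    print("tampered config rejected")
```

A real fix would use TLS with certificate validation and/or a signed configuration, but the asymmetry is the same: without some integrity check, the client trusts whatever arrives.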
 
Security experts confirmed what I wrote above.
Now, if by "others" you mean iVerify and Palantir, considering they've been trying to portray this as much worse than it actually is just to promote themselves, I don't see why anybody would trust them.
You are portraying this as much better than it actually is. Why should I trust you? Even if it is difficult to exploit, it is a serious problem. You concede that the vulnerability is buried deep in the OS and difficult to detect. That is pretty terrifying, and reason to believe that it could be in other Android devices. You say there is no evidence of that, but if it is difficult to detect and exploit, then I would not be surprised if it is lurking elsewhere and we just don't know. And when it comes to security, it is better for Google to assume the worst and be proactive than to just say we're letting OEMs know out of an abundance of caution (which is pure PR) but it's not on us to fix this beyond the Pixel. As a Samsung owner, I want to know if there is a fix or that the device is clear. I don't want technical explanations from the interweb forums about how unlikely it is to be exploited and how it isn't as bad as others say it is, which is the definition of spin, BTW.
 
After so many explanations, you still don't understand how this vulnerability works.
It's fascinating how you missed the part where, in order to do anything, you already need to have more control over the device than this app is able to provide by exploiting the insecure way it fetches a configuration file. So it has no value for an attacker anyway. And this is an objective fact.
 
After so many explanations, you still don't understand how this vulnerability works.
It's fascinating how you missed the part where, in order to do anything, you already need to have more control over the device than this app is able to provide by exploiting the insecure way it fetches a configuration file. So it has no value for an attacker anyway. And this is an objective fact.
A vulnerability is a vulnerability. You're just making excuses for Google having created it in the first place.
 
Just one more point: for better or for worse, we entrust big tech with security. Having watchdogs like iVerify to keep big tech honest is a good thing. The more eyeballs, the better. There is no reason to trust or distrust them; let's look at their findings objectively and see whether there is any merit to them. They weren't wrong about the existence of the vulnerability, and if it hadn't been noticed then it would still be out there. Being dismissive of the watchdogs because they find a technically difficult, low-value exploit isn't helpful, IMHO. If a malicious actor can find a way to exploit the vulnerability, we should assume that they will try and that it could compromise security. And for that reason alone, we need to take security seriously, especially when the vulnerability has strong privileges and insecure access, as this one does. The average consumer doesn't care about techy explanations, they just want assurances that any known security issues have been resolved.
 
A vulnerability is a vulnerability. You're just making excuses for Google having created it in the first place.
Not all vulnerabilities are the same.
The only one here making excuses is you, I'm just presenting facts.

If a malicious actor can find a way to exploit the vulnerability, we should assume that they will try and that it could compromise security.

The only way this vulnerability can be exploited is if you already have a higher level of access and control than this vulnerability is able to provide. Why is this so hard for you to understand?

The average consumer doesn't care about techy explanations

I'm starting to think that the average consumer would easily be able to understand what I've been explaining here, it's surprising that you aren't able to do so.
 
The only way this vulnerability can be exploited is if you already have a higher level of access and control than this vulnerability is able to provide.
Possibly... but consider that threat actors very rarely use a single vulnerability to achieve their objectives; they commonly chain multiple "zero-day" vulnerabilities that they've discovered and/or purchased on the black market, where one exploit gets the threat actor in the front door, another elevates their privileges and a third grants them full remote access to the device, to cite a very simplistic potential scenario.

Perhaps the Showcase vulnerability would have become one exploit in such a chain, given enough time. Absent this discovery, maybe that would have taken weeks, or years... or maybe it already happened and some threat actor is angrily pulling out their hair right about now. Who can really say for certain?

The bottom line is: finding and eliminating vulnerabilities is always a net positive for security... even if exploitation seems unlikely on the surface of things.
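The chaining idea above can be modelled abstractly. The following is a hypothetical sketch (the stage names and privilege levels are invented; none of it is real exploit code): each link in a chain must be launchable at the attacker's current access level and must raise it, which is also why a component that requires more access than it grants cannot serve as an entry point:

```python
from dataclasses import dataclass

# Hypothetical model of exploit chaining. Stage names and privilege
# numbers are invented for illustration; this is not real exploit code
# and not the actual Showcase behaviour.

@dataclass
class Exploit:
    name: str
    requires: int  # access level an attacker must already hold
    grants: int    # access level a successful exploit yields

def run_chain(chain, level=0):
    """Walk the chain; each link must be launchable at the current level,
    and the attacker's level only ever goes up."""
    for ex in chain:
        if ex.requires > level:
            raise RuntimeError(f"chain breaks at: {ex.name}")
        level = max(level, ex.grants)
    return level

# A classic three-stage chain: entry, escalation, full control.
chain = [
    Exploit("remote entry point", requires=0, grants=1),
    Exploit("privilege escalation", requires=1, grants=2),
    Exploit("full device control", requires=2, grants=3),
]
assert run_chain(chain) == 3

# A component that requires more access (3) than it grants (2) can
# never start a chain -- it breaks immediately for a remote attacker.
try:
    run_chain([Exploit("high-privilege-only component", requires=3, grants=2)])
except RuntimeError as e:
    print(e)  # chain breaks at: high-privilege-only component
```

Whether a given disabled component could ever be wired into such a chain is exactly the question the rest of this thread argues about.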
 
Possibly... but consider that threat actors very rarely use a single vulnerability to achieve their objectives; they commonly chain multiple "zero-day" vulnerabilities that they've discovered and/or purchased on the black market, where one exploit gets the threat actor in the front door, another elevates their privileges and a third grants them full remote access to the device, to cite a very simplistic potential scenario.

Perhaps the Showcase vulnerability would have become one exploit in such a chain, given enough time. Absent this discovery, maybe that would have taken weeks, or years... or maybe it already happened and some threat actor is angrily pulling out their hair right about now. Who can really say for certain?

The bottom line is: finding and eliminating vulnerabilities is always a net positive for security... even if exploitation seems unlikely on the surface of things.
This is absolutely true. For security, we must assume the bad actors are a few steps ahead and will outsmart us. We shouldn't be complacent just because we believe the risk to be low; we could be wrong or lacking information.

Also, for this vulnerability, it is possible that the person with the higher level of access is malicious. Perhaps a disgruntled employee or family member having full access to someone else's device could exploit it in a malicious way, opening some sort of back door?

Whoever designed Showcase decided that it was safe enough, for all of the reasons given (no open attack path, etc.), and that was by design. But it is bad practice to do it the way it was done, and it was eventually called out. The no-harm-no-foul argument doesn't fly when it comes to security.
 
Not all vulnerabilities are the same.
The only one here making excuses is you, I'm just presenting facts.
This makes no sense. I'm not excusing anyone and certainly not Google. But it seems like you are giving Google a pass on this because you believe the vulnerability to be "irrelevant" or whatever dismissive term you are using to minimize the significance of it. You've got your facts and you are spinning them to make it sound like this whole thing is no big deal, and I'm just saying it is a big deal even if it wasn't a crisis. Security is always a big deal and it is getting more so every day. Stuff like this Showcase thing should be a wake-up call for developers who think they can be too clever by half because of some technicality that makes the vulnerability inaccessible or low risk or whatever rationale they come up with.
 
Possibly... but consider that threat actors very rarely use a single vulnerability to achieve their objectives; they commonly chain multiple "zero-day" vulnerabilities that they've discovered and/or purchased on the black market, where one exploit gets the threat actor in the front door, another elevates their privileges and a third grants them full remote access to the device, to cite a very simplistic potential scenario.

Perhaps the Showcase vulnerability would have become one exploit in such a chain, given enough time. Absent this discovery, maybe that would have taken weeks, or years... or maybe it already happened and some threat actor is angrily pulling out their hair right about now. Who can really say for certain?

The bottom line is: finding and eliminating vulnerabilities is always a net positive for security... even if exploitation seems unlikely on the surface of things.
Instead of wasting time explaining something that should have been clear by now, I will do something else.

When would this app be used in a "chained attack" using multiple zero-day exploits, and more importantly, why would it be used then? I'm just curious.
And what value would this app have for somebody who owns multiple zero-day vulnerabilities?
Do you understand that this app is deactivated and it's buried in the OS code?
 
This makes no sense. I'm not excusing anyone and certainly not Google. But it seems like you are giving Google a pass on this because you believe the vulnerability to be "irrelevant" or whatever dismissive term you are using to minimize the significance of it. You've got your facts and you are spinning them to make it sound like this whole thing is no big deal, and
I don't believe anything, I just presented the actual facts, which you obviously try to ignore.

I'm just saying it is a big deal even if it wasn't a crisis. Security is always a big deal and it is getting more so every day. Stuff like this Showcase thing should be a wake-up call for developers who think they can be too clever by half because of some technicality that makes the vulnerability inaccessible or low risk or whatever rationale they come up with.
OK, why exactly is this a big deal?
On my S23U, for example, how would somebody be able to exploit this?
What does somebody need to do to get to this app in the first place? Can you explain? I doubt it.
 
Instead of wasting time explaining something that should have been clear by now, I will do something else.

When would this app be used in a "chained attack" using multiple zero-day exploits, and more importantly, why would it be used then? I'm just curious.
And what value would this app have for somebody who owns multiple zero-day vulnerabilities?
Do you understand that this app is deactivated and it's buried in the OS code?
As I am neither a security researcher nor a threat actor, I don't claim to know all of the details that you're (ostensibly) seeking. I would hazard a reasonable guess that neither do you -- but to put it very bluntly, you also don't know for certain that these things can't happen. The fact that neither of us know these things does not mean that there is no potential risk.

In spite of this, you've apparently decided for yourself that your devices are all perfectly safe and secure, and no amount of commentary from me or anyone else is going to sway you from your view. That's fine; you do you. I'm not so easily convinced.

And fortunately for all of us, there are actual security researchers out there who are likewise skeptical, and who have made it their calling to seek out these kinds of issues. Their efforts are why we know about the Showcase issue at all, and -- disabled by default though it may be -- their efforts are also why it is being removed. I, for one, am glad for what they do.
 
As I am neither a security researcher nor a threat actor, I don't claim to know all of the details that you're (ostensibly) seeking. I would hazard a reasonable guess that neither do you -- but to put it very bluntly, you also don't know for certain that these things can't happen. The fact that neither of us know these things does not mean that there is no potential risk.

In spite of this, you've apparently decided for yourself that your devices are all perfectly safe and secure, and no amount of commentary from me or anyone else is going to sway you from your view. That's fine; you do you. I'm not so easily convinced.

And fortunately for all of us, there are actual security researchers out there who are likewise skeptical, and who have made it their calling to seek out these kinds of issues. Their efforts are why we know about the Showcase issue at all, and -- disabled by default though it may be -- their efforts are also why it is being removed. I, for one, am glad for what they do.
I see, you are making excuses now.
Read here, then, from somebody who has knowledge of the subject:


Again, do you understand that in order to make this app do anything you need to completely compromise the device first? It's not an attack path. And this is a fact.
 
I don't believe anything, I just presented the actual facts, which you obviously try to ignore.
I will leave the first part of your sentence alone.

You keep pointing to the GrapheneOS post as definitive and incontrovertible fact, but it is not. They are an alternative OS for Android and they have their own agenda. The posts are heavily biased against iVerify (and Apple), and when taken at face value they seem suspicious. Also, Google has acknowledged the vulnerability and plans to remove it from Pixel phones only, regardless of what GrapheneOS thinks about the issue, which they claim to have flagged years ago.

Also, apparently once the APK is enabled it can receive a configuration file that could be hijacked. This seems like a back door that could be opened by someone who has full access to the device. I understand that it is not likely but it is a possibility and therefore the vulnerability should be resolved. Google has known about this for years and only now has decided to address it.
 
Also, apparently once the APK is enabled it can receive a configuration file that could be hijacked. This seems like a back door that could be opened by someone who has full access to the device. I understand that it is not likely but it is a possibility and therefore the vulnerability should be resolved. Google has known about this for years and only now has decided to address it.
Well, well, apparently.
So answer this, then:
How can somebody activate that APK? What would they need to do in order to be able to turn on the app and use it?

Can you answer?
 
Well, well, apparently.
So answer this, then:
How can somebody activate that APK? What would they need to do in order to be able to turn on the app and use it?

Can you answer?
See post #144. Now tell me what could happen when the app is turned on and someone hijacks the configuration file.
 
So, as expected, you refuse to answer.

But I can answer very easily.
If somebody is able to activate this app, it doesn't matter what he does with it, because it means he has already completely compromised the device and can do whatever he wants with the device itself, including putting in useless effort to turn on this app (so wasting his time).

Now, what's extremely important, crucial actually, is that doing this remotely doesn't seem possible; in any case, it definitely hasn't been proven possible.
 
So, as expected, you refuse to answer.

But I can answer very easily.
If somebody is able to activate this app, it doesn't matter what he does with it, because it means he has already completely compromised the device and can do whatever he wants with the device itself, including putting in useless effort to turn on this app (so wasting his time).

Now, what's extremely important, crucial actually, is that doing this remotely doesn't seem possible; in any case, it definitely hasn't been proven possible.
That hasn't been proven impossible either. And that is my point.

Also I would not presume a malicious actor is ever wasting time. Maybe he knows something you don't.
 
Well, considering that iVerify needed help to activate this app on a phone they own, and were only able to do it with physical access, it's safe to deduce that doing it remotely would be monumentally more challenging. They also didn't show any sign that they want to try and see if they can activate the app remotely. Which is expected: they know they can't.

Also I would not presume a malicious actor is ever wasting time. Maybe he knows something you don't.
Well, if somebody completely compromises the security of a device and has elevated admin access, why does it matter what he does with this app? And why would somebody in that case waste time with this app? It doesn't make any sense; this app gives him less access than is needed to activate it.
 