No, of course not. I just like to do random Google searches and hope that the results support what I was saying. :rolleyes:

Well they didn't really support your statements. Not with facts anyway. Predictions aren't facts. But your sarcasm was noted. And about as effective as your link.
 
This would never have happened if Steve Jobs was still with us. :eek:

Maybe I'm misunderstanding here, but I tend to agree with you. Why are people down-voting your comment? Steve Jobs set an extremely high bar for Apple, which is part of what made him unique and what turned Apple into a great(er) company. If Tim can maintain that high bar, awesome, but it's a big challenge.
 
Which is why Apple should start educating their customers about security.

Make users aware that just because Apple has created a walled garden for its iOS apps, it doesn't mean that everything you download and install is safe and non-malicious.

Educate users to think before installing.
Educate users to think before allowing an app to access information or send information.

But that would mean a big break with how iOS is promoted today.
So I'm guessing it won't happen.

I agree.

Edit: Actually, this would be "security theater" akin to the TSA if apps were able to circumvent it. Malicious apps are able to circumvent the review process, but I have yet to read anything saying that apps can circumvent being blocked from reading or sending information.

I think you left out a few words there, but I think I get your point. :)

I disagree that these controls cannot be circumvented. As I said, the straightforward way to get around them is to create an app that appears to or actually justifies the access request.
 


Yeah.

And after iOS 6, they will write apps that have reasons to both read your address book and send information.
And users will grant the apps permission.
And the apps will continue to steal information.
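The scenario above can be sketched as a toy permission gate (purely illustrative Python; `PrivacyGate` and its methods are invented for this example and are not Apple's actual API): once the user grants access, every subsequent read looks identical to the OS, whether it feeds the UI or an upload.

```python
# Hypothetical sketch of an iOS 6-style per-app privacy prompt.
# Once the user taps "Allow", the gate cannot distinguish a
# legitimate read from a malicious one -- both make the same call.

class PermissionDenied(Exception):
    pass

class PrivacyGate:
    def __init__(self):
        self.grants = {}  # (app, resource) -> True/False

    def prompt_user(self, app, resource, user_allows):
        """Simulate the one-time system dialog."""
        self.grants[(app, resource)] = user_allows

    def access(self, app, resource):
        """Any later access just checks the stored grant."""
        if not self.grants.get((app, resource), False):
            raise PermissionDenied(f"{app} may not read {resource}")
        return f"<contents of {resource}>"

gate = PrivacyGate()
gate.prompt_user("FindAndCall", "contacts", user_allows=True)

# A "legitimate" lookup and a malicious upload make the same call:
data_for_ui = gate.access("FindAndCall", "contacts")
data_to_exfiltrate = gate.access("FindAndCall", "contacts")
assert data_for_ui == data_to_exfiltrate
```

The point of the sketch: the control works exactly as designed, yet it cannot tell *why* the app wants the data, which is what the social-engineering argument above hinges on.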

But that's not the apps being able to circumvent the security features directly. If the user blocks app Y from doing X and it still can do X then that's circumventing those features.

This would be different, since it instead uses the fact that Apple users are uninformed about how well the walled garden is shielding them from malware.
 

Sure, like I said, I'm all for access controls. They work. But malicious apps can easily get the user to bypass them. Security is about trusting the developer. Or not. Plain and simple.
 
Well they didn't really support your statements. Not with facts anyway. Predictions aren't facts. But your sarcasm was noted. And about as effective as your link.

Did you read the link? Part of it was about predicting how bad the problem will be in the future. At the very beginning, though, there was this little part...

"The security firm said at the start of the year, it had found more than 5,000 malicious applications designed to target Google's Android mobile operating system, but the figure has since risen to about 20,000 in recent months."

That's not a prediction. It's an observation they made. So yes, it did support my argument. And your silly question deserved a sarcastic response.
 

They don't list or even define "malicious" though, do they? What was malicious? Was it stealing info and uploading it? Was it spyware? What was it?

So personally - I take their "report" with a grain of salt.
 
Sure, like I said, I'm all for access controls. They work. But malicious apps can easily get the user to bypass them. Security is about trusting the developer. Or not. Plain and simple.

Easily because:
1. The checks before an app is allowed into the App Store aren't working 100%, thus giving a false sense of security.
2. A lot of Apple users are uniformed

While it is very hard to circumvent the security systems of a bank and steal money, it is a lot easier to contact a customer of the bank and, by lying, get him to give up the information needed to steal his money from the bank.
Which is why the banks tell people not to give up their information willy nilly.
 
They don't list or even define malicious though do they? What was malicious. Was it stealing info and uploading it? Was it spyware? What was it?

So personally - I take their "report" with a grain of salt.

I'm glad to see that we both agree that this article DOES support my argument. Whether or not you agree with the article is a completely different subject, and really wasn't what we were talking about. You accused me of using hyperbole; I posted an article supporting my information. You asked me if I had even read the linked story, basically saying that I posted something that didn't support my point. It clearly does.

I never meant for this to become an argument about how prevalent malware is on Android, and I'm not really trying to rip on Android at all. My original response was to someone who said there goes the myth about iOS being safer than Android. iOS is safer than Android when it comes to the amount of malware available. This isn't one of those "sky is falling" moments for iOS. It's one app.
 
How come Apple doesn't have a system in place where a few people from the company actually install the apps and run them a few times on actual iOS devices, with custom monitoring software tracking what private data on the device was accessed and when? That would be a tangible way to catch malware in action.

I am sure that would warrant a $100 dev. fee and a 30% cut. At the end of the day it's the devs apps that help them sell i-devices like crazy. But if their mentality is the same as it was for selling books, that is we push them to users on our devices and we get a 30% cut (and "the customer pays a little more but that's what you want anyway"...we all know the story), oh and in addition for apps we batch run some testing/reviewing software and then push the apps in the app store, maybe they don't really think they need to do anything to merit that cut and the dev. fee.

Btw, the dev fee should have been on a per-sale basis; I don't see why someone making $0 off the App Store should be paying Apple a dev fee. I am sure some will argue it's for Apple creating and maintaining the APIs, but they are doing this so they can sell more i-devices; they shouldn't be looking for the devs themselves to fund the creation of Apple's tools. Unless they want to give devs a cut of iPod, iPad, and iPhone sales, which would only be fair if you think about it, because these devices sell by virtue of the apps they have in the store, not because of anything so inherently special about them.
 
Easily because:
1. The checks before an app is allowed into the App Store aren't working 100%, thus giving a false sense of security.
2. A lot of Apple users are uniformed

While it is very hard to circumvent the security systems of a bank and steal money, it is a lot easier to contact a customer of the bank and, by lying, get him to give up the information needed to steal his money from the bank.
Which is why the banks tell people not to give up their information willy nilly.

I agree with you, but I'm not sure what you are responding to.
 
How come Apple doesn't have a system in place where a few people from the company actually install the apps and run them a few times on actual iOS devices, with custom monitoring software tracking what private data on the device was accessed and when? That would be a tangible way to catch malware in action.

What makes you think they don't? How would that help in this situation? It's not a red flag that a contact app such as "Find and Call" needs access to contacts.
 
Like the "false security" of a spell-checker? ;)

un-i-formed ;) Which I guess is a good way to be, it's when you get too i-formed and enter that reality distortion field that trouble begins.

----------

What makes you think they don't? How would that help in this situation? It's not a red flag that a contact app such as "Find and Call" needs access to contacts.

How would that help?

Surely if they had one guy for the whole of the Russian market (which they probably could afford, seeing as Russia has a population of about 140,000,000) actually installing and running the apps that came up on the store, with a custom monitoring tool on their i-device, they would have noticed in time the data being sent to someone's servers...

Apparently it wasn't enough of a red flag, btw, for Apple to notice, and you expect the average user to do so?

To me this is a huge security breach, and Apple is solely responsible for it. When you can have 1-2 people actually installing and running apps that are, as you say, very obviously big red flags, AFTER they have been approved, you will be able to protect your users. How much would it have cost them to do so? In Russian salaries, what, about 30,000 per annum, and not for 2 but for 5 people doing this. What's that compared to the money they make off the i-devices, as well as the money they have in the bank? Pretty close to what one would call a drop in the ocean.

In my mind the proper system should be: approve the app, put it in the App Store, have a few of your own employees actually download the subset of apps that look less benign than the rest and tick a few boxes, and run them first to see if the devs are playing any tricks, so you can be at the forefront of protecting your customers before they start downloading said apps themselves.
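The proposed post-approval workflow could look something like this toy sketch (Python; `AppTrace` and `review_flags` are invented names for this example, not any real Apple tooling): log which private data a monitored run touched and which servers it contacted, then flag the combination for a human to inspect.

```python
# Illustrative sketch only: a toy "post-approval" monitor. Run the
# app on a real device, record every private-data read and every
# outbound connection, and flag apps that combine the two.

from dataclasses import dataclass, field

@dataclass
class AppTrace:
    name: str
    private_reads: list = field(default_factory=list)   # e.g. "contacts"
    outbound_hosts: list = field(default_factory=list)  # servers contacted

def review_flags(trace):
    """Return human-readable red flags found in one monitored run."""
    flags = []
    if trace.private_reads and trace.outbound_hosts:
        flags.append(
            f"{trace.name} read {', '.join(trace.private_reads)} and "
            f"contacted {', '.join(trace.outbound_hosts)}: inspect traffic"
        )
    return flags

# A contacts app that only reads contacts raises no flag;
# one that also phones home to an unknown server does.
clean = AppTrace("AddressBookPro", private_reads=["contacts"])
shady = AppTrace("FindAndCall", private_reads=["contacts"],
                 outbound_hosts=["sms-spam.example.ru"])
assert review_flags(clean) == []
assert len(review_flags(shady)) == 1
```

The design choice here matches the argument in the thread: reading contacts alone is not suspicious for a contacts app, so the signal worth staffing humans for is the *pairing* of a private-data read with outbound traffic.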

Apple can thank me later for finding common sense solutions to their problems btw, :D.
 
How would that help?

Surely if they had one guy for the whole of the Russian market (which they probably could afford, seeing as Russia has a population of about 140,000,000) actually installing and running the apps that came up on the store, with a custom monitoring tool on their i-device, they would have noticed in time the data being sent to someone's servers...

Apparently it wasn't enough of a red flag, btw, for Apple to notice, and you expect the average user to do so?

To me this is a huge security breach, and Apple is solely responsible for it. When you can have 1-2 people actually installing and running apps that are, as you say, very obviously big red flags, AFTER they have been approved, you will be able to protect your users. How much would it have cost them to do so? In Russian salaries, what, about 30,000 per annum, and not for 2 but for 5 people doing this. What's that compared to the money they make off the i-devices, as well as the money they have in the bank? Pretty close to what one would call a drop in the ocean.

All the malware developer would have to do is encrypt the data stream. I don't know if that happened in this case or not.
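To illustrate the point about encrypting the stream, here is a toy sketch (Python, invented for this example; a one-byte XOR stands in for real encryption, and no actual protocol is implied): a naive monitor that scans outbound payloads for phone numbers catches a plaintext upload but is blinded by even trivial obfuscation.

```python
# A naive network monitor that looks for phone-number-like runs of
# digits in outbound payloads. Plaintext exfiltration is caught;
# even a weak XOR obfuscation defeats the substring scan.

import re

PHONE = re.compile(rb"\+?\d{7,}")  # 7+ consecutive digits

def looks_like_contact_leak(payload: bytes) -> bool:
    return PHONE.search(payload) is not None

stolen = b"Alice,+15551234567;Bob,+15559876543"
assert looks_like_contact_leak(stolen)

# XOR with a one-byte key: cryptographically worthless, but enough
# to hide the digit runs from a pattern matcher.
obfuscated = bytes(b ^ 0x5A for b in stolen)
assert not looks_like_contact_leak(obfuscated)
```

This is why content inspection alone is a weak defense: the reviewer would have to notice the *destination and timing* of the traffic, not just recognizable data in it.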
 
Yeah. Although Apple never used the kill switch before, this would be a good case for it.



The developer probably wasn't spamming its users until it was approved.

Oh, how users here freaked out pretty hardcore when the "kill switch" was first mentioned. Well, now do we see its usefulness? I think so.
 
All the malware developer would have to do is encrypt the data stream. I don't know if that happened in this case or not.

You know what, I am sure, seeing as life is very ironic indeed, that the developer probably didn't encrypt the data stream in this case.

In any case, how come Kaspersky spotted it and not Apple? You would think Apple has enough money, if they are really serious about their customers' data safety, to buy Kaspersky, and every Kaspersky on the globe, and any potential Kaspersky in our solar system.
 
You know what, I am sure, seeing as life is very ironic indeed, that the developer probably didn't encrypt the data stream in this case.

I would be surprised if you didn't assume that. :)

In any case, how come Kaspersky spotted it and not Apple?

Maybe Kaspersky has a larger database than Apple of malicious Russian servers?
 
I would be surprised if you didn't assume that. :)



Maybe Kaspersky has a larger database than Apple of malicious Russian servers?

I am not assuming this because it serves my argument; I am assuming it because I have a good intuition that, ironically, in crime (in the broadest of senses) it's offenders who could have easily been spotted due to some lack of sophistication that go unnoticed. It would be interesting to find out whether it did use some encryption or not, to see if my hunch is correct.

I am sure they do. Wasn't Apple in talks with them over OS X security, btw, a few weeks ago? Surely you will agree here that buying a firm like Kaspersky is the least Apple can do. No matter who Apple likes to point the finger at whenever a security breach is revealed (and it's habitually not at themselves), I think the wealthiest tech company in the world can very well afford to purchase a specialized security firm after a year in which malware cropped up in the App Store and the Flashback fiasco cost, what, at least 400,000 of their customers access to their private data. Apple should certainly have been more pro-active and seen this coming. It would certainly be a wiser investment than paying 50 mil to the Dixons guy to run the stores on autopilot.
 
The increase in attacks on iOS and OS X is only a sign of more to come. And Apple's bragging in its marketing about how safe it is doesn't help. Now even Apple is forced to change its tune on how it markets itself. :rolleyes:
 

Attachment: Mac.jpg (541 KB)
The increase in attacks on iOS and OS X is only a sign of more to come. And Apple's bragging in its marketing about how safe it is doesn't help. Now even Apple is forced to change its tune on how it markets itself. :rolleyes:

Take a closer look. It CLEARLY states it "doesn't get PC viruses". PC in this case = Windows PC, so it was technically correct.
 
Apple needs to continue to pick out the "weeds" from its "walled garden" or it risks losing one of the most appealing aspects of iOS over Android. I'm glad they caught it relatively quickly though.
 
Take a closer look. It CLEARLY states it "doesn't get PC viruses". PC in this case = Windows PC, so it was technically correct.

Technically correct? Nope. PC = Personal Computer. The average person thinks that a PC is a personal computer. Macs? Yes, they are personal computers. Hence the description was very misleading and would clearly have been a massive legal headache. It was misleading and Apple knew it, big time. Hence the change. :rolleyes:
 