You do know that this Trojan (DNSChanger) is automatically detected by Snow Leopard when you try to install it. Installation requires your password, so it's not actually a virus.

In those cases, that's true. I was just dispelling the "normal internet activity" part of your myth. I.e., getting hacked is not merely the consequence of hanging out in pr0n or p2p places anymore.

Now, simply combine the pieces: what if those Russian mafia guys had Miller's know-how (in terms of exploiting Safari specifically)? Not a pretty picture.
 
Now, simply combine the pieces: what if those Russian mafia guys had Miller's know-how (in terms of exploiting Safari specifically)? Not a pretty picture.

OK. In that link to the summary of Miller's book, it says writing Windows exploits is harder than writing OS X exploits.

Those Russians are one of the primary sources for Windows exploits. So, logically, the Russians should be writing OS X exploits.

But they are not. The only rational explanation is that OS X exploits require extenuating circumstances that are highly unlikely.
 
OK. In that link to the summary of Miller's book, it says writing Windows exploits is harder than writing OS X exploits.

Those Russians are one of the primary sources for Windows exploits. So, logically, the Russians should be writing OS X exploits.

But they are not. The only rational explanation is that OS X exploits require extenuating circumstances that are highly unlikely.

It's not the only rational explanation, and I think you know it isn't.
 
I don't think you are aware of the rules. They are allowed to click on only one link, e.g., this one --> link <-- which leads to some page on the web. But no further clicking is allowed... so therefore, the user doesn't "download" anything (at least not knowingly or willingly). The simple act of visiting the page is sufficient to exploit Safari.

EDIT: in fact, I think it's more constrained than that. Instead of "clicking on a link" somewhere, I think what they do is manually type a single URL into the address field. Again, the principle is that they simply load a web page, but do not physically interact with any of its content (buttons, links, popups, or what-have-you).

I wonder what access to physical hardware and social engineering his security holes will need? In the past, many of these exploits required quite a bit of user intervention including the administrator password.

For example,

"No one was able to execute code on any of the systems on Wednesday, the first day of the contest, when hacks were limited to over-the-network techniques on the operating systems themselves. But on the second day, the rules changed to allow attacks delivered by tricking someone to visit a maliciously crafted Web site, or open an e-mail. Hackers were also allowed to target "default installed client-side applications," such as browsers.

The team had attack code already set up on a Web site, and was able to gain access to the MacBook Air and retrieve a file after judges were "tricked" into visiting the site. According to the TippingPoint DVLabs blog, a newly discovered vulnerability in Safari was used to gain control of the Air.

…

Last year's contest was won by exploiting a QuickTime vulnerability, which was patched by Apple in less than two weeks.”

http://news.cnet.com/8301-13579_3-9905095-37.html

By the way, before anyone gets too crazy bashing this guy: I believe the rules of the conference dictate that he sign an NDA and that all exploits will be reported to Apple.

And even with this one there needed to be a user on the other end initiating the process:

"The successful attack on the second and final day of the contest required a conference organizer to surf to a malicious Web site using Safari on the MacBook--a type of attack more familiar to Windows users."


Windows users don't just get hacked while the machine is sitting there unattended either. Nowadays users are more careful about where they surf and what they download. Windows 7 is heading to the farmlands as well.
 
It's not REALLY zero-day, is it?

I hope not. My primary reason for switching to a Mac was no viruses and an easy-to-use interface.

Security holes don't equal viruses, although they do create a greater potential for them.

Seems a bit dubious use of "zero-day" here, as a zero-day hole is one that is actively being exploited for nefarious means (such as spyware/malware). The discovery of these security holes does NOT constitute zero-day by its true definition. Twisting the meaning for a sensationalist headline, now there's a shock :p
 
Just wanted to chime in here and say that just because you type in a subject in google does not mean you are a subject matter expert.
 
It doesn't at all. You can hire the good guys to scan your system if you want. The good guys don't do it for free unless you're already so large that you're beyond obscurity to begin with.

Obscurity is one of many valid layers of security. Look at OS X's 'stealth mode.' That is a form of obscurity: hiding that a machine even exists at a certain address. It's all about hiding yourself as a potential target, then lessening the potential attack surface (turning off all unneeded services, blocking extranet traffic, etc.), and finally having plans in place for when you do get compromised.

That's not obscurity at all. That's a well-known feature that every major OS has. The mechanism that feature uses is well-documented. If an auditor came in to look at your system and you explained that you had OS X's Stealth Mode enabled, they would immediately know what you're talking about.

Reducing attack surface area isn't obscurity either. Those are orthogonal concepts.

Obscurity is something like, "I'm going to write my own cryptographic hash algorithm and not publish the details of it."

Naive implementers believe that this is more safe, because they figure if the details of their hash algorithm were published, then somebody would be able to figure out how to attack it.

But actually it is *less* safe than using a well known cryptographic hash, like SHA256.

Why?

Because, like I said before, bad guys have lots of time to work on this. If you're protecting something worth a million dollars, then they can throw a thousand hours of labor at this problem and it's still worthwhile for them. (Equivalent of billing $1000/hr if they are able to break in.)

Your auditors, on the other hand, bill at $100/hr or more. So if they also spent a thousand hours auditing your system because they had to reverse engineer your crypto just like the bad guys, then you have to shell out $100,000. You spent 1/10th of the value of what you're trying to protect just to have it audited!

If you used SHA256, however, the auditors' work would be greatly reduced, because they would see that you're using a standard which has already been scrutinized by thousands of really smart mathematicians and cryptographers and so they don't need to worry about that aspect of your security. This would save them hundreds of hours and would save you tens of thousands of dollars.
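To make the contrast concrete, here's a minimal Python sketch (the strings and iteration count are just illustrative): using a vetted standard-library hash takes one line, and even a salted, deliberately slow password hash is built in.

```python
import hashlib

# A vetted, well-scrutinized hash (SHA-256) instead of a homegrown one.
digest = hashlib.sha256(b"data worth protecting").hexdigest()

# For passwords, a salted and deliberately slow standard is better still;
# PBKDF2 is also in the standard library. (Fixed salt shown for brevity;
# real code should generate one with os.urandom(16).)
salt = b"example-salt"
key = hashlib.pbkdf2_hmac("sha256", b"secret passphrase", salt, 100_000)
```

An auditor who sees `hashlib.sha256` knows instantly what they're looking at; an auditor who sees a homegrown hash has to reverse engineer it on your dime.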

As Joel Spolsky says, "You can prove anything with a contrived example." Hopefully my example isn't so contrived that you don't see my point.
 
Miller is describing how, by bombarding applications with arbitrary data, you might be able to make them crash. He wants us to believe that this gives him insight into how to uncover and subsequently exploit flaws in those applications. I want some proof he can learn something useful from his 'technique'; he hasn't offered any such proof.

Think about it (though this might be a stretch for some contributors): I could show how to break down your door using a massive hydraulic ram or by piling up large boulders against it. Is that a practical means of forcing entry? Not in my view, no.

I'm assuming you're not a programmer. I can see why this approach doesn't make sense to a non-programmer, but it is valid. (I think his 30 vulnerabilities are proof that it does something.)

The basic idea is that most security vulnerabilities are based on software which doesn't carefully check the input that it receives. The input might be too large, or too small, or might supply values that are logically invalid.

Example: a program that divides numbers expects you to type in the dividend and the divisor, then it prints the quotient. You type in "5" and then "0". If the program doesn't check carefully, it will divide 5 by zero which will crash the application.
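A toy version of that divider in Python (hypothetical, just to show the difference between trusting and validating input):

```python
# Naive divider: trusts its input and crashes on a zero divisor.
def divide(dividend_str, divisor_str):
    return int(dividend_str) / int(divisor_str)  # ZeroDivisionError on "0"

# Careful divider: validates the input before using it.
def divide_checked(dividend_str, divisor_str):
    try:
        dividend = int(dividend_str)
        divisor = int(divisor_str)
    except ValueError:
        return None  # reject non-numeric input
    if divisor == 0:
        return None  # reject division by zero
    return dividend / divisor
```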

This sounds like a benign example, but a lot of times programs crash because they were writing to memory without being careful... well if you're an attacker and you discover that a program writes to memory without being careful, then you know you might be able to insert shellcode* into that program's memory and trick it into running your shellcode.

(*Shellcode is code which spawns a new shell. It's similar to the shell you see in Terminal. If you trick a program into spawning a shell, that shell inherits whatever privileges the original program had. There are many varieties of shellcode, but one particularly nasty kind opens a remote shell. Now the attacker is using the target computer across a network, or even across the internet. Most firewalls won't block this kind of connection, either.)

Now, if you "fuzz" an app (bombard with random inputs) while it is running in a debugger, then when the program crashes, the debugger will show you what code was executing just at that moment before it crashed. From there you can work backwards and explore the program's internal state to see if that bug might be exploitable for inserting your shellcode.
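A toy fuzzer along those lines might look like the sketch below. The `parse` function is a made-up stand-in for the application under test; a real harness would drive a whole program under a debugger rather than call a Python function.

```python
import random

def parse(data: bytes):
    # Hypothetical target: "crashes" on certain malformed inputs,
    # standing in for a real app with an input-handling bug.
    if len(data) > 4 and data[0] % 16 == 0:
        raise RuntimeError("simulated crash on malformed input")
    return len(data)

def fuzz(target, rounds=1000, seed=0):
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        # Bombard the target with random blobs of random length.
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 32)))
        try:
            target(blob)
        except Exception as exc:
            # Save the crashing input so it can be replayed later.
            crashes.append((blob, exc))
    return crashes

crashes = fuzz(parse)
```

Each saved `(blob, exc)` pair is a reproducer: replay it under a debugger and you can see exactly what code was executing at the moment of the crash, then work backwards from there.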

There's a difference between a vulnerability and an exploit. A vulnerability really means a theoretical means of attack on a piece of software. An exploit, on the other hand, is working code that actually uses the vulnerability to accomplish something, like running an attacker's shellcode.

Rather than thinking of security in brute physical terms (breaking in a door), think of software as being a massive labyrinth surrounding your house. You'd like to believe -- but can never be 100% sure -- that the only way in the house is to approach the door with the correct key. But there's always this possibility that somewhere in that massive labyrinth there is another route which goes around the door and bypasses the need to have that key.
 
A zero-day hole is one that is actively being exploited for nefarious means (such as spyware/malware). The discovery of these security holes does NOT constitute zero-day by its true definition. Twisting the meaning for a sensationalist headline, now there's a shock :p

That's not what 0-day means. Don't know where you guys are getting your info or what field you work in... but most of you are way off base in this thread.

0-day simply means a new exploit that has not been patched. Whether it's found by a good or a bad guy doesn't matter... Of course, if it's found by a bad guy first, then we probably won't hear about it until months later, at which point it is obviously no longer 0-day. :D
 
OK. In that link to the summary of Miller's book, it says writing Windows exploits is harder than writing OS X exploits.

Those Russians are one of the primary sources for Windows exploits. So, logically, the Russians should be writing OS X exploits.

But they are not. The only rational explanation is that OS X exploits require extenuating circumstances that are highly unlikely.

Russian Mafia are still business people. Even if OS X is 5 times easier to exploit, the returns on the effort of writing the exploit are far less than you would get from attacking the many more Windows machines.

Spending the time and money to make an OS secure is also a business decision. Apple doesn't spend more time making OS X secure because they don't have to - at this time. Call this thinking what-ever you want, it's still a business decision.

Just like a rural bank can choose to spend far less on security than a bank in a bad part of town. The rural bank knows that it is possible that a more sophisticated bank robber may ride into town and take the money... but they have calculated that it's cheaper to risk the remote chance of being robbed than to spend money every year protecting themselves from a robber who may never appear.

When the gang finally notices the rural bank, one hopes then that the bank bumps its security up. When Apple finally starts being a serious target for malware, I hope they will bump up their security. In the meantime, why should I be paying bank fees to help support a platoon of guards in a bank in the middle of the wheat belt, 2 hours west of Swift Current?
 
Wondering why crashing a program is relevant to this topic, thinking that a crossover cable is an unfair and unrealistic cracking tool, googling definitions of various terminology, etc., etc. Fun thread!
 
Russian Mafia are still business people. Even if OS X is 5 times easier to exploit, the returns on the effort of writing the exploit are far less than you would get from attacking the many more Windows machines.

I've said this more than a few times in this thread; people refuse to understand that exploiting machines is more a business than a punk kid's game these days.

That's why most professionals rob armored cars instead of your car. $750,000 is a lot better than the change you have in your cup holder.
 
Apple doesn't spend more time making OS X secure because they don't have to - at this time.

I think they're doing a pretty good job; after all, they've already implemented most of Vista's/7's security measures, besides full ASLR. Considering their market share, that's pretty good.

Besides, as Miller said:

"OS X has a large attack surface consisting of open source components (i.e. webkit, libz, etc), closed source 3rd party components (Flash), and closed source Apple components (Preview, mdnsresponder, etc). Bugs in any of these types of components can lead to remote compromise"

Relying on lots of open source components certainly implies more security risks than relying on mostly closed source ones, like Microsoft does.

But they do take their time when it comes to patching exploits though, which is indeed a business decision.
 
20 security issues really isn't that bad. Every OS X security update fixes at least that many. I'm curious if Charlie Miller has submitted these to Apple, or is he sitting on them for his own publicity?

If this is the same guy as the prior article some time last week, that one explicitly said he *hadn't* notified any of the software's authors about the bugs. Announcing them to the world without any prior notification to the developers is irresponsible.

There are two responsible choices:
  1. Tell the developer, and announce after a fix is released. (This runs into the problem of many developers not *caring* until the flaw is exploited in the wild.)
  2. Tell the developer, and let them know how long they've got until you announce. (This provides a deadline for the developers to work against, and the deadline can be extended if there is evidence that the developer is working on the fix but needs a bit more time.)
 
I think they're doing a pretty good job; after all, they've already implemented most of Vista's/7's security measures, besides full ASLR. Considering their market share, that's pretty good.

Besides, as Miller said:

"OS X has a large attack surface consisting of open source components (i.e. webkit, libz, etc), closed source 3rd party components (Flash), and closed source Apple components (Preview, mdnsresponder, etc). Bugs in any of these types of components can lead to remote compromise"

Relying on lots of open source components certainly implies more security risks than relying on mostly closed source ones, like Microsoft does.

But they do take their time when it comes to patching exploits though, which is indeed a business decision.

The ironic thing you just said is that FreeBSD and NetBSD are considered the most secure OSes by many web admins, and they're 100% open source.
 
are they STILL using that silly argument that lack of interest keeps hackers from testing Mac Security?


All it takes is ONE person to try it - just for the hell of it - and so far NO LUCK


To me, they are using a 'whistling past the graveyard' argument

Macs are getting more popular everyday

----

"Windows is antiquated technology"-WSJ Sept 2009
How MS/LE spies on you using Windows 7 et al: <www.cryptome.org> scroll to Feb 2010
 
Right. :rolleyes:
This is from 2006 and:
He was trying to do something with Fire.app, but I don't know what. Also, I know for a fact that it didn't do anything, because the permission was denied.
Looks like a fail to me.
Perhaps... but why did it “fail”? Was it due to some blooper — i.e., the programmer made a mistake — or, was it because OSX protected us using ultra-robust security?

If you read that article and understand what parts worked (and why they worked), and what part didn't work (and why not), then you'll see that your dismissive "fail" descriptor was perhaps not the most appropriate. [i.e., just because that particular piece of code contained a bug is no reason to rest on our laurels or act smug. Things could quite easily have turned out much less positive.]
 
are they STILL using that silly argument that lack of interest keeps hackers from testing Mac Security?


All it takes is ONE person to try it - just for the hell of it - and so far NO LUCK


To me, they are using a 'whistling past the graveyard' argument

Macs are getting more popular everyday

----

"Windows is antiquated technology"-WSJ Sept 2009
How MS/LE spies on you using Windows 7 et al: <www.cryptome.org> scroll to Feb 2010

Doesn't matter. They still don't have nearly enough marketshare to be useful to people in Organized Crime rings.
 
Doesn't matter. They still don't have nearly enough marketshare to be useful to people in Organized Crime rings.

The math on this still doesn't work out for me.

So Apple holds what these days? Almost 10% of the market?

Now break it down into WHO owns that 10%

1. Consumers who use their credit cards and store other financial information on the machine
2. People with money to afford an Apple Computer

Now who doesn't own that 10%
1. People at work who are doing mundane tasks
2. People who have information that has a net value of zero.

There are lots of Windows machines out there that hold absolutely nothing of value. The one I'm typing this from for example.

So let us assume for a second that the value of information stored on an Apple computer is equal in value to that on the average Windows machine. How is an 11.1% raise not worth it to organized crime? Especially if OSX takes less effort to crack?
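For what it's worth, the arithmetic behind that 11.1% figure (assuming roughly a 10/90 market split):

```python
# Adding the ~10% Mac share on top of the ~90% of machines already
# being targeted is an increase of 10/90, i.e. about 11.1%.
mac_share = 0.10
windows_share = 0.90
raise_pct = mac_share / windows_share * 100
```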

Heck, if I could do something in my job, that was *easier* than my normal job, that would give me an 11.1% raise, I'd do it. Maybe organized crime views money differently...
 
The math on this still doesn't work out for me.

So Apple holds what these days? Almost 10% of the market?

Now break it down into WHO owns that 10%

...

I think that 10% is for the US specifically, and that it is far less world-wide. Of those who own Macs a very large number of them are students, since that seems to be where Macs are most popular (based on published market share stats). Students with not much cash in the bank.

And it doesn't matter whether Macs are easy to exploit or not, in this context. What matters is the potential payoff per hour of work. If you spent a hundred hours writing an exploit, why launch it against a target that will only get you 1/10 of the payoff, especially since there is always the risk of getting caught and facing a trial. It's not a big risk, but if you were doing something illegal why go after the small fish?

At some point someone will figure there is enough profit to attack Macs, but I don't think we are there yet. There is another hurdle to cross first. For Windows there is a wealth of knowledge and tools on how and where to probe to find the soft spots. For OS X anyone contemplating an exploit has to learn a lot more, and to create the skill sets needed to create successful exploits.

What we as a community should be doing is paying much more attention to unsuccessful attempts than we do, because as the saying goes, "you learn from your mistakes," and mistakes mean that someone is learning.
 
The math on this still doesn't work out for me.

So Apple holds what these days? Almost 10% of the market?

Now break it down into WHO owns that 10%

1. Consumers who use their credit cards and store other financial information on the machine
2. People with money to afford an Apple Computer

Now who doesn't own that 10%
1. People at work who are doing mundane tasks
2. People who have information that has a net value of zero.

There are lots of Windows machines out there that hold absolutely nothing of value. The one I'm typing this from for example.

So let us assume for a second that the value of information stored on an Apple computer is equal in value to that on the average Windows machine. How is an 11.1% raise not worth it to organized crime? Especially if OSX takes less effort to crack?

Heck, if I could do something in my job, that was *easier* than my normal job, that would give me an 11.1% raise, I'd do it. Maybe organized crime views money differently...

So you're saying all PC users do mundane tasks and all Mac users do things that make them super special.

Hate to tell you, but a good number of Apples go to educational sales. Also, most PC users are just your run-of-the-mill users.
 
So you're saying all PC users do mundane tasks and all Mac users do things that make them super special.

Hate to tell you, but a good number of Apples go to educational sales. Also, most PC users are just your run-of-the-mill users.

Apple to education sales may be true... But Windows based machines have a large presence in office buildings with little to no informational value.

Also, the Apple students I knew *did* have more money... that or they were smart.
 
I think that 10% is for the US specifically, and that it is far less world-wide. Of those who own Macs a very large number of them are students, since that seems to be where Macs are most popular (based on published market share stats). Students with not much cash in the bank.

Students may often have quite a bit of cash in the bank. While many people live week to week, or month to month, many students have a large wad of cash at the start of the school year that they intend to trickle out throughout the year. And besides, most students I knew with Apples had more money than their Windows counterparts, on average.

And it doesn't matter whether Macs are easy to exploit or not, in this context. What matters is the potential payoff per hour of work. If you spent a hundred hours writing an exploit, why launch it against a target that will only get you 1/10 of the payoff, especially since there is always the risk of getting caught and facing a trial. It's not a big risk, but if you were doing something illegal why go after the small fish?

Why? Because for starters you are guaranteed that the person hasn't already been robbed. All information gathered is more or less guaranteed to be fresh info. There is no competition, the first person to do it will own the Apple market outright.

And I really don't see the people who are doing this as being in much risk of going to jail :\

At some point someone will figure there is enough profit to attack Macs, but I don't think we are there yet.

Nor do I. But I think this has less to do with the sum total of Macs, and more to do with other reasons.

There is another hurdle to cross first. For Windows there is a wealth of knowledge and tools on how and where to probe to find the soft spots. For OS X anyone contemplating an exploit has to learn a lot more, and to create the skill sets needed to create successful exploits.

Yes, thus it is harder to hack :)

However, for the elite, I don't think this is all that much of a hurdle. In my university days we were told that we were being "Taught how to think, not how to program a specific language". A language is a tool, an environment is a tool. But the hardest part is not learning how to use a tool, but how to carry out the trade.
 