Wait, I paid the $9.95 price. What is this full install? Am I not getting the full Snow Leopard? Thanks

It's the same thing as your restore discs, just updated. No matter which version of SL you purchase, you get the same content. There is no "multiple versions" stunt here. The price you pay is based on what OS you currently have or when you purchased your Mac.
 
Who? Any quote/reference?

People claim Bill Gates said it - which he never did.

The same people who claimed that Al Gore said he invented the internet - when in actual fact what he said was that as a congressman he voted for funding the internet.

But then again, when have facts ever gotten in the way of regurgitating the same lie multiple times?
 
From what I understand, NSOperation in Snow Leopard is built on top of Grand Central; this was addressed a while ago when I raised the question of how Cocoa applications relate to Grand Central.

I think that's more the kernel taking advantage of GCD than the application actually using GCD directly. It's a two-pronged thing: there's GCD built into every part of Snow Leopard, with the APIs getting that performance benefit already, and there are applications being specifically optimized for GCD.
 
People claim Bill Gates said it - which he never did.

The same people who claimed that Al Gore said he invented the internet - when in actual fact what he said was that as a congressman he voted for funding the internet.

But then again, when have facts ever gotten in the way of regurgitating the same lie multiple times?

Right. I didn't say Bill Gates for this reason. I just wanted the emphasis on the quote because it raises the point of predictions and how far off the mark they can be. Even if Bill Gates didn't say it, somebody did somewhere.
 
I think that's more the kernel taking advantage of GCD than the application actually using GCD directly. It's a two-pronged thing: there's GCD built into every part of Snow Leopard, with the APIs getting that performance benefit already, and there are applications being specifically optimized for GCD.

Umm, you might want to read up on that; this has been covered numerous times on MacRumors in extensive threads and in the Apple documentation. You can either access it directly or benefit from it via the various parts of Cocoa utilising it.

I'd assume that direct access to GCD is for those who still have Carbon applications and want them to take advantage of it - that said, if you're going to rewrite your application using Cocoa, you inherit GCD. Carbon is a dead end, so I doubt that any part of it is taking advantage of GCD.
 
Right. I didn't say Bill Gates for this reason. I just wanted the emphasis on the quote because it raises the point of predictions and how far off the mark they can be. Even if Bill Gates didn't say it, somebody did somewhere.

And if you took the time to read my post - where did I state that you attributed it to Bill Gates? I was answering his question/post - not yours.
 
And if you took the time to read my post - where did I state that you attributed it to Bill Gates? I was answering his question/post - not yours.

Oh no, don't take it seriously; I wasn't talking to you specifically. I was expanding on what you said to explain what I was trying to get across.
 
Umm, you might want to read up on that; this has been covered numerous times on MacRumors in extensive threads and in the Apple documentation. You can either access it directly or benefit from it via the various parts of Cocoa utilising it.

I'd assume that direct access to GCD is for those who still have Carbon applications and want them to take advantage of it - that said, if you're going to rewrite your application using Cocoa, you inherit GCD. Carbon is a dead end, so I doubt that any part of it is taking advantage of GCD.

I have read many of those threads and the Apple docs. You need to remember that the GCD APIs are designed for programmers to take advantage of in their applications specifically; it is not done automatically for every application.

Snow Leopard itself - all parts of it - already has GCD-optimized code; that's why an application based on Cocoa takes advantage of the already GCD-optimized APIs it inherits. But the application itself can gain MORE from GCD by restructuring its code to be more "blockified", so that the GCD dispatcher knows which pieces of the application's code can be dispatched.

In other words, any Cocoa application on SL will run a bit more efficiently because the Cocoa APIs themselves are already more efficient from being GCD/OpenCL optimized. However, applications can be even more GCD-optimized by using blocks and work units, which is done with the block syntax (^) and so on, as explained in the PDF that I sent you.
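The pattern being described - package independent pieces of work into blocks and hand them to a system-managed queue that spreads them over the available cores - can be sketched with a rough cross-language analogy. GCD itself is a C/Objective-C API (dispatch queues plus the ^ block syntax); the sketch below uses Python's standard-library thread pool purely to illustrate the dispatch idea, not Apple's API:

```python
from concurrent.futures import ThreadPoolExecutor

# GCD-style idea: the programmer packages independent chunks of work
# ("blocks") and submits them to a queue; the runtime's dispatcher,
# not the programmer, decides how to schedule them across workers.
def process(item):
    # stand-in for one "block" of work
    return item * item

items = range(8)

# The executor plays the role of a dispatch queue: submit work units,
# collect results in submission order.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(process, items))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The point of the analogy is the division of labour: the application only marks which pieces of code are independently dispatchable, and the system-level dispatcher owns the thread management.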
 
Easy - any application that uses NSOperation automatically inherits Grand Central goodness.

As for OpenCL, I'm unsure whether Apple's codecs are utilising OpenCL; however, I have a feeling this is a foundation release, with future releases of Mac OS X taking advantage of OpenCL.

I'm looking at a new MacBook but I am wary of Nvidia given the high failure rate of their products.



Yes, just memory addressing - you're ignoring the many pages which refute exactly what you said; heck, you're ignoring this very page, where considerably more knowledgeable people have clearly outlined why your assertion is false.

You did a great job of nicely calling me stupid. But in the long run I'm right.
 
All I was trying to say is that everyone is hyped about 64-bit and it's really not a performance increase... for most of us. The other tech is going to be more important. That's it... and no need to ban anyone here; I'm not trying to do that.

There is more to 64-bit mode than just "64 bits"; there is a whole host of security and performance features as well. If all these features existed in 32-bit mode, I wouldn't care a flying continental about 64-bit mode.

Maybe I should have rephrased it, given that my emphasis was on 64-bitness rather than the features that come with moving to Long Mode - among the benefits are 64-bit addressing along with a flat memory model, the XD bit, more registers, and performance enhancements related to SSE and other extensions.

Edit: I've had a look at:

http://en.wikipedia.org/wiki/NX_bit#Mac_OS_X

And I am confused as to whether something compiled as 64-bit gains the XD bit, or whether one has to boot into 64-bit mode for a 64-bit application to gain XD bit support.
 
There is more to 64-bit mode than just "64 bits"; there is a whole host of security and performance features as well. If all these features existed in 32-bit mode, I wouldn't care a flying continental about 64-bit mode.

Only Apple reserves the XD bit for 64-bit code; Windows has been enabling no-execute security protections in 32-bit XP for five years.

XD is not related to bitness - although I don't believe that any 32-bit only CPUs have XD. It just happened to come along at the same time as x64 - it doesn't require x64 mode.

(Correction: Later revisions of the Dothan had XD, so it is available in some 32-bit only CPUs.)


...one of the benefits is 64bit along with a flat memory model, XD bit, more registers,...

x86 has a flat memory model for 32-bit, nothing new there. (It's bigger, but no flatter, for x64.)


And I am confused as to whether something compiled as 64-bit gains the XD bit, or whether one has to boot into 64-bit mode for a 64-bit application to gain XD bit support.

Confusion is easy here. The XD bit is the high bit (bit 63) of the 64-bit page table entry.

On Windows, any system with PAE (36-bit memory addressing) support uses 64-bit page table entries - even on 32-bit operating systems.

Starting with XP SP2, Windows will use the 64-bit PAE page tables if the CPU supports XD. However, XP doesn't support full PAE, including more than 4 GiB of RAM.
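The layout described above - XD as the high bit of the 64-bit page-table entry - fits in a few lines; the sample entry value below is made up purely for illustration:

```python
# XD/NX: the execute-disable flag is bit 63 of a 64-bit page-table entry.
XD_BIT = 1 << 63

def is_executable(pte: int) -> bool:
    """A page is executable only while the XD bit is clear."""
    return (pte & XD_BIT) == 0

def mark_no_execute(pte: int) -> int:
    """Set bit 63 so instruction fetches from this page fault."""
    return pte | XD_BIT

# Hypothetical entry: present (bit 0), writable (bit 1), plus a frame address.
pte = 0x0000_0001_2345_6003
print(is_executable(pte))   # True: XD clear

pte = mark_no_execute(pte)
print(is_executable(pte))   # False: XD set
print(pte >> 63)            # 1: the high bit is the XD bit
```

This is also why a legacy 32-bit (non-PAE) page table can't carry the flag at all: its entries are only 32 bits wide, so there is no bit 63 to set.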
______________

I find it amusing that something that Apple touts as a major feature of their latest "for sale" major OS version is a feature that Microsoft shipped for free in a service pack in August 2004. Five years later, Apple is charging for support of an important hardware security feature....
 
Mac OS X

http://en.wikipedia.org/wiki/Mac_OS_X
Mac OS X for Intel supports the NX bit and PAE on the i386 arch, supported by Apple from 10.4.4, the first Intel release – onwards. Note, 10.4 only supported NX stack protection.
10.5: All 64-bit executables have NX stack and heap; W^X protection. This includes i386 (Core 2 or later) and the PowerPC (G5 only).




Microsoft Windows
Starting with Windows XP Service Pack 2 and Windows Server 2003 Service Pack 1, the NX features were implemented for the first time on the x86 architecture.
Microsoft Windows uses NX protection on critical Windows services exclusively by default. Under Windows XP or Server 2003, the feature is called Data Execution Prevention (abbreviated DEP), and it can be configured through the advanced tab of "System" properties. If the x86 processor supports this feature in hardware, then the NX features are turned on automatically in Windows XP/Server 2003 by default. If the feature is not supported by the x86 processor, then no protection is given.
"Software DEP" is unrelated to the NX bit, and is what Microsoft calls their enforcement of Safe Structured Exception Handling. Software DEP/SafeSEH checks when an exception is thrown to make sure that the exception is registered in a function table for the application, and requires the program to be built with it.
Early implementations of DEP provided no address space layout randomization (ASLR), which allowed potential return-to-libc attacks that could have been feasibly used to disable DEP during an attack. The PaX documentation elaborates on why ASLR is necessary; a proof-of-concept was produced detailing a method by which DEP could be circumvented in the absence of ASLR. It may be possible to develop a successful attack if the address of prepared data such as corrupted images or MP3s can be known by the attacker. Microsoft added ASLR functionality in Windows Vista and Windows Server 2008 to address this avenue of attack.
Source

Remember that even a 32-bit kernel with PAE can actually get most of the features of 64-bit Long Mode; there's just going to be overhead and so on that a pure 64-bit kernel gets rid of, and NX can be emulated in software, which is what MS did as well. So it's not exclusive to a 64-bit OS; it's just built into every 64-bit CPU.
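To put numbers on the paging modes being compared here (the 48-bit figure for Long Mode is just a common implementation size, used as an example - actual CPUs vary):

```python
# Physical address space reachable under each paging mode (illustrative).
GIB = 2 ** 30

legacy_32bit = 2 ** 32   # classic 32-bit paging: 4 GiB
pae_36bit    = 2 ** 36   # PAE: 36-bit physical addressing, 64 GiB
long_48bit   = 2 ** 48   # e.g. 48 physical bits in Long Mode (assumed size)

print(legacy_32bit // GIB)   # 4
print(pae_36bit // GIB)      # 64
print(long_48bit // GIB)     # 262144
```

So PAE buys a 32-bit kernel 16x the physical address space (plus the 64-bit page-table entries that hold the NX bit), while a pure 64-bit kernel gets the larger space natively, without the extra translation layer.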
 
Has anyone tried iLife '09 with this latest build of SL?
Or maybe Final Cut Express?

I still have my friend's iLife '09, and my copy of Final Cut Express is getting here in about a week :D

I'm just wondering if these apps work with this latest build of SL,

because I remember that with the two builds before this one, iLife '09 didn't work.

My DVD drive is still not working, so I wouldn't know if iLife '09 works or not...

Thanks
 
Has anyone tried iLife '09 with this latest build of SL?
Or maybe Final Cut Express?

I still have my friend's iLife '09, and my copy of Final Cut Express is getting here in about a week :D

I'm just wondering if these apps work with this latest build of SL,

because I remember that with the two builds before this one, iLife '09 didn't work.

My DVD drive is still not working, so I wouldn't know if iLife '09 works or not...

Thanks

I never had any issues with iLife '09 on SL. There are some GUI glitches, but nothing to be concerned about.
 
If you read that quote you posted carefully, you'll see that there is also overhead for processing 64-bit calls with a 32-bit kernel and for processing 32-bit calls with a 64-bit kernel. There'll be less overhead for 64-bit processes on a 64-bit kernel.

I know all of this. However, the overhead is not as large as you suggest, since what's computationally intensive in applications is not usually made up of system calls. And, back to my Geekbench results: you can see that the overhead you're speaking of is less than 1%.

And you've skipped the fact that 64-bit applications do have access to the new registers regardless of the kernel :)
 
No need with SSDs that'll slowly take over.

Isn't RAM going to be always infinitely faster than an SSD (I know it doesn't have to be, but with our current system architecture)? RAM is in direct communication with the CPU, whereas data still has to be transferred to RAM, even from an SSD.
 
I know all of this. However, the overhead is not as large as you suggest, since what's computationally intensive in applications is not usually made up of system calls. And, back to my Geekbench results: you can see that the overhead you're speaking of is less than 1%.

And you've skipped the fact that 64-bit applications do have access to the new registers regardless of the kernel :)

If the sources I just read specifically about OS X are accurate, then you're right: OS X has a clever pass-through method for making Long Mode 64-bit calls. My bad for being inaccurate on that.


I was sure that it was only possible with a pure 64-bit kernel and drivers. The info I read (especially the x86-64 spec) insisted that the only way to get full access to the additional registers is to run a pure 64-bit OS, drivers and so on, with 32-bit applications continuing to run in a compatibility sub-mode on the CPU. The OS X engineers (and probably the Linux ones, if they did the same thing) were clever to take advantage of Long Mode's compatibility mode while running a 32-bit kernel, and even cleverer was AMD, who came up with that spec.

Even so, I still think Geekbench doesn't tell the whole story.
 
Isn't RAM going to be always infinitely faster than an SSD (I know it doesn't have to be, but with our current system architecture)? RAM is in direct communication with the CPU, whereas data still has to be transferred to RAM, even from an SSD.

An SSD has less than 0.1 ms of latency to read and send data. There's no need to preload data into RAM. It's so fast, it's not even funny how much faster it'll become over time. People are not going to notice whether data is preloaded into RAM or read from an SSD, not without a benchmark.

An SSD can easily reach 1 GBps within 3-4 years and 10 GBps within 15 years. HDs can't scale like that; they'll always be limited by the spinning platters, and with that the ~10 ms latency (5 ms for some super-fast 15K drives).


RAM isn't always going to be "infinitely" faster. There'll be other technology to replace it. SSD is the "RAM" of long-term storage.
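To put rough numbers on this exchange - the figures below are ballpark latencies consistent with the claims in this thread (~0.1 ms SSD, ~10 ms HD) plus an assumed ~100 ns for DRAM; they are illustrative, not measurements:

```python
# Ballpark access latencies in seconds (illustrative, not measured).
latency = {
    "ram": 100e-9,         # ~100 ns for DRAM (assumed figure)
    "ssd": 100e-6,         # ~0.1 ms, the figure claimed above
    "hdd_7200rpm": 10e-3,  # ~10 ms seek, the figure claimed above
}

# Relative speed of each tier versus a 7200 rpm hard disk.
speedup_vs_hdd = {name: latency["hdd_7200rpm"] / t
                  for name, t in latency.items()}

print(round(speedup_vs_hdd["ssd"]))  # 100: SSD cuts latency ~100x vs HD
print(round(speedup_vs_hdd["ram"]))  # 100000: RAM is another ~1000x ahead
```

So under these assumptions the SSD closes most of the gap to the disk, but RAM keeps roughly a three-orders-of-magnitude latency lead over the SSD - which is the crux of both sides of this argument.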
 
An SSD has less than 0.1 ms of latency to read and send data. There's no need to preload data into RAM. It's so fast, it's not even funny how much faster it'll become over time. People are not going to notice whether data is preloaded into RAM or read from an SSD, not without a benchmark.

An SSD can easily reach 1 GBps within 3-4 years and 10 GBps within 15 years. HDs can't scale like that; they'll always be limited by the spinning platters, and with that the ~10 ms latency (5 ms for some super-fast 15K drives).


RAM isn't always going to be "infinitely" faster. There'll be other technology to replace it. SSD is the "RAM" of long-term storage.

While I agree with that being a possibility, it would still require a change in modern computer architecture to make CPUs have access to secondary storage in the same way that they have access to primary.
 
While I agree with that being a possibility, it would still require a change in modern computer architecture to make CPUs have access to secondary storage in the same way that they have access to primary.

RAM will remain the CPU's short-term memory for a long time, while SSD will become the long-term memory. Even our brain works like this, and we've got the most complicated computer in the universe right inside our heads. An SSD has a limit on how many writes it can take; that's what RAM is very good for - it can last a long time under a massive amount of data churn that would reduce the lifespan of MLC/SLC NAND. Also, random-access speed is still low for SSDs.

The storage subsystem will have to be radically changed for SSDs anyway, as the SATA standards are too slow for them. But SSDs are too small a market to justify the cost of doing so. We have to wait until the market gets big enough to start thinking about this.

Now I'm getting away from your original point: I don't think there's a need to preload apps and data into RAM, as the SSD's extremely low latency removes the need in the first place - you originally said it would eliminate waiting for the HD to respond. Intel has been trying to do this with the technology they called Turbo Memory (aka Robson), which used fast flash memory to cache data. What I do agree with is placing this flash cache right next to the CPU and using it to store the entire OS as read-only, so that it's actually protected as well. Configuration could be stored on the SSD to customize the OS, but I think this would be a much better direction to move in.
 
Guys... I am a little... confused...

I just upgraded to 10A421a today, and after the installation the computer booted as normal... I was astonished to notice that... ahn... my QuickTime icon was the same as in previous builds!!
How the F*** is that possible?

Just in case: I didn't restart the computer a second time, as I was late getting to work, and yes, my System Profiler is indicating the correct version... 10A421a.

Has anyone else in this world faced this problem?? Is a second restart necessary to change the icon? (I know... it's a stupid question...)

Any ideas?
 
Guys... I am a little... confused...

I just upgraded to 10A421a today, and after the installation the computer booted as normal... I was astonished to notice that... ahn... my QuickTime icon was the same as in previous builds!!
How the F*** is that possible?

Just in case: I didn't restart the computer a second time, as I was late getting to work, and yes, my System Profiler is indicating the correct version... 10A421a.

Has anyone else in this world faced this problem?? Is a second restart necessary to change the icon? (I know... it's a stupid question...)

Any ideas?


Sometimes a reboot or an icon-cache rebuild will fix the problem.
 