Although we have monster hard disks these days, it still counts where RAM is concerned. Furthermore, mobile devices will truly benefit from this.
This is impressive. I wonder what they had to scrap from the apps in order to get these results. :rolleyes:

It doesn't make any difference to RAM. Apple has left out all localizations except the English one. That saves a bit of space on the hard drive, but none of it ever gets loaded into RAM except the one localization that the user actually wants. Same for universal binaries: there can be four versions of the code in the file, but only one will ever be used. There is no cost whatsoever in RAM or in execution time.
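
Same idea in code terms: a fat binary carries one slice per architecture, and the loader only maps in the slice for the CPU it's running on. A minimal C sketch (nothing Apple-specific beyond the standard compiler macros, assuming you build it with something like gcc -arch ppc -arch i386 fat.c -o fat):

Code:
#include <stdio.h>

/* A universal (fat) binary contains one compiled slice per architecture.
   The loader picks the slice that matches the running CPU; the other
   slices stay on disk and never occupy RAM. These compiler macros show
   which slice this process was built from. */
int main(void)
{
#if defined(__ppc__)
    printf("running the PowerPC slice\n");
#elif defined(__ppc64__)
    printf("running the 64-bit PowerPC slice\n");
#elif defined(__i386__)
    printf("running the 32-bit Intel slice\n");
#elif defined(__x86_64__)
    printf("running the 64-bit Intel slice\n");
#else
    printf("unknown architecture\n");
#endif
    return 0;
}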
 
Right, but the question is, how much value does this have to the average consumer?

Yeah, one could ask that question, but another one that comes to mind is "Does Apple have a strategic goal that is of a higher priority than the Leopard --> Snow Leopard upgrade rate?" Wouldn't it be nice if their operating system enjoyed a technological lead of the same order of magnitude that OSX/iPhone has over its competitors?

The work they're doing on Snow Leopard could get them there.

And the long-term benefits could dwarf the effect of losing a small percentage of Leopard users who didn't understand why they should care.
 
This is precisely why Snow Leopard is a priority for Apple. They're not doing this work for the desktop; they're doing it primarily to make the iPhone/iPod Touch platform snappy, and performance increases for their laptop and desktop machines are a nice side effect.
While it's possible you are correct, I don't believe so based on the press release's emphasis on making multi-core programming simpler and taking advantage of GPUs for something other than rendering 3D on screen.

With Nehalem bearing down on Apple, it's entirely possible they need Snow Leopard simply to take advantage of a 2009 Mac Pro that could have 32 logical processors (2 logical processors per core x 8 cores per socket x 2 sockets). There were claims that Tiger's scheduler didn't really scale past four cores and that this needed to be fixed for Leopard. By that time, Apple knew that Nehalem was coming and would add lots of cores and switch to a NUMA architecture. What if Apple took a shortcut for Leopard and did just enough to get it working well on the 2008 Mac Pro hardware, with the intent of revisiting it when Nehalem came along in late 2008/early 2009?

Apple could be going after the 3D modeling and rendering space where XP 64 still reigns supreme. Most of the apps I've seen have resisted going to Vista, and maybe Apple sees this as a place where they could take primo, high-margin market share away from Microsoft before Linux has a chance to gain significant traction. My skin tingles at the idea of a 64-bit modo running on a 2009 Mac Pro with 32 logical processors running at 3+GHz and sporting 48GB of memory (4GB DIMMs x 6 slots per socket x 2 sockets). Okay, I couldn't afford it, but I can dream. And unlike some of the other fantasies people post, data from Intel confirms this is a realistic configuration for the Nehalem Xeons. :)
 
I think I know what you're saying. Multicore applications may still appear as universal; I haven't seen any indication that they'll arrive under a new name.

Or that he is more likely observing that the reduction in size was not due to cutting PPC support.

Nice to know that PPC is still supported... for now...
 
More than anything, I would hope that Snow Leopard's push support would help Apple to actually displace Exchange.
I think Apple is smart enough to recognize that's a fight they can't win on their own. IBM has about a bazillion times more credibility in the enterprise space than Apple does and they don't seem to be able to stop Notes losing market share to Exchange. If the best IBM can do is to lose slowly, Apple has no hope.
 
With Nehalem bearing down on Apple, it's entirely possible they need Snow Leopard simply to take advantage of a 2009 Mac Pro that could have 32 logical processors (2 logical processors per core x 8 cores per socket x 2 sockets). There were claims that Tiger's scheduler didn't really scale past four cores and that this needed to be fixed for Leopard. By that time, Apple knew that Nehalem was coming and would add lots of cores and switch to a NUMA architecture. What if Apple took a shortcut for Leopard and did just enough to get it working well on the 2008 Mac Pro hardware, with the intent of revisiting it when Nehalem came along in late 2008/early 2009?
I'm actually interested in whether Snow Leopard's scheduler will be able to tell the difference between logical processors of the same core, cores on the same die, and cores on different dies, and schedule tasks accordingly. The performance characteristics are quite different, so just being able to scale to 32 processors while treating them all the same isn't that helpful. Proper core identification and scheduling will be of great benefit even to dual cores with 4 logical processors.
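
For what it's worth, Leopard already added a small affinity-hint API in the Mach headers (THREAD_AFFINITY_POLICY); whether Snow Leopard builds real topology awareness on top of it is anyone's guess. A minimal sketch of the existing hint, assuming nothing beyond the 10.5 headers:

Code:
#include <mach/mach.h>
#include <mach/thread_policy.h>

/* Tag the calling thread with an affinity set. Threads that share a tag
   are hinted to the scheduler as wanting to share a cache; threads with
   different tags are hinted apart. It's a hint, not hard pinning. */
static kern_return_t tag_current_thread(integer_t tag)
{
    thread_affinity_policy_data_t policy = { tag };
    return thread_policy_set(mach_thread_self(),
                             THREAD_AFFINITY_POLICY,
                             (thread_policy_t)&policy,
                             THREAD_AFFINITY_POLICY_COUNT);
}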
 
So, I just scanned the article on Wikipedia about ZFS and not being too computer illiterate, I'm not gonna lie... it came out as a LOT of Greek to me. It just seems like a terrific file system when it comes to handling TBs of information... lots and lots of TBs.

So how exactly does it benefit the average, normal end user? I can start to understand the need on the server side of things, so building it into SL would be advantageous there, but why would the average iMac consumer benefit from it any more than they do from HFS+?

not to be nitpicky, but i think you meant "not being too computer literate", because your double negative means you're actually very computer literate ;) :D

anyways, i like the sound of this snow leopard, because it shows they are building for the future as opposed to just bringing in flashy apps to get people to go "ohh, ahh" like at the fireworks. I do hope it isn't full OS X upgrade price, but not much i can do about that (except with my super secret superpower, but keep that on the down low). Anyways, more speed, utilizing the cores properly, and less memory space for apps are always good.
and sorry about the rambling post, I am in a strange mood today for some unknown reason.
 
I'm actually interested in whether Snow Leopard's scheduler will be able to tell the difference between logical processors of the same core, cores on the same die, and cores on different dies, and schedule tasks accordingly. The performance characteristics are quite different, so just being able to scale to 32 processors while treating them all the same isn't that helpful. Proper core identification and scheduling will be of great benefit even to dual cores with 4 logical processors.
Not only that, but it will need to tell the difference between the current memory access through an MCH and NUMA access, to take better advantage of the significantly reduced memory access times.

Given the fact that Leopard slipped quite a bit, the timing of Nehalem and Snow Leopard just seems too coincidental to me.
 
I think I know what you're saying. Multicore applications may still appear as universal; I haven't seen any indication that they'll arrive under a new name.
Why would they need one? You can make multi-core applications now, but depending on what you're trying to do it can be either very easy or very complex. Some tasks are easy to make multi-threaded and thus make better use of multiple cores/processors, while others are horrendous due to the amount of synchronising you have to do to prevent two tasks from trying to change the same piece of data and causing inconsistencies.
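
A toy C example of the part that bites people, using nothing beyond plain pthreads: two threads bumping a shared counter must serialise on a mutex, or updates silently get lost.

Code:
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *unused)
{
    int i;
    for (i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);   /* without this, the two threads'
                                        read-modify-writes interleave */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter); /* 2000000 only with the lock */
    return 0;
}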

The only difference with OS X.6 is that Apple seems to be pushing forward some new API or set of libraries or something that will make developing applications that make the most of multiple cores a lot easier. In which case it's no different from things like Core Animation: if you use it, you're tied to the version of OS X that introduced it (Leopard).

Unless this multi-core thing somehow allows you more direct control over how your program's threads are assigned to processor cores or something, but I'm not sure that's really something application programmers should be concerned with.
 
8 cores

Multicore CPUs have nothing to do with GPU programming. The technology buzzword here is OpenMP.
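
For anyone curious what that looks like in practice, here's a minimal OpenMP sketch in C: one pragma asks the compiler to spread the loop across however many cores exist (build with gcc -fopenmp).

Code:
#include <omp.h>
#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    int i;

    /* The pragma splits the iterations across all available cores and
       safely combines the per-thread partial sums via the reduction. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 1; i <= 100000000; i++)
        sum += 1.0 / i;

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}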
 
Where in the previous post (which is the first post) that you are replying to does it mention the end of PPC?

Every thread about 10.6 has had dozens of posts about whether PPC will still be supported or not. It is a big thing to see a sign that PPC may still be supported. Have you been living in a cave or something?
 
While it's possible you are correct, I don't believe so based on the press release's emphasis on making multi-core programming simpler and taking advantage of GPUs for something other than rendering 3D on screen.

I totally agree with that aim, I just can't believe that Leopard on the Mac needs it anywhere nearly enough to justify directing the resources of pretty much their entire OS division to achieve it. If you were running Apple, does Leopard feel slow enough to justify imposing a 15-month feature freeze to make it "faster"? When I'm using the built-in apps on Leopard (Safari, Mail, etc.), my CPU usage never even seems to go above 20%, and it's only slowed down by hard drive limitations. We've already seen how many people on this forum would be reluctant to pay full price for what doesn't feel like tangible benefits.

However, when I look at the iPhone I see a different story. I see Safari held back by a slow CPU. I see a lack of video recording, with encoding only possible at 5-10 fps, well short of the 30 fps that would be acceptable to Steve. I see a whole host of functions which could be added if there was more power available. A small GPU-esque co-processor on the iPhone could open up all sorts of possibilities (including high speed video encoding, HQ full screen video conferencing, better games, etc). The old solution would have been to ramp up the clock speed, but this can't be done without dramatically increasing power usage. This co-processor, however, would have a tiny power budget compared to the general purpose ARM CPU, yet be incredibly powerful within its limited scope. This is why Apple needed to buy PA Semi, to make this kind of stuff come alive. And right now it's incredibly hard to program for, hence the need for Snow Leopard. Snow Leopard will teach us how to write these kinds of apps in preparation for iPhone v3 or v4.

With Nehalem bearing down on Apple, it's entirely possible they need Snow Leopard simply to take advantage of a 2009 Mac Pro that could have 32 logical processors (2 logical processors per core x 8 cores per socket x 2 sockets)...

The only apps which use this many cores are ones like (as you said) 3D Modelling/Photoshop/Final Cut Pro. However, these are apps which are easiest to schedule for. You stick each of their threads on a separate core, avoid moving them around so their caches stay warm, and you've pretty much reached the peak performance you can get.

Apple could be going after the 3D modeling and rendering space where XP 64 still reigns supreme.

This might be possible, but if you look at Apple's big push at the moment it seems to be into the "Enterprise space". I know that "creative arts" will always be a passion, but it doesn't seem to be what's foremost on Steve's mind.
 
The Nehalem processors will go up to 8 cores? Does this mean the new Mac Pro will have one Intel processor? Which would be faster: two quad-cores or one eight-core? Which is cheaper?

Anyways, this is good news! I'm glad Apple is focusing on the core of the OS, instead of flashy features and eye candy.
 
commander.data said:
I'm actually interested in whether Snow Leopard's scheduler will be able to tell the difference between logical processors of the same core, cores on the same die, and cores on different dies, and schedule tasks accordingly. The performance characteristics are quite different, so just being able to scale to 32 processors while treating them all the same isn't that helpful. Proper core identification and scheduling will be of great benefit even to dual cores with 4 logical processors.

Not only that, but it will need to tell the difference between the current memory access through an MCH and NUMA access, to take better advantage of the significantly reduced memory access times.

Given the fact that Leopard slipped quite a bit, the timing of Nehalem and Snow Leopard just seems too coincidental to me.

Windows has a set of functions to map cores to local memory nodes, and distinguish full cores from hyperthreaded cores that share resources.

This information is collected by the BIOS and passed to the OS.

Most likely EFI provides similar configuration information to OSX.

But possibly one feature coming with Snow Leopard is an API set like the Windows one so that an application would be able to determine this mapping. (I see that search engines return lots of technical info when you look for "NUMA windows" and very little for "NUMA OSX".)
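
For reference, here's a sketch of the Win32 side I mean; GetNumaHighestNodeNumber and GetNumaNodeProcessorMask are real calls that have been around since Server 2003:

Code:
#include <windows.h>
#include <stdio.h>

/* Walk the NUMA nodes the OS learned about from the BIOS/ACPI tables
   and print which logical processors belong to each node. */
int main(void)
{
    ULONG highest = 0;
    ULONG node;

    if (!GetNumaHighestNodeNumber(&highest))
        return 1;

    for (node = 0; node <= highest; node++) {
        ULONGLONG mask = 0;
        if (GetNumaNodeProcessorMask((UCHAR)node, &mask))
            printf("node %lu: processor mask 0x%llx\n",
                   node, (unsigned long long)mask);
    }
    return 0;
}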
 
The only apps which use this many cores are ones like (as you said) 3D Modelling/Photoshop/Final Cut Pro. However, these are apps which are easiest to schedule for.
That's not true at all if you are trying to get the best performance out of a NUMA architecture. With the number of cores and the amount of memory we're talking about, balancing things out is very challenging. Fortunately Intel is the last to the NUMA game, so Apple can build on everyone else's knowledge.

This might be possible, but if you look at Apple's big push at the moment it seems to be into the "Enterprise space".
I disagree that it's a big push. It is a push, but the things that are announced seem primarily aimed at making sure the iPhone, desktops and laptops aren't rejected out of hand by Windows-centric IT departments. Apple will continue to target creative professionals because these are the people who buy Mac Pros and MacBook Pros and contribute much higher margins than your typical enterprise customer.

If I look at my own company, Apple isn't doing anything to make our messaging and desktop support teams want their stuff. But Apple is cutting the legs out from under their arguments against letting departments that demand iPhones and MacBook Pros connect to the network.

Edit: I see ZFS along similar lines. Without some kind of serious volume manager and file system, like ZFS, XServes don't have a hope of being put in our data center. But with that added and supported by Apple, about the only arguments against them are the lack of built-in iSCSI and no NetBackup support. Even those could fall by the time Snow Leopard is released. The IT department would most likely still stick with our standard HP servers, but would have a hard time arguing against departments that needed XServes.

The Nehalem processors will go up to 8 cores? Does this mean the new Mac Pro will have one Intel processor? Which would be faster: two quad-cores or one eight-core? Which is cheaper?
It's nearly impossible to answer these questions at this time. So far, it seems like the Nehalem Xeons will come in either four-core or eight-core models, with each core supporting two logical processors. It seems like there will be at least one-socket, two-socket, and four-socket versions available. If Apple picks the logical successor to their existing Mac Pro design, they could potentially put two 8-core Xeons in the 2009 Mac Pro. That would present itself as 32 logical processors.
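
You can already ask the kernel how it counts these on any current Mac; a little sysctl query like the one below (hw.logicalcpu and hw.physicalcpu are standard sysctls) would report 32 logical / 16 physical on the hypothetical dual 8-core Nehalem box.

Code:
#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(void)
{
    int logical = 0, physical = 0;
    size_t len = sizeof logical;

    /* hw.logicalcpu counts hardware threads; hw.physicalcpu counts cores. */
    sysctlbyname("hw.logicalcpu", &logical, &len, NULL, 0);
    len = sizeof physical;
    sysctlbyname("hw.physicalcpu", &physical, &len, NULL, 0);

    printf("%d logical / %d physical processors\n", logical, physical);
    return 0;
}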
 
I think Apple would charge $129 for the full version if you are upgrading from 10.4 or an older version. If you have 10.5, you could buy the upgrade for $59 or $69.

This seems plausible because we already paid for Leopard, and Snow Leopard is an extension (rework, optimization, whatever) of Leopard, so hypothetically we should get discounted pricing; Tiger users would pay the full $129.

:cool:
 
What I Think Happened

This is what I think happened. Apple looked to the future and had a bunch of good ideas for features it would like to build into OSX. After talking it over, they realized that to make these ideas a reality they needed to build the foundation for them first, because they currently don't have the tools to make them happen. They then decided that building this foundation properly might take a really long time, so it might be better not to invest as much in "features" and invest more in strengthening the framework.

I personally think this is perfect. They're taking things that they've already developed and connecting them to each other to help them all reach their potential. Connect their programs to MS Exchange and MobileMe to appeal to more people, allow desktops to use their GPU to improve performance, adopt ZFS so that data can be stored on the hard drive more efficiently, improve security and stability, and remove bloat from the OS and all of the applications as much as possible so that they consistently run fast and reliably. It doesn't really add more features; it just helps the current software and hardware run more efficiently and achieve a higher potential.

It's actually quite genius, because MS won't be releasing the next version of Windows for another 2 years. In that same 2 years, Apple can release Snow Leopard, with these foundation improvements, and then release OS X 10.7 with a ton of new features built on this strong foundation. Not to mention developers will be able to produce great apps too, and businesses will be more likely to consider switching over to a Mac. These next 2 years could actually be very big for OSX.
 
That's not true at all if you are trying to get the best performance out of a NUMA architecture. With the number of cores and the amount of memory we're talking about, balancing things out is very challenging. Fortunately Intel is the last to the NUMA game, so Apple can build on everyone else's knowledge.

NUMA's a lot simpler with Nehalem than it is with the AMD processors, though. Memory access is at most one CPU away, compared to AMD's design, which can go via 2 or more HT links. Yes, it's still important to be NUMA-aware, but designing for it is easier with Intel hardware.

Anyway the NUMA aspects are only going to be handled by a very very small subset of the Leopard team, i.e. the few guys who work directly on the kernel scheduler.
 
looking at the pictures from the original post...

I was comparing the pictures (the one listing the apps) from the original post with my own apps (10.5.3) and I noticed a LOT of advancements in the version numbers of many of the apps. Then I looked at Dictionary. Dictionary is listed at version 2.0, which is odd because I have version 2.0.2 on my MBP. :confused:

BTW, who are these "OrchardSpy" guys? Could these be more fakes? It sure wouldn't be hard... ;)
 