I stopped at the Apple Store this morning and tried out the 24-inch iMac and the Mac Pro. These are sweet machines. No, I did not buy anything.

The systems both had 1 GB on them, and I compared them to a MacBook Pro. One weird thing: the 24-incher had some stuttering on the iMovie demo they all had. The second time I tried it, though, it was smooth as silk. I think the demo wasn't cached in memory the first time, and once it was, it ran smoothly. I was also pleasantly surprised that the 24-incher's screen was very readable at its highest setting, even with my bad eyes. Nice screen real estate and resolution, with nice, easy-to-read fonts.

I'm still waiting for Leopard to release these powerful animals from their chains. By then the systems will be even better, and may even include Santa Rosa.
 
miniConvert said:
I think we all knew that Merom would only bring modest performance gains.

Core 2 (Merom) is a significantly different beast architecturally from Yonah. Merom adds Intel's clone of AMD's cloned/extended x86 instruction set*, with 64-bit instructions as well as long-overdue changes to the handling of old instructions, allowing this generation of CPUs to better utilize registers.

There are other enhancements in Core 2 as well, so I doubt that the current compilers are getting the full performance potential. It may be several months before updated compilers can properly optimize code for Core 2. Stay tuned.

* Sorry about that - x86 architecture is not pretty to look at. I sure liked the elegance of the PPC instruction set, but guess what $$Billions$$ can do?
 
AidenShaw said:
No, not at all.

An affinity mask defines the set of CPUs a job is allowed to be scheduled on. A job won't be run on any other CPU, even if the assigned CPUs are at 100% and other idle CPUs are available.

And that, by the way, is why setting affinity is usually a bad idea. Let the system dynamically schedule across all available resources -- or you might have some CPUs very busy, and others idle.

Win2k3 also has "soft" affinity masks, which define a preferred set of CPUs. If all of the preferred CPUs are busy, and other CPUs are idle, then soft affinity allows the system to run the jobs on the idle CPUs - even though the idle CPUs aren't in the preferred affinity mask.

But I am pretty sure the newest developer tools can cope with that, considering that multicore chips are a rather new thing in the mainstream market...

Try the Processor Preferences app contained in the Apple CHUD tools, for instance...
 
Core 2 is like the OS X of CPUs. Anything else is Classic!

imikem said:
Core 2 (Merom) is a significantly different beast architecturally from Yonah. Merom adds Intel's clone of AMD's cloned/extended x86 instruction set*, with 64-bit instructions as well as long-overdue changes to the handling of old instructions, allowing this generation of CPUs to better utilize registers.

Yonah is not related to Intel's big disaster chip, the Pentium D 810, but it was botched to the point that the engineers turned off EM64T!
 
chatin said:
Yonah is not related to Intel's big disaster chip, the Pentium D 810, but it was botched to the point that the engineers turned off EM64T!

Really? I had understood that Yonah was close, architecture-wise, to the previous Pentium M, while Merom represents the first "true" Core architecture.

Cheers.
 
imikem said:
Really? I had understood that Yonah was close, architecture-wise, to the previous Pentium M, while Merom represents the first "true" Core architecture.

True, the Pentium M (mobile Centrino) was a huge success for Intel! The Pentium D (desktop) was a dual-core disaster, pushing the old "NetBurst" Pentium 4 past all safe design limits.

Core 2 is the all new rework that saved Intel!
 
BRLawyer said:
But I am pretty sure the newest developer tools can cope with that, considering that multicore chips are a rather new thing in the mainstream market...

Try the Processor Preferences app contained in the Apple CHUD tools, for instance...
Please explain - I have no idea what "that" is....
---

Regardless of the tool, however, it is usually much better to let the OS dynamically schedule threads across the cores. Unless the programmer has some reason to try to control this, the alternative is some resources (CPUs) being overcommitted, while other CPUs are idle.

It doesn't matter who has the better tools - it's usually better to let the OS decide microsecond by microsecond how best to schedule the CPUs, than to have the developer make those decisions at edit time.

I've used the SetProcessAffinityMask APIs fairly often, but it's always been for specific test or benchmark situations. I have a hard time thinking of a situation where a general application would want to statically control the scheduler - it's just "bad think" to even try. (Except for those weird-a$$ NUMA Opterons - you can be really scr3wed if you have to go through HyperTransport to get to memory. I check NUMA topology, and use affinity to keep the AMD architecture from killing me.)
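
For anyone curious what that benchmark-only usage looks like, here's a minimal Win32 sketch (not my actual harness; the 0x1 mask, i.e. CPU 0 only, is just an example value):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR procMask, sysMask;

    /* Show which CPUs the process is currently allowed to run on. */
    if (!GetProcessAffinityMask(GetCurrentProcess(), &procMask, &sysMask)) {
        fprintf(stderr, "GetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    printf("process mask: 0x%llx, system mask: 0x%llx\n",
           (unsigned long long)procMask, (unsigned long long)sysMask);

    /* Hard affinity: pin the whole process to CPU 0 for this benchmark run. */
    if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }

    /* ... run the compute-bound benchmark here ... */
    return 0;
}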
 
EagerDragon said:
Drove 1.5 hours to the Apple Store this morning and the same on the way back. But I am not buying yet, just looking and getting a feel for the entire line. Oh, I forgot... and turning green with envy. Boy, it is going to be hard.

I've been calling around; there are 3 stores near me, but none have a 24" iMac for me to look at yet. :( I'm waiting to see what the announcement on Tuesday is... Cube redux? :eek:
 
bluewire said:
I've been calling around; there are 3 stores near me, but none have a 24" iMac for me to look at yet. :( I'm waiting to see what the announcement on Tuesday is... Cube redux? :eek:


It is a metal iPod nano, a MacBook nano, and a full-HD MBP.
 
EagerDragon said:
I stopped at the Apple Store this morning and tried out the 24-inch iMac and the Mac Pro. These are sweet machines. No, I did not buy anything.
Is the 24" as quiet as the MacPro? Have you been able to compare to the 20"?
 
chatin said:
True, the Pentium M (mobile Centrino) was a huge success for Intel! The Pentium D (desktop) was a dual-core disaster, pushing the old "NetBurst" Pentium 4 past all safe design limits.

Core 2 is the all new rework that saved Intel!

Core 2 isn't "all new". It's an evolutionary design based on Core tho some parts are borrowed from other Intel designs (the Memory Disambiguation tech was originally designed for the unreleased, unlamented Tejas, for example).

Other changes include a full 128-bit path to the SSE registers (so 128-bit SSE instructions can now execute in a single cycle instead of being split into two 64-bit halves), a shared L2 cache instead of separate L2s per core, an extra integer unit, etc.

And, of course, the 64-bit extensions :)

Sure, there's enough in the way of changes/additions to render it worthy of being considered a new microarch, but those changes are evolutionary.

Ironically enough, there's a direct line from Core 2 going all the way back to P6, whereas NetBurst really was "all new"!
 
AidenShaw said:
Please explain - I have no idea what "that" is....
---

Regardless of the tool, however, it is usually much better to let the OS dynamically schedule threads across the cores. Unless the programmer has some reason to try to control this, the alternative is some resources (CPUs) being overcommitted, while other CPUs are idle.

It doesn't matter who has the better tools - it's usually better to let the OS decide microsecond by microsecond how best to schedule the CPUs, than to have the developer make those decisions at edit time.

I've used the SetProcessAffinityMask APIs fairly often, but it's always been for specific test or benchmark situations. I have a hard time thinking of a situation where a general application would want to statically control the scheduler - it's just "bad think" to even try. (Except for those weird-a$$ NUMA Opterons - you can be really scr3wed if you have to go through HyperTransport to get to memory. I check NUMA topology, and use affinity to keep the AMD architecture from killing me.)

I've owned SMP machines in the past and often found it more useful to force CPU affinity of CPU-heavy tasks to a single processor, as Windows 2000 (which was current at the time) by default had a habit of swapping it between chips, resulting in a lot of cache-dirtying. I think it was the load balancing code, but it's been a while now and I don't have those machines handy currently. However, you could see some significant improvement in processing time on some non-parallelizable cpu-bound tasks.

I've no idea if MacOS does this, but at least in the case of Core 2 it shouldn't matter anywhere near as much, as the L2 is fully shared.
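
The pin itself was trivial, by the way - something along these lines per worker thread (a rough sketch, not the code I actually used; pin_current_thread and the CPU index are just illustrative):

#include <windows.h>
#include <stdio.h>

/* Pin the calling worker thread to a single CPU so the scheduler
   stops bouncing it between chips and dirtying both caches. */
static int pin_current_thread(int cpu)
{
    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(),
                                               (DWORD_PTR)1 << cpu);
    if (previous == 0) {   /* 0 means the call failed */
        fprintf(stderr, "SetThreadAffinityMask failed: %lu\n", GetLastError());
        return -1;
    }
    return 0;
}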
 
Some_Big_Spoon said:
I was credit card in hand when these were released, but I stopped myself. I'd like to wait a bit and see the 64 bit boost (if there is any), and Leopard in general.

I feel like these are speed demons, but I can't take advantage of a lot of it due to my heavy use of CS2 and the in-between feeling of Apple's apps/OS right now.

The second Leopard is out, I'm on the 24" iMac train.

Yeah, I know what you mean. Apple needs to get on the ball with that already. They have been shipping dual-core machines for a while, yet OS X can't truly take advantage of them. Since Intel will be using more cores as time goes by, it only makes sense for OS X and its apps to be able to harness the full power of all the cores/processors. I really, really hope that's what they have planned for Leopard. Maybe it's one of the "Super Secret Features"??? :rolleyes:
 
AidenShaw said:
Please explain - I have no idea what "that" is....
---

Regardless of the tool, however, it is usually much better to let the OS dynamically schedule threads across the cores. Unless the programmer has some reason to try to control this, the alternative is some resources (CPUs) being overcommitted, while other CPUs are idle.

It doesn't matter who has the better tools - it's usually better to let the OS decide microsecond by microsecond how best to schedule the CPUs, than to have the developer make those decisions at edit time.

I've used the SetProcessAffinityMask APIs fairly often, but it's always been for specific test or benchmark situations. I have a hard time thinking of a situation where a general application would want to statically control the scheduler - it's just "bad think" to even try. (Except for those weird-a$$ NUMA Opterons - you can be really scr3wed if you have to go through HyperTransport to get to memory. I check NUMA topology, and use affinity to keep the AMD architecture from killing me.)

I also agree it's not the best strategy for dealing with CPU scheduling... my example is related to the following page, I presume... perhaps core affinity scheduling is covered there as well:

http://developer.apple.com/documentation/Darwin/Reference/ManPages/man1/hwprefs.1.html
 
JZ Wire said:
Yeah, I know what you mean. Apple needs to get on the ball with that already. They have been shipping dual-core machines for a while, yet OS X can't truly take advantage of them. Since Intel will be using more cores as time goes by, it only makes sense for OS X and its apps to be able to harness the full power of all the cores/processors. I really, really hope that's what they have planned for Leopard. Maybe it's one of the "Super Secret Features"??? :rolleyes:

I think that Apple is still working on a gamer machine. I can only assume that it will be Mac Pro based or use a new enclosure. They currently do not support SLI (need support), are using buffered memory (slow), and it needs to be user-upgradable. I think it is coming, but not until some time next year.

OpenGL supporting multiple cores and multiple-GPU power aggregation is likely to be the ticket to insane performance that blows away the rest. But I think it will wait until Leopard is released. I am sure the next revision, 10.4.8, will take us part of the way, but multi-CPU, multi-package, multi-core is coming. In January we should see two quad-cores in the Mac Pro and an even better dual quad in the summer. Only Leopard will be able to unleash the power.
 
DMann said:
Wonder how the 24" iMac at 2.33GHz will fare.

I don't know, but I have a feeling it'll be really fast and a good seller. I'd go to the Apple Store downtown and look at the 20" iMac and think, "Goodness... any bigger and it wouldn't fit on the table." Now, for the same price as a 30" ACD, you get a monitor that is just a little smaller than the 30", PLUS you get a really, really good computer. If Apple doesn't sell a large amount of these, then something is wrong.
 
ergle2 said:
I've owned SMP machines in the past and often found it more useful to force CPU affinity of CPU-heavy tasks to a single processor, as Windows 2000 (which was current at the time) by default had a habit of swapping it between chips, resulting in a lot of cache-dirtying....
However, you could see some significant improvement in processing time on some non-parallelizable cpu-bound tasks.
I came to the opposite conclusion....

Running many compute-bound single-threaded benchmarks and apps - I saw how NT (pre Win2k) would balance across CPUs (that is, a "100%" compute-bound job would show each CPU running at 50%).

However, setting affinity so that one CPU was 100% and the other was 0% had no significant effect on the run times. (And by "significant" I mean statistically significant - I literally ran hundreds of runs in each configuration.)

By the way, with Win2k3 (and XP 64-bit, really the same system) you see much less "balancing" - a single-threaded app will stick to a CPU for much longer.
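
If anyone wants to try this kind of comparison themselves, a minimal Win32 timing sketch looks roughly like this (the loop body is only a stand-in for a real workload, not what I actually ran):

#include <windows.h>
#include <stdio.h>

/* Time one compute-bound run; repeat many times with the default mask and
   many times after SetProcessAffinityMask(GetCurrentProcess(), 1), then
   compare the two distributions of results. */
static double timed_run(void)
{
    LARGE_INTEGER freq, start, stop;
    volatile double x = 0.0;
    long i;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    for (i = 0; i < 100000000L; i++)     /* stand-in for the real workload */
        x += (double)i * 0.5;
    QueryPerformanceCounter(&stop);

    return (double)(stop.QuadPart - start.QuadPart) / (double)freq.QuadPart;
}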
 
check out the numbers

EagerDragon said:
They ... are using buffered memory (slow)
Have you seen the benchmarks?

The Xeon systems scream, even with the "slow" memory.

While some contrived tests showed real latency issues with the FB-DIMM memory, for real-life applications the faster busses and large L2 caches make it a non-issue.

Focus on *system* performance, not on a particular detail.
 
AidenShaw said:
I came to the opposite conclusion....

Running many compute-bound single-threaded benchmarks and apps - I saw how NT (pre Win2k) would balance across CPUs (that is, a "100%" compute-bound job would show each CPU running at 50%).

However, setting affinity so that one CPU was 100% and the other was 0% had no significant effect on the run times. (And by "significant" I mean statistically significant - I literally ran hundreds of runs in each configuration.)

By the way, with Win2k3 (and XP 64-bit, really the same system) you see much less "balancing" - a single-threaded app will stick to a CPU for much longer.

I suspect whether any observable difference occurs depends on the application, dataset, etc.

I'm guessing the 50% "balanced" behavior was done to try to keep a single CPU from heating up too much; with the advent of multicore systems it probably no longer matters which core is generating the heat, since they're in a single package.

It could also be that MS found that certain circumstances (like mine) resulted in improvements in processing time.

Interesting stuff.
 
AidenShaw said:
Have you seen the benchmarks?

The Xeon systems scream, even with the "slow" memory.

While some contrived tests showed real latency issues with the FB-DIMM memory, for real-life applications the faster busses and large L2 caches make it a non-issue.

Focus on *system* performance, not on a particular detail.
Aiden, it's just not like you to make a statement like this without adding the links...
 