easy - reschedule

nospleen said:
I have no clue about this either way. But... if core one is using, let's say, 400 MB of RAM, how can core two access any more than 112? I am not arguing, just using the common sense approach. :p
"cores" don't have memory, processes (running applications) have memory.

If the 400 MiB process is scheduled on core0, no process needing more than 112 MiB (minus whatever the system is using) can run on core1 without paging.

A millisecond later, however, the system may be running the 400 MiB process on core1, and core0 can't run anything over 112 MiB.

Scheduling is dynamic, changing microsecond by microsecond depending on what processes have work to do.

All the talk about dedicating "one core for this, and the other core for that" ignores the reality of dynamic scheduling. You don't want to dedicate anything to either core - let the OS decide each microsecond where things should run.
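
To make that concrete, here's a minimal C sketch (it assumes a Linux/glibc box purely for the sched_getcpu() call - OS X has no direct equivalent - and the 400 MiB buffer is just the stand-in figure from above, not anything from a real test): one process owns one big buffer, both of its threads touch that same buffer, and the kernel is free to run either thread on either core from one moment to the next.

```c
/*
 * Minimal sketch (assumes Linux/glibc for sched_getcpu(); Mac OS X has no
 * direct equivalent).  One process owns one 400 MiB buffer; both of its
 * threads touch that same buffer, and the kernel is free to run either
 * thread on either core from one moment to the next.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUF_SIZE (400u * 1024u * 1024u)   /* one 400 MiB allocation, owned by the process */

static char *buf;

static void *worker(void *arg)
{
    long id = (long)arg;
    for (int pass = 0; pass < 5; pass++) {
        /* both threads write into the same process-wide buffer */
        memset(buf + id * (BUF_SIZE / 2), pass, BUF_SIZE / 2);
        /* the core a thread runs on can change from pass to pass */
        printf("thread %ld, pass %d, currently on core %d\n",
               id, pass, sched_getcpu());
    }
    return NULL;
}

int main(void)
{
    pthread_t t[2];

    buf = malloc(BUF_SIZE);
    if (!buf) { perror("malloc"); return 1; }

    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);

    free(buf);
    return 0;
}
```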

There's a valid point that the Intel iMac was handicapped because it had to take care of both the application's memory needs (presumably about the same on both systems) and the memory needs of Rosetta.

Repeating the test several times on larger and larger memory configurations would be very useful.
 
Sunrunner said:
No kidding... the forum shouldn't even allow font sizes that big

How about removing the non-listening users so that people don't have to repeat themselves 20 times (and therefore have no need to shout)?

I was only doing it for the people who are listening (not aimed at the poster quoted).

aegisdesign said:
NO. Not true. A dual-core or dual-CPU machine lets both cores or CPUs address the same memory space with the same restrictions on each core, i.e. none.

They don't get half the RAM each. The second core doesn't get any less access than the first.


Well, they share the same 512 MB - I maintain 512 MB will bottleneck a dual-core system more than a single-core one.
 
Adobe said when?

So when did Adobe make this announcement of going Universal by March?
Someone on Page 1 said this... I haven't heard anything yet, and I just checked this past weekend.
 
"soon", but not "early"

MrCrowbar said:
Can't we all be happy apple gave us the Intel Macs so soon?
Yes, but it's annoying to see all the comments (even in mainstream news reports) that the MacIntels are "six months early".

When Jobs said "by next WWDC" last June, lots of people were posting "we'll see the first ones at MWSF'06". (Like this right-on prediction from 9 June)

He said "by", not "at". That's come true.
 
a1291762 said:
Rosetta doesn't emulate a CPU but rather translates the instructions and then runs them. This is why it's so fast. The fact that it can run PowerPC binaries on an x86 chip at 50% speed is totally awesome. Not since the x86-on-Alpha days (FX!32 was based on the same idea as Rosetta, possibly even the same code) has such a thing been possible.

50% speed compared to a 2 GHz G5, but probably around 20-25% compared to a native app on a dual-core Intel. Still not bad at all, but we're making dubious comparisons here regarding Rosetta's power.
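
For anyone wondering what "translates the instructions and then runs them" looks like in practice, here's a toy C sketch of the translate-once-and-cache idea that dynamic binary translators use. Everything in it (the guest opcodes, the host functions) is made up for the example; it resembles nothing of Rosetta's or FX!32's real internals.

```c
/*
 * Toy illustration of the "translate once, cache, then run natively" idea
 * behind dynamic binary translators such as Rosetta or FX!32.  The guest
 * opcodes and host functions here are invented for the example only.
 */
#include <stdio.h>

typedef enum { OP_INC, OP_DEC, OP_PRINT, OP_COUNT } guest_op;
typedef void (*host_fn)(int *reg);

static void host_inc(int *reg)   { (*reg)++; }
static void host_dec(int *reg)   { (*reg)--; }
static void host_print(int *reg) { printf("reg = %d\n", *reg); }

/* translation cache: guest opcode -> already-translated host code */
static host_fn cache[OP_COUNT];

static host_fn translate(guest_op op)
{
    if (!cache[op]) {                      /* translate only on first encounter */
        switch (op) {
        case OP_INC:   cache[op] = host_inc;   break;
        case OP_DEC:   cache[op] = host_dec;   break;
        case OP_PRINT: cache[op] = host_print; break;
        default:       break;
        }
    }
    return cache[op];                      /* cache hits run at native speed */
}

int main(void)
{
    guest_op program[] = { OP_INC, OP_INC, OP_PRINT, OP_DEC, OP_PRINT };
    int reg = 0;

    for (unsigned i = 0; i < sizeof program / sizeof program[0]; i++)
        translate(program[i])(&reg);       /* translate (or reuse), then run */

    return 0;
}
```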
 
It would be interesting to see a 2 GHz Core Duo versus a 2 GHz G5 970MP. Most would say the Intel will blow any G5 out of the water, but I am not completely sold. This would be a great test to see which is the better processor at the moment.
 
Aiden;

Bad boy. You quote the article as providing proof of, or at least pointing towards, Apple's imminent abandonment of the 64-bit platform in OS X. If you look at the very link you posted, you will see that the broken 64-bit update was patched merely a few days later. That does not sound like something that went unnoticed, or like an intentional move by Apple to remove support for 64-bit computing.

Please, in the future, the proper way to quote a source is to reflect the intent of all of the available information, not to take it out of context.
 
sorry, point taken

seamuskrat said:
Aiden;

Bad boy. You quote the article as providing proof of, or at least pointing towards, Apple's imminent abandonment of the 64-bit platform in OS X. If you look at the very link you posted, you will see that the broken 64-bit update was patched merely a few days later. That does not sound like something that went unnoticed, or like an intentional move by Apple to remove support for 64-bit computing.
I guess you're right - I assumed that the reader was already aware of the patch that disabled 64-bit until it was reissued. If one hadn't seen that story, my statement could have been read as you suggest. I should have qualified the word "disabled" with "accidentally" or "temporarily"....

My point was to show how little Apple's 64-bit support is being used. That patch managed to make it through development and testing without anyone trying *any* 64-bit application.

Of course, after it was released, at least one app was discovered to have a 64-bit component - and the patch was quickly reissued to restore 64-bit functionality.
 
You kidding?

LifeIsCheap said:
Rosetta emulation scores look terrible!
Emulation is usually much slower than native code, but to have it run even this fast is totally awesome. The puddle of drool on my desk keeps growing because of these Intel Macs.
 
Intel speed tests nice but not as hyped

So the average app receives a 20 to 30% increase using the Core Duo Intel, even though it is being compared to a single-core G5. The very best app received an 80% boost.

So where is the 2x-faster boast that SJ and Apple are making?

Rosetta actually sounds impressive. Running PPC apps at 30-40% of native G5 speed is not bad at all, considering what it is doing.
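
Put as multipliers, for the sake of comparison (my arithmetic, using only the figures above):

$$20\text{–}30\% \Rightarrow 1.2\times\text{–}1.3\times,\qquad 80\% \Rightarrow 1.8\times,\qquad 2\times \Rightarrow 100\%\ \text{faster (the claim)},\qquad \text{Rosetta: } 0.3\times\text{–}0.4\times\ \text{of native G5}$$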

Too bad IBM and Apple gave up on the G5. A low-power dual-core G5 variant would have been a Yonah killer!
 
BakedBeans said:
Well, they share the same 512 MB - I maintain 512 MB will bottleneck a dual-core system more than a single-core one.

This isn't really true... at least not universally for all task flows.

It would bottleneck a single-core system as well... the amount of memory in the system is an issue for the task flow you are trying to execute and its data needs.

For example, say you are using Photoshop to edit a large image that fits entirely in RAM while still leaving enough memory to satisfy Photoshop's own needs. Also, let's say the editing tasks you use in Photoshop are multi-threaded (many filters can divide an image in two to allow two threads). Finally, let's say you have a single-core and a dual-core system that are identical except that one has a single core and the other has two (both with the same type of core).

If you ran your edit job on the single-core system, it would run without memory starvation (paging). If you ran your edit job on the dual-core system, it would also run without memory starvation, but it would run nearly twice as fast because the task can be divided between the two cores. The amount of memory in this situation doesn't affect anything.

Now let's change the image size to one that is larger than available memory.

If you ran your edit job on the single-core system, it would run with memory starvation, slowing the edit task because data has to be paged in from disk (much, much slower than RAM). If you ran your edit job on the dual-core system, it would run with memory starvation as well; however, since you have two cores, one core could handle the memory paging (faulting) while the other worked on whatever image data was available (again, all cores would be utilized as much as they could be, given the task and the available data). In other words, the dual-core system, when memory starved, could actually be a little more efficient and faster than a single-core system in the same situation.

In a badly memory-starved situation, it is likely that you could not feed both cores sufficiently, and performance would rapidly approach that of a single-core system under the same conditions (though, as I said, slightly better because of the page-fault offload).
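
Here's a rough C sketch of the "a filter divides the image in two" case described above (assuming POSIX threads; the 4096x4096 greyscale "image" and the simple brightness tweak are stand-ins for the example, not anything Photoshop actually does):

```c
/*
 * Sketch of the "a filter divides the image in two" case: one image buffer,
 * split into slices, one worker thread per slice so each core gets half the
 * pixels.  The image and filter here are stand-ins for the example only.
 */
#include <pthread.h>
#include <stdio.h>

#define WIDTH    4096
#define HEIGHT   4096
#define NTHREADS 2

static unsigned char image[WIDTH * HEIGHT];   /* whole image fits in RAM */

struct slice { size_t start, end; };

static void *brighten(void *arg)
{
    struct slice *s = arg;
    /* each thread (core) works on its own half of the pixels */
    for (size_t i = s->start; i < s->end; i++)
        image[i] = (unsigned char)(image[i] / 2 + 128);
    return NULL;
}

int main(void)
{
    pthread_t    t[NTHREADS];
    struct slice s[NTHREADS];
    size_t       total = (size_t)WIDTH * HEIGHT;
    size_t       chunk = total / NTHREADS;

    for (int i = 0; i < NTHREADS; i++) {
        s[i].start = i * chunk;
        s[i].end   = (i == NTHREADS - 1) ? total : (i + 1) * chunk;
        pthread_create(&t[i], NULL, brighten, &s[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);

    printf("filtered a %d x %d image with %d threads\n", WIDTH, HEIGHT, NTHREADS);
    return 0;
}
```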
 
Well, my hat's off to Apple for coming out earlier than expected with these machines, and they didn't skimp on features either, thank God! It's safe to assume, according to various websites, that the speed is still not quite there and will take time. All you iMac G5 owners can rejoice; your machines are still quite competitive. I think the best time to buy the new Macs will be in the June-September time frame, which should also see much improved chips. 64-bit, here I come!
 
single core Rosetta

I wonder how fast Rosetta will run on a single-core Intel chip (I'm assuming that is what they will put in the iBook and Mac mini). Although these are not Pro machines, many users will occasionally use Photoshop (or at least Photoshop Elements) as well as Office. Scrolling in Word has never been great on OS X, and a Rosetta-crippled mini/iBook would probably feel like a dog to a Windows user and potential switcher.

I don't think this is the end of the world, but I do think this transition may be rocky both at the top end AND at the bottom end. It will be very interesting to see what will happen to Apple's sales. A lot of contributors to this site have pointed out how attractive the Intel iMac is to programmers and to people who primarily use iApps. But how many people fall into one or both of these categories AND are willing to shell out $1299? Someone who uses Office and makes light use of the iApps does not stand to gain that much from upgrading from a G4. Yes, Safari and the Finder will run better, but how much?
 
>Too bad IBM and Apple gave up on the G5.

IBM gave up on Apple.

Apple never had any power in the relationship.

Apple's business, at best, represented a tiny fraction of IBM's revenue stream.

There was never any incentive for IBM to deliver that 3 GHz G5.
 
guez said:
I wonder how fast Rosetta will run on a single-core Intel chip (I'm assuming that is what they will put in the iBook and Mac mini). Although these are not Pro machines, many users will occasionally use Photoshop (or at least Photoshop Elements) as well as Office. Scrolling in Word has never been great on OS X, and a Rosetta-crippled mini/iBook would probably feel like a dog to a Windows user and potential switcher.

I think this could be the reason the dual-core machines came out first. Rosetta might take a bigger hit on single-core processors. Now programmers have a reason to make Universal applications, so there will be more out by the time the single-core products are released. It's like the chicken-and-egg problem: people won't fully focus their energy on making Universal apps until the Intels come out; now that they have, there is more onus on getting them going.
 
Lacero said:
And quieter, too. Remember folks, lack of fan noise is more important than pure speed. I wish these tests looked at the whole computing experience rather than how fast an app launches.

Uh, that is your *opinion*... certainly not a statement of fact, nor a statement that can be applied to even a majority of users. I would suspect that *MOST* users would take speed over fan noise, in fact.

Sheesh.
 
digitalbiker said:
So where is the 2x faster boast that SJ and Apple are claiming?
It comes from the benchmarking Apple outlines on the following pages. These are likely genuine benchmarks, but benchmarks only benchmark... well, what they benchmark... so you have to be careful about extrapolating from them.

iMac: http://www.apple.com/imac/intelcoreduo.html

MacBook Pro: http://www.apple.com/macbookpro/intelcoreduo.html

So far, all of the external benchmarking published has been done relatively poorly... allowing too many variables into the mix, and/or not targeting, say, just CPU performance, or not outlining enough of how the testing was done.
 
Speed is a very important factor considering what AMD has out. Intel is way behind AMD in this respect and really needed the exposure from Apple. I'm sure Steve got a good deal on those Intel chips. Waiting patiently for June updates on iMacs with 64-bit chips.
 
john123 said:
Uh, that is your *opinion*... certainly not a statement of fact, nor a statement that can be applied to even a majority of users. I would suspect that *MOST* users would take speed over fan noise, in fact.

Sheesh.

That is true, but if you could have the same performance on a quieter system, would you pick the louder or the quieter one?
 
Has anyone brought up the fact that Universal binaries (apps on the PPC platform are affected too) take up almost twice the space now? iWork went from ~800 MB to well over 1.7 GB.
 
estimates, with missing footnotes? hmmmmm

shawnce said:
It comes from the benchmarking Apple outlines on the following pages

iMac: http://www.apple.com/imac/intelcoreduo.html
MacBook Pro: http://www.apple.com/macbookpro/intelcoreduo.html
Note that those are SPEC "rate" numbers, so they're specifically designed to measure multi-processor scaling. They run multiple independent copies of each benchmark in parallel, with essentially no sharing between them - an ideal case for multiple CPUs, unlike most applications.

It's also funny that Apple calls them "estimates", with a footnote pointer even though the page has no footnotes.

P.S. The SPECrate integer number that Apple quotes (32.9) is close to the number for a dual-CPU 2.8 GHz Xeon (33.4 - IBM HS20). The dual-CPU IBM PPC970 2.2 GHz JS20 is rated at 20.2.
http://www.spec.org/cpu2000/results/rint2000.html
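
Doing the division on those published SPECrate integer figures (same numbers as above, nothing new):

$$\frac{32.9}{20.2}\approx 1.63 \quad\text{(Core Duo vs. the dual 2.2 GHz PPC970 JS20)},\qquad \frac{32.9}{33.4}\approx 0.99 \quad\text{(vs. the dual 2.8 GHz Xeon HS20)}$$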
 
aegisdesign said:
All laptops are a waste of time for digital content creation. They're always a compromise compared to a desktop. If speed is really important for you and you need it now, the Quad G5 can't be beat.

That's BS. Total and utter BS. Laptops AREN'T a waste of time for content creation. I take a train to work every single day, so that's 2.5 hours of round-trip commute time with my laptop, using Maya. I get tons of work done.

However, because my laptop sucks, I can only model and rig, not animate. My coworker, however, has the XPS, and it runs better and faster than most desktops, especially G5s. Don't give me this Quad G5 crap. For straight-up FPS in Maya, it's all about the processor, not the graphics card or anything else (because the enveloping and deformation of the mesh is all CPU-intensive). But that's a separate debate.

My point is not to be bitchy, but Apple has YET to give me a laptop that I can buy, and they won't for months because Rosetta is a pathetic solution. Apple is so secretive that they had to wait until the public release to reveal Universal Binaries. Why couldn't they have clued in Adobe or Autodesk or (your favorite software name here) a year earlier? Why can't I get a Universal binary version of Maya in March? Because Apple dropped the ball, that's why.

I want a new Apple laptop. They are just hell-bent on not getting me what I need.
 
AidenShaw said:
Note that those are SPEC "rate" numbers, so they're specifically designed to measure multi-processor scaling. They run multiple independent copies of each benchmark in parallel, with essentially no sharing between them - an ideal case for multiple CPUs, unlike most applications.

Who would have thunk that Apple would show a benchmark that showed off multicore capabilities... :rolleyes:

You do realize that most of Apple's iLife applications (iMovie, iDVD, and iPhoto in particular), as well as many of Apple's frameworks (Core Image, QuickTime, etc.), are fully capable of splitting streaming work across multiple cores when multiple cores are available.

AidenShaw said:
It's also funny that Apple calls them "estimates", with a footnote pointer even though the page has no footnotes.

You don't see the footnotes??

1. Get more information on Rosetta supported Apple software. Contact the manufacturer directly for 3rd party software.
2. Testing conducted by Apple in December 2005 using preproduction 15-inch MacBook Pro units with 1.83GHz Intel Core Duo; all other systems were shipping units. All scores are estimated. SPEC is a registered trademark of the Standard Performance Evaluation Corporation (SPEC); see www.spec.org for more information. Benchmarks were compiled using the IBM compiler and a beta version of the Intel compiler for Mac OS.
3. Times faster than 15-inch PowerBook G4 with 1.67GHz PowerPC. Testing conducted by Apple in January 2006 using preproduction 15-inch MacBook Pro units with 1.83GHz Intel Core Duo; all other systems were shipping units. All of the MacBook Pro and PowerBook systems ran beta Universal versions of Modo application. All other applications were beta versions.

...and...

1. Get more information on Rosetta supported Apple software. Contact the manufacturer directly for 3rd party software.
2. Testing conducted by Apple in December 2005 using preproduction 20-inch iMac units with 2GHz Intel Core Duo; all other systems were shipping units. All scores are estimated. SPEC is a registered trademark of the Standard Performance Evaluation Corporation (SPEC); see www.spec.org for more information. Benchmarks were compiled using the IBM compiler and a beta version of the Intel compiler for Mac OS.
3. Testing conducted by Apple in December 2005 using preproduction 20-inch iMac units with 2GHz Intel Core Duo; all other systems were shipping units. All of the iMac and iMac G5 systems ran beta Universal version of Modo. All other applications were beta versions.

agreenster said:
Apple is so secretive that they had to wait until the public release to reveal Universal Binaries. Why couldn't they have clued in Adobe or Autodesk or (your favorite software name here) a year earlier?
I really don't understand your statement above...

Apple informed all developers within a couple of months of deciding that they would be doing the Intel transition. Those couple of months were spent designing, documenting, and implementing tools so that developers could transition their products. Your typical developer has known about Universal binaries since WWDC 2005 (June 2005), and some specific developers knew before then.

Apple has had the following site up since WWDC 2005 (enhanced as time passed with better and more complete documentation).

http://developer.apple.com/transition/

Also review the revision history for the primary document on Universal Binary Programming for a timeline...

http://developer.apple.com/document...n_1.html#//apple_ref/doc/uid/TP40002217-CH214
 