Peace said:
While that seems reasonable, my thought is this.

Intel is getting beaten by AMD. MS, Sony, and Nintendo are switching to the PPC core. Intel is in dire need of some great PR, and what better way than to introduce the new Mac on Intel? This way Intel can say (in two years), "Look here!! Apple Computer uses the new P*** chips. Wouldn't YOU want one?"

I think Intel will spend the money on R&D with Apple just to give themselves a better name in two years.


Yeah, I figured that would be how Apple would finagle early access to new CPUs. They might influence the designs of future Intel chips, but any advantage gained thereby would not be limited to Apple. Apple might get new chips a month or two or three ahead, but not much more of an advantage than that.
 
mrichmon said:
That is patently false.

OS X is based on the Mach kernel code, with significant modifications made by Apple.

Windows NT 3.5 was based on the microkernel architecture *idea* first implemented in Mach, but Windows NT 3.5 never shared any code with Mach. For Windows NT 4 and later versions in the same family (2k & XP), Microsoft intentionally moved away from the microkernel idea by moving more and more services into the core kernel.

Many of the modifications Apple made to Mach involved a similar process of moving various services into the kernel core, although Apple's approach was more selective than what Microsoft did.

The microkernel idea might be the same, but the implementations have never been from the same or even a related codebase.

OK, I didn't mean identical as in bitwise-identical... What I meant is that the architecture, source base, etc. are common; otherwise it makes no commercial/development sense to use it, and you'd go off writing your own. I do totally agree that the changes Apple made are much better than the ones Microsoft made, but again, it does appear that the whole of Windows was designed and written by a bunch of ... (insert appropriate word here)
 
New BIOS replacement for Mac Intels?

Sun Baked said:
Apple won't treat new technology as something that happens only when it is finally included in the chipset or on every motherboard Intel or somebody else cranks out.

Since Apple will still be designing the board -- expect Apple to add nifty new features far in advance of standard PCs.
Peace said:
3. A developer at WWDC restarted one of the P4s, held down the Del key, and it booted into a BIOS where it showed the P4 and EM64T.
All this worry about the old crappy Pentium BIOS. Intel is way ahead of us - they've been working on EFI for more than 2 years, as they realise the old BIOS is severely limiting.

Key things of interest:
1. Intel made it fully open - AMD were the first to use it last year, with no Intel involvement.
2. It doesn't expect old 8086 devices on the motherboard.
3. It can do all sorts of system checks - really, it's a mini OS.
4. It has a BIOS-compatibility mode if that's what your OS expects (e.g. current Windows).
5. It can boot up with a nice resolution graphic (instead of text) and use a mouse.

Intel's roadmap says this will work on some 2005 Intel chips, and all 2006 Intel chips. I assume this gives Apple the option of building a motherboard without any legacy stuff - but I think that such a motherboard may not work with Windows, as current Windows expects a BIOS (not sure if they're upgrading, but MS is one of the developers and Longhorn does work with this).

Hope that's of interest :)
Greg
info: http://www.neoseeker.com/news/story/2339/
more from http://www.tomshardware.com/business/20050524/index.html
"Under a UEFI-enabled system, all the least-common-denominator features that BIOS presumes must exist on the platform can finally go away. For this to happen, continues Richardson, the x86 has to come to an important self-realization, which Richardson phrases like this: "Look, [although] I'm not in the operating system, I'm not an 8086 anymore. I can actually go out and solve a real TCP/IP stack, I can present a user with something that involves a mouse and buttons like they're used to, so I can get out of this text interface thing I've been doing for so long, and give the user an opportunity to do something outside of the operating system. I can help them recover or restore or maintain their platform when the OS just doesn't respond, short of '1-800-Why-Am-I-Hosed?'""
 
jauh said:
OK, I didn't mean identical as in bitwise-identical... What I meant is that the architecture, source base, etc. are common; otherwise it makes no commercial/development sense to use it, and you'd go off writing your own. I do totally agree that the changes Apple made are much better than the ones Microsoft made, but again, it does appear that the whole of Windows was designed and written by a bunch of ... (insert appropriate word here)


Um, no, this couldn't be more wrong.

There is no common source base with Mach. There aren't even shared concepts, beyond 'microkernel'.

The main thing Microsoft has from Mach is Rick Rashid, of Microsoft Research, who was a professor at Carnegie Mellon in charge of the Mach project. He wasn't involved with NT as far as I know; that was the baby of Dave Cutler, who worked on the VMS operating system at DEC.

But Apple has Avie Tevanian, who was a student of Rashid's, worked on the Mach project at CMU, and has been working on Mach ever since, at NeXT and at Apple.
 
efoto said:
I pray the world stays away from Celeron D, F, X, or anything in the future. Regardless of it being an 'alright' processor to some, the Celeron has such a poor stigma that I'll NEVER EVER compute on one, and if Apple puts that crap in any Mac, it's certainly not a model I will be ordering.

Centrino is a technology grouping of Wi-Fi, chipset, and processor; it is not a processor in itself. The processor used in the Centrino set is the Pentium-M, designed for mobile computing with good performance and low power consumption. That said, I hope iBooks and PowerBooks get these (not really, but since it's going to happen anyway, might as well embrace it now). I could see iBooks with a 2.2GHz P-M while PowerBooks boast a 1.8GHz or perhaps 2GHz dual-core, which would be pretty sweet IMO. Not sure how the dual-core thing works out, but if each core can keep the 2MB L2 cache that the Dothan currently has, it seems to me that would make quite the speedy little bugger :)

The Celeron-M isn't a bad chip, really. It's just slightly slower in real-world usage than a Pentium-M of the same clock speed. They don't go up quite as high as the P-Ms do, but they are cheaper. They of course also have a smaller cache, and lack some of the advanced on-chip power-saving mechanisms of the P-M.

All that aside, they do work well, provide decent battery life, and are cheap. Makes me think we will probably see them in iBooks, and Pentium-Ms in PowerBooks. Which, I think, is a good thing.

For a long time there has been little to separate the mobile lines. A few MHz here, a better GPU there, and at least $500. Mostly artificial differences, so that Apple could justify the pricing of a PowerBook. Now, with some REAL CPU options, I think we will see some GREAT iBooks, at maybe even lower prices. Because they could sport a whole different CPU line in each product type, they could stop artificially holding back the other aspects of the iBook for the sake of making the PowerBooks look good.

I'd expect to see Celeron-Ms, faster hard drives, better GPUs, and better LCDs in the iBook family, all for about the same price as we have now, maybe a little less. The PowerBooks will get Pentium-Ms, and keep on getting the best 'other' components that Apple can fit into them. And people will still pay $3000 for one.

I will almost assuredly buy the first Intel-based 12" iBook. I'm hoping they launch the mini first, work out any bugs, get some good software support, and then release the portables.
 
Rosetta performance

I'd be willing to bet that Rosetta uses caching strategies and other optimizations that are better suited to 'normal' applications with a mix of calculation and library calls, as opposed to benchmarks, or computationally intensive things like SETI@Home.

One major problem with benchmarks is that they do one thing for a bunch of iterations, then they do something very different for a bunch of iterations, then they do something else for a bunch of iterations. Then they quit. This isn't a good match for Rosetta.

It would be interesting to run XBench in Rosetta, then run the benchmark *again* without quitting XBench first. Rosetta ought to have cached the translated code after the first time, so the second run through the benchmarks might be faster, because the translation time will be avoided to some extent.


I'm also sure that Apple and Transitive will be able to tune it more by the time the Intel Macs ship next year.
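
If anyone wants to try the two-run idea, the skeleton is simple: time the same workload twice inside a single process and compare. A minimal sketch in C, with a stand-in loop rather than XBench itself; under a translate-and-cache emulator, the difference between run 1 and run 2 gives a rough feel for the translation overhead:

#include <stdio.h>
#include <time.h>

/* Stand-in for a benchmark kernel - any CPU-bound loop will do. */
static double workload(void)
{
    double acc = 0.0;
    for (long i = 1; i < 50000000; i++)
        acc += 1.0 / (double)i;
    return acc;
}

int main(void)
{
    for (int run = 1; run <= 2; run++) {
        clock_t t0 = clock();
        double result = workload();
        clock_t t1 = clock();
        printf("run %d: %.2f s (result %f)\n",
               run, (double)(t1 - t0) / CLOCKS_PER_SEC, result);
    }
    /* Under a caching translator, a faster second run is a rough gauge
     * of how much time the first run spent translating code. */
    return 0;
}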
 
I am hoping to see more results like this for two years. It will keep sales of the G5 fairly constant. In fact, if we do get a dual-core and the 3GHz machine, we will blow Intel out of the water and we will see sales increase. This would be good for Apple stock. :)
 
mrichmon said:
That is patently false.

OS X is based on the Mach kernel code, with significant modifications made by Apple.

Windows NT 3.5 was based on the microkernel architecture *idea* first implemented in Mach, but Windows NT 3.5 never shared any code with Mach. For Windows NT 4 and later versions in the same family (2k & XP), Microsoft intentionally moved away from the microkernel idea by moving more and more services into the core kernel.

Many of the modifications Apple made to Mach involved a similar process of moving various services into the kernel core, although Apple's approach was more selective than what Microsoft did.

The microkernel idea might be the same, but the implementations have never been from the same or even a related codebase.


Thanks. What he said. :D
 
Interesting to look at the numbers and then stand back and say they don't mean anything. Then I wonder why I compared them in the first place.
 
WAKE UP, GUYS!!!!! WAKE UP, THE P4 IS ACTUALLY FASTER IN THE BENCHMARK!

First, the thing is fast. Native apps readily beat a single 2.7 G5, and sometimes beat duals. Really.
(I asked about real-world apps - if any were already available in native code. -Mike)
All the iLife apps other than iTunes, plus all the other apps that come with the OS, are already universal binaries....

They are using a Pentium 4 660. This is a 3.6 GHz chip.
 
PLEASE READ - the benchmarks were performed incorrectly.

morespce54 said:
Yes, except that OS X has supposedly been running on x86 since the first version... :rolleyes: ;)

OK everyone, please think about this for a minute. Some people have pointed this out already; however, people keep missing the facts here.

The Macintel runs existing apps (i.e. apps that haven't been recompiled for Intel) in an emulation mode. This is exactly how things were done when the first PPC Macs came out, and it's as fast as emulation can be, thanks to the technology Apple is using for this task.

Now think about the test being done for this thread: the testing program runs challenging, computationally intensive tasks on the computer to see how long each one takes. The testing program also times them and presents the results to the user. This testing program is compiled for OS X PPC, NOT FOR INTEL. So the testing program is doing very computationally intensive tasks that are being emulated on the fly by Rosetta, the PPC-on-Intel emulator built into Macintels. The results are then sent back to the testing program, which times them, sums them, compares them, etc., all of it running under the same emulation.

Therefore, the speed results are completely dependent on the effectiveness of the emulator, and really don't indicate anything useful about the speed of OS X on Intel. OS X on Intel is MUCH MUCH faster than these results show, but this test is ONLY SHOWING HOW FAST THE EMULATOR IS.

On a different note, I've read reviews on the web (don't have links, but can find them if need be) that were good speed comparisons between single-processor Intel Windows PCs and dual-processor G5 PPCs running OS X. Without fail, the Intel Windows machines performed common tasks somewhat faster than the Mac. Real-world stuff like Photoshop startup, Photoshop filter rendering, large MS Word file search-and-replace, and 3D rendering (if memory serves, anyway). The only way the Mac was consistently faster was in multi-CPU tests (obviously), since the Intel machine tested was single-CPU, so multitasking was faster on the Mac. But the Intel machine WAS faster, nevertheless.

C'est la vie. Intel chips are actually quite good in overall performance, and the Macintel is a fast machine. The test results of this thread are irrelevant to the discussion of how fast the Macintel is compared to a PPC Mac. They only show that Rosetta is in fact a bloody fast emulator compared to the competing PPC software emulators out there (such as SheepShaver and PearPC).

Anyway, just my two cents' worth; hope I'm not all off balance here and going mad or anything. Also, I'm not trying to offend anyone, but I think somebody needed to state the above facts, based on true knowledge and true testing, all of which is freely available to be read. :D
 
hernick said:
Apple now supports both x86 and PPC architectures, and will continue to do so for much longer than 3 years. By default, their development tools will create universal binaries; applications aren't so much ported to x86 as they are made universal. Future applications will not start being x86-only; it's as easy to make a universal application as it is to make a single-architecture application.

Apple now has choice. And it is in their best interest to keep all their options open - this is why they push Universal Binaries, and will continue to do so for many, many years.

Let's take a look at what they'll be able to choose from in 2006:

* IBM: dual-core PPC970MP PPC64
* AMD: dual-core Athlon64/Opteron x86-64
* Intel: dual-core Pentium M x86, dual-core Pentium 4 x86-64
* VIA: ultra-low power, dual-core Isaiah CN x86-64
* Freescale: dual-core G4 PPC32
* Sun: (not Apple compatible but noteworthy) Niagara, 8-core UltraSPARC, 32 thread HT

If we look a little further ahead, in 2007-2008:

* IBM: dual-core POWER5 PPC64 derivative, POWER6
* AMD: quad-core x86-64, dual-core Athlon64 x86-64 derivative
* Intel: dual-core Pentium M x86-64 derivative, 8+1-core server processor
* IBM/Sony/Toshiba: quad-core desktop Cell PPC64+32 SPE units
* Sun: (not Apple compatible but noteworthy) Rock, 8-core 2nd-gen Niagara

Okay. All of this is very speculative, but based on roadmaps provided by the chip makers themselves. One thing is certain: not all of these chips will be released on schedule. Some of them will suck.

However, there will be a couple of awesome chips in there. And Apple will be able to pick. With OS X already running on PPC32, PPC64, x86 and x86-64, with the option of an easy port to POWER... Apple can choose the best chips.

They can pick and use the best notebook chips, the best desktop chips, and the best server chips. As for chipsets, which have often been a problem in past Macs, well... They can now choose between Intel, VIA, AMD, NVidia, SIS and ATI chipsets.

Apple now has a choice of many suppliers for every single component. No longer will they be affected by supply problems. This move gives them the assurance that no matter what, they'll be able to use the best hardware out there, and that'll make them more competitive than ever.

PLUS....
Let's not forget they will be able to run Windows now......
:D :p
 
Just some observations, in my humble opinion...

-Discussing/comparing future Macs with the next generation of games consoles is pointless. They will be using one chip at one speed for their entire life - and probably all at minimal profit, or even as a slight loss-leader to sell games at first. For desktop/laptop computers you need a good progression of speeds and power usage that will not spiral out of control on price, supply, or heat problems.

-Basing any kind of benchmarks now on what we can expect for the actual Mactel is also pointless, whether it's benchmarks for the x86 chip in the dev machines, or just how good Rosetta is. Both of these things are going to depend hugely on the final machines Apple ship to users as finished products.

-I read the other day somewhere that laptops had outsold desktops for the first time ever. I think this is a key point for the transition. Apple's laptops are (or certainly have been) pretty much the jewel in the crown of their range, and they simply can't afford to lag any more without a G5 or comparable laptop.

-Finally... Roadmaps are great, but what if the roads change and the map doesn't? In my opinion, Apple should not have gone for a Transition but an Addition. Instead of replacing their chip supplier, they should have added a chip supplier. That way Universal Binaries, Intel's roadmap claims, and IBM's failures to deliver would all have less sway on the Mac's future as a whole. My question is, what if, in five years' time, IBM makes extraordinary gains in both controlling heat and improving performance, and it's Intel who can't deliver on their roadmap? Does Apple switch back!? Far better, in my opinion, to have everyone still using Universal Binaries, both for old software (which by that time would run way faster than the machine it was bought for, emulated or not) and for any future switch back to IBM chips. Or even mix and match, if Intel could deliver much faster mobile chips for laptops and IBM's PPC desktop chips were far better than Intel's.

All just IMHO, feel free to discuss/disagree etc :)
 
dkelley said:
but this test is ONLY SHOWING HOW FAST THE EMULATOR IS.

As I note above, it might not even be doing that.

The emulator relies on the ability to cache code after it has been translated to x86, so that it doesn't have to be translated again.

If the benchmark application is only run once, the emulator will spend a fair amount of time doing the translation, and this will distort the results. A second run would use the cached x86 code, without the translation time, so would be a more accurate measurement. Normal use of emulated programs will mostly hit cached code, not code that needs to be translated. The more you use a program that needs Rosetta, the more you'll be using cached, pre-translated code.

I don't know if the XBench results were obtained from a single run of the program, but until second-run results are obtained, I wouldn't put much faith in the posted results as a gauge of Rosetta's performance.
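
For anyone curious what "caching the translated code" means mechanically, here's a toy sketch in C. It has nothing to do with Transitive's actual implementation; it just shows the idea of keying translated blocks by their source address, so each guest block pays the slow translation path only once:

#include <stdio.h>
#include <stdlib.h>

#define CACHE_SIZE 1024  /* toy direct-mapped translation cache */

typedef struct {
    unsigned long src_addr;   /* guest (PPC) block address */
    void         *translated; /* host (x86) code - faked here */
} CacheEntry;

static CacheEntry cache[CACHE_SIZE];

/* Stand-in for the expensive translation step. */
static void *translate_block(unsigned long src_addr)
{
    printf("translating block at 0x%lx (slow path)\n", src_addr);
    return malloc(64); /* pretend this is freshly generated x86 code */
}

/* Look up a guest block; translate it only on a cache miss. */
static void *lookup(unsigned long src_addr)
{
    CacheEntry *e = &cache[src_addr % CACHE_SIZE];
    if (e->translated == NULL || e->src_addr != src_addr) {
        e->src_addr = src_addr;
        e->translated = translate_block(src_addr);
    }
    return e->translated;
}

int main(void)
{
    /* First pass: every block misses and gets translated (slow). */
    for (unsigned long a = 0x1000; a < 0x1040; a += 0x10)
        lookup(a);
    /* Second pass over the same blocks: all cache hits, no slow path. */
    for (unsigned long a = 0x1000; a < 0x1040; a += 0x10)
        lookup(a);
    printf("second pass reused cached translations\n");
    return 0;
}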
 
Porco said:
Just some observations, in my humble opinion...

-Basing any kind of benchmarks now on what we can expect for the actual Mactel is also pointless, whether it's benchmarks for the x86 chip in the dev machines, or just how good Rosetta is. Both of these things are going to depend hugely on the final machines Apple ship to users as finished products.

OK.... does Polly want a cracker? I think everyone has to say something like that in their posts. Why, I don't know. Just so that I can conform, here goes.

Guys, don't judge the test system running Rosetta against a G5, because it's going to change.

-I read the other day somewhere that laptops had outsold desktops for the first time ever. I think this is a key point for the transition. Apple's laptops are (or certainly have been) pretty much the jewel in the crown of their range, and they simply can't afford to lag any more without a G5 or comparable laptop.

Where did you read this? That is a good point.

-Finally... My question is, what if, in five years' time, IBM makes extraordinary gains in both controlling heat and improving performance, and it's Intel who can't deliver on their roadmap? Does Apple switch back!?

All just IMHO, feel free to discuss/disagree etc :)


OMG, I disagree. Reading through this thread, someone said "wah wah wah." Remember, Apple is not a baby.

Does Apple switch back?

Come on, you should admit that Apple has some brains. Everyone is assuming that they made the decision on the limited information that is available to the public..... I'll let you in on a secret........ Apple has info to make decisions that they keep secret ;)
 
neverever said:
Where did you read this? That is a good point.

Um... *checks* It was Slashdot, quoting this article [www.businessweek.com]


neverever said:
OMG, I disagree. Reading through this thread, someone said "wah wah wah." Remember, Apple is not a baby.

Does Apple switch back?

Come on, you should admit that Apple has some brains. Everyone is assuming that they made the decision on the limited information that is available to the public..... I'll let you in on a secret........ Apple has info to make decisions that they keep secret ;)

Funny link :D

And yeah, of course I realise Apple has brains, and info we don't. But Steve Jobs also stood up and said we'd have 3GHz G5s by now, secret info or not. He was wrong then; I'm just saying it's possible he could be wrong again, and if they went with an addition rather than a transition, it would guard against the repercussions if he was wrong again. I don't actually doubt that Jobs made the right decision as things look today.
 
Porco said:
He was wrong then; I'm just saying it's possible he could be wrong again, and if they went with an addition rather than a transition, it would guard against the repercussions if he was wrong again. I don't actually doubt that Jobs made the right decision as things look today.

It wasn't called an 'addition', but that's basically what it is.

It will be an explicit addition for as long as Apple releases OS updates for PowerPC Macs, especially during the period when Apple's product line will be mixed.

As long as the PowerPC checkbox remains in Xcode, the software product line will be effectively mixed.

But it'll never be a big deal to add PowerPC Macs to the product line in the future, if that becomes desirable.

And it wouldn't be a big deal to add a third architecture. They could just announce it by releasing an Xcode upgrade with a third checkbox (for Cell, or SPARC, or AMD, or iTanic, or whatever).
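
For the command-line inclined, the checkbox boils down to a compiler flag. The -arch flags and lipo are real Apple tools of this era; the source file here is just a toy to show that one unchanged program yields one fat file holding both slices:

/* hello.c - one source file, two architecture slices.
 *
 * Build as a universal binary:
 *     gcc -arch ppc -arch i386 -o hello hello.c
 * Inspect the result:
 *     lipo -info hello    (reports: ppc i386)
 */
#include <stdio.h>

int main(void)
{
    /* The compiler defines one of these macros per slice, so the same
     * source can report which half of the fat binary is running. */
#if defined(__ppc__)
    printf("Running the PowerPC slice\n");
#elif defined(__i386__)
    printf("Running the x86 slice\n");
#else
    printf("Running on some other architecture\n");
#endif
    return 0;
}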
 