
Did Apple Make The Right Move In Switching To Intel?

  • Yes

    Votes: 498 81.9%
  • No

    Votes: 66 10.9%
  • Don't know

    Votes: 44 7.2%

  • Total voters
    608
  • Poll closed.
Core is actually derived from what some people consider Intel's best CPU, the Pentium Pro. That's why they say Intel went back to its roots when they designed the Core architecture.

Pentium Pro served as the basis for the Pentium II and Pentium III. In a way, the Pentium Pro is the granddaddy of all current Intel designs. But while you can draw a clear lineage from Core back to the PPro, Core is more similar to the Pentium III than to the PPro. Then again, since the P3 is also descended from the PPro....
 
One word: "audio". Just Google for a minute and save me the trouble...

I googled for "G5 audio benchmark intel" (without the quotes, naturally) and didn't find anything relevant. I also tried other search queries and still didn't find any info indicating that the G5 is faster than current x86.

I think the burden of proof is on you. I tried to find info that would support your claim, and I came up short.

Note: I skipped the benchmarks on apple.com since they might be biased.
 
Surprisingly, you're right - the AltiVec benchmark web pages have disappeared. Whether the reason is that it's not for sale anymore, I don't know.

I'm sorry, but I can't create benchmarks for you on the fly, because I only have a Quad PowerMac. I am, however, planning to trade this trusty old 1.25GHz PowerBook in for a shiny new MBP once the new models are out, so perhaps then I can run some tests. Should be interesting.

Anyway, in the audio world AltiVec really shines. Its throughput is amazing! People have a hard time opening old, close-to-the-limit Pro Tools sessions on the new Mac Pros, which have different SIMD units, as you know. It's not a night-and-day difference, however, so the next generation of Mac Pros might even outperform the last G5s, but currently PPC is the safe choice for audio work.

But I must admit that since the Intel code is relatively new for plugin developers, it might not even be fully optimised yet. I assume the G5 code has been optimised very well, because AltiVec has been around for ten(ish) years.
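
To illustrate the general idea, here's a rough sketch in Python/NumPy (purely illustrative, nothing to do with actual AltiVec or SSE code): SIMD pays off in audio DSP because the same operation is applied to long buffers of samples, and doing a whole buffer at once beats doing it one sample at a time.

import numpy as np
import time

# Toy example: apply a gain to a buffer of audio samples.
samples = np.random.uniform(-1.0, 1.0, 1_000_000).astype(np.float32)
gain = np.float32(0.5)

# One sample at a time (roughly what a plain scalar loop does).
t0 = time.perf_counter()
out_scalar = np.empty_like(samples)
for i in range(samples.size):
    out_scalar[i] = samples[i] * gain
scalar_time = time.perf_counter() - t0

# Whole buffer at once (NumPy's vectorised path, standing in here for a SIMD
# unit like AltiVec that handles 4 floats per instruction).
t0 = time.perf_counter()
out_vector = samples * gain
vector_time = time.perf_counter() - t0

print(f"sample-by-sample: {scalar_time:.2f}s  whole-buffer: {vector_time:.4f}s")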
 
So you had 1 computer die on you, and you bought a new machine, and moving files was hard, so you want the whole computer world to change?
[...]
I suggest rethinking them
Why should she rethink her POV? It's hers, not yours, and she has no ability (nor any way) to force you to accept her view.

Having said that, I agree with you and the others and completely disagree with Cassie. We say the reasons are obvious. She would disagree. That's fine.

I also feel that her sentiments, although disagreeable, are coming from the right place. It would be nice, in theory, to have some stability in things like connections, file formats, etc. (which we do have, for the most part).

Look at how easy RCA ports make connecting disparate audio components. You can connect a LaserDisc player from the '70s to a modern amplifier. Yes, completely different technology and applications. And I sure don't want to go back to those thick SCSI cables when I can use FireWire. But I am trying to see things from her perspective (and it doesn't hurt!).
 
excess addressing is a waste

What they mean is that there is currently no system for sale that offers full 64-bit memory addressing. While "64-bit" removes the "32-bit" limitations, there are not yet any "true 64-bit" implementations available for consumers - not that the max available memory would be a bottleneck (for consumers), but if the architecture isn't "fully optimised" and "fully loaded", one can always whine...

Exactly - which is why the PPC970 (aka "G5") only has 42-bit addressing. (http://www.informit.com/articles/article.asp?p=606582&seqNum=3&rl=1)

It is silly and wasteful to put more physical address lines on a chip than are likely to be needed during its lifetime. The 4096 GiB potential of the PPC970 is much more than adequate, and while it is larger than the 1024 GiB (40-bit) limit of the Opteron and Xeon, in practical terms it is no better.

Even the 64 GiB (36-bit) limit of the Blackford chipset (5000X, used by Apple in the Mac Pro) is more than enough for desktop systems today - but Penryn and its chipsets will support 128 GiB for desktop systems.
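
To put those limits in perspective, n physical address bits reach 2^n bytes, so the figures above fall straight out of the address widths. A quick back-of-the-envelope sketch (Python, purely illustrative):

GiB = 2 ** 30
for name, bits in [("Blackford chipset (Mac Pro)", 36),
                   ("Opteron / Xeon", 40),
                   ("PPC970 'G5'", 42),
                   ("full 64-bit", 64)]:
    print(f"{name:28s} {bits}-bit -> {2 ** bits // GiB:,} GiB")
# 36-bit -> 64 GiB, 40-bit -> 1,024 GiB, 42-bit -> 4,096 GiB,
# 64-bit -> 17,179,869,184 GiB (16 EiB)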

So, I repeat my question - what is it about "64-bitness" that makes the PPC970 "true 64-bit" and the Xeon "fake 64-bit"?
 
Then why didn't those growth prospects materialise? It doesn't matter that something COULD have happened if it never does. Hey, I COULD win the lottery.

Um, IBM is a massive semiconductor company. It's not like Intel spends billions while IBM spends peanuts.
Actually, it is kind of like that... Intel essentially has one product line-- yeah they do a little Itanic development but not much. IBM has many. The PowerPC series has essentially one customer. So, growth-for-resources on the PowerPC was very impressive.

All indications are that IBM had a new generation of devices ready for release; they even demoed some of them. It would have remained competitive.
Does NetBurst indicate that x86 is crappy?
It's an indication of the state of the x86 line at the time the G5 was in production. If you're going to go on and on about no G5 laptop, then compare what the Intel alternative was at the time. The Pentium M wasn't much to cheer about either.
There has never been a Mac or PC with a 1.5GHz memory bus. The fastest memory bus in the last PowerMac G5 ran at 533MHz. Are you by any chance confusing the memory bus with the front-side bus?
The CPU interface to memory is through the front-side bus, yes. To break it down more fully, the front-side bus running at 1.25GHz-ish drives the memory and PCI controller chip, which in turn drives the memory bus at 533MHz, double pumped, or 1.066GHz.

I think you see my point - the G5's power was largely consumed by the bus frequencies, and the controller chip also ran very hot.
Multi-core G4 would have been utterly starved for bandwidth.
Because the designers would have been too stupid to rework the interfaces and caches?
Because they increased its capabilities? Because it got twice as many USB ports as before? Because it got a faster CPU? Because it moved to SO-DIMMs instead of standard RAM? Because it got two memory slots as opposed to just one? Because it got a remote? Because it got new multimedia software?

In case you didn't know: the internals of the Mini got a total makeover when they moved to Intel. The CPU is only a part of the equation.
No, the CPU is all of the equation. The CPU determines the chip set and the chip set determines the memory selection and most of the peripheral set as well.

What you're arguing is that it cost $100 to add a couple connectors. That doesn't make sense.

G4 to Core solo went from 130nm to 65nm. That's a 4 to 1 reduction in silicon per unit logic. There's also a significant reduction in gate capacitance making for faster logic-- more speed is what you get for free when making smaller chips. Is it faster than the G4? I hope so, given that it has physics on its side and was released years later.
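
For what it's worth, that 4 to 1 figure is just geometry, since die area scales with the square of the feature size. A quick sanity check (Python, purely illustrative):

old_nm, new_nm = 130, 65           # G4 process vs. Core Solo process, per the figures above
print((old_nm / new_nm) ** 2)      # -> 4.0, i.e. roughly 4x less silicon per unit of logic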

The RAM for the Intel mini is also cheaper than the RAM for the G4 mini.

So... Cheaper memory, no GPU, and two more USB ports for $100. I'm gonna have to go out on a limb here and say the Intel processor, even at a 4 to 1 silicon density advantage, is significantly more expensive than the G4.
Faster CPUs usually are more expensive than slower CPUs. Had they used a CPU as slow as the G4 in the Mini, they could have shaved maybe 20 bucks off the new Intel price. But the CPUs they ended up using weren't really that much more expensive. 30 bucks maybe at the low end.
Sure they could have cut the performance of the Intel chip down a bit, but that would have been even more embarrassing... It would have simply shown that you're paying more for exactly the same performance we had with the G4 2 years ago.
Cell is an absolute screamer when it comes to streaming computation and single-precision floating point. And that's the kind of stuff consoles need. General-purpose computing relies heavily on integer, not floating point. Sure, you can run a general-purpose OS and apps on it (as demonstrated by the PS3 Linux kit) and they would run fine. But the performance would be quite mediocre. You could get better performance from a CPU that costs a fraction of what the Cell costs.

Linux runs fine on it.

It's an in-order CPU. It would run OK-ish, but it would get spanked by just about any other general-purpose CPU that cost half as much.
Some hack builds an unsupported Linux kernel that targets the most mainstream portion of a 9 core processor with an un-optimized compiler for a CPU that does no on-chip instruction reordering and is embedded in a game system and you're drawing conclusions...

FWIW, I never claimed we were all going to have the current Cells in our desktops-- my point was that designs like Cell are pushing computing architectures forward, and I expect to see heterogeneous cores as a matter of course in a few years.
It was jointly designed by HP and Intel, but PA-RISC is quite different from Itanium.
No, the Itanium architecture comes from HP, and the chips were co-developed by Intel and HP. It has absolutely nothing to do with PA-RISC except that it's displaced a lot of loyal PA-RISC customers. The plan was for Intel and HP to alternate developments, but Intel got so mired down in their mind-numbingly bad project management that HP almost had the second iteration complete before the first had shipped.


The point of all of this is that Intel isn't some CPU creating god, and Apple made the switch to Intel for marketing more than technical reasons. PowerPC was and is a fine architecture. I'm sorry to lose it. I'm not looking forward to years ahead of just-good-enough-to-maintain-marketshare. We'll be fine if things continue as they are, but God help us if anything happens to AMD...
 
These comments about what PPC can do are all well and good, but where was the beef when Apple announced the switch?

I'm not so hung up on G5 notebooks as I am on the fact that the G4 was getting pretty minor updates for a while, and that crippled the entire Apple notebook line, down to the iBook. That's why people were looking to the G5, not because they had some mythical belief in the G5. In retrospect, I think there was probably a big opportunity since Intel got bogged down with the P4, but once Centrino came out and the G4 stayed roughly the same, the writing was on the wall.

Could IBM/Freescale have caught up with (or surpassed) Intel and Centrino? Yeah, I'm sure it was possible. Would they have? Would Apple have committed to such a major change if they had a better roadmap from IBM/Freescale vs. Intel? In retrospect one might argue that Apple would have switched for 'marketing' purposes. But at the time, there was a lot of handwringing among the press and Apple fans over the switch. It was not a foregone conclusion that it was going to go well, and we still haven't gotten a Universal Binary Mac Office (and only just got Adobe CS3). So I think the decision was made not on marketing purposes alone.

I don't think Intel is the end-all be-all, they're just a company like anyone else. But the same largely goes for others as well. Would Freescale have dual-core 2.33 GHz G4's with upgraded FSB's right now? (being that most comparisons seem to rate the G4 and Core as roughly equivalent for a given chip speed) I don't know for sure, but I'd be hard-pressed to believe anyone who claims to be 100% confident that there would be such chips in Apple notebooks right now, had Apple not switched.

In the worst case scenario, it's a draw to me. Apple design and software are far more important to me anyway (not that this talk isn't interesting though.) Apple had some fun in the day with their ads touting the G5 over Intel, but CPU's are a factor beyond Apple's control. They do control the rest of the hardware (most of it) and the software, so eliminating the risk of not getting maximum effort from IBM and Freescale (to surpass or at least keep up with Intel) was probably a factor for Apple.
 
I think I agree with everything you've just said. Like I mentioned in my first post, one of the things Apple gained from the Intel switch was a reduction in risk. Intel is the baseline that everything else is judged against. Apple gave up a chance to exceed the baseline in exchange for eliminating the risk of falling behind the baseline.

IBM/Mot/Freescale hit the performance wall at the same time Intel did. IBM promised 3GHz, Intel promised 4, and neither hit their target. Pentium languished at the same time the G5 program did. Everybody realized a lateral step was necessary to break the next performance barrier, and multicore was it.

My comment above about just-enough-to-maintain-marketshare is true for everyone here. IBM didn't need to take a lead and extend it-- they just needed to keep Apple competitive. Beyond that, they really couldn't expect Apple's volumes to grow much, so they wouldn't be rewarded for the extra effort aside from some good press. The leapfrogging probably would have continued, rather than pulling ahead and staying ahead.

Intel is as bad, or worse, than anyone else at making claims for future performance and missing them-- or changing the metric to hide their weaknesses. They're also the marketing juggernaut, so what they say tends to become truth. I think Apple looked at the roadmaps, took a few grains of salt, and decided they could afford to remain at baseline on processor performance and compete with software.

And, to some extent, IBM got punished for Freescale's decisions. Freescale decided they wanted to focus on the embedded market, essentially leaving Apple single sourced. If you're going to go single source, are you going to pick the unique architecture or the standard? Better to make the switch now, at a time of Apple's choosing, than when IBM decides it's no longer worth it at Apple's volumes. And Apple does have a potential second source to hold over Intel, for the time being at least-- AMD.

I don't think the decision was technical. Yeah, Intel may have shown them some new process trick they're using to keep power down (hafnium?), but you can bet IBM had tricks of their own in the pipeline. I think the decision was marketing and strategic. Opening up to Windows was the wild card, but even that was a separate decision, as I'm sure Apple could have prevented Windows from booting if they chose to. I'm still curious how that one will play out in the end.
 
Actually, it is kind of like that... Intel essentially has one product line-- yeah they do a little Itanic development but not much. IBM has many.

Yep, but if you just look at IBM's semiconductor business, you'd see that it's pretty darn big.

The PowerPC series has essentially one customer.

For IBM, or overall? Overall, PPC has a HUGE number of customers. Apple was one of the smaller users of PPC. And if we look at just IBM... Well, they have themselves as a customer, they have Microsoft as a customer, they have Sony as a customer, they have Nintendo, and they have a HUGE number of people putting PPC in embedded products... What is this "one customer" you are referring to here?

It's an indication of the state of the x86 line at the time the G5 was in production. If you're going to go on and on about no G5 laptop, then compare what the Intel alternative was at the time. The Pentium M wasn't much to cheer about either.

Pentium M was a kick-ass CPU.

The CPU interface to memory is through the front-side bus, yes. To break it down more fully, the front-side bus running at 1.25GHz-ish drives the memory and PCI controller chip, which in turn drives the memory bus at 533MHz, double pumped, or 1.066GHz.

No, the 533MHz IS the double-pumped number. The RAM has a clock speed of 266MHz, but it performs two transfers every clock cycle, giving it an effective speed of 533MHz.
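
Quick arithmetic on that, purely illustrative: "double pumped" means one transfer on each clock edge, so the effective rate is twice the bus clock.

bus_clock_mhz = 266.67        # the "266MHz" memory bus clock is really 266.67MHz
transfers_per_clock = 2       # DDR transfers data on both the rising and falling edge
print(round(bus_clock_mhz * transfers_per_clock))   # -> 533, the advertised figure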

Because the designers would have been too stupid to rework the interfaces and caches?

They had YEARS to rework the caches and the interface, so why didn't they? And there already is a multi-core G4. It's called the MPC8641D. They top out at 1.5GHz, while having just 1MB of L2 cache.

What you're arguing is that it cost $100 to add a couple connectors. That doesn't make sense.

Here are a few facts: those connectors cost money. SO-DIMMs are more expensive than typical desktop SDRAM. All the stuff they added costs money, and they added quite a bit. If you compare the G4 Mini to the Core Mini, you'll see that the Core Mini is a lot more capable machine.

G4 to Core solo went from 130nm to 65nm. That's a 4 to 1 reduction in silicon per unit logic. There's also a significant reduction in gate capacitance making for faster logic-- more speed is what you get for free when making smaller chips. Is it faster than the G4? I hope so, given that it has physics on its side and was released years later.

So what are you saying here? That the Core Mini is faster than the G4 Mini? And your point is... what?

The RAM for the Intel mini is also cheaper than the RAM for the G4 mini.

Uh, the G4 Mini used standard PC2700 RAM that was readily available everywhere. It was the most common type of RAM back then, and it was probably the cheapest RAM available as well. I should know, I actually upgraded the RAM in my Mini. The Core Mini uses standard SO-DIMMs, and SO-DIMMs are more expensive than the normal desktop RAM that was used with the G4.

So... Cheaper memory

The G4 Mini used cheaper RAM. It did NOT use some uber-expensive super-RAM; it was 100% normal RAM that was available at just about every retailer at rock-bottom prices. Really: get your facts straight.

no GPU, and two more USB ports for $100.

No, it's actually this:

- two memory slots as opposed to just one (in essence doubling the memory capacity of the machine)
- SO-DIMMs as opposed to normal desktop SDRAM
- PC5300 DDR2 SDRAM as opposed to PC2700 DDR RAM (rough bandwidth figures below)
- a remote control
- new software
- more USB connectors
- optical audio in/out
- Gigabit Ethernet
- a much faster CPU

All that for $100.
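
And to put a rough number on the RAM difference in that list: the PC rating is roughly the peak bandwidth in MB/s, i.e. a 64-bit module times its effective transfer rate. A quick sketch (Python, purely illustrative):

bus_width_bytes = 8   # both minis use a 64-bit (8-byte) memory data path
for name, mt_per_s in [("G4 Mini, PC2700 / DDR-333", 333),
                       ("Core Mini, PC5300 / DDR2-667", 667)]:
    print(f"{name}: ~{bus_width_bytes * mt_per_s} MB/s peak")
# -> ~2664 MB/s vs. ~5336 MB/s, which (rounded) is where the PC2700 / PC5300 names come from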

Some hack builds an unsupported Linux kernel that targets the most mainstream portion of a 9 core processor with an un-optimized compiler for a CPU that does no on-chip instruction reordering and is embedded in a game system and you're drawing conclusions...

Linux on the PS3 is actually 100% supported by Sony. And the fact remains that general-purpose computing relies heavily on integer, and the integer capabilities of the Cell are quite mediocre. Yes, it works, but it works even better on a general-purpose CPU that costs a fraction of what the Cell costs.

Those SPUs in the Cell? They would be next to useless when running your typical apps. They scream in games (since those rely heavily on floating point) and streaming computation, but those two are not that important in everyday computing.

I really don't understand why you try to insist that Cell would be a fine general-purpose CPU. Cell is a floating-point monster and it excels at tasks that rely on FP. But general-purpose computing is not one of those tasks.

PowerPC was and is a fine architecture.

Have I claimed otherwise?

I'm sorry to lose it. I'm not looking forward to years ahead of just-good-enough-to-maintain-marketshare.

Yeah, thank god those G4 PowerBooks were so insanely fast that they left their PC competitors in the dust. No, wait. They didn't.
 
My comment above about just-enough-to-maintain-marketshare is true for everyone here. IBM didn't need to take a lead and extend it-- they just needed to keep Apple competitive. Beyond that, they really couldn't expect Apple's volumes to grow much, so they wouldn't be rewarded for the extra effort aside from some good press. The leapfrogging probably would have continued, rather than pulling ahead and staying ahead.

I think this is really the most important factor at the end of the day. Both PPC and x86 took a step back with the G5 and the P4. But Intel had a lot more Windows business at risk (or business to win) than IBM/Freescale had Apple business. The G4 had been around a lot longer than the Centrino family, so I'm sure a multi-core G4-based CPU with improvements (faster clock speed, faster bus, etc.) would have been easily doable - those are evolutionary, not revolutionary, changes. I think it would have been a spot-on analogy: P3 is to Centrino/Core as the G4 would be to this G4 successor.

But it would have taken more money than it was probably worth, since these CPU's would mainly just go into Apple notebooks (although desktops would be a possibility if the line was successful, just like what happened to Centrino.) Plus having 2 (3?) different companies was a potential negative as well. Everyone thought the future was the G5 and IBM, and thus a new and significantly improved G4 was probably never in the cards. Whereas at Intel, you have all this internal information about the P4, thus leading to a separate group that developed Centrino. What's ironic is that I'm pretty sure Centrino was developed to be a specialized notebook/mobile CPU only, and only in retrospect signalled a whole new direction for Intel x86. Freescale could have just as easily taken that step 'back' as well. So I would revise my earlier statement and say that a 2.3 GHz dual core G4+ CPU in a MacBook right now would be more than feasible from a technical standpoint, but business considerations (which then influence timing and planning of such a CPU) made it less likely, especially as it would have taken bold proactive action from Freescale a couple of years back (when Centrino was in development.)

The technical side is ultimately intertwined with the business side, and regardless of the technical merits of the platforms and companies (which I don't know enough to argue anyway!), Intel had both a bigger carrot and stick to push them. Apple saw the writing on the wall as well.
 
Not that it's going to make any difference, but in terms of power and tech stuff alone, there's an interesting article on the Register about the release, probably on "Tuesday", of the Power6 chip from IBM running at 4.7GHz.

"average response time of .625 seconds when handling requests from 2,100 users" when crunching Oracle 11i.

IBM's Power6 spotted bashing Oracle at 4.7GHz
http://www.theregister.co.uk/2007/05/20/ibm_power6_oracle/

I know there's more to the Intel move than power (identity, brand, roadmap, product development, economies of scale, etc.), but it's an interesting development nonetheless.

Of course it would be nice to see this in a Mac.
Of course we will see this in a future Mac. :rolleyes:
The future is Intel but PowerPC was good for its time. Sometimes I feel that PowerPC was more special.
 
The answer has to be 'yes' here. Just look at the extra opportunities Apple has generated for itself by doing so.

OK, people may not like XP, but the option of it - and a fully supported version at that - is there. Can the opposite be said for MS-based PCs? Instantly a whole new market becomes available.
 
Not only that, but Apple now has the opportunity to make progress with their computers. The PPC just wasn't going anywhere, and IBM just wasn't delivering on their promises. Intel was really the only option in my eyes, and I'm glad they switched.

I'll finally be able to get that "PowerBook G5".
I'll finally be able to get a 3.0GHz CPU in a laptop and a desktop.
 
Yes, because it has opened up the possibility of 'switching', probably more so than any other single event in Apple's history. Heck, I probably wouldn't own a Mac right now if it weren't for the Intel chip.
 
I wouldn't. The fact that I could put Windows on the machine if I didn't like OS X was the deciding factor. It's a big step to buy into a completely different system with no backup.
 
Huh?

Why are you posting in such an old thread? And talking like this news just happened. lol weirdos...

In fact why am I posting in here? :p


ps. PowerPCEEEEE foreverrrr...
 

Ignorant? How is that ignorant? Selfish, maybe...

Sure, they can evolve. Anything can evolve. But do we really want it to? I do believe that the software and hardware we have serves us pretty well. All this talk of touch screens and voice-activated commands isn't helping me believe otherwise, because many of us prefer more "archaic" means of input.

Why would I want it to stop? I'm the kind of person who likes to keep her computers for a good 15 or 20 years before upgrading.

I do realize the pros need more power. But once they get the amount they need, that should be the limit. Who actually needs more than they have? (Of course it's OK if you realized you needed a Mac Pro after buying an iMac, that's fine.) If software development focused on utilizing the power people already have, instead of forcing them to upgrade to use the software, the answer would be next to no one.

OK, well, uhhh, where should I start? I'm fairly sure you have more posts, but this is the one I feel I should comment on. You really won't be able to do anything useful in 15-20 years with any of those computers (not to mention you haven't even been alive that long, so how would you know?). Secondly, pros will always keep needing more power as what they do gets better and better. So will consumers: in 15-20 years, that video file you want to watch might be 500GB, you just don't know. Also, stopping progress would be very ignorant.
 
If you READ the article, this is a SERVER chip! How long did everyone expect Apple to wait??? Well, others now wait for Intel too..... this is not a perfect world, I guess, no matter which chip maker you pick. Is AMD any better?
 
No, AMD isn't any better. They are smaller than Intel.
 
Yes, the switch to Intel was the right decision. Apple realized that with the Intel Core design they could harness more power and speed while cutting down on power consumption and heat (which is what prevented there ever being a PowerBook G5; the PPC G5 CPU was power-hungry and hotter than Hades).
 
On computer progress....

Yeah... basically, there are a LOT of tasks people want to do with their computers which ideally would be done "instantly", but processor and computer technology overall is nowhere near that stage yet. As it evolves though, these tasks become more and more practical for people to do.

One example I can think of is "ripping" music from CDs into MP3 files. Not THAT long ago, this was something that took most of your CPU's power to accomplish, plus you were often limited by the throughput of your particular CD-ROM drive. Now, a modern PC or Mac can do this job on an entire music CD in well under 10 minutes, in the *background* while you run other programs without so much as a care.

Another up and coming thing is video streaming and "transcoding". As people start making digital copies of their movies the way we've done with music, lots of things come into play. Is your computer going to become a "server", pushing out the video to a device on the other end of the network that displays it on your TV set? If so, what video formats can it support? Some of the content you get may need to be converted from format to format, before your device attached to the TV can view it properly. Even on the fastest Macs, that's a very time-consuming process right now. (We're talking well over 1-2 hours to convert 1 movie ... possibly even an entire day or evening.) Wouldn't it be nice to have systems that could do this in *minutes* or even *seconds*? That's going to take at least a few more generations of processors though.



 
This was the best move they could have made.

Intel is back on top, making efficient processors instead of that Pentium 4 nonsense.

The x86 platform allows Apple to be EVERY computer.
Through Virtualization, my MBP is:
- Windows XP
- Windows 98
- Ubuntu Linux

Through emulation, my MBP is:
- Mac OS 7.5
- C64

The fact that one physical computer can now be an almost unlimited number of virtual computers is an amazing thing. PPC just couldn't cut it in that sense.
 
Even on the fastest Macs, that's a very time-consuming process right now. (We're talking well over 1-2 hours to convert 1 movie ... possibly even an entire day or evening.) Wouldn't it be nice to have systems that could do this in *minutes* or even *seconds*? That's going to take at least a few more generations of processors though.

Well, it'd also require a lack of optical media. At some point you'll be restricted by the maximum data rate at which you can read the media in question. And, at the point at which we lose the DVD as the storage format, we'll likely be encumbered by more onerous DRM that's harder to defeat.
 
Power PC

Of course it would be nice to see this in a Mac.
Of course we will see this in a future Mac. :rolleyes:
The future is Intel but PowerPC was good for its time. Sometimes I feel that PowerPC was more special.

PowerPC was more special. The roadmap was impressive. However, this new 4.7GHz Power6 would need a high-capacity Freon AC/coolant system, larger than the ones seen in the older G5 towers, to keep it from fizzling. Intel will keep up in due time, without the overheating issues.
 