64-bit - why Apple?

OK guys,

so a lot of people have noticed that for the average single-tasking home user, 64-bit doesn't make such a great difference - true.

But what is Apple aiming at?
MAC - THE DIGITAL HUB FOR A DIGITAL LIFESTYLE
or something like that, right?

Now for video editing, especially with all the new formats coming up these days (HDTV etc.) and anyone creating and authoring DVDs - people can make a lot of use of HUGE memory. Something like 512GB of RAM is addressable on the PPC 970. Number crunching is also quite necessary for encoding video in a reasonable time.
Then look at all those graphics guys - they always want faster computers with more memory, because there will always be a resolution that doesn't fit in the memory you have.
Just recently I was confronted with a specially processed 'image' that took up 120GB on the hard disk. 512GB of RAM would come in handy for that kind of problem.
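
As a back-of-the-envelope illustration (just a sketch in C, reusing the 120GB figure above as a made-up example), a 32-bit pointer simply cannot span a dataset that size, no matter how much RAM is installed:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Back-of-the-envelope: a 32-bit pointer can span at most 4 GB, so a
       120 GB dataset can never sit in one flat address space on a 32-bit
       machine, no matter how much physical RAM is installed. */
    const uint64_t gb         = 1024ULL * 1024 * 1024;
    const uint64_t span32     = 1ULL << 32;   /* bytes reachable with 32-bit addresses */
    const uint64_t image_size = 120 * gb;     /* the processed 'image' from above      */

    printf("32-bit addressable span: %llu GB\n", (unsigned long long)(span32 / gb));
    printf("dataset size:            %llu GB\n", (unsigned long long)(image_size / gb));
    printf("fits in a 32-bit address space? %s\n",
           image_size <= span32 ? "yes" : "no");
    return 0;
}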

So next: let those pictures move - 3D rendering, raytracing, animation. OK, it is the pros who use these features - not many people can even think in 3D well enough to make a simple figure look nice - but Apple sees a market in the low-cost workstation space, and they are in quite a good position to take it: most simulation software is somewhat Unix-based and can be quickly ported to OS X.
And so on ....
 
Originally posted by arn
For what it's worth, PowerJack (MacWhispers) does not believe this to be the case:

http://www.spymac.com/forums/showthread.php?s=&threadid=25573&perpage=15&pagenumber=3

There is more from the same source, as a recap:



quote:

I've already published what I actually can say on this one. Unfortunately, I am tangled up in a situation where the actual manufacturer is also building two products for me later in the year, and everyone with whom I deal there knows I publish MacWhispers... as does an Apple developer channel person involved in my projects. In other words, I actually have more first-hand info on this topic than (probably) anyone else on the Mac web, but can't say squat.

- The PPC 970 is real, is ahead of schedule, and is slated for both 1 and 2 processor Power Macs... very, very soon.

- The 15.4-inch PowerBook is also real, has been nearly completely redesigned since January, will be released with a PPC 970 chip, and will have a Chi Mei Optoelectronics-produced 15.4-inch LCD at a non-standard resolution higher than 1280x800... also, very, very soon.

- The Power Mac will move to a new enclosure, one with an anodized aluminum front panel, matching the new PowerBooks.

All of this I have previously published. And, sadly, that's all I can say about any of it, other than this: None of this is guesswork or speculation. It's dead-level fact, straight from the people doing the work.

I have to dance a fine line with the MacWhispers site, and what info I spread. My business must come first.
end quote

Sounds too good to be true, but we can hope!
 
:(

I hate to say it, but I think we are all going a bit overboard. I want a PPC 970 Powermac as quickly as everyone else does but that doesn't mean that we should believe any info that comes out about it.

As for PowerJack definitively saying no to Foxconn, well, that's gutsy. I doubt he'd say it unless he was 100% sure.
 
Re: Inconsistent arguments

Originally posted by AidenShaw
Hmmm.... You knock off XP Home for not supporting dual CPU, yet you don't position the 970 against dual P4 (Xeon, of course).

Methinks a dual 3.06GHz Xeon with Hyperthreading probably wouldn't be "eaten alive"! :p

Probably not. But if you build a machine like that (with dual 3.06 Xeons and everything decked out) on Dell's website, it will easily set you back $5,000. So instead of the PowerMac matching price and winning on performance (assuming the 970 configs people are conjecturing), the PowerMac would match performance and win on price. Two sides of the same coin.
 
Originally posted by Freg3000

As for PowerJack definitively saying no to Foxconn, well, that's gutsy. I doubt he'd say it unless he was 100% sure.

For a guy who claims he can't say nothin', this PowerJack says quite a bit -- which all by itself seems kind of suspect. In the end, all he is really saying is that Foxconn is not the contract manufacturer; somebody else is. As if that really matters.
 
Originally posted by dornball
wolf,
i hope you're right. i prefer your specs. if there was a 970 PowerBook available, i would order it in a heartbeat.

-dornball

Again, we are a small company, but we will be ordering 8. PowerBooks are high-margin products for Apple.

I believe this will also be the first Apple laptop purchase we ever make, even though we support Mac and Windows users equally.

Rocketman

 
Re: Let's see...cost cutting measures

Originally posted by PretendPCuser
Apple doesn't need to ship a dual 970 if the chip is prohibitively expensive. Nor would they have to if it is such a great performer. Yet the rumor mills seem to say that there will be a dual (or at least that's what we are all hoping). Who knows, if they make them all dual, they will be a little closer to volume discounting. :)

It is specifically designed to handle quad to 8+ processor configurations. Nobody is posting rumors of a 4-way or 8-way Apple CPU. That might be the missing link.

I also find it notable that figures of processor shipments are being posted to a general readership rumor site probably for the first time.

This indicates that a lot of excited people are letting stuff leak because the secrecy is KILLING them.

Rocketman

 
Re: Re: Re: 64 bit, hmmmmmm...

Originally posted by ktlx

The link points out the problem I mentioned. The 64 processor Itanium 2 machine was $600K cheaper than the 32 processor IBM. IBM had to pay 7% more money to get 3% more performance.

You are confusing what IBM has to charge in order to break even with what it CAN charge w/o losing many sales (i.e. what the market will bear). Even pricing the p690 at 7% more than the Itanic Superdome, IBM is in danger of losing essentially 0 sales to HP because the p690 matches or beats the Superdome in performance AND it can actually run a lot of software out there, as opposed to Itanic, which has almost no codebase out there (but don't worry, it can emulate x86 ever so fast).

The latest estimate I've heard is that Intel has sold all of 4500 Itanic 1s and Itanic 2s since their debut (for obvious reasons - i.e. because it would be embarrassing - Intel will not disclose the actual numbers). So if they price it at $100,000 per processor or so, maybe they will be making back their $1 billion plus R&D investment in just the initial versions of the chip. Somehow I doubt their prices are that high, however.

And in terms of the marginal cost of production, the chips are both the same size: the .18 micron Itanic 2 is 421 square mm and the .18 micron Power4 is 414 square mm. So assuming the yields are both pretty good (well, maybe that is a heroic assumption for Itanic 2 given the Itanic bug that Intel just announced, but we'll give them the benefit of the doubt), then the production costs of each chip are probably about the same. But, of course, one Power4 equals two Itanic 2s, so Power4 clearly has a major cost advantage.

The bottom line is that Intel is hemorrhaging cash on Itanic right now - nobody would dispute that - while IBM is making nice profits on Power4. So saying that Itanic is as good as Power4 on a cost basis because the 64-proc Superdome can almost match the 32-proc p690 (at a similar price) is like saying that Apple could totally blow away Dell's sales numbers if they just lowered the price of the PowerMacs to 20 bucks (technically true.... and I'm sure they could make it up on volume, right?).
 
Originally posted by IVIIVI4ck3y27
How about them Apples?!?

Dual Quad 4-barrel... phooey

The odds of dual-processor PowerBooks aren't that good... for one, that's an extra processor chewing up juice (on desktops it makes sense; on laptops, unless they can get dual G4s or 9xxs running at 7W like the current iBook, you're talking even more heat than the current PowerBook G4, which already runs hot, to a lot of people's chagrin),


I do not know if the 970 has power-saving features built in, but the OS does.

I do not know how hot the 970 will get, but it will be cooler than any Pentium!

I do actual, real rocket science, and I for one would be an early adopter of a dual-processor PowerBook. I admit even I would not need the 2nd processor 95% of the time, but for those times I do, the $500-800 premium would be worth it.

For email and web I can run the single processor I do have at 1/2 speed in the sleep software.

Rocketman

 
Re: 64 bit, hmmmmmm...

Originally posted by copperpipe
Well, most people here have been saying that 64 bits doesn't really make any difference except in encryption, which is to say that the difference it makes is minuscule at best. Now the people saying this definitely know more than I do, because I can't even understand what they are talking about in some cases. But I just don't understand why Intel, AMD, and IBM are all investing billions of dollars in the race for 64 bit? It doesn't make sense to me, for such a tiny gain. Maybe we should go back to 16 bit? Or 8? Did they make a big difference? The POWER4 uses 64, and it decimates every processor out there, but I suppose that it could be 32 and would still decimate every processor, except when it's decoding an encryption. I dunno, maybe everyone here is right about 64 bit not making any noticeable difference, but I'm having a hard time believing it.

PS - I truly don't mean to upset anyone by this, and I don't doubt that everyone has good reasons for their beliefs...

The main practical benefit is that more memory can be addressed all at once. That makes bigger, more complex operations possible.

So the benefit is practical and has nothing to do with so-called 64-bit software.

Software that is also 64-bit clean does something similar, with only incremental benefits unless you are working on HUGE datasets.

Rocketman.
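
A tiny sketch of Rocketman's point (purely illustrative; the 8 GB figure is arbitrary): on a 32-bit build the request cannot even be expressed, while a 64-bit build can at least ask for it.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
    /* Illustrative only: try to hold an 8 GB working set as one flat buffer. */
    uint64_t want = 8ULL * 1024 * 1024 * 1024;

    if (want > SIZE_MAX) {
        /* 32-bit build: size_t tops out at 4 GB, so the request is impossible. */
        printf("size_t is %zu bits - an 8 GB allocation cannot even be requested\n",
               sizeof(size_t) * 8);
        return 1;
    }

    void *buf = malloc((size_t)want);   /* 64-bit build: limited only by RAM/swap */
    printf("8 GB allocation %s\n", buf ? "succeeded" : "failed");
    free(buf);
    return 0;
}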
 
Read what powerjack has said

I for one believe him, and not just because I want to. It's true that he has said a lot considering his reluctance, but his business story seems very plausible. He didn't give any specific information. Unless Apple made him sign his life away on that too, the amount of info he gave on the 970 is not obscene.

Also notice his info on the iPod's hidden features. He's an Apple-certified (or something) developer. Sounds legit to me.
 
Re: Re: Me thinks??

Originally posted by AidenShaw
How do you know what a dual or quad 970 would cost?

A dual 3.06GHz 1GB Xeon (Dell PWS450) is $3426, dual 2.8GHz only $2976.


Nice try. Equipping it like the machine discussed at the beginning (1 GB of RAM, 200 GB of storage), it costs $4,365, with the possibility of up to $300 in rebates. Throwing out the 19" monitor would lower the cost to $4,115. But it still is not actually as good as the machine quoted earlier in this thread, because it has a crappy 32 MB Radeon VE instead of the 128 MB Radeon 9800 Pro (that alone will add several hundred) and no speakers or subwoofer. And compared to the equivalent Apple machine (as people are projecting it), it would lack the SuperDrive and have an inferior video card (32 MB Radeon VE vs. at minimum a 64 MB Radeon 9000 Pro for the PowerMac).

Incidentally, the "equivalent" dual G4 (i.e. up the RAM and storage to 1GB/180GB, throw out the SuperDrive, and downgrade the video card to a 64 MB GeForce4 MX) is currently $2,850 from Apple. In fairness, the Dell comes with an extended warranty, so including AppleCare the pricing would be $3,100 vs. $4,100. At any rate, the point is that even if Apple were to raise its price points for the PowerMac towers by a large chunk, say $500 or more relative to what they are today, they would clearly still be very price competitive with high-end dual Xeon workstations. Which is more than can be said today.
 
Originally posted by hvfsl
A 64-bit chip on a 64-bit OS is quite a bit faster than a 64-bit chip on a 32-bit OS. What people are forgetting here is that every clock 64 bits of data are processed instead of 32 bits. So twice as much data.

Bzzzt. Wrong. Thanks for playing our game.

Although in the real world there will not be a 2x performance boost, much like a dual G4 is not twice the speed of a single G4. If you look at the new figures for the AMD 64-bit on www.futuremark.com, you will see a 1.4GHz AMD is 70% faster than a 2.26GHz P4.

Next time, please read the article you are quoting. What the report at AMD Board said (the actual source of the article) is that at the same clock speed the Athlon 64 is about 70% faster - not that a 1.4GHz Athlon 64 is 70% faster than a 2.26GHz P4. The 1.4GHz Athlon 64 actually scored lower than the 2.26GHz P4 in every test but CPUMark 99.

Here is the link for those who are interested:

http://www.amdboard.com/hn05120301.html

Here is the quote from the article:

The results, even though we must consider them preliminary, show some impressive data. Athlon 64 looks to be 70% faster than competitors running at a similar speed (1.4GHz).
 
About AltiVec...

Ok, so AltiVec is like a 64 bit emulator right? So why do we need it in a 64 bit machine. Will it double the emulation to 128? Or am I just getting my numbers screwed up?

I guess what I'm really asking is: am I going to see a noticeable speed boost in video rendering if FCP is already coded for AltiVec?

P-Worm
 
Originally posted by hvfsl
A 64-bit chip on a 64-bit OS is quite a bit faster than a 64-bit chip on a 32-bit OS. What people are forgetting here is that every clock 64 bits of data are processed instead of 32 bits. So twice as much data. Although in the real world there will not be a 2x performance boost, much like a dual G4 is not twice the speed of a single G4. If you look at the new figures for the AMD 64-bit on www.futuremark.com, you will see a 1.4GHz AMD is 70% faster than a 2.26GHz P4.

You are wrong. The 1.4GHz AMD is 70% faster than the 2.26 P4 because the AMD chip is a better design, not because it is 64-bit. A 64-bit chip running a 32-bit OS will only be slower than one running a 64-bit OS if the chip runs 32-bit software slower than 64-bit software (which the 970 does not). You will not process 64 bits of data on average every clock, but only when the instruction calls for such processing. If you look at the instruction stream on a typical 32-bit processor, you will probably find that it spends half or more of its time processing 16-bit or smaller integers, even though it can handle 32-bit integers natively. Please stop spewing this crap, so that people in the know don't have to keep replying to correct you. Please help increase the signal-to-noise ratio.
 
Re: must have missed it

Originally posted by maxvamp
what is your take on any speed impact when the processor has to manage the upper and lower set of bits of a given file.

Well, assume the integer unit has a 10-stage pipeline. Add the low 32 bits, then "add with carry" the upper 32 bits (the way 64-bit integer additions are synthesized on 32-bit hardware).

That's 20 cycles, or 10 nanoseconds for a 2GHz CPU.

But that's really the worst case - in a superscalar, o-o-o (out-of-order) CPU both adds could be started at about the same time, along with many other instructions (the 970 and P4 can have one or two hundred instructions in progress at once). Other things are happening in parallel, so the 10 nanoseconds is hidden behind other work that has to be waited on anyway.

It's also probably not even 10 nsec worst case - since several of the pipeline stages are decoding the instruction (which can happen in parallel). It's also common that the result (in this case, the "carry" bit) is available several pipeline stages early (the later pipeline stages retire the instruction and store the result in the destination).

So maybe the 2 32-bit integer operations take 12-15 cycles, or 6 to 7.5 nsec. The main point is that it is not twice the time it takes to do one operation, but often much less.
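
For the curious, here is a small C sketch (not how the 970 actually does it internally, just an illustration of the principle) of a 64-bit add synthesized from two 32-bit adds plus a carry:

#include <stdint.h>
#include <stdio.h>

/* Synthesize a 64-bit add out of two 32-bit adds, the way a 32-bit
   integer unit would: low words first, then the high words plus the
   carry out of the low-word add. */
static uint64_t add64_on_32bit_alu(uint32_t a_hi, uint32_t a_lo,
                                   uint32_t b_hi, uint32_t b_lo)
{
    uint32_t lo    = a_lo + b_lo;            /* first 32-bit add          */
    uint32_t carry = (lo < a_lo) ? 1 : 0;    /* did the low word wrap?    */
    uint32_t hi    = a_hi + b_hi + carry;    /* second add, "with carry"  */
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t a = 0x00000001FFFFFFFFULL;      /* chosen to force a carry   */
    uint64_t b = 0x0000000000000001ULL;

    uint64_t synthesized = add64_on_32bit_alu((uint32_t)(a >> 32), (uint32_t)a,
                                              (uint32_t)(b >> 32), (uint32_t)b);
    printf("synthesized: 0x%016llx\n", (unsigned long long)synthesized);
    printf("native:      0x%016llx\n", (unsigned long long)(a + b));
    return 0;
}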

----

What if the data is not in cache? Latency on a cache miss is often anywhere from several dozen to a hundred or so cycles - suddenly the 15 cycles for the arithmetic balloons to 50 or 100 cycles of waiting. At that point, 10 cycles for a 64-bit add vs. 15 cycles for a synthesized add is hardly relevant.

----

Suppose we're doing a 64KB read and the data is in the filesystem cache. If the system bus is 8GB/sec, then reading and writing 64KB will take about 16 usec, or 32,000 cycles.

----

Suppose the disk head has to move - that's about 8 msec - 8,000 usec - 8,000,000 nsec - or a whopping 16,000,000 cycles.

----

So, that's why I claim that an extra 5 cycles (or even 10) to synthesize 64-bit integer arithmetic won't be measurable in actual I/O operations.
 
Why the 970 is coming (Apple's current problems)

From a tech point of view, Apple has some problems, which will be addressed shortly with the combination of OS X 10.3 and the 970:
1. The 2GB ceiling on addressable RAM. OS X 10.3 will address this, but processor support is required. This impacts machines across the upper end of the product line and will impact lower machines over the next few years.
- In particular it hits the Xserve line; 2GB is not enough for high-end servers, especially multi-processor machines.
- It hits the higher-end desktop machines. 2GB is also kind of skimpy for a lot of photo/music/video editing. Think about this: compare the performance of iTunes to iMovie and iDVD. Apple wants iMovie and iDVD to perform like iTunes - no more hour-long wait to "encode assets" for a DVD on a DP 1.x. They know that for the Digital Lifestyle to pan out, iTunes-like performance is the key for those applications.
- A 64-bit processor will allow more efficient use of VM, RAID, databases, etc.
--> In short, Apple needs a 64-bit processor, and soon, for all the other pieces of its puzzle to fall into place (digital lifestyle, Xserve, etc.). It will have it in the 970 and 10.3.

2. Disk sizes. 180GB (give or take) as the top-end drive is low for the upper-end machines. Even the top-end Xserve RAID has "only" 2.5 TB. (I put "only" in quotation marks because the first hard drive we had was 5 MB - yes, megabytes - on the Apple ][, then ][ plus, and I recall fixed-platter sort-of-hard drives under 1 MB.) This is not bad, but...

--> not a large impact on the 970, but relevant.

3. Something is *obviously* up with the 15-inch PB. Apple would not leave a huge hole in their product line without a good reason. Many people are holding off on the 15-inch PB because of the lack of AirPort Extreme. I have seen *no* credible reports of design problems, manufacturing problems, etc. on the 15-inch PB, and there would be rumors if that were indeed the problem. Let's just face the facts: the PB is going to have something quite different from the current PBs. Will it be the super-new 15-inch display alone? I doubt it - why hold off using the old display for 6-9 months just to ship a different 15-inch, when there would be enough time to refresh the product at that point without giving up the sales? This leads me to believe that they are *racing* to get these out ASAP. They know that they are losing sales (or at best people are stepping down to the 12-inch, with lower margins). The processor is the only possible reason besides criminal insanity to delay the 15-inch update for 6 months. Releasing the 15 first lets them work out the bugs with one design at a time and gives them a safety net while doing so (e.g. they still have proven 12-inch and 17-inch designs). Once the 15 is running smoothly, the 12-inch and 17-inch PBs will be updated ASAP, since most of the design work will be completed.

4. You will see DP machines. The 970 is a good MP processor, and since Apple isn't matching the MHz/GHz numbers, DP is a good PR move as well as a good performance move. It will definitely be in the Xserves and the upper-end PowerMacs, as we see now. Everyone at Apple knows that the Xserves require MP machines if they are to compete with everyone else. Why use a completely different design when you can amortize the cost across the PMs and the Xserves? It would make no sense. So they can re-use the designs and put a 2nd (or 3rd or 4th) processor in there at their cost while charging a nice markup.

All of these problems will hit all over the entire product line during the next 2-3 years as the whole "digital hub" really kicks in. Sharing music is the beginning. We'll be sharing iPhoto libraries next, then iMovie libraries shortly thereafter. These require more memory, more processing power, and more disk space. Ditto for the smart display/tablet: you can trade off processor power for network performance or vice versa, so the more processor power the better.

The people at Apple aren't dumb; they know the products (software) that are in the pipeline and the processing power needed to do them justice. That is why 10.3 is as important as the 970 - they depend on each other, and other products depend on them.
 
At this point, it's nearly impossible to believe that there won't be a lot of broken-hearted Mac users out there. The hype is getting so big that I half expect a rumor to come out saying Steve will personally deliver the new machines to your place of residence and wash your car.

I hope it's all true, but it's not fair to myself to bet on it that hard. I'd rather be delighted than disappointed when the new machines arrive.

Dan
 
At this point, it's nearly impossible to believe that there won't be a lot of broken-hearted Mac users out there.

Yes that is why they call them rumors. Some of these rumors may be based on fact, some may actually be good guesses. BUT THEY ARE RUMORS....

ONLY RUMORS..... ;)

Let us put a little perspective on this. :p

Enjoy the day for tomorrow may bring us new positronic multi processors.... :D
 
Re: About AltiVec...

Originally posted by P-Worm
Ok, so AltiVec is like a 64 bit emulator right? So why do we need it in a 64 bit machine. Will it double the emulation to 128? Or am I just getting my numbers screwed up?

I guess what I'm really asking is: am I going to see a noticeable speed boost in video rendering if FCP is already coded for AltiVec?
Not really. AltiVec works on 128-bit (16-byte) data chunks and treats them as if they were:
16 times 8 bits
or
8 times 16 bits
or
4 times 32 bits (these could also be single precision floating point numbers)

Say, for example, you have a grayscale picture where each pixel is represented by a single byte, and you would like to make it somewhat darker.
Without a SIMD (Single Instruction, Multiple Data) extension in your CPU, your application would have to load each byte composing your picture (one by one) into a register, subtract say... 5 from its value to make it look darker, and once this is done put it back in memory.
AltiVec, like SSE2, is a SIMD engine, and this will definitely speed up the process, since your application can load 16 bytes at once, subtract 5 from all 16 values, and put them back.

Pseudo SIMD code:
load16 in vector register A from memory location B
sub16 value 5 to each cell in vector register A
store16 vector register A to memory location B
add 16 to register B

Pseudo non SIMD code:
loadbyte in register A from memory location B
sub value 5 to register A
storebyte register A to memory location B
add 1 to register B

loadbyte in register A from memory location B
sub value 5 to register A
storebyte register A to memory location B
add 1 to register B

loadbyte in register A from memory location B
sub value 5 to register A
storebyte register A to memory location B
add 1 to register B

loadbyte in register A from memory location B
sub value 5 to register A
storebyte register A to memory location B
add 1 to register B
...
has to be repeated 16 times to produce the same result as the SIMD code. Guess which one is faster?


FCP could be faster, not because the AltiVec instructions themselves are going to be faster (per clock they will probably be about the same as on a G4), but because the PowerPC 970 is expected to have 2 to 3 times more memory bandwidth, so moving large data sets - like pictures - from memory to the vector registers and back should be way faster.
This is especially true if the slow bus on the G4 is the bottleneck of your app; the app can be considered memory-bound in that case: it cannot go faster than memory access, the CPU waits on data, and all CPUs wait at the same speed, whether they are RISC, CISC, FISC, running at 20 MHz or 4 GHz ;)
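
For anyone who wants to see what that looks like in real (if simplified) code, here is a rough sketch using the AltiVec C intrinsics - assuming an AltiVec-capable compiler, a 16-byte-aligned buffer, and a pixel count that is a multiple of 16; real code would also handle the leftover pixels at the end:

#include <altivec.h>

/* Darken a grayscale buffer by 5, 16 pixels per iteration.
   Assumes 'pixels' is 16-byte aligned and npixels is a multiple of 16.
   vec_subs() is a saturating subtract, so values below 5 clamp to 0
   instead of wrapping around. */
void darken_altivec(unsigned char *pixels, unsigned long npixels)
{
    const vector unsigned char amount = vec_splat_u8(5);
    unsigned long i;
    for (i = 0; i < npixels; i += 16) {
        vector unsigned char v = vec_ld(0, pixels + i);  /* load 16 pixels  */
        v = vec_subs(v, amount);                         /* 16 subtractions */
        vec_st(v, 0, pixels + i);                        /* store them back */
    }
}

/* The scalar equivalent: one pixel per iteration, 16x the loop trips. */
void darken_scalar(unsigned char *pixels, unsigned long npixels)
{
    unsigned long i;
    for (i = 0; i < npixels; i++)
        pixels[i] = (pixels[i] > 5) ? (unsigned char)(pixels[i] - 5) : 0;
}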
 
Originally posted by hvfsl
A 64-bit chip on a 64-bit OS is quite a bit faster than a 64-bit chip on a 32-bit OS. What people are forgetting here is that every clock 64 bits of data are processed instead of 32 bits. So twice as much data. Although in the real world there will not be a 2x performance boost, much like a dual G4 is not twice the speed of a single G4. If you look at the new figures for the AMD 64-bit on www.futuremark.com, you will see a 1.4GHz AMD is 70% faster than a 2.26GHz P4.

NO!

You still aren't getting it. This has been explained many, many times.
If you require operations with 32bit precision math, it doesn't matter whether you use a 32bit, 64bit, or 256bit processor, and it doesn't matter whether you use a 32bit, 64bit, or 256bit OS... it will still only fetch 32bit words!!!
Why is this so difficult to understand?

Your analogies aren't relevant.
Dual-processor rigs aren't 2x as fast as single-processor rigs because they have limited bandwidth, because they have to keep cache coherency between processors, because non-multi-threaded apps can't use both processors, and because multi-threaded apps may stall threads that are waiting on results from other threads.....
The AMD Opteron is 70% faster on some tests because it is a different, more efficient architecture. The Athlon is faster than the P4 on a per-cycle basis too, and it's only 32-bit. Sure, the Opteron is WAY faster when it is benched against the Pentium 4 while performing 64-bit integer math, but it isn't faster on a per-clock basis just because it is 64-bit. Most people don't require 64-bit precision integer math, plain and simple, so most people won't benefit from 64-bit processors.
Opteron and the 970 are fast because of their architecture, not because they are 64-bit.
 
Re: About AltiVec...

Originally posted by P-Worm
Ok, so AltiVec is like a 64 bit emulator right? Or am I just getting my numbers screwed up?

P-Worm

Uh, yeah. AltiVec is a vector unit, so it works with either 4 32-bit numbers, 8 16-bit numbers, or 16 8-bit numbers. Thus, IF your code can be vectorized (BIG if), then AltiVec can in theory crunch between 4 and 16 times the number of integers per clock cycle that a single scalar integer unit can. That is why it is "up to a 4 to 16 times speedup" on code that can be vectorized.

The fact that PPC 970's integer units are 64 bit just means that they can crunch one 64 bit number per clock cycle. But it is still just one. So unless you needed to use big 64 bit integers (and most code doesn't), then it will not be inherently faster than a 32 bit integer unit, because both of them still process only ONE number per cycle. In contrast, Altivec can theoretically process between 4 to 16 numbers per cycle.

Nevertheless, the PPC 970 will still be MUCH faster than the G4 on a per-clock-cycle basis, but that will be because it is a much better designed chip, not because it is 64-bit per se. The upside of this is that you will NOT need to see software recoded as 64-bit in order to see an immediate performance gain when using the PPC 970.
 
Originally posted by ffakr

If you require operations with 32bit precision math, it doesn't matter whether you use a 32bit, 64bit, or 256bit processor, and it doesn't matter whether you use a 32bit, 64bit, or 256bit OS... it will still only fetch 32bit words!!!

I agree with you in principle, but something just nagged in the back of my brain, thus:

The 970 does not have a single 64 bit data bus (each direction) feeding it, but 2x32bit ones. It also has not one, but 2 load/store units, 2 fixed point units, 2 floating point units, and 1 simd unit - that just happens to have 2 sub-units.

Now, I know next to nothing about proc design so these are just my thoughts.

What I do know is that every other chip I've seen data on has a single data bus (64-bit procs like the AMD have one 64-bit data bus, and even the POWER4 does too), they all have 1 floating-point unit, and I've never heard of a SIMD unit that has 2 sub-units.

Is it possible that the reason IBM chose 2x 32-bit buses rather than 1x 64-bit (which must be less efficient for 64-bit data reads/writes?) is that it >can< fetch and send two 32-bit words each cycle?

Just a thought, just a thought.

Flame proof suit is on! Go for it boys (and girls).

a.
 
Memory bandwith

I kind of wish Motorola could have gotten DDR support onto the 7400-series FSB. Unlike many here, I am not a G4 basher, except for that FSB. AltiVec is so bottlenecked by it. Final Cut Pro rendering and Photoshop filters could be so much faster if the vector registers (and/or the cache) didn't have to wait so long to refill from main memory. Maybe we'd be seeing IBM and Moto having some competition for one another if that had happened. (coulda', shoulda', woulda')
 
Re: Wild speculation

Originally posted by apemn88
Here is the marketing strategy: Runs better on 64 bit (2x faster)
Why does everyone seem to perpetuate the myth that a 64-bit OS is twice as fast as a 32-bit OS? This is absurd. You gain a small performance benefit from having 64-bit registers for wide integer math, but the performance benefit is negligible.

The number one (and almost only) benefit of a 64-bit OS is being able to address more than 4GB of memory.

That's it... 64-bit is not some holy grail (well, it is if you need that much memory), and most apps won't even benefit from it at all.

Where it does come in handy is that pretty soon most desktops will have 8GB or 16GB of RAM. It will be nice to be able to use all of that memory without resorting to some funky PAE extensions.
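
As a concrete (if hypothetical) sketch of what flat 64-bit addressing buys you, the snippet below maps an entire multi-gigabyte file in one piece - something a 32-bit process cannot do regardless of how much physical RAM (PAE or not) is in the box. The file name "capture.raw" is just a made-up example:

#define _FILE_OFFSET_BITS 64      /* make off_t 64-bit even on 32-bit builds */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "capture.raw";            /* hypothetical big capture file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* One flat mapping of the whole file. With 64-bit pointers this works
       for files far beyond 4 GB; a 32-bit process would have to window
       through the file a chunk at a time instead. */
    unsigned char *data = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    printf("last byte: %u\n", (unsigned)data[st.st_size - 1]);

    munmap(data, (size_t)st.st_size);
    close(fd);
    return 0;
}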
 