
View Full Version : Apple "has stuff to blow away intel"




bones
Feb 17, 2003, 07:50 PM
Too much work to submit a story. :)

See:
http://radio.weblogs.com/0001011/2003/02/17.html#a2298
I've had some sneaks behind the scenes (not official ones, though). Apple has some cool stuff coming this year to be sure -- including some desktop machines that are outperforming current Intel stuff.

This is from a diehard MS stalwart and apologist. Just read some of his other stuff. If HE is saying this, I believe it.



mac15
Feb 17, 2003, 08:11 PM
Will we see the 970s this year? My bet is 100% YES.

nuckinfutz
Feb 17, 2003, 08:33 PM
Hmmmmmmm, a dual PPC 970 would be just the high-end ticket

wolfywolfbits
Feb 17, 2003, 09:07 PM
My bet is that Apple has some kind of blowing device, like one of those leaf buster things.

Kid Red
Feb 17, 2003, 09:14 PM
He said stuff, in addition to desktops. So wow, I can't wait to see what Apple has up its sleeve. New tower case w/970, new iMac with new case, new iPod member, maybe new iBooks too?

MrMacMan
Feb 17, 2003, 09:48 PM
Well, Apple released their nice laptops, so I can only think the desktops are next.

And I hope the 970 will only be the beginning. Please. :D

richie
Feb 17, 2003, 10:05 PM
Originally posted by wolfywolfbits
My bet is that Apple has some kind of blowing device, like one of those leaf buster things.

IMO, a high pressure water cleaner would be more useful. And they've already got the noise-production (with the Windtunnel G4s) right up there on par with the competition ;)

cr2sh
Feb 17, 2003, 11:11 PM
Wouldn't it be phat if the 970 was released in the PowerBook first...
People would clamor to buy, and then when the desktops were released they'd go crazy spending again.

Ahh well, I never thought I'd have to wait this long for a "G5."

prewwii
Feb 17, 2003, 11:20 PM
.... since 1993. That was the year I said that when the Mac had a machine faster than a PC across the board (forget those cooked-up Photoshop demos), I would buy a new machine. I have picked up a couple of used machines since then.

With the recent turn of events and PC performance moving further and further ahead of the Mac, I am wondering if I will be on Social Security before the Mac even equals its relative performance of 1990, when a Mac was about as fast as a PC for a few weeks. I may have bought my last new Mac if I remain true to my statement that I will buy a new Mac when a Mac is faster than a PC across the board.

What I read about the 970 says it will return the Mac to the same relative performance, against current PCs, that the Mac had in 1990. That, folks, is not progress.

If it's not 10x, it's not noticeable... it's a milking machine for moving money from our pockets to Apple while we're trying to figure out whether the emperor has his clothes on.

Catfish_Man
Feb 17, 2003, 11:25 PM
Originally posted by prewwii
.... since 1993. That was the year I said that when the Mac had a machine faster than a PC across the board (forget those cooked-up Photoshop demos), I would buy a new machine. I have picked up a couple of used machines since then.

With the recent turn of events and PC performance moving further and further ahead of the Mac, I am wondering if I will be on Social Security before the Mac even equals its relative performance of 1990, when a Mac was about as fast as a PC for a few weeks. I may have bought my last new Mac if I remain true to my statement that I will buy a new Mac when a Mac is faster than a PC across the board.

What I read about the 970 says it will return the Mac to the same relative performance, against current PCs, that the Mac had in 1990. That, folks, is not progress.

If it's not 10x, it's not noticeable... it's a milking machine for moving money from our pockets to Apple while we're trying to figure out whether the emperor has his clothes on.

You missed it. The 604e 350MHz was the fastest desktop chip in the world for its time. First to 350MHz too. The original beige G3 also beat the Pentium IIs of its time, just as the G4 did with the P3 600-700 (of course, it started out 500MHz G4 vs 600MHz P3, and quickly became dual 500MHz G4 vs. 1000MHz P3).

rainman::|:|
Feb 17, 2003, 11:30 PM
Yeah, it's happened a couple of times. In portables it's more frequent... they've had the fastest processors before; I remember drooling over them.

pnw

prewwii
Feb 18, 2003, 12:00 AM
Originally posted by Catfish_Man
You missed it. The 604e 350MHz was the fastest desktop chip in the world for its time. First to 350MHz too. The original beige G3 also beat the Pentium IIs of its time, just as the G4 did with the P3 600-700 (of course, it started out 500MHz G4 vs 600MHz P3, and quickly became dual 500MHz G4 vs. 1000MHz P3).

Fast at floating point, not integer. I did database work for years and would test the Mac against a PC a lot. The Mac hasn't won since 1990.

There is a paper by Hannibal at Ars Technica that compares the integer hardware in the PowerPC and the Pentium series. Pentiums are optimized for integer operation, have a greater number of integer units, and are more capable than the PowerPC approach. Without a doubt there are some floating-point situations where a PowerPC is king; there are none in the integer world.

macphoria
Feb 18, 2003, 02:37 AM
What about an Apple PDA? Or a Tablet-like device?

Personally, I would like to see a portable storage device from Apple. The Zip disk is no longer what it used to be. Burning CDs just isn't very efficient. Portable external hard drives are good (including the iPod), but they tend to be expensive and you must carry cables around. Small USB keychain devices are cool, but they tend to be small in capacity and hefty in price. I wonder if Apple can come up with a nice compromise among these devices? Maybe a wireless portable flash memory storage device?

jettredmont
Feb 18, 2003, 07:30 AM
Originally posted by bones
Too much work to submit a story. :)

See:
http://radio.weblogs.com/0001011/2003/02/17.html#a2298


This is from a diehard MS stalwart and apologist. Just read some of his other stuff. If HE is saying this, I believe it.

IMHO, this is coming from a bona-fide blowhard. His weblog also states that Microsoft has stuff that blows anything Apple (or OpenSource/Linux) has in development away (it would be nice if his blog had dates in it along with the times ...), and that he has been an Apple user since 1988.

Seems to me, anyway, that he is more interested in publicity than accuracy.

MorganX
Feb 18, 2003, 08:50 AM
Originally posted by jettredmont
IMHO, this is coming from a bona-fide blowhard. His weblog also states that Microsoft has stuff that blows anything Apple (or OpenSource/Linux) has in development away (it would be nice if his blog had dates in it along with the times ...), and that he has been an Apple user since 1988.

Seems to me, anyway, that he is more interested in publicity than accuracy.

I'd have to agree with that. Microsoft isn't going to deviate very far from the corporate image and branding it has created with Windows.

Performance-wise, the Wintel world will still have to improve performance with brute force. Windows and Office must remain bloated for compatibility. That's why, even with much faster processors and bandwidth, the only time a PC shows its stuff is when it's playing games or running a non-Windows OS. Even MS's database performance is achieved through brute force. Fortunately for Microsoft, the hardware is still cheaper than the competitors', which ends up giving MS the edge.

timbloom
Feb 18, 2003, 09:28 AM
I really have serious doubts that this guy knows anything more than we do about upcoming products.
The 970 will just plain clown on other desktops. I just hope we actually get to use it before we're all too tired of waiting.

yosoyjay
Feb 18, 2003, 11:00 AM
I'm just going to consider this a joke for a number of reasons. Now I'm going to laugh my way to the shower.

GPTurismo
Feb 18, 2003, 11:47 AM
Originally posted by prewwii
Fast at floating point, not integer. I did database work for years and would test the Mac against a PC a lot. The Mac hasn't won since 1990.

There is a paper by Hannibal at Ars Technica that compares the integer hardware in the PowerPC and the Pentium series. Pentiums are optimized for integer operation, have a greater number of integer units, and are more capable than the PowerPC approach. Without a doubt there are some floating-point situations where a PowerPC is king; there are none in the integer world.

Same reason Suns were better database systems than Linux/Intel boxes at the time, if you don't add the variable of price ;)

sparks9
Feb 18, 2003, 01:35 PM
Like that is ever going to happen ... :rolleyes:

edenwaith
Feb 18, 2003, 03:30 PM
Does this mean Apple computers are 'supposedly' going to catch up, or get some increase to truly blow away Intel boxes?

Or do they mean Apple will blow them away on more Photoshop tests?

Or perhaps they are looking at some dual 1.8 GHz machines. Or maybe quad-processors, running at 1.8 GHz apiece. Now, that would be cool.

As an odd side note, it seems silly for M$ to make so many different versions of Windows. When W2K came out, I think there was something like 5 different Windows versions out there, not including Win98 and earlier. OS X is so much easier. There is OS X, useful for the casual and power user, and then there is OS X Server. But hey, even that isn't necessary if you have regular OS X! From what I've seen and read, OS X Server seems to add some useful tools to make managing a network even easier.

MacCoaster
Feb 18, 2003, 03:53 PM
This is ************. He's had sneak previews, but unofficial ones...

Probably just to grab attention, AND probably based on the 970 rumors.

It's as official as Apple using the 970 -- 100% unconfirmed. Perhaps THAT is the "unofficial" reason. :rolleyes:

I won't believe this guy until he has hard evidence and I won't believe things until they are announced.

Edit: What if the previews were only software? Maybe he was just basing his opinion of the hardware ON the rumors of the 970 and passing them off as fact--which might be SEPARATE from the things he saw. His blog entry is way too broad and not specific enough.

trebblekicked
Feb 18, 2003, 05:48 PM
Originally posted by MacCoaster
This is ************...
It's as official as Apple using the 970 -- 100% unconfirmed. Perhaps THAT is the "unofficial" reason. :rolleyes:

I won't believe this guy until he has hard evidence and I won't believe things until they are announced.

That's why it's here on Page 2, where it belongs.
But hey, it is a rumor site, after all. :)

madamimadam
Feb 18, 2003, 07:58 PM
I notice that no one pointed out that the UPCOMING Apple stuff will blow away CURRENT Intel.

That is really not a feat... it is great when UPCOMING Apple beats UPCOMING Intel or CURRENT Apple beats CURRENT Intel... otherwise, what is the point of saying anything?

Apples and Apples, Oranges and Oranges.

bousozoku
Feb 18, 2003, 08:34 PM
Originally posted by prewwii
Fast at floating point not integer. I did database work for years and would test the Mac against a PC a lot. A Mac never won since 1990.

There is a paper by Hanibal at Arstechnica that talks about the integer processor on the PowerPC and the Pentium series. Pentium's are optimized for integer operation and have a greater number of integer processors and they are a more capable than the PowerPC approach. Without a doubt there are some floating point situations where a PowerPC is king, none in the integer world.

The 604e had 3 separate integer units while the Pentium II and Pentium III had one. The 604e did outstrip the Pentiums of the time in processor throughput--integer and floating point. Unfortunately, the G3 and G4 put an end to that with Motorola's power dissipation changes favoring economy and their flirtation with being the first to produce a desktop processor with a vector-processing (SIMD) unit. (As everyone knows, it's much better to have a vector-processing unit that's almost unused than real speed.)

Database work depends on I/O throughput more than the processor unless the database you're accessing completely fits into memory and doesn't cause any auxiliary disk I/O.

Btw, what happened in 1990?

prewwii
Feb 18, 2003, 09:21 PM
Originally posted by bousozoku
Btw, what happened in 1990?

About 1990 we were doing a shop floor control and circuit test project using National Instruments' LabVIEW (one of the finest pieces of software I have ever used) and ACIUS 4D. Both packages were developed on the Mac and came out with a Wintel version about 1990...91. In both cases the Wintel version outperformed the Mac version.

I am not sure of the exact date when the PPC 601 came on the scene; I think it was 1990 or '91. When that happened, not many applications ran native, and they were slowed down even more by the emulation mode. FileMaker Pro went to version 3 and slowed down, as it has with each successive version since. I/O, as you pointed out, had a lot to do with that.

As I remember, there was a version of the 68040 that could beat up on an early 486. I had a 225 PowerTower (my last new computer) with a 604e that could beat up on a few 266 MHz Pentiums. That lasted for a week or two.

I drive an 867 MHz QuickSilver, which is a nice machine. Not a rocket, though. I have been in computers since the late '50s and I want one rocket before I quit. When Apple makes a burns-them-all machine, I will buy it just to see it scroll fast, and then will it to my kids.

marcsiry
Feb 18, 2003, 10:11 PM
I am not sure of the exact date when the PPC 601 came on the scene; I think it was 1990 or '91. When that happened, not many applications ran native, and they were slowed down even more by the emulation mode. FileMaker Pro went to version 3 and slowed down, as it has with each successive version since. I/O, as you pointed out, had a lot to do with that.


The PPC601 first appeared in the Mac desktop line in the PowerMac 6100, released in March 1994:

http://www.lowendmac.com/ppc/6100.shtml

That's quite some time after 1990. Perhaps you're thinking of some other processor shift, like the 68020 to 68030, or 030 to 040?

MisterMe
Feb 18, 2003, 11:30 PM
Originally posted by prewwii
About 1990 we were doing a shop floor control and circuit test project using National Instruments' LabVIEW (one of the finest pieces of software I have ever used) and ACIUS 4D. Both packages were developed on the Mac and came out with a Wintel version about 1990...91. In both cases the Wintel version outperformed the Mac version.
.... As someone who actually used LabVIEW on Windows in that era, I can tell you it was dog slow. You must remember that this application ran very well on a Mac Plus, but it was a snail on Intel.

prewwii
Feb 19, 2003, 12:13 AM
Originally posted by MisterMe
As someone who actually used LabVIEW on Windows in that era, I can tell you it was dog slow. You must remember that this application ran very well on a Mac Plus, but it was a snail on Intel.

Shows to go ya how poor my memory is. I worked on that project from 1990 through 1993 and could not remember exactly when the PPC came out. I lost my parents in 1993 and dropped out for a year or so.

We used some hard-to-find IIx's for that project because we needed the slots for cards, and the new Macs (IIci's, I think) only had 3 slots. I bought a Centris (one of the first in the Twin Cities) in that era and could not remember how long after that the PPC era started. It didn't seem long before we were seeing Apple dealer demos of the upcoming PPC. In those days it didn't take much to be a value-added reseller. That's another story.

There were two National Instruments products at that time, LabWindows and LabVIEW. LabWindows was a dog. My experience with the PC version of LabVIEW was limited until just before version 4. By then there was a clear advantage on the PC, even for ease of use, because of the two-button mouse.

We used early LabVIEW versions 2 through 3 for the shop control project. If you remember, when version 3 came out it was somewhat crippled compared to the Mac-only version 2 because of the limits of Windows, yet version 3 had many new features. We were a beta site for LabVIEW in that era.

But I digress... the point is that the Mac has been behind the performance curve, with a few exceptions, for a long time. If the SPEC numbers on the 970 are accurate, it will only bring us back to those times in the mid 1990s when a Mac was about as fast as a PC, excluding the Photoshop bake-offs. With the best of spins, that's not much progress.

barkmonster
Feb 19, 2003, 05:45 AM
Unfortunately, the G3 and G4 put an end to that with Motorola's power dissipation changes favoring economy and their flirtation with being the first to produce a desktop processor with a vector-processing (SIMD) unit. (As everyone knows, it's much better to have a vector-processing unit that's almost unused than real speed.)

Actually, the Pentium II and even the K6 had SIMD extensions LONG before the G3 came out, let alone the first G4 chips with AltiVec. AltiVec was superior to both MMX and 3DNow!, and even to SSE when the Pentium III came out, but the Pentium 4 has SSE2, which might take 2 clock cycles instead of AltiVec's 1 but is double precision, unlike AltiVec.

It was around mid 1997 that the Power Mac 9600 with the 350 MHz 604e was out, so it's not like the PC didn't take its time catching up with the Mac.

Imagine how it would have been if Motorola hadn't been stuck at 500 MHz for so long.

We would have seen the 1.42 GHz G4s come out at the same time as Intel rolled out the 2 GHz Prescott chips. It would have been a double blow for Intel if it had been in the same lineup as now, with the top 2 models sporting dual CPUs.

I just hope the PPC970 being in the next range of Power Macs is worth forcing everyone into an OS X-only world.

Personally speaking, I'm just not ready to dump my Mac just yet to have a more recent model running an OS that offers incredible speed and stability but offsets it with incredible bloat and RAM requirements.

If it means I can finally buy a Mac for 1500 or less that matches an Athlon XP or Pentium 4 Northwood costing 1000, I think I'd gladly stay beige for a while and wait it out. OS 9 isn't too shabby, really; at least it runs on my Mac.

GeneR
Feb 19, 2003, 04:27 PM
They should come out with something called the iShovel -- to deal with all of the poop and unsubstantiated claims and rumors.

That said, I hope it's correct, although I've gotten a bit skeptical of late about the processors outperforming Intel. The 970 sounds GREAT. But I really think we needed it, like, last year?

Okay. I'm feeling sore right now. (Or maybe I'm just hungry... :D)

Go Apple! Shish boom bah! Rah! Rah! Ya! Ya! ha ha
Ah---!

:p

tumbleweed
Feb 19, 2003, 04:51 PM
I have no doubt that a 970 system (especially a dualie) will outperform _current_ Intel-based systems. But will it outperform THEN-current systems? I doubt it muchly, based on the estimated SPEC marks already published. And when you compare it to an AMD Hammer-based system? Not gonna happen.

Nevertheless, a 970-based Apple (especially a sweet, sweet dualie system) will be PLENTY fast for anyone. I'm more than willing to take a reasonable hardware hit to be able to use OS X, thanks, and I won't be at all regretful.

imaswitcheryeah
Feb 19, 2003, 05:14 PM
If the 970 will be 64-bit, then what 32-bit Pentium 4 will match its speed, or vice versa? Now there is a comparison.

Will 64-bit closely "double" the speed? "Double" efficiency?

I know the 970 is supposed to top out at 1.8 GHz, but let's take a single 2.0 GHz 970 for example... Could that mean it can keep up with at least a 3.06 GHz P4? Could it effectively be the equivalent of a 3.6-4.0 GHz 32-bit processor? (Of course I'm not including optimizations like SSE2 or AltiVec.) And what about AltiVec? If the former were true, can the addition of AltiVec optimization really bring the 970 to the top of the hill? Will SSE2 optimization for P4s keep it a close second, or keep its crown? What about all these factors?

Bring it on....

madamimadam
Feb 19, 2003, 05:39 PM
Originally posted by imaswitcheryeah
If the 970 will be 64-bit, then what 32-bit Pentium 4 will match its speed, or vice versa? Now there is a comparison.

Will 64-bit closely "double" the speed? "Double" efficiency?

I know the 970 is supposed to top out at 1.8 GHz, but let's take a single 2.0 GHz 970 for example... Could that mean it can keep up with at least a 3.06 GHz P4? Could it effectively be the equivalent of a 3.6-4.0 GHz 32-bit processor? (Of course I'm not including optimizations like SSE2 or AltiVec.) And what about AltiVec? If the former were true, can the addition of AltiVec optimization really bring the 970 to the top of the hill? Will SSE2 optimization for P4s keep it a close second, or keep its crown? What about all these factors?

Bring it on....

I am glad there are finally others who realise that you cannot compare Peaches and Mangos.

64-bit will not do ANYTHING in day-to-day life... at least until the OS moves to 64-bit. I really do not see that a 64-bit version of Word or Internet Explorer will make any difference unless you are one of those people who tests machines by scrolling hundreds of pages... even then, would 64-bit make a difference???

What is going to matter is the efficiency of the chip at those "32-bit" levels we work at now, until consumers find a use for the 64-bit architecture. I would imagine that one of the first areas to benefit in the consumer market would be games. I could see a potential for processing more factors of a game at the same time. Obviously, a great deal of what happens in a game requires each bit of information to be processed in order, but so many more factors in games these days can be processed at the same time, and I think this is where 64-bit will excel. Probably more on graphics cards than CPUs, though.

bretm
Feb 19, 2003, 06:51 PM
Originally posted by Catfish_Man
You missed it. The 604e 350MHz was the fastest desktop chip in the world for its time. First to 350MHz too. The original beige G3 also beat the Pentium IIs of its time, just as the G4 did with the P3 600-700 (of course, it started out 500MHz G4 vs 600MHz P3, and quickly became dual 500MHz G4 vs. 1000MHz P3).

That would be a 350 MHz G4. The first line of G4s didn't even include a 500 MHz model; it was 350, 400, and 450.

I have the 350. It still slams my roommate's 1.3 GHz Wintel box.

nuckinfutz
Feb 19, 2003, 06:58 PM
Originally posted by bretm
That would be a 350 MHz G4. The first line of G4s didn't even include a 500 MHz model; it was 350, 400, and 450.

I have the 350. It still slams my roommate's 1.3 GHz Wintel box.

No, he means the 604e, a la the 9600/350.

http://www.lowendmac.com/ppc/9600.shtml

barkmonster
Feb 19, 2003, 07:10 PM
I have the 350. It still slams my roommate's 1.3 GHz Wintel box.

:D :rolleyes: :p ;) :D :rolleyes: :mad: :eek: :confused: :D

"We're Ready to believe you" - Ghostbusters (1984)

ddtlm
Feb 19, 2003, 11:39 PM
madamimadam:

I really do not see that a 64-bit version of Word or Internet Explorer will make any difference unless you are one of those people who tests machines by scrolling hundreds of pages... even then, would 64-bit make a difference???
No, 64-bit is not beneficial here.

I would imagine that one of the first areas to benefit in the consumer market would be games. I could see a potential for processing more factors of a game at the same time.
64-bit is almost certainly not beneficial here either, for many reasons.

One: You see, while it's true that a 64-bit computer can easily work with 64-bit integers whereas a 32-bit computer can only easily work with 32-bit integers, each integer is still only a single number. You've simply spent twice as many bits storing it. Is that really better? Only if you are trying to store something that won't fit in 32 bits (see the sketch at the end of this post).

Two: Games are typically floating-point intensive, and both 32-bit and 64-bit computers support 64-bit floating-point (aka double-precision) math.

Three: If the idea is to process data in 64-bit chunks instead of 32-bit, as loosely as "process" is being used, why not use AltiVec? It "processes" in 128-bit chunks.
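To make point one concrete, here's a minimal C sketch (the 6 GB file size is just an invented illustration) of a value that fits comfortably in 64 bits but silently wraps in 32:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* A value too big for 32 bits: the byte count of a 6 GB file. */
    uint64_t big = 6442450944ULL;   /* 6 * 1024^3 needs 33 bits    */
    uint32_t small = (uint32_t)big; /* a 32-bit int silently wraps */

    printf("as 64-bit: %llu\n", (unsigned long long)big); /* 6442450944 */
    printf("as 32-bit: %u\n", small);                     /* 2147483648 */
    return 0;
}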

madamimadam
Feb 19, 2003, 11:57 PM
Originally posted by ddtlm
madamimadam:


No, 64-bit is not beneficial here.


64-bit is almost certainly not beneficial here either, for many reasons.

One: You see, while it's true that a 64-bit computer can easily work with 64-bit integers whereas a 32-bit computer can only easily work with 32-bit integers, each integer is still only a single number. You've simply spent twice as many bits storing it. Is that really better? Only if you are trying to store something that won't fit in 32 bits.

Two: Games are typically floating-point intensive, and both 32-bit and 64-bit computers support 64-bit floating-point (aka double-precision) math.

Three: If the idea is to process data in 64-bit chunks instead of 32-bit, as loosely as "process" is being used, why not use AltiVec? It "processes" in 128-bit chunks.

Points one and two understood; my apologies.

Point three has the problem that many cool programs are only very basically ported to the Mac, so they won't use AltiVec. Also, assuming my theory was right (which you proved wrong), AltiVec would be no good because it processes 4x32-bit chunks, and I was looking at the advantage of having 64-bit strings of data.

ddtlm
Feb 20, 2003, 12:20 AM
madamimadam:

No need to apologise for anything. :)

Point three has the problem that many cool programs are only very basically ported to the Mac, so they won't use AltiVec.
True, but will they be programmed to use 64-bitness either? I bet not. Outside of working with huge single values, there really isn't much that 64-bitness does for number-crunching. Macs have AltiVec and PCs have SSE/SSE2 ... essentially no one is going to use a 64-bit integer for doing anything besides storing single, huge values... which is generally not useful.

madamimadam
Feb 20, 2003, 12:36 AM
Originally posted by ddtlm
madamimadam:

No need to apologise for anything. :)


True, but will they be programmed to use 64-bitness either? I bet not. Outside of working with huge single values, there really isn't much that 64-bitness does for number-crunching. Macs have AltiVec and PCs have SSE/SSE2 ... essentially no one is going to use a 64-bit integer for doing anything besides storing single, huge values... which is generally not useful.

I see it like everything else like this in the computer world: when it comes out, only a few can use it, but soon there will be so many uses that people will be talking about when 128-bit processors will be out, even though, at the time, they could not possibly think of a true use for them.

ddtlm
Feb 20, 2003, 12:46 AM
madamimadam:

I see it like everything else like this in the computer world: when it comes out, only a few can use it, but soon there will be so many uses that people will be talking about when 128-bit processors will be out, even though, at the time, they could not possibly think of a true use for them.
64-bitness has been out for years, and there is no push to move to 128-bit. 32-bit is almost enough for everything, 64-bit is more than enough, and that will remain so for quite a while. Perhaps you should think about just how big 64 bits is. It can store a number not twice as big as 32 bits can, not four times as big, but 4 billion times as big. 64 bits can store a number equal to more than 4 billion squared (2^64 - 1). I don't know how to say that number, but anyway it's something like 20 digits long. It is far beyond comprehension.
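For the curious, the actual figures are easy to print; 2^64 - 1 works out to 18,446,744,073,709,551,615, which is indeed 20 digits. A minimal C sketch, nothing more:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    printf("max 32-bit unsigned: %" PRIu32 "\n", (uint32_t)UINT32_MAX); /* 4294967295: 10 digits           */
    printf("max 64-bit unsigned: %" PRIu64 "\n", (uint64_t)UINT64_MAX); /* 18446744073709551615: 20 digits */
    return 0;
}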

jettredmont
Feb 20, 2003, 12:00 PM
Originally posted by ddtlm

One: You see, while it's true that a 64-bit computer can easily work with 64-bit integers whereas a 32-bit computer can only easily work with 32-bit integers, each integer is still only a single number. You've simply spent twice as many bits storing it. Is that really better? Only if you are trying to store something that won't fit in 32 bits.


No; of course, it is likely to be much worse if you just blindly change your 32-bit numbers to 64-bit numbers when you don't need to, because then you're (1) using twice as much memory (and cache) storing your data and (2) using twice as much memory-to-CPU bandwidth popping those numbers in and out of memory. You would think that programmers wouldn't use 64-bit ints when 32-bit ints would do, especially not for processor-intensive code (or code that intermingles with processor-intensive code, hence bumping everything out of cache). However, experience with 32-bit processors shows that far too many programmers use long (32-bit) ints when a short (16-bit) or even a byte (8-bit) would have done quite well, as the sketch just below illustrates.
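A quick C illustration of that cost (the array size and the pixel-data scenario are arbitrary, chosen only for scale):

#include <stdio.h>
#include <stdint.h>

#define COUNT 1000000

int main(void) {
    /* One million brightness values (each 0-255) stored two ways. */
    static uint8_t  narrow[COUNT];  /* ~1 MB: fits the data exactly           */
    static uint32_t wide[COUNT];    /* ~4 MB: same data, 4x the cache traffic */

    printf("uint8_t array:  %lu bytes\n", (unsigned long)sizeof narrow);
    printf("uint32_t array: %lu bytes\n", (unsigned long)sizeof wide);
    return 0;
}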


Two: Games are typically floating-point intensive, and both 32-bit and 64-bit computers support 64-bit floating-point (aka double-precision) math.


Well, generally speaking, many floating-point activities could actually be done using 64-bit integers, and so a "good" programmer would intermix floating-point operations with long-long int operations to keep both pipelines full at all times. Granted, as above, *most* programmers don't pay enough attention to such details, but having a single-op 64-bit integer unit there alongside your screaming FPU doubles your ability to streamline bottleneck code.

Of course, again, if the memory bandwidth isn't up to snuff (as is the case on the G4), no matter how well you pipeline int/FP instructions on the chip, you're still constrained by the latency and throughput of pulling those bits from main memory.


Three: If the idea is to process data in 64-bit chunks instead of 32-bit, as loosely as "process" is being used, why not use AltiVec? It "processes" in 128-bit chunks.

Exactly. The AltiVec unit is rarely logjammed on current apps (and certainly not on current games), and is a great way to process 8 shorts or 16 bytes at a time (assuming you want to apply the same operation to all of them). Using a 64-bit int register to do this instead of a SIMD instruction set is much less efficient, in that you have to handle overflow conditions (i.e., if you have 8 bytes that you are incrementing and one of those bytes held an unsigned 255, incrementing it will push it to 0 and double-increment the byte next to it), which robs you of the efficiency you were trying to get by operating on a multi-byte int in the first place. Unless, of course, you "know" that you can never have overflow conditions, in which case the 64-bit int register can "stand in for" a half-sized AltiVec register if your bottleneck is actually in the AltiVec unit. (The same memory arguments apply as before: >90% of the time on a Mac the bottleneck is getting to memory, NOT the CPU or its pipelining! The 970 helps with this via a much faster CPU-memory bus, but you also have to remember that you will likely have much more data being shoved through that bus, simply because 64-bit ints will often be used where 32-bit ones would have done.)
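Here's a tiny C sketch of that overflow hazard (the packed values are arbitrary, purely to demonstrate the carry):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Eight unsigned bytes packed into one 64-bit integer; the lowest
       byte holds 255, the other seven hold 0. */
    uint64_t packed = 0x00000000000000FFULL;

    /* Try to add 1 to every byte at once by adding 0x01 in each lane. */
    packed += 0x0101010101010101ULL;

    /* The low byte wraps to 0x00 and its carry spills into the next
       byte, which ends up as 0x02 instead of 0x01 -- exactly the
       double-increment problem described above. */
    printf("result: 0x%016llx\n", (unsigned long long)packed);
    /* prints: result: 0x0101010101010200 */
    return 0;
}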

A 64-bit processor offers hugely expanded memory capacity, and the ability to do integer math in many cases where previously only floating point (double precision, at that) would work. The 64-bit processor also allows more efficient use of 64-bit ints where they are required, as in, for example, database operations. All current processors can do 64-bit math, but not terribly efficiently; gcc and CodeWarrior offer the "long long" data type for this purpose, while MS VC++ offers "_int64". Incrementing a 64-bit int on a 32-bit processor is a three-cycle process instead of the single-cycle process it would be on a 64-bit processor, and right-shifting a 64-bit int on a 32-bit processor is something like 5 cycles instead of one (left-shifting is incredibly inefficient on Intel chips as it is, and even more so when you are dealing with a 64-bit int). 64-bit integers will not help at all on data which is naturally 32, 16, or 8 bits per discrete chunk (for instance, strings, which even in Unicode come in 16-bit chunks).
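To see why a 64-bit shift takes several steps on a 32-bit machine, here's roughly the dance a compiler has to emit for a "long long" shift, sketched in C (a simplified illustration handling only a shift by one):

#include <stdio.h>
#include <stdint.h>

/* Right-shift a 64-bit value held as two 32-bit halves by one bit --
   several dependent 32-bit operations standing in for what a 64-bit
   CPU does in a single instruction. */
static void shr64_by_1(uint32_t *hi, uint32_t *lo) {
    *lo = (*lo >> 1) | (*hi << 31); /* bit 32 falls into the low word */
    *hi = *hi >> 1;
}

int main(void) {
    uint32_t hi = 0x00000001, lo = 0x00000000; /* the value 2^32 */
    shr64_by_1(&hi, &lo);
    printf("0x%08lx%08lx\n", (unsigned long)hi, (unsigned long)lo); /* 0x0000000080000000 */
    return 0;
}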

jettredmont
Feb 20, 2003, 12:11 PM
Originally posted by jettredmont
Using a 64-bit int register to do this instead of a SIMD instruction set is much less efficient, in that you have to handle overflow conditions (i.e., if you have 8 bytes that you are incrementing and one of those bytes held an unsigned 255, incrementing it will push it to 0 and double-increment the byte next to it), which robs you of the efficiency you were trying to get by operating on a multi-byte int in the first place.

Sorry about the self-reply, but it's easier than editing :)

I forgot one important fact about AltiVec: data must be aligned on 128-bit (16-byte) boundaries in memory for loading/unloading into AltiVec registers (otherwise you take multiple operations to load each chunk), which is not a requirement for 64-bit ints. That might be the boundary case where using a 64-bit int instead of AltiVec to handle 8 (unaligned) bytes of data from memory comes out ahead. Of course, if you have a whole stream of such data, it's only the first (<16) bytes that need to be handled in an unaligned manner; after that you can use AltiVec. It's unlikely that the extra code to special-case those first bytes costs enough in cache memory to outweigh the gain, so unless you're doing AltiVec operations on unaligned arrays of fewer than 16 bytes, AltiVec is still worth it. Another drawback to AltiVec is that if your code uses it there is a bit of extra framing setup involved which wouldn't be necessary for 64-bit code, and AltiVec data can't be pushed directly over to a non-AltiVec register for further processing without going through cache and potentially main memory -- which, again, can make small operations more efficient in 64-bit integer code than with AltiVec ...
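A minimal C sketch of the alignment test involved (the helper name is made up; the 16-byte figure matches the vector unit's load granularity):

#include <stdint.h>

/* Nonzero if p sits on a 16-byte boundary, i.e. a single aligned
   128-bit vector load can grab it directly. Any unaligned leading
   bytes would need a scalar prologue (or extra permute work) first. */
static int is_vec_aligned(const void *p) {
    return ((uintptr_t)p & 0xF) == 0;
}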

madamimadam
Feb 20, 2003, 05:49 PM
Originally posted by ddtlm
madamimadam:


64-bitness has been out for years, and there is no push to move to 128-bit. 32-bit is almost enough for everything, 64-bit is more than enough, and that will remain so for quite a while. Perhaps you should think about just how big 64 bits is. It can store a number not twice as big as 32 bits can, not four times as big, but 4 billion times as big. 64 bits can store a number equal to more than 4 billion squared (2^64 - 1). I don't know how to say that number, but anyway it's something like 20 digits long. It is far beyond comprehension.

I do know how big 128 bits is, but I choose not to be naive... remember how many people paid out Gates over his RAM comments years and years ago? There WILL be a use for 128-bit processors, and it would not surprise me if it only takes as long as the move from 32-bit to 64-bit is taking.

Chisholm
Feb 20, 2003, 10:52 PM
Originally posted by richie
IMO, a high pressure water cleaner would be more useful. And they've already got the noise-production (with the Windtunnel G4s) right up there on par with the competition ;)

I wonder if it'll be like some kinda osmosis water filtration type device. Maybe I could purify my whole home water system through my Mac's cooling device, you know, in case the duct tape over the door threshold thing doesn't protect against a dirty bomb.

sarcasm, just to be funny, not rude.

cheers!

PyroTurtle
Feb 21, 2003, 12:26 AM
Some Mac-enthusiast company needs to just make a new PPC proc that goes at 10 GHz and has AltiVec, AltiVec 2, and plain brute force with 10 TB of memory bandwidth... then we'd all be happy, right? And we'd take over the world while we're at it, I think...

ddtlm
Feb 21, 2003, 01:11 AM
madamimadam:

I do know how big 128 bits is, but I choose not to be naive... remember how many people paid out Gates over his RAM comments years and years ago?
You are in no position to be declaring whether or not you are being naive.

There WILL be a use for 128-bit processors, and it would not surprise me if it only takes as long as the move from 32-bit to 64-bit is taking.
You are apparently blinded by big numbers and have no appreciation for the actual work being done in programs. A number of things can be handled by integers as small as 8 bits, a whole lot more can be handled by 16 bits, and virtually everything can be done in 32 bits. 64 bits is the point where some integer operations benefit, and the extra memory addressability is useful. Going to 128 bits removes the addressability benefit, since no one is anywhere close to needing 16 billion gigabytes of RAM, and the number of integer ops that benefit is cut down still further.

MacBandit
Feb 21, 2003, 02:40 AM
Originally posted by ddtlm
madamimadam:


You are in no position to be declaring whether or not you are being naive.


You are apparently blinded by big numbers and have no appreciation for the actual work being done in programs. A number of things can be handled by integers as small as 8 bits, a whole lot more can be handled by 16 bits, and virtually everything can be done in 32 bits. 64 bits is the point where some integer operations benefit, and the extra memory addressability is useful. Going to 128 bits removes the addressability benefit, since no one is anywhere close to needing 16 billion gigabytes of RAM, and the number of integer ops that benefit is cut down still further.

You seem to be a little jumpy there, ddtlm. Are you getting enough sleep? Just take a few deep breaths and repeat after me: It's not the end of the world. It's not the end of the world. It's not the end of the world.

By the way, I read madamimadam's comment about the move to 128-bit as a reference to what Gates said.

Originally posted by madamimadam

remember how many people paid out Gates over his RAM comments years and years ago? There WILL be a use for 128-bit processors, and it would not surprise me if it only takes as long as the move from 32-bit to 64-bit is taking.

ddtlm
Feb 21, 2003, 10:13 AM
MacBandit:

Your agreement with madamimadam is irrelevant. Whether either of you can produce a solid argument to back your position is what matters, and so far all I've gotten is hand-wavy "bigger numbers are good" claims.

Why don't you go find an application for 128-bit integers and report back, so we can decide if it is justification for a 128-bit processor?

MacBandit
Feb 21, 2003, 10:18 AM
Originally posted by ddtlm
MacBandit:

Your agreement with madamimadam is irrelevant. Whether either of you can produce a solid argument to back your position is what matters, and so far all I've gotten is hand-wavy "bigger numbers are good" claims.

Why don't you go find an application for 128-bit integers and report back, so we can decide if it is justification for a 128-bit processor?

You still don't get it, do you? This is like pounding nails with my fist.

It's a joke, man. madamimadam was picking on Gates' ineptitude. Gates said that he thought it would only take the time it took to move from 32-bit to 64-bit to move from 64-bit to 128-bit. Gates is such an idiot sometimes.

Do you understand now? No one (at least not me) is saying that there is, or ever will be in the next 500 years of human history, a need for a 128-bit processor.

I love ya, man, but you definitely need to take a step back, take a deep breath, and look at the context of what's being said before jumping down someone's Speedos.

ddtlm
Feb 21, 2003, 04:46 PM
MacBandit:

Do you understand now? No one (at least not me) is saying that there is, or ever will be in the next 500 years of human history, a need for a 128-bit processor.
Actually, madamimadam was predicting just that. To quote him:

There WILL be a use for 128-bit processors, and it would not surprise me if it only takes as long as the move from 32-bit to 64-bit is taking.

Now back to you:
Gates said that he thought it would only take the time it took to move from 32-bit to 64-bit to move from 64-bit to 128-bit. Gates is such an idiot sometimes.
Perhaps I missed something, but wasn't madamimadam talking about Gates' memory quote? I am unaware of any Gates bitness quote.

MacBandit
Feb 22, 2003, 02:45 AM
Originally posted by ddtlm
Now back to you:

Perhaps I missed something, but wasn't madamimadam talking about Gates' memory quote? I am unaware of any Gates bitness quote.

Well, I can't speak for madamimadam, though I seem to have been doing so. The Gates quote does specifically say processors, not memory.

madamimadam
Feb 22, 2003, 04:15 AM
Well, I don't see the point in jumping into an argument that looks like it would just get totally childish, but thanks, MacBandit.

All I am going to say is that it is not possible to find an example of a use for 128-bit now because, as my point suggested, it is not now that it would be used but in years to come.

Asking me to give an example of how to use 128-bit chips is as good as asking someone the use for 64-bit chips when 32-bit chips first came out.

Obviously, people would not know what to do with a 64-bit chip when the 32-bit chips came out, because apparently "a number of things can be handled by integers as small as 8 bits, a whole lot more can be handled by 16 bits, and" "everything can be done in 32 bits".

ddtlm, calm the **** down and have a baby... I am told they make everything worthwhile. ;) - For Chuck -

ddtlm
Feb 22, 2003, 03:13 PM
MacBandit:

The Gates quote does specifically say processors, not memory.
I would be interested to know what exactly this quote is, since I've never heard of it before.

madamimadam:

Asking me to give an example of how to use 128-bit chips is as good as asking someone the use for 64-bit chips when 32-bit chips first came out.
Now we're just back to the hand-wavy arguments again, about compelling, inevitable uses for 128 bits that we just can't possibly imagine. Speaking as someone who puts integers to work every day in programs I write, it seems very obvious to me that the uses for native 128-bit support are quite few and far between (addressability being the best use), but I'm not sure how I am going to be able to convince you of that.

Obviously, people would not know what to do with a 64-bit chip when the 32-bit chips came out, because apparently "a number of things can be handled by integers as small as 8 bits, a whole lot more can be handled by 16 bits, and" "everything can be done in 32 bits".
I don't see how you can conclude that no one could have imagined a use for 64-bitness simply because most things are handled best by fewer bits.

prewwii
Feb 22, 2003, 05:15 PM
Originally posted by ddtlm
Speaking as someone who puts integers to work every day in programs I write, it seems very obvious to me that the uses for native 128-bit support are quite few and far between (addressability being the best use), but I'm not sure how I am going to be able to convince you of that.


I don't see how you can conclude that no one could have imagined a use for 64-bitness simply because most things are handled best by fewer bits.

In the late eighties I worked on a project developing a computer arranged in a ring, with 16 slots per ring and a word size of 512 bits. This little gem cranked out answers every 5 ns. The problem then was not finding a use but finding a compiler design that could keep it busy.

One day I talked to a weather bureau guy about getting more sensors. He said that with the computers and sensors they had then, the computer models were 5 hours behind the real weather, and that adding more sensors would compound the problem rather than aid in a solution.

What about calculating the value of money around the world so money can be moved from currency to currency? Maybe 128 bits is not nearly enough. Depends on how many different operations can be contained in a word, wouldn't you think?

SmileyDude
Feb 26, 2003, 04:45 PM
Why is everyone missing the big reason to go 64-bit? It's memory -- 32-bit systems can only address 4 gigs of memory space without using a bank-swapping technique. Sure, it wouldn't be too hard for Apple (or Microsoft, for that matter) to support machines with more than 4 gigs of RAM, but individual processes couldn't access more than 4 gigs at once.

Going to 64-bit will mean that processes will be able to use files bigger than 4 gigs and allocate more than 4 gigs of memory. This is good not only for databases but also for video editing applications. Also, with the way OS X uses memory for things like the window server, the extra memory space will help.
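A quick way to see that per-process ceiling for yourself; compile this minimal C sketch as a 32-bit binary and, where available, a 64-bit one, and compare:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* On a 32-bit build: 32-bit pointers and a SIZE_MAX of 4294967295
       (~4 gigs). On a 64-bit build, both figures balloon. */
    printf("pointer width: %zu bits\n", sizeof(void *) * 8);
    printf("largest addressable object: %zu bytes\n", (size_t)SIZE_MAX);
    return 0;
}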

Also, another thing to consider -- all of the PowerPC chips already have a 64-bit data bus. The G4 extends it to 128-bit for AltiVec. All the move to 64 bits means is that integer instructions will be able to use 64 bits as well. If a program only uses 32-bit instructions, it's not going to cause a slowdown -- it will be just as fast as it currently is. The instruction size isn't changing to 64 bits, so there isn't an automatic doubling of memory bandwidth required.

Just think of the 64-bit PowerPC as an extension -- one that's been planned from the beginning. The registers will be bigger -- including the program counter. There will be some new instructions to support 64-bit integer math (not very useful unless you need numbers bigger than +/- 2 billion, or 4 billion if you don't care about negative numbers).

When the 970 comes out (if it does show up in an Apple product), Apple will just need to change Darwin to support the larger address space, the code in the scheduler that saves the registers (64-bit now -- they need more space), and any code that deals with allocating or mapping memory. Most likely, they will go the route that Sun took with Solaris -- they will add some new routines that deal with 64-bit pointers, and maybe some glue routines that translate between the two. Code will have to be explicitly compiled for 64-bit mode, and most programs will continue to be compiled as 32-bit only.

The bigger deal for the 970 is the increased overall speed of the processor and the added memory bandwidth. 64-bit is just a minor addition in the big picture -- one that is needed, but not earth-shattering like the move from 16-bit to 32-bit (or, for the Mac, 24-bit to 32-bit :) )

Mirus
Feb 28, 2003, 12:19 AM
Do we have any engineers in here? I have yet to read a sensible post regarding bit sizes.

In the most general sense, going from 32 bits to 64 bits will NOT increase the speed of the system. What it DOES allow you to do is move data around much more efficiently. Currently most, if not all, 32-bit processors deal with 64-bit registers. Remember, too, that these are INTERNAL buses and have nothing to do with accessing/addressing external memory...

The 7457 (G4) has a 36-bit address bus and a 64-bit external data bus (the PPC970 has a 42-bit address bus). There are 32 64-bit floating-point registers, and there are 128-bit internal buses. The reason it's considered a 32-bit chip is that the integer units are only 32 bits wide. So data is flying around in there much more efficiently than the IUs can handle it. There are 3 main integer units, which means it can execute 3x32-bit instructions in parallel per clock cycle.

In "normal" operations this is enough. But when you deal with multimedia (video, audio) and/or extremely accurate floating-point precision (scientific work), you'd be more efficient moving 64 bits than 32. Take a look at the newer video accelerators: they have 64-bit and even 128-bit pipelines and FPUs.

If all you do is "surf the net", play Solitaire and read email, 64 bits is going to do NOTHING for you. If, OTOH, you do video editing, sound recording/processing, real-time weather modeling, micro-precision CAD drawings or wind tunnel modeling, 64 bits is going to give you A LOT...

Plus, pushing 64 bits instead of 32 bits is more efficient, hence lower operating temperatures and less power...

ddtlm
Feb 28, 2003, 02:39 AM
Mirus:

Do we have any engineers in here? I have yet to read a sensible post regarding bit sizes.
I do so love it when people start off that way.

There are 32 64-bit floating-point registers, and there are 128-bit internal buses.
Some buses are even wider; for example, the L2->L1 data bus is claimed to be 256 bits wide in Moto's 7455 docs.

But when you deal with multimedia (video, audio) and/or extremely accurate floating-point precision (scientific work), you'd be more efficient moving 64 bits than 32.
Well, double-precision (64-bit) floating point has been supported by 32-bit CPUs for some time, so I don't quite see where you are going with that.

If, OTOH, you do video editing, sound recording/processing, real-time weather modeling, micro-precision CAD drawings or wind tunnel modeling, 64 bits is going to give you A LOT...
How do 64-bit integers help any of this?

Plus, pushing 64 bits instead of 32 bits is more efficient, hence lower operating temperatures and less power...
Only in the case where all 64 bits are actually being put to use, which would not be the common case.