Re: Re: Re: interesting note...

Originally posted by suzerain
Ummm...did you even go to the site? According to their FAQ, they split the multiprocessor setups onto a separate page, which is where your xlr8yourmac numbers come from.

The speed I quoted is from a page ostensibly listing single processor speeds. Now, it's entirely possible someone submitted a bogus entry, I suppose, but it wasn't from the duals page.

Just go here and you'll see that.


However... if you look at a dual 800 and this "1600," the numbers are the same. Also, when the 1.0 GHz Quicksilvers came out, someone posted 2.0 GHz speeds. People just add the two processors' numbers together and submit them.
 
Originally posted by MacRETARD
A few things:

1) Apple has announced no new chip.
2) Intel supports dual CPUs as well. Actually, Intel supports up to 32 CPUs via SMP.
3) Yes, Intel's Hyper-Threading could use improvement (anything could use improvement), but it is a feature that is out NOW, and it is a feature that does help.
4) Intel supports SSE2, which in some benchmarks will blow away the G4 and AltiVec, just as in other benchmarks (RC5?) the G4 blows away the P4. These specialized benchmarks mean little on their own; you have to look at a bunch if you want a good idea of overall performance.

What's my point? The legendary holy grail of Apple computing is not out yet. I think too many people have "G5 syndrome". How long have we been talking about the legendary Pentium-killer G5? Bashing something that is out and is useful (Intel's HT), while comparing it to something that is NOT out and can only be speculated about, is stupid.

If Apple or AMD finally come out with 64-bit desktop chips that support SMP at reasonable prices, you can bet Intel will magically enable the P4 to be SMP-capable. The only thing keeping the P4 from supporting SMP now is a hardware restriction Intel adds so it can charge more for the Xeon CPUs.
He makes a point: Apple has resorted to two CPUs to make up for the G4, and it still doesn't do it, whatever math you use. The G4 has been left to stagnate by that "who cares" Motorola. The G4 had all the potential, with only a company behind it that couldn't care less. Why else would Apple have to resort to marketing, OS X, the Xserve architecture, etc., just to keep it on the same page? Motorola SUCKS! SUCKS! SUCKS! Just another company run by damn bean counters with no vision. BRING ON THE 970 AND DO IT NOW! INCREASE YOUR MARKET SHARE, APPLE!
 
Originally posted by Hattig

If the 1.2GHz PPC 970 processors from IBM are cheaper than the 1.6GHz 7457s from Motorola, then I don't see why Apple would want to use the 7457, when it could get cheaper processors plus the advantage of 64-bit marketability.


If that turns out to be true, then yes, I agree: Apple should offer the 1.2 GHz 970 instead of the 1.6 GHz 7457. But I seriously doubt it will be true. My understanding is that the 7455 die is as large as or larger than the 970 die, but the 7455 is on a 180 nm process while the 970 is on a 130 nm process. I would expect the 130 nm 7457, which I believe has significantly fewer transistors than the 970, to be cheaper than a 130 nm 970. At the end of the day, the 970 is a workstation chip and the 7457 is an embedded chip; it's unlikely the former will be cheaper than the latter, even at lower clock speeds. But you never know, of course.


However, if IBM aren't going to ship below 1.8GHz (as you could read the recent article to imply), then the iMacs will of course get faster 7457s.

Yeah, well, I suspect that the 1.8-2.5 GHz speed range is on the 90 nm process, but we'll see. You never know, but I'm not *that* optimistic. Still, the 970 will definitely be a good chip - much better than what we have now.
 
SORRY

Originally posted by reyesmac
Steve said this was the year of the laptop, so whatever Power Mac comes out probably won't be the fastest computer out there, just a speed bump. We won't be crossing the 2 GHz barrier this year.

Sorry to inform you, but Steve will do what makes Apple money. Apple will make money off both strong portable sales and a revamped Pro line.
 
Originally posted by ddtlm
macrumors12345:


Arstechnica has this information nicely presented in table form at the link that follows. Note that the 970 is 14% larger than the 7455.

http://www.arstechnica.com/cpu/02q2/ppc970/ppc970-1.html

Thanks for the link. The die of the 130 nm 970 is slightly larger than that of the 180 nm 7455, so it is safe to say that the 130 nm 7457 will be substantially smaller than the 130 nm 970. Furthermore, the 970's transistor count is 52 million versus 33 million for the 7455, though the 7457's count should rise somewhat with the doubling of L2 cache from 256 KB to 512 KB.
 
Re: Re: Re: Re: crap-idy-crap crap

Originally posted by ffakr
I did realize that CAD/CAM was interested in higher bit depths for precision; I had heard 48-bit was being talked about.
I didn't realize that a 'color depth' would include alpha info for so-called 64-bit and 128-bit color. Usually there is a separate value associated with the alpha channel.
I still don't see how you'd need 128 bits, though... especially for raw video or image editing.
32-bit color with 16 bits of alpha channel is considered high end right now. Even 64 bits of room is vastly larger than that.

Thanks for the info though.

... stubborn old ffakr.

First of all, when I mention the alpha value, that does not apply to video capture or final representation, but instead to any component of the video; e.g., fading from one scene to another requires two separate video sources, each with its own alpha value.

Secondly, when talking about floating point values, one must keep in mind that they are really just some integer value plus a power, i.e., 12.34 = 1234x10^-2 is stored as [1234, -2]. Of course the computer actually uses base 2, but the point is that with a single color (say red) taking up 32 bits, only about 20 bits might actually be the precise number, and the rest is used for the exponent, so that one can better differentiate a really faint candle on one side of the room from the sun shining through a crack elsewhere. (I don't know the IEEE floating point standards by heart, so it's probably not exactly 20 bits, but it's something close to that.)
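
To make that concrete, here's a minimal C sketch (my own illustration, not from the post) that splits an IEEE 754 single-precision float into its fields. The real split is 1 sign bit, 8 exponent bits, and 23 mantissa bits (plus one implicit bit), so the ~20-bit guess above is close:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Split an IEEE 754 single-precision float into its sign,
       exponent, and mantissa fields. */
    int main(void) {
        float f = 12.34f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);  /* reinterpret the raw bits */

        uint32_t sign     = bits >> 31;                           /* 1 bit */
        int32_t  exponent = (int32_t)((bits >> 23) & 0xFF) - 127; /* 8 bits, bias 127 */
        uint32_t mantissa = bits & 0x7FFFFF;                      /* 23 bits (+1 implicit) */

        printf("%f -> sign=%u exponent=%d mantissa=0x%06X\n",
               f, sign, exponent, mantissa);
        return 0;
    }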

OK, so now we see that using 128 bits for each pixel only gives us ~20 bits of precision per color, which isn't so far from the current high end of 16 bits of precision. Of course everyone will be using 64-bit color for a while before moving to 128-bit, but it's best to set the standards well beforehand.
 
128-bit color . . . NOT!

Of course everyone will be using 64 bit color for a while before moving to 128 bit,

I would really like to see links to even possible uses of 128-bit color. There is NO NEED for that kind of precision. And everyone using 64-bit color? I don't think so.

As others have mentioned, our eyes can only detect the equivalent of 10 bits per color channel (30-bit color), max. This would be analogous to those uber-audio-guru-geeks who claim they can tell the difference between pristine analog and 16-bit, 44 kHz CD-quality sound (read: it ain't easy). We just can't discern the differences at higher bit depths.

Hollywood only uses 10-bit per-channel color (maybe 12-bit) for film and HD. These are typically housed in 16-bit channels, with the remaining bits unused, resulting in 48 bits for RGB and 64 bits for RGBA of information that needs to be crunched/manipulated.

That 64 bits for RGBA even includes alpha channel info that isn't really presented to the eye as color, and it's still more than enough. 8 bits per channel is all anyone really needs for 98% of video projects.

I meet people all the time who can't tell the difference between an 8-bit indexed-color GIF and a 24-bit RGB picture, so I just can't see humans needing 128-bit color.

ffakr said: Additionally, assume a video stream at 640x480 resolution, 30 frames per second, 128-bit color depth (fairly low res, super high quality). You'd need over 140 MB/second of constant bandwidth to stream that video.
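
Checking the math on that: 640 x 480 pixels x 30 frames/s x 16 bytes per 128-bit pixel = 147,456,000 bytes/s, or roughly 140 MB/s. The figure holds up.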

FWIW, 140 MB/s is the equivalent of uncompressed HDTV (1920x1080, 30p, 8-bit, not 10-bit), roughly 6x uncompressed NTSC, or about 5 MB/frame.

See? Wouldn't you want more physical resolution in the form of more pixels, and a bigger TV (that you can brag about), rather than more color that you can't even see on your present TV? (Try bragging to your buddies about that -- they'll look at you funny.)
 
3G4N:

128-bit color (composed of 4x32-bit FP components) is actually used in video cards already, so that pixels can be passed through again and again without meaningful loss of data, and also so that extreme differences such as sunlight vs. candlelight vs. darkness can actually be handled properly. It's even part of the DirectX 9 spec as far as I know. Both the Radeon 9700 and GeForce FX support it.
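
Here's a rough C sketch of why that matters for multi-pass rendering (the starting value, gain, and pass count are made up purely for illustration): run one channel value through repeated darken/brighten passes stored as an 8-bit integer versus a 32-bit float, and the integer version drifts while the float barely moves.

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative only: repeatedly darken then re-brighten one channel
       value. The 8-bit integer truncates on every pass and drifts away;
       the 32-bit float returns to (almost exactly) where it started. */
    int main(void) {
        uint8_t i8 = 201;    /* 8-bit integer channel         */
        float  f32 = 201.0f; /* 32-bit floating-point channel */
        int pass;

        for (pass = 0; pass < 20; pass++) { /* darken by 10% per pass */
            i8  = (uint8_t)(i8 * 9 / 10);   /* integer math truncates */
            f32 = f32 * 0.9f;
        }
        for (pass = 0; pass < 20; pass++) { /* brighten back */
            i8  = (uint8_t)(i8 * 10 / 9);
            f32 = f32 / 0.9f;
        }
        printf("8-bit: %u, float: %.3f\n", i8, f32);
        /* The integer ends up well below 201; the float prints ~201.000. */
        return 0;
    }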
 
Originally posted by ddtlm
3G4N:

128-bit color (composed of 4x32-bit FP components) is actually used in video cards already, so that pixels can be passed through again and again without meaningful loss of data, and also so that extreme differences such as sunlight vs. candlelight vs. darkness can actually be handled properly. It's even part of the DirectX 9 spec as far as I know. Both the Radeon 9700 and GeForce FX support it.

I could be wrong here, but I thought the new Radeon and GF only support 48-bit.
 
Re: 128-bit color . . . NOT!

Originally posted by 3G4N
I would really like to see links to even possible uses of 128-bit color. There is NO NEED for that kind of precision. And everyone using 64-bit color? I don't think so.

As others have mentioned, our eyes can only detect the equivalent of 10 bits per color channel (30-bit color), max.

...

Thank you for illustrating exactly what I said. With 64-bit color, one would have 16 bits per ARGB channel. Since they're floating point, the exponent probably takes 4 to 6 bits, leaving 10 or 12 bits for the actual precision of the color, which is precisely what you say our eyes can perceive.

Most radiation is detected by our bodies in a logarithmic fashion, not a linear one. That is why adding more bits in an integer fashion is useless, while adding more bits to an exponent is useful.
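
To put a number on that (a small C sketch of my own; the sample magnitudes are arbitrary): the gap between adjacent representable floats grows with magnitude, so a float's *relative* precision stays roughly constant across dim and bright values, which is exactly what a logarithmic sensor like the eye wants.

    #include <stdio.h>
    #include <math.h>

    /* The spacing between adjacent floats scales with magnitude, so the
       relative step is roughly constant for dim and bright values alike. */
    int main(void) {
        float dim = 0.01f, bright = 100.0f;
        float step_dim    = nextafterf(dim, 1.0f) - dim;
        float step_bright = nextafterf(bright, 1000.0f) - bright;

        printf("step near %.2f: %g (relative %g)\n", dim, step_dim, step_dim / dim);
        printf("step near %.0f:  %g (relative %g)\n", bright, step_bright, step_bright / bright);
        /* Both relative steps come out around 1e-7. */
        return 0;
    }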
 