Originally posted by cubist
And those hyperthreaded P4s will be available in 2H03, I think? Or was it 1H04? Please use correct tense.

Hyperthreaded P4s are available now at 3.06 GHz. Intel's entire line should be hyperthreaded by the end of 2003, but if you want the performance now, it's readily available. Dell even sells them.
 
Re: and I thought AMD was bad......

Originally posted by TMJ1974

The cell phone radiation must be having an impact on Motorola's thinking. You're not making money....ok, fair enough. But you won't make any by allowing the processors you make to become obsolete. You have to invest money to make money....have they ever heard of that?

Ummm....have you ever heard of competitive advantage? No one has been able to compete with Intel. All those that have tried (AMD, VIA, etc.) have failed because Intel has the best research and the best production in the PC CPU market, and it can lower prices at will since it has the industry's largest profit margins by far.

If there's one company that can take on Intel it's IBM, definitely not Moto. IBM has better R&D than even Intel, and certainly has the resources (of course they would have to be willing to spend billions fighting a bloody chip war). My hypothesis is that the 970 chip represents a full-out assault against Intel for the Linux market. If IBM could produce the CPU of choice for Linux, that would be a pretty decent chunk of market share. And then IBM could start pushing Linux on the server side and working with Apple to push Mac OS X on the desktop. I envision a future with more Linux and more OS X. Linux is our ally - the more it becomes entrenched, the less Microsoft is a monopoly, and the more people are willing to look at alternative platforms like the Mac.
 
Originally posted by Falleron

Plus, on the note of PCs being quicker: on most tasks (barring a few such as video/3D), what does that bit of extra speed help you do when you are browsing the web? ABSOLUTELY NOTHING. You save more time by using Mac OS X, which gets things done in fewer keystrokes. I admit that Apple needs to move up a level for 3D and video. However, it's out of Apple's hands. Don't blame them.

Ever browse the web using IE in Windows XP? It is snappier and quicker to load pages. The text in many cases looks smoother and more pleasing to the eye. Wake up and smell the coffee. Denial is blissful, isn't it Falleron?;)
 
Re: Re: Motorola 7457 Upgrades

Originally posted by jettredmont

If Intel is doubling CPU performance every 18 months and Motorola is more like 1.73x in 18 months (extrapolating from 1.44x in ~12 months), especially after the long sub-Moore's trend we've been stuck in for years (Sept 1999-Aug 2002: 3.125x improvement instead of 2.51x under Moore's ... although granted July 2001-Aug 2002 was above Moore's), we're SOL!

I'm not sure how you arrive at that figure for Motorola. The 500MHz G4 was initially released in 8/99 and had a faster design than later G4s because it had fewer pipeline stages, such that a 1GHz G4 is not double the speed of the original 500. You'd have to go to 1.25GHz at least to get double the performance. That didn't happen until 8/02 - so half the rate Moore's law would predict.

July 2001 - August 2002 was NOT above Moore's law. Going from 867 to 1250 is a 44% improvement, not the roughly 65% improvement Moore's law would predict over those 13 months.

I think the last time Motorola was ahead of Moore's law was in the year before August 97 when the 300MHz G3 was first introduced (though not released until many months later because Apple killed off the clones that were to ship with them).

We really ought to be at about 3GHz G4s by now if Motorola doubled performance every 18 months since the G4 500 debuted in 1999. If we get that sometime in 2004 we'll be lucky.
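
Just to put numbers on the Moore's-law comparison, here's a quick back-of-the-envelope script (my own sketch; the release dates and clock speeds are taken from the posts above, and clock speed is used as a stand-in for performance):

```python
# Doubling every 18 months vs. the G4's actual clock ramp.
from datetime import date

def moores_factor(start, end, doubling_months=18):
    """Expected improvement if performance doubles every `doubling_months` months."""
    months = (end.year - start.year) * 12 + (end.month - start.month)
    return 2 ** (months / doubling_months)

g4_500  = (date(1999, 8, 1),  500)    # G4/500 debut (per the post above)
g4_867  = (date(2001, 7, 1),  867)    # G4/867
g4_1250 = (date(2002, 8, 1), 1250)    # G4/1.25 GHz

for (d0, mhz0), (d1, mhz1) in [(g4_500, g4_1250), (g4_867, g4_1250)]:
    print(f"{d0} -> {d1}: Moore's law predicts {moores_factor(d0, d1):.2f}x, "
          f"actual clock gain {mhz1 / mhz0:.2f}x")
```

Running it gives roughly 4.0x predicted vs. 2.5x actual for Aug '99 - Aug '02, and about 1.65x predicted vs. 1.44x actual for Jul '01 - Aug '02, which is where the "half the rate of Moore's law" and "44% vs. ~65%" figures above come from.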
 
Originally posted by Chomolungma


Ever browse the web using IE in Windows XP? It is snappier and quicker to load pages. The text in many cases looks smoother and more pleasing to the eye. Wake up and smell the coffee. Denial is blissful, isn't it Falleron?;)
I have used IE on Windows. However, what you don't see is that the Mac version could be as good as the PC version if M$ put the money into developing a decent version (fewer bugs + proper optimisation). Its speed has very little to do with the CPU.
 
Originally posted by Spievy
Guys, Guys,
Don't forget this point here. The speed of a computer doesn't rest on the processor alone; a major part of speed is the architecture of the computer: bus speed, amount of cache, and cache speed. My workhorse computer is a G4 400MHz AGP!!!, and I constantly compress movies, render animations, and do MAJOR Photoshop work. I am just now considering upgrading to a new PowerMac after using this G4 for 3 years. You can't find a PC that will last 3 years like Macs can.

Come on, guys. I use a PC that I built, so I'm quite a bit better informed about the PC than you folks who haven't touched one in a few years.

The PC isn't just kicking the Mac's ass in CPU, but the PC is also taking names in FSB, RAM speeds, etc.

I have an Nforce2 mobo, which can go to 400MHz FSB. All I'm waiting for is the Athlon XP Bartons, due out next quarter. Those will ramp up the Athlon XP from 333 to 400 MHz FSB, both of which are faster than whatever Apple has out today.

Intel is at 533MHz FSB for the P4.

RAM speeds... my Nforce2 supports 400MHz DDR right now. It can run asynchronously, so I can have my 333MHz FSB and 400MHz DDR RAM run together. I don't like Rambus, but the latest Rambus speeds available now still kick some serious ass. The RAM speeds on the PC side, in both DDR and Rambus, are faster than whatever Apple has out today.

AGPx8 is available now on the latest PC motherboards, and so are AGPx8 cards for the PC from Nvidia and ATI. Again, faster than whatever Apple has out today.

Nforce2 also has Hypertransport (800Mb/sec replacement for PCI) as well as Serial ATA support, which replaces UltraIDE.

Hyperthreading is the real deal, too. Go watch the videos on Tom's Hardware: they run two 3.06GHz Pentium 4s side-by-side, one with HT turned on, the other with HT turned off. The HT P4 kicks ass on the non-HT P4, even in apps that aren't multithreaded. You can buy these machines right now from Dell and other manufacturers.

These aren't technologies the PC is getting later in 2003 or 2004. They are available right now, and you can buy them today.

And who is winning the speed race?
http://www.digitalvideoediting.com/2002/11_nov/reviews/cw_macvspciii.htm

The single-processor P4 slaughters the dual G4 in every test. And we're not talking about beating the dual G4 by 5 or 10 percent; the P4 does it all in half the time the dual G4 does. This in a machine that costs $630 less (and that's after Apple's discounts, else it would be $1,000 less) and runs "church-mouse quiet" compared to the wind-tunnel G4s.
 
lmalave

I agree with your points :)

I am very excited about IBM's chances to give Intel a run for the money. As pointed out time and again, it's not the clock speed, but what the chip can do. I would love the PPC 970 to outperform whatever Itanium or P4 is out at the time. Further, I can only pray that Apple is going to use it.

Also agree with you on Linux being our ally. In fact, the next PC I build at home, I might put Linux on it, just to play around.

Tim
 
Shadowplay, we all know Apple is playing catch-up to Intel, AMD, and their dominance.

We all understand that if Apple doesn't start taking a stand on raising clock speeds and delivering real new performance (see: fake DDR), we will lose the battle.
There is no doubt that Intel has the upper hand, but soon enough we might level the playing field (see: the 970 or higher-clocked G4s).

Apple hasn't adopted new technologies recently. Sure, they were the leader for hardware like USB and FireWire, but now they have fallen behind again.
They haven't started using the new hardware that is out there. Apple will need to spend some real cash (see: $4 billion in the bank) to get back to where we feel good (see: 'we want real comparisons').
What makes Apple customers stay is the nice OS, the lack of crashing, and the overall health of their computers, but Intel is silently gaining on us.
Look at the topics where people are saying they are just going to build their own computers: they are buying Intel/AMD hardware (AMD's not bad), and even if they don't get an M$ OS, they have switched.
If Apple doesn't fix these gaping holes soon, we may not see many people here in the future.
 
Re: Re: Re: Pentium 4: Intelligence Meter

Originally posted by firestarter


Unfortunately, Intel hasn't been sitting still on the Pentium 4 design front.

Not only are the 3GHz P4s extremely fast, but the 'hyperthreading' architecture runs two instruction streams through the P4 core. Yes - the longer pipeline suffers more from stalls, but now the processor can switch over to the other stream and keep executing. This is giving these new chips a 20-30% speed boost, and has fixed the main architectural inefficiency of this design.

In short, the MHz myth is over, and this 'G4 = 2xP4 MHz' is no longer true. Motorola/IBM/Apple are going to have to play some serious catch-up to a competitor whose lead is increasing.

Where are you getting this information from? Everything I have read says that the hyperthreading increase is relatively minor for most programs; only the FEW programs out there that benefit from it get about a 10-15 percent speed increase, and some programs (virus scans, actually) lose about 5 percent performance. CNET has a pretty good article on hyperthreading:

CNET article on hyperthreading

In fact, Dell and HP shipped their Xeon servers with hyperthreading turned off because there were not enough programs out there that took advantage of it.......... it's not the savior of the extremely badly designed P4 line, like you made it sound....
 
Originally posted by MrMacman

Look at the topics where people are saying they are just going to build their own computers: they are buying Intel/AMD hardware (AMD's not bad), and even if they don't get an M$ OS, they have switched.

Hey man, I just ordered a mobo, an AMD Athlon XP 1700+, and a case from NewEgg.com, and I plan to install Win2K on it, but I am by no means "switching". My trusty iBook is still my everyday computer. The only things I will use my white-box PC for are games and file serving (since it supports up to 4 IDE drives and I can definitely get some cheap ones...).

It's been said a million times here before, but unless you're a gamer or a pro user, what do you need more MHz for? If Apple continues to focus on design and price, they won't have any problems. Price is really the key for most consumers - I wouldn't have purchased an iBook for $2300 instead of $1300, even if the freakin' thing had a 970 in it right now! The iBook does everything I need to do like a champ.

I don't even think Apple's digital hub strategy needs much more powerful CPUs. Keep in mind most of the consumers that are using iMovie just want to splice their video, add some titles, and maybe a couple effects. My G3 iBook can already handle multimedia tasks such as simple editing of movies in iMovie (I know 'cause I've done it on my friend's Beige G3), ripping and burning CDs, etc.

I suppose mainstream multimedia apps could become more demanding to the point where Apple benefits from a 970 CPU that's more than twice as fast as what I have now. But really, isn't there a limit to all of this? Before when folks predicted that the average person wouldn't need more than X amount of RAM or Y amount of disk space (like Bill Gates did 20+ years ago), they were thinking purely in terms of non-graphical, non-multimedia application data. Nowadays we KNOW exactly how many bytes have to be stored/processed, etc. to manipulate audio or video information. And we can envision ever-higher resolutions, bitrates etc.

But there IS a limit. IBM has already made prototypes of displays that have a higher resolution than the human eye. And audio technologies already exist that supposedly surpass the audio processing capabilities of the human ear (wasn't DAT basically there, like, 10 years ago). So from a multimedia hub perspective we are already almost at the point where we really don't need much more powerful computers. 3D gaming would be the one exception, since one could still envision photorealistic real-time rendering that would require many times the horsepower of even the highest-end consumer video card currently available. But even then the 3rd party video card is doing all the work, and all Apple has to do is provide the video card with whatever bus speed it requires.
 
Re: Re: Re: Re: Pentium 4: Intelligence Meter

Originally posted by Huked on Fonick


Where are you getting this information from? Everything I have read says that the hyperthreading increase is relatively minor for most programs; only the FEW programs out there that benefit from it get about a 10-15 percent speed increase, and some programs (virus scans, actually) lose about 5 percent performance. CNET has a pretty good article on hyperthreading:

CNET article on hyperthreading

From the article you link to:
"On our multitasking tests with Hyper-Threading, we saw performance improvements of anywhere between 4.5 and 28.6 percent overall. With a multithreaded application, we saw as much as a 25.3 percent increase."

In fact, Dell and HP shipped their Xeon servers with hyperthreading turned off because there were not enough programs out there that took advantage of it.......... it's not the savior of the extremely badly designed P4 line, like you made it sound....

As you hint - the effect is dependent on the work you're doing. Multithreaded apps are better (Photoshop etc.) and multitasking is faster. Since the original 'G4 is twice as fast as P4 at the same MHz' MHz-myth tests were done on favorable Photoshop filters which use AltiVec and parallelise well (so can make use of dual Mac processors), it's not unreasonable to quote P4 speedups under these same conditions.

Tom's Hardware Guide has an interesting downloadable video at http://www6.tomshardware.com/cpu/20021114/p4_306ht-22.html which shows the qualitative speedup of the 3.06GHz hyperthreading processor over the 3.6GHz regular processor. The 3.6GHz is up to the expected 20% faster in synthetic benchmarks - but the hyperthreading seems to make disk access more asynchronous, and delivers a better overall experience in multithreaded/multitasking apps.
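
Since the thread keeps going back and forth on how much hyperthreading buys you, here's a toy throughput model (purely my own sketch - the stall probability and penalty are made-up numbers, and a real P4 shares far more than a single issue slot, so the exact percentage is illustrative only). It demonstrates the idea described above: without HT the core idles through every stall, while with HT it can issue from the other thread whenever one stream is waiting, so the win depends on how often both streams are stalled at once - i.e., on the workload.

```python
import random

random.seed(1)
N = 50_000              # instructions per stream
STALL_P = 0.05          # fraction of instructions that stall (e.g. a cache miss)
STALL_CYCLES = 6        # extra cycles a stalled instruction occupies

def make_stream():
    """Per-instruction cycle costs: 1 cycle, or 1 + STALL_CYCLES when it stalls."""
    return [1 + (STALL_CYCLES if random.random() < STALL_P else 0) for _ in range(N)]

def try_issue(stream, idx, wait):
    """Issue the next instruction of `stream` if it is ready; return (idx, wait, issued)."""
    if idx < len(stream) and wait == 0:
        return idx + 1, stream[idx], True
    return idx, wait, False

def run_smt(a, b):
    """One shared issue slot per cycle, two thread contexts, alternating priority."""
    ia = ib = cycles = 0
    wait_a = wait_b = 0
    while ia < len(a) or ib < len(b):
        if cycles % 2 == 0:                       # thread A gets first pick this cycle
            ia, wait_a, issued = try_issue(a, ia, wait_a)
            if not issued:
                ib, wait_b, _ = try_issue(b, ib, wait_b)
        else:                                     # thread B gets first pick
            ib, wait_b, issued = try_issue(b, ib, wait_b)
            if not issued:
                ia, wait_a, _ = try_issue(a, ia, wait_a)
        cycles += 1
        wait_a, wait_b = max(0, wait_a - 1), max(0, wait_b - 1)
    return cycles

a, b = make_stream(), make_stream()
no_ht = sum(a) + sum(b)                           # run the two streams back to back
ht = run_smt(a, b)
print(f"no HT: {no_ht} cycles   with HT: {ht} cycles   speedup: {no_ht / ht:.2f}x")
```

With these particular numbers the speedup lands in the same general ballpark as the 10-15% and 20-30% figures being argued about; change STALL_P or STALL_CYCLES and it moves, which is exactly why the benefit is so workload-dependent.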
 
Originally posted by lmalave

It's been said a million times here before, but unless you're a gamer or a pro user, what do you need more MHz for?

[ ... ]

I don't even think Apple's digital hub strategy needs much more powerful CPUs. Keep in mind most of the consumers that are using iMovie just want to splice their video, add some titles, and maybe a couple effects. My G3 iBook can already handle multimedia tasks such as simple editing of movies in iMovie (I know 'cause I've done it on my friend's Beige G3), ripping and burning CDs, etc.

I suppose mainstream multimedia apps could become more demanding to the point where Apple benefits from a 970 CPU that's more than twice as fast as what I have now. But really, isn't there a limit to all of this? Before when folks predicted that the average person wouldn't need more than X amount of RAM or Y amount of disk space (like Bill Gates did 20+ years ago), they were thinking purely in terms of non-graphical, non-multimedia application data. Nowadays we KNOW exactly how many bytes have to be stored/processed, etc. to manipulate audio or video information. And we can envision ever-higher resolutions, bitrates etc.


Well, there's a vast realm of unknown in how much power one needs to "handle" a single pixel of data. A more powerful CPU (and/or GPU) can make the resulting pixels much more lifelike or fantastic simply because they can perform more precise modeling of the real world being simulated on the screen. I mean, yeah, a Mac from 1985 could handle dragging a sprite across the screen pretty darned well, but that just isn't the same as editing full-motion video in real time or in rendering a true digital world in real time.

It's not the number of pixels, but what you're doing with them. And, no, we have not gotten anywhere near realtime (or even non-realtime) accurate simulation of virtual worlds. I mean, the hair on Sulley in Monsters, Inc. was pretty cool and all, but that's rendered at far from realtime on a farm of Intel boxes, and it still takes artistic "finesse" when all's said and done to make it seem "real". There's no ceiling approaching in the digital video realm.

As for live-motion video streams ... have you actually done rendering work recently? It's still butt-slow for anything that requires any calculation. With more horsepower you will see common tasks get done in realtime instead of sub-realtime, along with new effects (like editing characters out of video and interpolating the background fill) that can't be done today because they would require more horsepower than anyone but the professional studios has available.


But there IS a limit. IBM has already made prototypes of displays that have a higher resolution than the human eye. And audio technologies already exist that supposedly surpass the audio processing capabilities of the human ear (wasn't DAT basically there, like, 10 years ago). So from a multimedia hub perspective we are already almost at the point where we really don't need much more powerful computers. 3D gaming would be the one exception, since one could still envision photorealistic real-time rendering that would require many times the horsepower of even the highest-end consumer video card currently available. But even then the 3rd party video card is doing all the work, and all Apple has to do is provide the video card with whatever bus speed it requires.

Umm, no. That's great for rendering, but for a game to be truly immersive, the world needs to act real, obeying the laws of physics, etc. That's not done a lot today simply because there isn't enough horsepower (not even on the Intel side) to do it. And the video card isn't designed to calculate the effects of gravity on microparticles of dust; it's only designed to render those specks as realistically as possible once the main CPU tells them where they are.

But this is not limited to games. The main problem with 3D interfaces today is that they compromise far too much in the name of performance. You can have a terrific and intuitive 3D interface, or you can have one that is usable in real time. As CPUs advance, we will hit upon the threshold where new UI paradigms that are intuitive and utilitarian finally become usable in realtime. Again, the task here is not so much video-intensive as it is managing a true 3D world where well-known physical laws are obeyed.

Like Gates in the early '80s and his famous 640K quip, selling short the need for more computing power with the argument that what we have today runs okay is shortsighted. What we have today runs okay today because it was designed with today's machines in mind. That which is designed for tomorrow's machines will not succeed in today's marketplace. Hence, it is always a truism that what we have today works reasonably well with the hardware of today. History shows, however, that as computing power increases, in fits and starts it makes great new paradigms possible. I wouldn't want to live in today's world with the hardware of five years ago, and I expect I'll say the same five years from now.
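
To make the CPU/GPU split above concrete, here's a minimal sketch (my own illustration, with made-up names and numbers): the CPU integrates the physics for every particle each frame, and only the resulting positions get handed to the video card, which merely draws them.

```python
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])
DT = 1.0 / 60.0                              # one frame at 60 fps

def physics_step(pos, vel, dt=DT):
    """CPU-side work: advance every particle one step (semi-implicit Euler).
    The cost grows with the particle count and with whatever forces you model."""
    vel = vel + GRAVITY * dt
    pos = pos + vel * dt
    return pos, vel

n = 100_000                                  # dust motes
pos = np.random.rand(n, 3)
vel = np.zeros((n, 3))

for frame in range(3):
    pos, vel = physics_step(pos, vel)        # computed on the CPU every frame
    # upload_to_gpu(pos)  # hypothetical call: the card only rasterises these
    #                     # points; it never works out where they should be
```

Double the particle count, add collisions or air resistance, and the per-frame CPU cost climbs accordingly - which is the kind of horsepower being talked about here.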
 
Originally posted by jettredmont


Well, there's a vast realm of unknown in how much power one needs to "handle" a single pixel of data. A more powerful CPU (and/or GPU) can make the resulting pixels much more lifelike or fantastic simply because they can perform more precise modeling of the real world being simulated on the screen. I mean, yeah, a Mac from 1985 could handle dragging a sprite across the screen pretty darned well, but that just isn't the same as editing full-motion video in real time or in rendering a true digital world in real time.

It's not the number of pixels, but what you're doing with them. And, no, we have not gotten anywhere near realtime (or even non-realtime) accurate simulation of virtual worlds. I mean, the hair on Sulley in Monsters, Inc. was pretty cool and all, but that's rendered at far from realtime on a farm of Intel boxes, and it still takes artistic "finesse" when all's said and done to make it seem "real". There's no ceiling approaching in the digital video realm.

As for live-motion video streams ... have you actually done rendering work recently? It's still butt-slow for anything that requires any calculation. With more horsepower you will see common tasks get done in realtime instead of sub-realtime, along with new effects (like editing characters out of video and interpolating the background fill) that can't be done today because they would require more horsepower than anyone but the professional studios has available.



Umm, no. That's great for rendering, but for a game to be truly immersive, the world needs to act real, obeying the laws of physics, etc. That's not done a lot today simply because there isn't enough horsepower (not even on the Intel side) to do it. And the video card isn't designed to calculate the effects of gravity on microparticles of dust; it's only designed to render those specks as realistically as possible once the main CPU tells them where they are.

But this is not limited to games. The main problem with 3D interfaces today is that they compromise far too much in the name of performance. You can have a terrific and intuitive 3D interface, or you can have one that is usable in real time. As CPUs advance, we will hit upon the threshold where new UI paradigms that are intuitive and utilitarian finally become usable in realtime. Again, the task here is not so much video-intensive as it is managing a true 3D world where well-known physical laws are obeyed.

Like Gates in the early '80s and his famous 640K quip, selling short the need for more computing power with the argument that what we have today runs okay is shortsighted. What we have today runs okay today because it was designed with today's machines in mind. That which is designed for tomorrow's machines will not succeed in today's marketplace. Hence, it is always a truism that what we have today works reasonably well with the hardware of today. History shows, however, that as computing power increases, in fits and starts it makes great new paradigms possible. I wouldn't want to live in today's world with the hardware of five years ago, and I expect I'll say the same five years from now.

All very good points. Alright, then bring on the 6GHz IBM 970 chip with at least 900 MHz FSB! That'll do for now, at least until the next generation of CPUs comes out...
 
Originally posted by lmalave



But there IS a limit. IBM has already made prototypes of displays that have a higher resolution than the human eye. And audio technologies already exist that supposedly surpass the audio processing capabilities of the human ear (wasn't DAT basically there, like, 10 years ago).


Um....if IBM has made a display with a higher resolution than the human eye, then humans would never be able to see this "improved" resolution. Any image that appears on such a display will only appear as good as the human eye can perceive it. Basically, any image, whether or not its resolution is higher than that of the human eye, can only be seen as well as the human eye is capable of seeing it. Bad eyesight = poor display image, no matter what the resolution of the display happens to be.

Same with the audio capabilities.


Also, we will always need more processing power. We just don't know it yet.
 
Originally posted by Abstract



Um....if IBM has made a display with a higher resolution than the human eye, then humans would never be able to see this "improved" resolution. Any image that appears on such a display will only appear as good as the human eye can perceive it. Basically, any image, whether or not its resolution is higher than that of the human eye, can only be seen as well as the human eye is capable of seeing it. Bad eyesight = poor display image, no matter what the resolution of the display happens to be.

Same with the audio capabilities.


Also, we will always need more processing power. We just don't know it yet.

Hahahahaha! See, that's exactly where you're wrong! You forgot about super-human vision. One of the technologies I'm most anticipating is custom-made glasses (or contact lenses) that correct EVERY aberration in your eye. The problem with current optical technology is that there are about 50 different types of aberration in the human eye, but lenses and surgery only correct a couple of them. The technology to scan your eye and produce the lenses has already been developed and should be productized in a couple of years. I read an interview with someone who'd tried a prototype who said that these glasses made his supposedly "perfect" 20/20 vision look like an impressionistic painting by comparison. I don't wear glasses, but I used to since I have astigmatism, until I realized that glasses weren't really correcting my vision and were just making my eyes lazier. I think these super-glasses would be the first glasses I wear that actually correct my vision...
 
Originally posted by Abstract



Um....if IBM has made a display with a higher resolution than the human eye, then humans would never be able to see this "improved" resolution. Any image that appears on such a display will only appear as good as the human eye can perceive it. Basically, any image, whether or not its resolution is higher than that of the human eye, can only be seen as well as the human eye is capable of seeing it. Bad eyesight = poor display image, no matter what the resolution of the display happens to be.

Same with the audio capabilities.


Also, we will always need more processing power. We just don't know it yet.

Hmm, actually, not true. Let's assume, for one instant, that all human eyes are exactly the same (your argument fails immediately anyway if one takes into account the fact that the resolving ability of the human retina varies widely from one individual to another).

You see, the human eye does not use a square- or rectangular-pixel "grid" of photosensors to capture its view of the world around it. The rods and cones of the human eye will never align perfectly on the pixel boundaries of the display. As a result, what one rod/cone sees will either be entirely one generated pixel or a combination of two or more generated pixels. In order for the eye to perceive a "black, white, black" picket-fence pattern at a frequency equivalent to the rods' linear density on the retina, going down just to the resolution of the rods (120 million in the retina, more highly concentrated in the center, which is the "max resolution" area of the eye) is insufficient, as going down only that far would yield "gray, gray, gray". One would in fact have to display the picket fence at much higher resolutions (meaning, make either the "black" portion or the "white" portion significantly thinner) to give the eyes enough data to see that this is really "black, white, black". In the real world, the eye is able to pick up clues of a thin black line because it can see that the rods in that line are slightly "grayer" than the surrounding all-white rods. This is a somewhat contrived example, but I think you can see what I'm getting at.

Another way of looking at it is in a more tangible analogy: a scanner versus a printer. Imagine you have a scanner set to scan at precisely 100 dpi (and the scanner engine scans only at this resolution, rather than scanning at a higher resolution and then applying logic to resolve a clearer 100 dpi image), and a printer that prints at exactly 100 dpi (true gray tones instead of dithering patterns). You print out a highly-detailed line drawing at 100 dpi. You scan that line drawing in. Is it still precisely black/white, or are there numerous "gray" areas? For the most part, you should have ended up with a slightly "blurred", grayed drawing, with few if any "true black" lines. Print out the scanned image, and scan it again: even more blurring. Every generation introduces more blurring, highlighting the fact that no generation was an exact depiction of the preceding generation.

Now, with the scanner, you have no way of combatting this blurring. The eye, however, sees in "grains" of varying sizes/shapes, and can take multiple "pictures" of an object from slightly modified viewpoints, and so can distinguish between "gray" and "thinner than my resolution black", much as a scanner which scans at 1200 dpi and then downsamples and uses path-finding logic can come up with an almost identical black-and-white image of the original.

With the human eye, one can perceive exactly the displayed image if the display pixels are much larger than the resolution of the rods and cones at the center of the retina (i.e., many rods/cones per pixel). The eye can be fooled into not seeing pixel boundaries at all when the image is displayed at a resolution significantly greater than the resolution of the rods and cones at the center of the retina. The eye "wants" to be fooled, and the optical cortex tries to make what it sees (square pixels with a thin black line surrounding them) match what millions of years of evolution makes it believe it should be seeing (an image), and so even a fairly low-resolution image will be "resolved" to a photo-like image by the mind. But, that having been said, it's even better if one doesn't have to rely on one's own eyes "fooling" him, and instead can "see" the image without effort.

So, to the point: a higher-than-retinal "resolution" image can well be seen by the retina as being a synthesized image, and can cause the cortex to employ its extraordinary processing capabilities so that the brain sees "real" images. If one is to convey to the retina the highest possible density of information, the display array must have a resolution significantly greater than that of the target area of the retina. Defeating the post-processing by the cortex is another matter entirely, of course.
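
For anyone who wants to play with the idea, here's a toy 1-D version of the argument (my own sketch, not anything rigorous about real retinas - the grid sizes, the line position and the half-rod misalignment are all made-up illustrative numbers). A thin black line on a white field is rendered on displays of different resolutions, the display grid is deliberately misaligned with a row of box-averaging "rods", and we look at how dark a reading the darkest rod actually gets:

```python
import numpy as np

FINE = 1600                 # fine "physical" grid standing in for continuous light
RODS = 100                  # box-averaging photosensors; rod width = 16 fine samples
ROD_W = FINE // RODS

# Scene: a white field with one thin black line, 2 fine samples wide,
# deliberately not aligned to any pixel or rod boundary.
scene = np.ones(FINE)
scene[777:779] = 0.0

def render(display_px, offset_fine):
    """Box-average the scene onto a display of `display_px` pixels whose grid is
    shifted by `offset_fine` fine samples relative to the rods, then return the
    rendered image back on the fine grid (in rod coordinates)."""
    shifted = np.roll(scene, offset_fine)
    pixels = shifted.reshape(display_px, -1).mean(axis=1)
    return np.roll(np.repeat(pixels, FINE // display_px), -offset_fine)

def rod_readings(image_fine):
    """What each rod reports: the average light over its aperture."""
    return image_fine.reshape(RODS, -1).mean(axis=1)

ideal = rod_readings(scene).min()        # what a "perfect" display would deliver
for px in (RODS, 4 * RODS, 8 * RODS):
    seen = rod_readings(render(px, offset_fine=ROD_W // 2))
    print(f"{px:4d}-pixel display: darkest rod reads {seen.min():.3f} (ideal {ideal:.3f})")
```

With the display at rod resolution, the misalignment smears the line across two display pixels and the darkest rod only reads about 0.94; once the display is several times finer than the rod pitch, the rods receive essentially what a perfect display would deliver (about 0.875 here). That's the sense in which the display needs a resolution well beyond the retina's in order to hand it the most information.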
 