Well, in his defense...we don't know exactly what he paid for the Pro (maybe he got upgrades/options on it), nor do we know what he had for computers in the past (were they Macs or PCs, and what were the specs?), nor do we know what he was using the old computers for as well as the Pro (diehard video editing or simply web surfing)...nor do we have answers to a lot of other questions.

Me? I have a Mini that I bought new in August 2007 right after the refresh...I rarely use it, but it runs OK performance-wise...the same performance as 2+ years ago for things like web surfing and iPhoto. But I've ALWAYS had a tendency to buy new Windows PCs every 3-5 years. Not because they suck or break or are slow, but because it is VERY affordable to buy a new $550 box (no monitor needed for me) every 3-5 years that comes with options/performance that outweighs the time/money/effort of simply upgrading my old box...AND...I am also not the average PC user...most PC users I know hold onto their boxes for 5-8 years...yes, that is not a typo. I'm more of a technologist who is always looking for the performance gains and technology improvements (like eSATA, faster buses, faster RAM speeds, etc.) than most personal computer users (Mac and PC alike), who just want the box to work as long as possible without spending any more money....I do a lot of audio work (and some video) as well, which always benefits from faster technology as the years go by.

Contrary to popular belief, people buy new/replacement machines (Mac and PC) all the time for dozens of reasons...and I would bet that one of the top reasons (albeit they wait longer than me) is that they feel the old machine is just...well...old...a new one is going to be cutting edge, come with a warranty, have a new OS with new features, etc. etc. They then take the old computer and give it to the kids or make it a 2nd computer (a lot more these days thanks to LCD monitors making everything nice and small). I'm actually in the process of helping a co-worker buy a new Windows box after his 7-year-old one finally died (actually just the drive, but he wants something entirely new and obviously much more recent technology). My last 3 Windows PCs were purchased for $600 each without a monitor, which totals $1800...all told, the 3 PCs gave me just over 10 years of use. All 3 were of course not bleeding edge, but the most recent included: Intel quad-core chip, 3GB RAM, 500GB SATA drive, 1 DVD/CD drive, ATI video card, 6 USB ports, 1 FireWire. For $600 and my usage, that's a great performance/price ratio. So 10 years ago, I don't think it would have been wise for me to plunk down $2500+ for a super high-end PC, because a) technology would have changed significantly in 10 years on many fronts and b) I would have spent about $700 more for the super-duper computer.

Again, everyone uses a machine differently...and thus thinks differently about how often to replace. People also have budgets. :)

-Eric

Most people have made this connection because it works for very many people. IIRC, a long time ago John Carmack (of id Software fame) was asked what workstations they used at id. He said they didn't use workstations; it just made more sense to buy a new computer every year than to spend that much up front. You simply ended up with more features and better average performance over the time period by buying a new $600-$1000 computer every year or two. If you need the power of a MP right now, then by all means buy it, but don't think you're future-proofing yourself by spending $3k on a machine.
 
This time Apple...update the freakin' case.

A new design would be nice this time. I was expecting one last time. I have been waiting for this update for a while now. I also want a 30"+ LED Cinema Display update. I will be buying a new decked out Mac Pro with Dual LED Cinema Displays, and I want bigger than the 24 inchers.

Also, LIGHT PEAK!!! PLEASE APPLE!!! :D

Another MAJOR plus would be FCS with Cocoa, 64-bit, UI updates, and OpenCL support. WOW, I would be so satisfied if I could have these things! :)

Oh, and update Apple TV and everything to support iTunes Extras and open it up to us normal people already! :cool:
 
The original Phenom could clock its cores independently, but the feature was removed from Phenom II. It's either all idle, half, or full speed now, much like Core 2.

Threads would hop around from a full speed core to an idle one and the processor wouldn't ramp up core speeds quickly enough.

Nehalem/Westmere seems to handle the throttling and idle just fine. Windows 7 also features Core Parking, so threads will stick to a certain core when it's at full speed instead of risking a bounce to an idle one.

Ah, good ol' thread juggling.

If I had to say conventional computing fails anywhere in particular, it would be the nature of threads.
 
Now I ask this simple question: will they offer this in dual-processor only, or also in quad-processor? 24 cores.

Gulftown is scheduled to be used on Socket 1366, which is high-end single-socket and dual-socket only. Quad-socket is going to be a different socket; but quad-socket is also getting an eight-core processor, "Beckton", so four sockets will be 32 cores, 64 threads.

Is he talking HT cores?

No. The current 'Nehalem' chips have four physical cores each, eight total threads via HyperThreading. Gulftown is 6/12. So the dual-socket systems now have eight physical cores, 16 virtual; and Gulftown dual-socket systems will have 12 physical cores, 24 virtual.
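
For the curious, here's a quick sketch of that physical-versus-virtual split as the OS itself reports it. This is plain C against OS X's sysctlbyname() using the hw.physicalcpu and hw.logicalcpu keys; the counts in the comment are just the hypothetical Gulftown case, not measured output.

#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(void)
{
    int physical = 0, logical = 0;
    size_t len = sizeof(int);

    /* Physical cores first, then logical cores (the HyperThreading count). */
    sysctlbyname("hw.physicalcpu", &physical, &len, NULL, 0);
    len = sizeof(int);
    sysctlbyname("hw.logicalcpu", &logical, &len, NULL, 0);

    /* A dual-socket Gulftown box should report 12 physical, 24 logical. */
    printf("physical cores: %d, logical cores: %d\n", physical, logical);
    return 0;
}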

I wonder if these 6 core processors are backwards compatible, for some (way off in the future) upgrade?

Supposedly, Gulftown is 100% backward compatible with X58/5520 chipsets used by Bloomfield/Gainestown now. So yes, you should be able to throw a Gulftown in a board made for Socket 1366 i7 now; or two of them into a Xeon 5500-series board, such as the Mac Pro. (Just like with the original Mac Pro, which could take "Clovertown" quad-core processors, even though it only launched with dual-core "Woodcrest".)

If this turns out to be true, then, RAM included, it will be my next purchase. I'm hoping the SLI option is in there for x16 on 2 GPU cards, so I'd have one beast for OpenCL coding.

While I don't know about the Mac Pro, or even OS X: on the Windows side, you can use nVidia's "CUDA" (an OpenCL-like programming interface) with multiple video cards even on a motherboard that doesn't support SLI. GPGPU programming doesn't care if the card can be used for graphics, it only cares if the GPU is physically present. I would hope that OS X is similarly agnostic. It would just see that you have multiple GPUs, and use them.

P.S. Sorry if others already answered these, I didn't read the whole thread.
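
For anyone who wants to see what "agnostic" looks like in practice, here is a minimal sketch in plain C against the standard OpenCL host API (clGetPlatformIDs / clGetDeviceIDs / clGetDeviceInfo). It simply lists every GPU the runtime exposes, whether or not the board supports SLI; each one can be handed OpenCL work independently. It assumes an OpenCL 1.x runtime is installed, and the fixed-size arrays are only there to keep the example short.

#ifdef __APPLE__
#include <OpenCL/opencl.h>   /* Snow Leopard ships the OpenCL headers here */
#else
#include <CL/cl.h>
#endif
#include <stdio.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devs[16];
        cl_uint ndev = 0;
        /* Ask for GPU devices only; every physical card shows up,
           SLI/CrossFire or not.                                    */
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                           16, devs, &ndev) != CL_SUCCESS)
            continue;
        for (cl_uint d = 0; d < ndev; d++) {
            char name[256];
            cl_uint units = 0;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof name, name, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof units, &units, NULL);
            printf("GPU %u.%u: %s (%u compute units)\n", p, d, name, units);
        }
    }
    return 0;
}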
 
The only thing I can think of that comes close to NEEDING Mac Pro power is audio/video authoring/creating/rendering/mastering.

So how 'bout it? How many of you on this thread actually buy a new Pro every few years (or more often) for personal use...and why?

-Eric

My company buys me a pro/tower for office use whenever we upgrade. I want to buy another one to replace my old busted G5 for the work I do at home.

I do all the things you describe above plus 3D. The Mac Pro Xeon I use at work is literally 4 times faster at rendering than my MacBook Pro at home, plus it has 4 times as much RAM!

Of course, my MacBook Pro is twice as fast as the G5 I had, so things are getting faster overall. :)
 
Someone with access to the Google server farms wrote a paper about that, and the number of errors they found is quite significant. More like thousands, not one every thousand years. Have a look at the last few weeks on Slashdot.

BTW from Wikipedia: "Cosmic rays constitute a fraction of the annual radiation exposure of human beings on earth. For example, the average radiation exposure in Australia is 0.3 mSv due to cosmic rays, out of a total of 2.3 mSv." That's about 0.01% of the lethal dose that hits you every year, not "one hitting earth every thousand years" (3000 mSv = "50% die within 30 days").
That's all good and all, but the odds of a bit flip actually happening are not very high at all, and that's what I was referring to. Sure, we are getting radiation and stuff, but nah, bit flips aren't going to occur from it.

BTW, in that Slashdot article a few people agree with my POV on the likelihood of bit flips actually occurring. :)

Well, that is pricing in a small market. US, UK and European prices have the addition of ECC as a small percentage of the overall cost on such systems (not just hardware). It may be expensive for an individual over non-ECC, but you have that choice. The Mac Pro doesn't require ECC, for example.
I wouldn't call Australia a small market. We have server farms here too, you know :p

But yeah, I agree the MP doesn't really need ECC RAM, not that it would drop the prices that much (it is Apple, remember, lol).

The idea that, in the world we live in today, the only area where easily preventing random errors matters is military use seems very narrow-minded to me. Google and the University of Toronto did a study that found 8% of Google's DIMMs suffered from memory errors over the two years of the study, which looked at most of Google's systems. There are plenty of other published studies by companies on the error rates of their systems. They don't have a vested interest in promoting something that costs them money yet offers no benefit.
8%? Pretty significant. Can you back it up? Can I read the article? What percent was from computational errors? What percent was from failing sticks? I would also like to see what percent was from cosmic rays :rolleyes:

Without ECC you cannot easily detect, correct or prevent further memory errors, be they soft or hard errors.
I have never been that familiar with how it actually works; I would love to learn. Soft errors being caused by bad code? Hard errors being external sources/failing hardware?

No, he means two physical 6-core CPUs, for a total of 12 physical cores, and 24 threads if Hyper Threading is in use.
Yes, that is what I meant: 24 cores under HT.

I don't think so... and I'd love to build a Mini for $300.
I meant an equivalent Mini-like computer in terms of power. $300 would cover that.

No. The current 'Nehalem' chips have four physical cores each, eight total threads via HyperThreading. Gulftown is 6/12. So the dual-socket systems now have eight physical cores, 16 virtual; and Gulftown dual-socket systems will have 12 physical cores, 24 virtual.
That's what I was referring to: 24 threads (which are all thanks to HT).

I would rather not have HT turned on.


While I don't know about the Mac Pro, or even OS X: on the Windows side, you can use nVidia's "CUDA" (an OpenCL-like programming interface) with multiple video cards even on a motherboard that doesn't support SLI. GPGPU programming doesn't care if the card can be used for graphics, it only cares if the GPU is physically present. I would hope that OS X is similarly agnostic. It would just see that you have multiple GPUs, and use them.

P.S. Sorry if others already answered these, I didn't read the whole thread.

AFAIK that is the same view from OS X: if the GPU is there and is supported by OpenCL, then it will be used. I have yet to see benchmarks of video conversions/system benchmarks etc. comparing Leopard to SL, but I imagine it would be a very nice increase.
 
LMAO, you do realize that 2002 was almost EIGHT years ago, yes? You see, there's this thing called "inflation". Ah, never mind, you're probably too young to understand.

That would make sense, except that computer prices have decreased year after year and have been pretty much immune to the effects of inflation.
 
That's all good and all, but the odds of a bit flip actually happening are not very high at all, and that's what I was referring to. Sure, we are getting radiation and stuff, but nah, bit flips aren't going to occur from it.

It depends if you like your files/data or not.

I have two racks of dual-quad Xeon 5500 ProLiant systems - 8 3.0GHz cores, 16 GiB per system.

With about 50 systems running, I get a random bit-flip about once per week. The HP error logging is superb - the front panel of the machine has an orange LED that lights for each of the DIMMs that has had a correctable error. (I track the logs to distinguish between the random error and a bad DIMM. If one DIMM gets several errors in a week, it's replacement time.)

Are you comfortable with the "odds" being that your system will corrupt data or files once per year? Not "might"... "Will"!

I'm not.
 
It depends if you like your files/data or not.

I have two racks of dual-quad Xeon 5500 ProLiant systems - 8 3.0GHz cores, 16 GiB per system.
Impressive ;) I am extremely jealous. What do the servers do?

With about 50 systems running, I get a random bit-flip about once per week. The HP error logging is superb - the front panel of the machine has an orange LED that lights for each of the DIMMs that has had a correctable error. (I track the logs to distinguish between the random error and a bad DIMM. If one DIMM gets several errors in a week, it's replacement time.)

Are you comfortable with the "odds" being that your system will corrupt data or files once per year? Not "might"... "Will"!

I'm not.

How many of those errors are caused by cosmic radiation?

The modules that repeatedly get errors are clearly dying, so that rules out the radiation. The point I'm trying to make is that cosmic radiation will not affect your systems, and if it does happen then you're unlucky.

I understand that there will be hard errors, I'm not denying that at all! I'd be an idiot if I said they don't exist. :rolleyes:
 
Impressive ;) I am extremely jealous. What do the servers do?

Mostly VM simulations of "cloud computing" apps.


How many of those errors are caused by cosmic radiation?

I don't know (or care). With a few hundred DIMMs, I expect a low rate of random single bit errors. As long as I have ECC memory, these errors are only noise entries in my error logs.


The modules that repeatedly get errors are clearly dying, so that rules out the radiation. The point I'm trying to make is that cosmic radiation will not affect your systems, and if it does happen then you're unlucky.

No, the point is that *any* error, for any reason, that corrupts the database destroys months of work. Even worse - you may not know that months of work have been destroyed if you don't have ECC.

When the Power Mac "supercomputer" was built at Virginia Tech, there was a big gloat on the Mac web. Unfortunately (for the people at VT), it was unusable at first because the original Power Mac G5 nodes did not have ECC memory. Running more than a thousand nodes without ECC was not viable.

That facility was not usable until it was rebuilt with Xserve G5s, which have ECC memory.
 
Gulftown is scheduled to be used on Socket 1366, which is high-end single-socket and dual-socket only. Quad-socket is going to be a different socket; but quad-socket is also getting an eight-core processor, "Beckton", so four sockets will be 32 cores, 64 threads.



No. The current 'Nehalem' chips have four physical cores each, eight total threads via HyperThreading. Gulftown is 6/12. So the dual-socket systems now have eight physical cores, 16 virtual; and Gulftown dual-socket systems will have 12 physical cores, 24 virtual.

I still think Shanghai holds up pretty well considering it has standard threading. I still think it's funny: Intel ditched HyperThreading and NetBurst in a hurry, and now they bring it back as the best thing since sliced bread.
 
I have been dreaming about a Hackintosh; they have pushed me over the edge. I will be just fine spending $1200 for a Hack Pro with an i7 920 overclocked to 3.6GHz, 12 gigs of RAM, and a GeForce 260. This rig will beat the $3500 8-cores in most tasks, and be almost as fast in multicore rendering tasks, since it'll be running at such a high clock speed.

I recommend the rest of you look into a similar solution until Mac Pro prices come down by at least 33%, and/or they stop forcing us to buy server processors we really don't need.
I'd recommend NOT listening to this guy because he KNOWS NOTHING about why people own Macs!
Running OS X on anything other than a Mac is illegal, and when you have a problem (of which you will have very, very many) you get NO support, so what are you really saving? Plus, a Mac has a higher resale value than any PC out there, so don't listen to this wrong information... it's misleading people away from the TRUE Mac experience! True elegance and user friendliness!
And you save what... $800.00? I don't know, but I make money with my Macs and to buy some hackinspoodge is just another way to go broke... because the machine is broke he he!
The new 6-core sounds really nice... and they are getting you ready for the new 64-bit applications (that PCs have been promising for how long now?), so get ready for the first computer to go fully 64-bit throughout!
And do it with elegance and intuitiveness!;);)
 
Mostly VM simulations of "cloud computing" apps.
Ahh, awesome. That requires quite a bit of power, no doubt.

I don't know (or care). With a few hundred DIMMs, I expect a low rate of random single bit errors. As long as I have ECC memory, these errors are only noise entries in my error logs.
Backs up my point, then. If you aren't worried about cosmic radiation affecting your servers, then it doesn't pose a threat; otherwise you would be educated about it and all that.


No, the point is that *any* error, for any reason, that corrupts the database destroys months of work. Even worse - you may not know that months of work have been destroyed if you don't have ECC.
That's your point, which I am well aware of :) Memory errors can be disastrous, I know that.

When the Power Mac "supercomputer" was built at Virginia Tech, there was a big gloat on the Mac web. Unfortunately (for the people at VT), it was unusable at first because the original Power Mac G5 nodes did not have ECC memory. Running more than a thousand nodes without ECC was not viable.

That facility was not usable until it was rebuilt with Xserve G5s, which have ECC memory.

For servers that large it would be pretty important, but for MPs? Hardly worth it.
 
Backs up my point, then. If you aren't worried about cosmic radiation affecting your servers, then it doesn't pose a threat; otherwise you would be educated about it and all that.

For servers that large it would be pretty important, but for MPs? Hardly worth it.
I'm sorry, but you are starting to sound quite uneducated on this topic. Anyone who depends on the accuracy of their data would benefit from ECC, or more accurately, would risk losing time, money, and data by NOT using ECC memory. Just because you don't need it, it doesn't mean others don't. Just because someone doesn't know what percentage of errors are caused by cosmic rays, it doesn't mean they aren't educated about it. It doesn't really matter how many errors are caused by them. There are multiple causes of errors, and cosmic rays are probably the biggest source of them. From your posts I gather that you think cosmic rays are either imaginary, or not an issue.

From Wikipedia:
Research has shown that the majority of one-off ("soft") errors in DRAM chips occur as a result of background radiation, chiefly neutrons from cosmic ray secondaries which may change the contents of one or more memory cells, or interfere with the circuitry used to read/write them.

Also from Wikipedia:
Cosmic rays have sufficient energy to alter the states of elements in electronic integrated circuits, causing transient errors to occur, such as corrupted data in memory, or incorrect behavior of a CPU. This has been a problem in high-altitude electronics, such as in satellites, but as transistors become smaller it is becoming an increasing concern in ground-level equipment as well. To alleviate this problem, Intel has proposed a cosmic ray detector which could be integrated into future high-density microprocessors, allowing the processor to repeat the last command following a cosmic ray event.
 
How would this compare to a Cray CX1?

Wow!

If Apple made a quad 6-core beastie, giving 24 cores in total, wouldn't this be able to compete at some level with the Cray CX1? Not an expert in these matters, but would you expect comparable performance?

Just a thought....... an Apple Supercomputer!! Nice :)
 
I'll show you mine ...

There's always been a little battle between hardware makers and software writers.

Sort of like "I'll show you mine, if you show me yours."

That was easy, now you go first ... no, you go first ... no, you go first ... no, you go first ...

My main apps are Adobe Photoshop, et al. Last time I checked, Photoshop still only utilizes 2 cores max. Someone please correct me if I'm wrong. So, when they get around to changing that, will the limit be 4 cores?

And how much memory will it be capable of accessing?

Want to use more RAM than you're supposed to?

I've got a suggestion: if you have a G5, with lots of RAM (8GB+), but Photoshop can only use 2GB ... set up a 2GB RamDisk and select it as your first scratch disk.

http://www.mparrot.net/

Sure, there are occasional volatility issues: usually if I let the machine sleep with Photoshop left open, then wake it, PS won't save anything. Easy workaround: no sleepy, save often.

Have a nice day.

This was good for me ... was it good for you? (first post)
 
I'm sorry, but you are starting to sound quite uneducated on this topic. Anyone who depends on the accuracy of their data would benefit from ECC, or more accurately, would risk losing time, money, and data by NOT using ECC memory.
I am uneducated on this topic insofar as I have no idea how ECC RAM works and figures out the errors; I'll admit to that (i.e. the detailed stuff). But I am most certainly not uneducated on the importance of needing 100% accurate data, data backups, data integrity and all of that. Assuming I know nothing about this topic based on two/three of my replies is silly of you ;)

As I have said to other people, I am NOT talking about ECC as a whole! I am simply referring to the low probability of cosmic radiation doing a bit flip on the memory modules (and you already know my position on this).

Just because you don't need it, it doesn't mean others don't.
I don't need it yet :rolleyes: Why would I need ECC RAM in a laptop or desktop computer? That's just silly! I am, however, studying to become a network administrator, so you know, ECC is a pretty big thing.

Just because someone doesn't know what percentage of errors are caused by cosmic rays, it doesn't mean they aren't educated about it.
I never said that; my argument was that if he wasn't aware of the percentage of errors then it didn't pose a threat to the integrity of the RAM.

It doesn't really matter how many errors are caused by them.
Why not? As somebody who manages the servers/data/EVERYTHING of a business, the network administrator NEEDS to be educated on this issue and be aware of the problem, because maybe there can be a fix for it, or they can be made aware when a big radiation blast is going to occur and save data or whatnot.

There are multiple causes of errors, and cosmic rays are probably the biggest source of them.
Probably? You base your opinion on uneducated results. Please link me to a page that shows cosmic radiation causes the most errors.

From your posts I gather that you think cosmic rays are either imaginary, or not an issue.
FINALLY! We are on the same page ;)

DRAM chips?? I thought we were talking about ECC chips. Still not that sure whether to trust it or not.


Nothing important, lol. The cosmic ray detector seems pretty cool.


Sorry to be so upfront. Whilst I see where you are coming from, I am trying to show you my view.
 
There's another good reason for ECC

I don't need it yet :rolleyes: Why would I need ECC RAM in a laptop or desktop computer? That's just silly!

It's not silly, at all.

ECC typically does two things:
  1. If a read (usually for 64-bits of data) contains a single bit error, ECC will correct the error, return good data to the CPU, and flag the error for the error logs
  2. If a read gets two or more bits in error, the ECC controller will raise a fatal hardware exception

What this means is that those cosmic ray hits are ignored; the single-bit error is fixed.

However, if there's a real error (2 bits or more), it's BSOD or kernel panic time.

The BSOD on Windows basically says "Fatal memory error, address XXXX". Do a little checking to figure out which DIMM that is and replace the DIMM.

Without ECC, maybe the 2-bit error wasn't noticed, maybe some results are wrong, maybe some file (or even the whole filesystem) is corrupted, maybe the system just acts weird, or maybe it crashes.

Look around at the number of times people ask about strange system problems, and the response is "have you tested your memory?".

With ECC, you don't have that worry. Either the memory is fine, or the system is constantly failing with the memory error BSOD.

Unfortunately, no laptops and few desktop chipsets support ECC memory - so there's not much to argue about.

If I were to build a new system today, I'd use the Xeon 3500 instead of the Core i7 so that I could use ECC memory.

So, in my mind, the more important reason for ECC is not to protect from cosmic rays - it's to never have to wonder if some problem is due to a failing DIMM.
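
For anyone wondering what "correct one bit, flag two" actually looks like, here's a toy sketch in plain C: a Hamming(7,4) code plus an overall parity bit, protecting a single 4-bit nibble. It's only an illustration of the principle; real ECC DIMMs do this in the memory controller, in hardware, typically with a wider SECDED code over each 64-bit word. The 0xB test value and the injected bit flips below are just made-up examples.

#include <stdio.h>
#include <stdint.h>

/* Parity (0 = even, 1 = odd) of an 8-bit value. */
static int parity8(uint8_t v) { v ^= v >> 4; v ^= v >> 2; v ^= v >> 1; return v & 1; }

/* Encode 4 data bits into an 8-bit word: Hamming(7,4) in bits 1..7
   (parity bits at positions 1, 2, 4), overall parity in bit 0.      */
static uint8_t encode(uint8_t data)
{
    uint8_t d1 = (data >> 0) & 1, d2 = (data >> 1) & 1,
            d3 = (data >> 2) & 1, d4 = (data >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;                 /* covers positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;                 /* covers positions 2,3,6,7 */
    uint8_t p4 = d2 ^ d3 ^ d4;                 /* covers positions 4,5,6,7 */
    uint8_t w  = (uint8_t)((p1 << 1) | (p2 << 2) | (d1 << 3) |
                           (p4 << 4) | (d2 << 5) | (d3 << 6) | (d4 << 7));
    return (uint8_t)(w | parity8(w));          /* overall parity -> bit 0  */
}

/* Decode: returns 0 = clean, 1 = single-bit error (corrected),
   2 = double-bit error (a real controller raises a fatal exception here). */
static int decode(uint8_t w, uint8_t *out)
{
    int syndrome = 0;
    for (int pos = 1; pos <= 7; pos++)         /* XOR the positions of set bits */
        if ((w >> pos) & 1)
            syndrome ^= pos;
    int parity_ok = (parity8(w) == 0);
    int status;
    if (syndrome == 0 && parity_ok) {
        status = 0;                            /* no error                      */
    } else if (!parity_ok) {
        w ^= (uint8_t)(1 << (syndrome ? syndrome : 0));  /* fix the single flip */
        status = 1;
    } else {
        status = 2;                            /* two flips: detected, not fixed */
    }
    *out = (uint8_t)(((w >> 3) & 1) | (((w >> 5) & 1) << 1) |
                     (((w >> 6) & 1) << 2) | (((w >> 7) & 1) << 3));
    return status;
}

int main(void)
{
    uint8_t stored = encode(0xB), data;
    int s1 = decode(stored ^ 0x20, &data);     /* one "cosmic ray" flip   */
    printf("one flip:  status %d, data 0x%X\n", s1, data);   /* 1, 0xB    */
    int s2 = decode(stored ^ 0x22, &data);     /* two flips in one word   */
    printf("two flips: status %d (uncorrectable)\n", s2);    /* 2         */
    return 0;
}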
 
DRAM chips?? I thought we were talking about ECC chips. Still not that sure whether to trust it or not.

DRAM is the memory chip type used on current computer memory modules. ECC is a feature present on the modules.

There seems to be more attention being paid to the cosmic ray part than is needed. I think I was the one who brought it up, but really I intended to just reference external, unpredictable outside forces that can cause errors in memory (ever-present radiation). The memory manufacturers do all they can, but it isn't an exact science. Modern production techniques have reduced the effects of these external forces, but as memory sizes grow by large amounts the errors are still something to be concerned about. You just have no idea what bit in which DIMM in which system will get flipped. Because it is so random, you need to use whatever methods you can to protect against it if it has the potential to cause issues. It's just insurance. It also reports and can correct hard errors.

The result is that Intel and AMD provide it as part of their platform. Intel doesn't require ECC memory on their single- and dual-socket workstation and server platforms anymore (as of 2009), but memory manufacturers don't really cater to those who don't want it. So 4GB non-ECC DIMMs are expensive, 8GB and 16GB don't exist, and you can't get registered memory ("needed" for high overall capacity) without ECC because there is no market for it. The cost is negligible for most companies, and as an individual you have the choice, so the issue is really just something that is accepted: you get ECC memory on most workstations and all servers.
 
No SLI, remember; the only people left in the chipset market now are AMD and Intel. The best you could hope for is CrossFireX. (The ATI HD 5xxx series is kicking some serious green ass.) This is one of the reasons I secretly hope that Apple changes to AMD. Whether Intel wants to believe it or not, the binary CPU is becoming irrelevant.
Actually, Apple/Intel could work on getting SLI on the Mac Pros. SLI isn't only available to nVidia-made chipsets. nVidia also licenses it out for other platforms (the X58 chipset for socket 1366 i7s and the P55 chipset for socket 1156 i5's/i7's both have SLI support due to nVidia licensing, and both are Intel-made chipsets). You would simply have to see Apple and Intel work on incorporating SLI support, and have Apple license it from nVidia.

And as the market is now, why would you want to see Apple change to AMD? While the top of the line Phenom X4s are competitive with the top Core 2 Quads, Intel's i7/Nehalem-based processors are generally quite a bit faster than anything AMD can provide, and that doesn't appear to be changing soon. If iMacs do get switched over to using Clarksfield-based i7s, you should see a pretty decent performance increase over the current Core 2s.

And yeah, the Radeon 5*** series is pretty awesome. Given the power management changes and such, here's hoping we can see at least the 5850 make it over to be an upgrade option on the next iMacs.

Edit - Whoops, realized people answered this already :p (well, part of it)
 
I still think Shanghai holds up pretty well considering it has standard threading. I still think it's funny: Intel ditched HyperThreading and NetBurst in a hurry, and now they bring it back as the best thing since sliced bread.

Well, NetBurst had a *lot* of issues. Remember, it was a time when the main focus in advertising and selling new CPUs was how fast they could go. This is why, for a while, you saw Motorola (now Freescale) trying to ramp up the speed of the PowerPC chips as quickly as they could, and why it was such a big issue when it took them so long to get to the 500 MHz mark on the G4 (if I recall correctly).

Intel thus created the NetBurst architecture because it was intended to scale to very high speeds (10+ GHz). They envisioned it lasting them for several years. However, what resulted was that the P4s would get very hot, and within a couple of years Intel realized that NetBurst's days were numbered. The P4's early problems were partly the result of RDRAM, and the high expense that resulted from it. Performance was also somewhat lackluster in the very early chips. However, Intel was more than happy to just chirp along by advertising how fast the P4s could go.

Hyperthreading kinda got dragged down for a couple of reasons, but primarily because it was released at a time when not a lot of applications were multi-threaded. As a result, due to the extra power consumption incurred, it was generally viewed as not being worth the benefit of having.

AMD released their highly successful 64-bit processors. While there's no doubt that in reality moving to a 64-bit processor had relatively little impact in terms of performance, the fact it could be marketed as such was huge. It also didn't help that, due in no small part to AMD integrating a memory controller, the Athlon 64s easily outperformed the top-model P4s, and word of mouth quickly spread about how much Intel's P4s "sucked".

The reason Intel "ditched" Hyperthreading was that the Core series was derived from the Pentium-M, which itself was an enhanced Pentium III (and the Core 2 series was a much greater expansion upon the Core series). Hyperthreading was never a part of the Pentium III, and it didn't make much sense to put it into the Pentium-M (as it was designed for optimal performance and power consumption). The Core series, expanding upon the Pentium-M, thus didn't incorporate it.

Remember though, Nehalem has been in development for some time, and Hyperthreading is a part of it, so it should be fairly clear that Intel never really ditched it.

And now that many more apps are multi-threaded, you can now actually see a decent benefit from having it enabled, and it isn't the "minor occasional performance increase with larger power consumption hog" that it once was.

:)

Edit - When I said that moving to a 64-bit processor didn't result in much of a performance gain from being 64-bit, I should have clarified that as consumer-side. It did make a nice difference in the server and supercomputer markets.
 