AnandTech is putting a lot of emphasis on this FB-DIMM issue. Their Conroe vs Xeon comparisons are poor given that they maximize the FB-DIMM latency "problem" by using a Mac Pro with only two RAM slots occupied. Seems as though they have an agenda to exaggerate the importance of this technical issue.
 
amin said:
AnandTech is putting a lot of emphasis on this FB-DIMM issue. Their Conroe vs Xeon comparisons are poor given that they maximize the FB-DIMM latency "problem" by using a Mac Pro with only two RAM slots occupied. Seems as though they have an agenda to exaggerate the importance of this technical issue.

I have noticed this emphasis as well; not being an expert on this issue myself though, would you care to shed light on how their coverage is an exaggeration and why we shouldn't be worried about it?

The comments about separate platforms in the NT era I took to refer to NT3.x/4 vs Win9x.

Yes, this is what I was getting at. ("arse about face"? What is that, Swedish? :rolleyes: ). No one other than a vintage Windows IT person would know there were further differences between versions of NT itself. Also, when making comparisons I never mentioned Server 2003 (about which I know almost nothing); I was talking about XP and 2000 being relatively similar, whereas, for example, NT and 98 were not.

ergle2 said:
New micro-arch -- Nehalem is due 2008.

Really, completely new? As in, to Core 2 what the G5 was to G4? In just two years?? I guess they're really ramping things up... Core 3 Hexa Mac Pros, anyone?
 
brianus said:
Really, completely new? As in, to Core 2 what the G5 was to G4? In just two years?? I guess they're really ramping things up... Core 3 Hexa Mac Pros, anyone?

Intel's stated plans as I understand them are thus:

A new micro-arch every 2 years. I don't think they mean brand new so much as "significant changes/improvements". Whether this is akin to Yonah->Conroe or Netburst->Conroe remains to be seen, but it's probably more like the former (or perhaps Pentium-M -> Merom -- Core Duo was very much a stop-gap). Little has been released about Nehalem, but at one time it was slated as "based on Banias/Dothan", due in 2005 and expected to ramp to 9/10GHz.

"Off" years will receive derivative versions (e.g. Merom->Penryn), which appears to mean mostly stuff like L2 cache increases, faster FSB speeds (at least while we have FSBs -- 2008 looks like the year for DCI, finally), die shrinks, increasing the number of cores (expect at least one to be more cores on a single die instead of two dies per package), etc.

Die shrinks are currently scheduled for "off" years, in order to stabilize the process ahead of the new micro-arch in the following year, so Intel doesn't need to deal with both a new process and a new arch at the same time -- and presumably, in part, to keep speed increases coming in "off" years.

Of course, roadmaps can change quite rapidly -- it's not that long ago that Whitfield was expected to debut late 2006 with DCI (FSB replacement). Whitfield was replaced by Tigerton which is now due sometime in 2007...

One thing's for sure, Intel appears to have learnt a great deal from the Netburst fiasco -- how not to do things, if nothing else. Unfortunately, they still estimate ~50% of processors shipping in 1Q2007 will be netburst-based (mostly Pentium-D).
 
brianus said:
I have noticed this emphasis as well; not being an expert on this issue myself though, would you care to shed light on how their coverage is an exaggeration and why we shouldn't be worried about it?

I am no expert, and I am not denying that this issue matters. However, I see no cause for concern unless someone provides decent evidence that it matters. It strikes me as odd that they (at AnandTech) put so much emphasis on explaining the theory behind a "problem" without making any competent effort to illustrate an example of the problem.

When you go to configure a Mac Pro, the Apple page says the following about memory: "Mac Pro uses 667MHz DDR2 fully buffered ECC memory, a new industry-standard memory technology that allows for more memory capacity, higher speeds, and better reliability. To take full advantage of the 256-bit wide memory architecture, four or more FB-DIMMs should be installed in Mac Pro."

Yet AnandTech chose a 1GB x 2 RAM arrangement to compare the Core 2 Extreme and Xeon processors. Using this setup, which effectively cripples the Mac Pro memory system, they find it at worst 10% slower than the Conroe Extreme (in a single, non-real-world benchmark). Meanwhile, in any comparison that utilizes all four cores, the quad Xeon whoops ass by a large margin.
 
ergle2 said:
One thing's for sure, Intel appears to have learnt a great deal from the Netburst fiasco -- how not to do things, if nothing else. Unfortunately, they still estimate ~50% of processors shipping in 1Q2007 will be netburst-based (mostly Pentium-D).

It is a shame, but sadly those are the really cheap chips right now. The good news is that they'll change those over soon enough, with more Allendales, then Millville, and so on, taking on more segments of the market.

I think as they transition to 45nm we'll see more and more Core chips, simply because they'll want as much manufacturing to be on the new process as possible, and they don't need to scale the D's etc. down to it.
 
Silentwave said:
It is a shame, but sadly those are the really cheap chips right now. The good news is that they'll change those over soon enough, with more Allendales, then Millville, and so on, taking on more segments of the market.

I think as they transition to 45nm we'll see more and more Core chips, simply because they'll want as much manufacturing to be on the new process as possible, and they don't need to scale the D's etc. down to it.

Indeed. The Netburst chicken has been decapitated, it just hasn't yet stopped running around the marketplace...

I think Intel wants the transition to go as quickly as possible, given the aggressive pricing of Core 2 - not as cheap as Pentium D, but a much better bang for the buck, so to speak. Of course, that's also connected to trying to beat back the AMD surge of recent years...
 
ergle2 said:
And of course, NT started as a reimplementation of VMS for a failed Intel RISC CPU...
A cancelled Digital RISC CPU.

Although, some of the ideas for the cancelled CPU ended up in the Alpha chips.
 
suneohair said:
Didn't you get the memo, Hyperthreading was a joke.

Didn't you get the memo, PowerPC is dead. WTF does that have to do with anything? Do you just have this Pavlovian response to the word "Hyperthreading"?

I fully understand how Hyperthreading works -- in certain situations the processor can run two instructions simultaneously. Not all situations, however. So sometimes a single CPU can act like it is dual core, other times it cannot, depending on the independence of the two threads. It's like having an ambidextrous person instead of two people.

It was a top of the line processor when I bought it ~18 months ago. It is a DUAL CORE processor before Hyperthreading even comes into the picture. With Hyperthreading on it looks like 4 processors to Windows. So, what, should I turn off Hyperthreading just because you don't like it? Am I supposed to stop using the machine? Boob.
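For what it's worth, you can see the "looks like 4 processors" effect from any language by asking the OS for its logical processor count. A minimal Python sketch (note that the OS enumerates *logical* processors, so a dual-core chip with Hyperthreading enabled reports four):

```python
import os

# os.cpu_count() reports logical processors: on a dual-core CPU with
# Hyperthreading enabled, the OS sees four, even though only two
# physical cores exist, each sharing its execution units between
# a pair of hardware threads. It may return None if undeterminable.
logical = os.cpu_count()
print(f"logical processors reported by the OS: {logical}")
```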
 
I think all this quad and oct core stuff is fantastic (it would be even more fantastic if I had the money to get such gear...)

But at the moment it's the HDD that slows everything down. Your RAM may be able to send 4GB/s of data to the processor, but the HD can't write the resulting data back at even a tenth of that speed.

I remember reading a BBC news article the other month about MRAM (magnetic RAM), which has the same write speeds as RAM but without the volatility. It doesn't lose its data when the power is off. Ideal for fast HDDs, they say.

On an unrelated note, wouldn't it be cool to effectively install a whole OS in RAM? That would be noticeably quicker....
 
scottlinux said:
The Today show is an embarrassment. The US major tv networks do not have any real morning news programs. How to trim your dog's ears and an inside look into American Idol contestants is NOT NEWS. It is an entertainment talk show.

The network morning "news" shows have always been fluff. What's worse is that the so-called "hard news" shows are just as bad, and not just in the morning -- CNN, MSNBC, and Fox News all run mindless fluff instead of news. And don't get me started with MSNBC airing Eye-Puss in the Morning.
 
simontarr said:
I remember reading a BBC news article the other month about MRAM (magnetic RAM), which has the same write speeds as RAM but without the volatility. It doesn't lose its data when the power is off. Ideal for fast HDDs, they say.

Yeah, I think they're calling them "hybrid drives": they'll have some fast non-volatile flash memory built into the hard drive and cache the most frequently accessed parts of the drive in it.

simontarr said:
On an unrelated note, wouldn't it be cool to effectively install a whole OS in RAM? That would be noticeably quicker....

You used to be able to do that with ramdisks, but getting the files onto the ramdisk took more time than just booting from the disk. Sometimes you can force the OS to keep itself in RAM once it's loaded from disk (so the OS won't start swapping itself out when it needs memory); there's a setting for this in Tweak XP.
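As a rough illustration of the RAM-backed storage idea, Python's standard library can keep a "file" entirely in memory until it grows past a threshold. This is only a toy sketch of the concept, not an actual bootable ramdisk:

```python
import tempfile

# SpooledTemporaryFile keeps its contents in RAM until they exceed
# max_size; only then does it spill over to an on-disk temporary file.
# Reads and writes below the threshold never touch the disk at all.
with tempfile.SpooledTemporaryFile(max_size=1024 * 1024) as f:
    f.write(b"pretend this is an OS image held in RAM")
    f.seek(0)
    data = f.read()

print(len(data))
```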
 
brianus said:
drsmithy said:
The server/desktop division with Windows - as with OS X - is one of marketing, not software. Windows "Workstation" and Windows "Server" use the same codebase.

True (today, anyway; in the NT era they were indeed separate platforms, though -- which brings me to my next point...)

No, that is not true; in fact it couldn't be more untrue. Now, the 95 family (95/98/ME) was a totally different codebase. But within the NT family (NT/2000/XP) the client and the server were identical, even identical in shipped code. In fact there was a big scandal years ago when someone discovered the registry setting that would turn NT Workstation into NT Server. Back then all that differed was the number of inbound connections and possibly the number of CPUs supported. All they were trying to do with Workstation was prevent you from using it as a server (thus the connection limit), and at some point they didn't give you full-blown IIS on Workstation. That's it.
 
ergle2 said:
And of course, NT started as a reimplementation of VMS for a failed Intel RISC CPU...

More pedantic details for those who are interested... :)

NT actually started as OS/2 3.0. Its lead architect was OS guru Dave Cutler, who is famous for architecting VMS for DEC, and naturally its design influenced NT. And the N-10 (where "NT" comes from: "N-Ten") Intel RISC processor was never intended to be a mainstream product; Dave Cutler insisted that the development team NOT use an x86 processor, to make sure they would have no excuse to fall back on legacy code or thought. In fact, the N-10 build that was the default work environment for the team was never intended to leave the Microsoft campus. NT over its life has run on x86, DEC Alpha, MIPS, PowerPC, Itanium, and x64.

IBM and Microsoft worked together on OS/2 1.0 from 1985-1989. Much maligned, it did suck because it was targeted at the 286 rather than the 386, but it did break new ground -- preemptive multitasking and an advanced GUI (Presentation Manager). By 1989 they wanted to move on to something that would take advantage of the 386's 32-bit architecture, flat memory model, and virtual machine support. Simultaneously they started OS/2 2.0 (extending the existing 16-bit code into a 16/32-bit hybrid) and OS/2 3.0 (a ground-up, platform-independent version). When Windows 3.0 took off in 1990, Microsoft had second thoughts and eventually broke with IBM. OS/2 3.0 became Windows NT; in the first days of the split, NT still used the OS/2 Presentation Manager APIs for its GUI. They ripped those out and created the Win32 APIs. That's also why, to this day, NT/2K/XP support OS/2 command-line applications, and there was also a little-known GUI pack that would support OS/2 1.x GUI applications.
 
AidenShaw said:
A cancelled Digital RISC CPU.

Although, some of the ideas for the cancelled CPU ended up in the Alpha chips.

NT was originally designed for the i860, which was codenamed the N-10 (hence NT).

Anything for Digital would have been while Cutler was at Digital, I imagine, rather than after he joined Microsoft.
 
janstett said:
No, that is not true; in fact it couldn't be more untrue. Now, the 95 family (95/98/ME) was a totally different codebase. But within the NT family (NT/2000/XP) the client and the server were identical, even identical in shipped code. In fact there was a big scandal years ago when someone discovered the registry setting that would turn NT Workstation into NT Server. Back then all that differed was the number of inbound connections and possibly the number of CPUs supported. All they were trying to do with Workstation was prevent you from using it as a server (thus the connection limit), and at some point they didn't give you full-blown IIS on Workstation. That's it.

Dude, how many times do I have to repeat myself before you myopic '90s-era IT geeks understand me? I was referring to the difference between Windows 9x and Windows NT. I neither knew nor cared that there were different versions of NT itself. For. Christ's. Sake. I have said this three times now. Don't make me come over there.

simontarr said:
On an unrelated note, wouldn't it be cool to effectively install a whole OS in RAM? That would be noticeably quicker....

I keep hearing speculation that they'll start using NAND flash to help with startup times in laptops, things like that -- now, how would that work? Doesn't everything have to be on the boot volume? OSes these days seem to assume that the OS, programs, and user directories are all going to be on one volume, and you have to be fairly technically literate to do it differently..
 
janstett said:
More pedantic details for those who are interested... :)

NT actually started as OS/2 3.0. Its lead architect was OS guru Dave Cutler, who is famous for architecting VMS for DEC, and naturally its design influenced NT. And the N-10 (where "NT" comes from: "N-Ten") Intel RISC processor was never intended to be a mainstream product; Dave Cutler insisted that the development team NOT use an x86 processor, to make sure they would have no excuse to fall back on legacy code or thought. In fact, the N-10 build that was the default work environment for the team was never intended to leave the Microsoft campus. NT over its life has run on x86, DEC Alpha, MIPS, PowerPC, Itanium, and x64.

IBM and Microsoft worked together on OS/2 1.0 from 1985-1989. Much maligned, it did suck because it was targeted at the 286 rather than the 386, but it did break new ground -- preemptive multitasking and an advanced GUI (Presentation Manager). By 1989 they wanted to move on to something that would take advantage of the 386's 32-bit architecture, flat memory model, and virtual machine support. Simultaneously they started OS/2 2.0 (extending the existing 16-bit code into a 16/32-bit hybrid) and OS/2 3.0 (a ground-up, platform-independent version). When Windows 3.0 took off in 1990, Microsoft had second thoughts and eventually broke with IBM. OS/2 3.0 became Windows NT; in the first days of the split, NT still used the OS/2 Presentation Manager APIs for its GUI. They ripped those out and created the Win32 APIs. That's also why, to this day, NT/2K/XP support OS/2 command-line applications, and there was also a little-known GUI pack that would support OS/2 1.x GUI applications.

All very true, but beyond that -- if you've ever looked closely at VMS and NT, you'll notice it's a lot more than just "influenced". The core design was pretty much identical -- the way I/O worked, the interrupt handling, the scheduler, and so on were all practically carbon copies. Some of the names changed, but how things worked under the hood hadn't. Since then it's evolved, of course, but you'd expect that.

Quite amusing, really... how a heavyweight enterprise-class OS of the 80's became the desktop of the 00's :)

Those that were around in the dim and distant will recall that VMS and Unix were two of the main competitors in many marketplaces in the 80's and early 90's... and today we have OS X, Linux, FreeBSD, Solaris, etc. vs XP, W2K3 Server and (soon) Vista -- kind of ironic, dontcha think? :)

Of course, there's a lot still running VMS to this very day. I don't think HP wants them to tho' -- they just sent all the support to India, apparently, to a team with relatively little experience...
 
brianus said:
I keep hearing about speculation that they'll start using NAND flash to help with startup times in laptops, things like that -- now, how would that work? Doesn't everything have to be on the boot volume? OS's seem to assume these days that the OS, programs and user directories are all going to be on one volume and you have to be kind of technically literate to do it differently..

Intel's "Robson" technology.

It's just a cache of certain files in flash. It's trivial to have the system check there first and then fall back to the boot volume. Like everything else, those implementing it need to be technically literate, but once it's done, it's done. Users don't need to know what's going on.

Vista already has some feature that allows caching etc. to any flash devices connected to the system, btw.
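The lookup order being described is just a read-through cache. A toy Python sketch of the idea (the names here are illustrative, not Intel's actual Robson interface):

```python
# Sketch of a read-through cache: check the fast store first, fall back
# to the slow one, and populate the cache on a miss.
flash_cache = {}              # stands in for the NAND flash cache

def disk_read(block_id):      # stands in for a slow hard-disk read
    return f"data-for-block-{block_id}"

def read_block(block_id):
    if block_id in flash_cache:        # hit: served from flash
        return flash_cache[block_id]
    data = disk_read(block_id)         # miss: fall back to the platters
    flash_cache[block_id] = data       # populate the cache for next time
    return data

read_block(7)                 # first read comes from "disk"
print(read_block(7))          # second read is served from the cache
```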
 
simontarr said:
On an unrelated note, wouldn't it be cool to effectively install a whole OS in RAM? That would be noticeably quicker....

The OS would be faster, but unless you had tons of RAM, the apps... :)

Modern OSes use RAM not taken by apps to cache recently used files/data, since it makes more sense to keep around the stuff the system might need again. Most OS files aren't needed most of the time (just look at the size of the OS itself on any system!).
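That "keep around the stuff the system might need again" policy is, at heart, a least-recently-used cache. A toy Python model of it (a real page cache is far more elaborate; this just shows the eviction behavior):

```python
from collections import OrderedDict

# Toy model of spare RAM caching recently used file data:
# the least recently used entry is evicted first.
class PageCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def access(self, key, load):
        """Return cached data for key, loading (and caching) on a miss."""
        if key in self.pages:
            self.pages.move_to_end(key)         # mark as most recently used
        else:
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict least recently used
            self.pages[key] = load(key)
        return self.pages[key]

cache = PageCache(capacity=2)
loads = []

def load(key):                # stands in for a slow disk read
    loads.append(key)
    return f"contents-of-{key}"

cache.access("a", load)
cache.access("b", load)
cache.access("a", load)       # hit: no disk read, "a" becomes most recent
cache.access("c", load)       # miss: evicts "b", the least recently used
print(sorted(cache.pages))    # the pages still held in RAM
```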

Of course, back in my Amiga days, pretty much all of the OS was running from ROM/RAM, and it had pre-emptive multitasking but no VM system. As a result, it was incredibly snappy to use, despite being a 7.14MHz 68K. I've occasionally seen real Amigas since then and I'm always impressed by how "fast" they feel, even if the system itself seems rather primitive by modern standards.

I imagine the early Macs were somewhat similar in this regard, but I didn't use one properly til the early 90's, by which time I was more interested in Unix, VMS, etc.
 
Evangelion said:
Uh, last time I checked, Windows can take advantage of multiple cores just fine. Do you think that multithreading is some Black Magic that only MacOS can do? Hell, standard Linux from kernel.org can use 512 cores as we speak!

Related to this: maybe not 512-way SMP, but here is what it looks like when Linux boots on a 128-way SGI Origin supercomputer. Note, the kernel that is booting is 2.4.1, which was released in early 2001. Things have progressed A LOT since those days.

OS X works with quad core == "ahead of the technology curve"... puhleeze!

Windows works just fine with dual-core. It really does. To Windows, dual-core is more or less similar to typical SMP, and Windows has supported SMP since Windows NT!

Any reason why it wouldn't work? And did you even read the AnandTech article? They conducted their benchmarks in Windows XP! So it obviously DID work with four cores! And it DID show substantial improvement in performance in real-life apps! Sheesh! Dial down that fanboyishness a bit, dude.

I think the same applies to you, Bill. You seem to be here to act as a Microsoft evangelist.
 
AidenShaw said:
Any description of the history of NT that doesn't say "Mica" and "Prism" is missing some major details ;) !

Well, come on! I wrote a synopsis that was already too lengthy. I felt it sufficient to say that Dave Cutler's life at DEC gave him OS guru status and left it at that. I didn't mention Gordon Letwin either. On either point it's rather like mentioning Brian Kernighan and Rob Pike in a history of OS X -- technically accurate but of marginal relevance.
 
brianus said:
Dude, how many times do I have to repeat myself before you myopic '90s-era IT geeks understand me? I was referring to the difference between Windows 9x and Windows NT. I neither knew nor cared that there were different versions of NT itself. For. Christ's. Sake. I have said this three times now. Don't make me come over there.

Well then, if you are so consistently misinterpreted, have you ever stopped to think that you should CLARIFY yourself, or that you must not be communicating your point clearly? The truth is Microsoft maintained two simultaneous families of operating systems from 1987 to 2003, and the survivor is NT/2K/XP. It was always the better of the two families, and the one geeks like us would be concerned with, so naturally it's the one most people think of when projecting back in history.
 
ChrisA said:
One app would be iTunes. I noticed iTunes was running 14 threads last night. Any time you have a multithreaded application, or are running multiple single-threaded applications, more cores can help.

iTunes is generally so low-impact that it could be single-threaded and you probably wouldn't notice. Still, when the main thread is bogged down, I get the spinny color disc with iTunes on occasion. It seems to do this when I sync an iPod; sometimes iTunes won't let me do anything else.

An eight-core system should be able to run eight single-threaded programs, each at 100% of one CPU, without issue. What I hope is that more programs that need the processing power will use the full power of more than one CPU, so you don't need to multitask so heavily to take advantage of the power available.
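The two situations above can be sketched in a few lines of Python: a CPU-bound, single-threaded workload, and one program that spreads eight copies of it across the cores itself (using processes rather than threads, since this is just an illustration of occupying multiple cores):

```python
import multiprocessing as mp

# A CPU-bound, single-threaded workload: eight separate copies of this
# could each occupy one core of an eight-core machine at 100%.
def busy(n):
    total = 0
    for i in range(n):
        total += i
    return total

if __name__ == "__main__":
    # One program using all the cores at once, instead of eight programs:
    # a pool of worker processes splits the jobs across the CPUs.
    with mp.Pool(processes=8) as pool:
        results = pool.map(busy, [100_000] * 8)
    print(len(results))
```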
 