idea_hamster said:
Technically, I think that both the iMac and MacBook Pro are physically limited to two RAM modules rather than 2GB.
They have two slots. But Apple also advertises a 2GB memory limit, despite the existence of 2G modules.

The core-logic chipset (that controls the busses) is as much responsible for memory capacity as the physical number of sockets.

Apple does have a history of under-documenting memory capacities. (e.g. the Bondi iMac, advertised with a 64M capacity per slot, but can actually accept 256M in each slot.) But this doesn't mean it will accept the largest modules that can physically fit in the slot (those same iMacs can not accept 512M or 1G SO-DIMMs.)

Right now, Apple is advertising a 2G memory limit. Until people install larger modules and demonstrate a larger actual capacity, one should assume that this is the real limit.
 
steve_hill4 said:
I think the maximum you can get into a single slot is 1GB with 32-bit processors.
Not at all true. The bit-width of a processor has nothing at all to do with this. It is entirely a function of the memory controller chip (typically part of the core-logic chipset.)

If current 32-bit core-logic chipsets won't accept more than 1G per slot (and I don't know if this is actually true), it does not mean nobody can make such a chipset.
steve_hill4 said:
The maximum that consumer computers can handle is 4GB, but spread across four RAM slots. I may be wrong (as there would currently be no need for so many 2GB sticks of RAM), but that's what I recall.
This is also a function of design decisions by motherboard makers and chipset makers. It has nothing to do with the bit-width of the CPU.

As a matter of fact, 32-bit CPUs can support more than 4G of RAM. Intel Xeons have been doing this for a while (using a 36-bit address bus.) With a traditional operating system, each process is limited to 4G of virtual memory out of the 64GB total system capacity.

It should also be noted that Intel CPUs all support a segmented-addressing model that allows for a 48-bit virtual address space (16-bit segment, 32-bit offset - 256TB) Very few operating systems make use of this mode, because it is difficult to use efficiently, but the chip still supports it. On a chip that has more than 32 address lines (like a Xeon), this mode can allow single processes to have much more than 4GB of RAM.
steve_hill4 said:
If you actually check the tech specs of both Core Duo offerings from Apple, they both state a maximum of 2GB in total anyway.
True. But this isn't because the CPU is 32-bit. This is because of the number of memory slots, capacities of currently-shipping memory modules, and the core-logic chipset.
 
janstett said:
Not neccessarily. More bits isn't always better. First, for many applications like you mentioned (games, 3d modelling, etc) floating point operations are more useful.

In the Intel world, when they moved from 16-bit to 32-bit it was a big deal, mostly for the change from a segment-offset memory model to a flat 32-bit memory model. Here the ability to get access to > 4gb memory is the big deal, and they've been able to put in some hacks/shortcuts to work around this for several years.

Just to illustrate how 64-bit isn't always better, let's imagine doing an add for two 32-bit integers with the same values at the assembly (register) level:

00000000000000000000000000000001 +
00000000000000000000000000000010

versus 64-bit:

0000000000000000000000000000000000000000000000000000000000000001 +
0000000000000000000000000000000000000000000000000000000000000010

For this simple add, the 64-bit int is more work and doesn't yield any benefit.

For other architectures this is true. On x86-64, however, things are never so simple. In 64-bit mode, the original 8 GP registers are accessed as 32-bit registers by default, with the upper 32 bits zeroed on a 32-bit write. The additional 8 GP registers always require a REX prefix to access (though they too can be used at 32-bit and smaller widths). Also, there are an additional 8 XMM registers in 64-bit mode.
 
shamino said:
The core-logic chipset (that controls the busses) is as much responsible for memory capacity as the physical number of sockets.
Indeed. And the 945PM chipset, as found in the iMac Core Duo, supports up to 4 GB, and does support 512 Mbit devices, which would be the common way to get 2 GB per module. (SO-DIMM anyway; you can squeeze 2 GB onto a desktop module with 256 Mbit devices.)

shamino said:
Apple does have a history of under-documenting memory capacities. (e.g. the Bondi iMac, advertised with a 64M capacity per slot, but can actually accept 256M in each slot.) But this doesn't mean it will accept the largest modules that can physically fit in the slot (those same iMacs can not accept 512M or 1G SO-DIMMs.)
Very much so. SOME Bondi iMacs can support 512 MB modules, some can't. Reminds me of the SE/30, which supports 16 MB modules, even though that size wasn't even under development yet, so supporting 128 MB of total memory wasn't even a thought in the engineers' heads. (Yet I have 128 MB in my SE/30, which came out in 1989. And my iMac that came out in 1999 only came with 64 MB of RAM!)
 
bigandy said:
no.

Vista will have limited support, but no previous or current Windows version has.

XP64 does support EFI, otherwise Itanium systems wouldn't be able to boot into Windows.
 
jhu said:
XP64 does support EFI, otherwise Itanium systems wouldn't be able to boot into Windows.

Unfortunately, 'Windows XP 64-bit Edition' for Itanium systems is an entirely different beast from 'Windows XP Professional x64 Edition', the AMD64/EM64T version.

It has been stated that the x64 Edition can boot on EFI systems, but the only systems that I have personally seen it on all use the BIOS Compatibility Module, which makes an EFI system pretend it has a BIOS. This is because the system pretty much has to be compatible with the 32-bit version of XP too, since not enough people are using x64 as their primary OS to warrant making a computer that is only compatible with the x64 version.

So while it is POSSIBLE that x64 is EFI compatible, I haven't yet seen proof of it.

[five minutes of Googling later...]

Well, after a bit of inspired Google-fu, I found the following MS TechNet article. It has a grid of supported disk formats: MBR is what is used by BIOS systems, GPT is used by EFI systems. It shows that only Itanium systems can boot off GPT disks, but that any processor architecture running Windows Server 2003 SP1 or XP Pro x64 can use a GPT disk as a data disk. This means that an EFI-only (no BIOS Compatibility Module) x64 system cannot boot any current version of Windows. That means that even if Apple released an x64 Intel Mac (or if people were right, which they're not, about Core Duo being 64-bit), you STILL couldn't boot Windows.

And here is a second article specifically detailing that x64 systems can mount (in Server 2003 SP1 or XP x64,) but not boot from, GPT disks. Just as Itanium systems cannot boot from an MBR disk, but can use it as a data disk.

What is the point of being able to use a GPT disk as a data disk? Well, if you format a disk on an Itanium system, it would be nice to be able to read it on an x86 or x64 system. So they've implemented that in Pro x64 and Server 2003 SP1.
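As an aside, telling the two partition schemes apart is straightforward: a BIOS-style MBR ends with the boot signature 0x55 0xAA in its first sector, while a GPT header at LBA 1 begins with the ASCII signature "EFI PART". A hedged C sketch (the function name and the assumption of 512-byte logical sectors are mine, not from the articles):

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Returns 1 for GPT, 2 for MBR-only, 0 for unknown/error.
 * Assumes 512-byte logical sectors. */
int detect_partition_scheme(const char *path) {
    uint8_t sector[512];
    int gpt = 0, mbr = 0;
    FILE *f = fopen(path, "rb");
    if (!f) return 0;

    /* LBA 1 holds the GPT header, which begins with "EFI PART". */
    if (fseek(f, 512, SEEK_SET) == 0 &&
        fread(sector, 1, 512, f) == 512 &&
        memcmp(sector, "EFI PART", 8) == 0)
        gpt = 1;

    /* LBA 0 holds the MBR; its last two bytes are 0x55 0xAA. */
    if (fseek(f, 0, SEEK_SET) == 0 &&
        fread(sector, 1, 512, f) == 512 &&
        sector[510] == 0x55 && sector[511] == 0xAA)
        mbr = 1;

    fclose(f);
    /* GPT disks also carry a protective MBR, so check GPT first. */
    if (gpt) return 1;
    if (mbr) return 2;
    return 0;
}
```

Note the protective-MBR check order: every GPT disk also has a valid MBR signature, precisely so BIOS-era tools don't treat the disk as empty.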
 
ehurtley said:
... TechNet article. ...
It shows that only Itanium systems can boot off GPT disks, but that any processor architecture running Windows Server 2003 SP1, and XP Pro x64 can use a GPT disk as a data disk. This means that an EFI-only (no BIOS Compatibility Module) x64 system cannot boot any current version of Windows. That means that even if Apple released an x64 Intel Mac (or if people were right, which they're not, about Core Duo being 64-bit,) you STILL couldn't boot Windows.
It means more than that. It also means that the OS doesn't have a problem with GPT, only the bootloader. And a GPT compatible bootloader has already been developed (for Itanium).

So although no currently-shipping distributions will boot on EFI, it would appear that all the hard work has already been done. Unless the code is really messed up (or written in assembly language), they should be able to just port the boot loader from Itanium to x64.

In other words, all it takes is some managers in MS to make a business decision to start supporting EFI boxes, and the developers should be able to get it up and running.

Of course, making that business decision may not be simple. Given the fact that most EFI PC's have a BIOS-compatible mode, there isn't a very compelling reason to develop a native EFI bootloader. Perhaps the desire to run on Mac hardware will be a reason. Or maybe not, if they decide porting VPC makes better business sense. (After all, a VPC customer buys a Windows license and a VPC license - meaning two sales instead of one.)

As for third parties doing the work, I don't know. Have any third parties made a bootloader (GRUB?) that can boot Windows without MS's OS Loader program? If such a program exists, then it should be modifiable to work with EFI, in much the same way that GRUB can boot Linux on EFI boxes.

I don't know of any such program. It may not exist, since (until now) there hasn't been any good reason for someone to develop a third-party OS Loader replacement.
ehurtley said:
And here is a second article specifically detailing that x64 systems can mount (in Server 2003 SP1 or XP x64,) but not boot from, GPT disks. Just as Itanium systems cannot boot from an MBR disk, but can use it as a data disk.
See also this article and this article. Macs have the same issue. Intel Macs can mount, but not boot, APM-partitioned disks. And PPC Macs cannot boot GPT disks. (I think PPC Macs running the latest Mac OS can mount GPT disks, but I'm not certain of that.)
 
hdasmith said:
32-bit processors require more than twice the number of cycles and registers to perform 64-bit processing, therefore with larger equations, a 64-bit processor will easily outperform a 32-bit. This is part of the reason the G5's have less cache than the G4's.

Question: How many bits wide was the floating point unit in the old Intel 386? How about in the 486 and in all the Pentiums and in the new Core Duo?

Answer: 80 bits.

Intel has been doing 64-bit math from the beginning days of the PC era.

What defines a 64-bit processor is that this code (written in C) will print the number "64":

void *foobar;
printf("%zu\n", sizeof(foobar) * 8); /* sizeof is in bytes; x8 gives bits */
 
There are (at least) three sizes of floating point: 32 bit, 64 bit and 80 bit.

A 64 bit processor would generally have 64 bit data and address paths.
It will have 64 bit address registers and 64 bit data registers.
Large data sets (>32bit) work well with 64 bit processors. Others have described this well.
If you are doing heavy integer math (not floating point), then the register size can matter.
If you are doing 32 bit integer math on a 64 bitter in 64 bit mode, then the 64 bitter will be slower because it has to move twice as much data around.
If you are doing 64 bit integer math, the 64 bitter can do this in (typically) one clock cycle. A 32 bitter needs to emulate to do 64 bit math. Adds, subtracts and multiplies I believe take 4 operations on 32 bitter versus 1 on a 64 bitter. This is the reason a 64 bitter is faster at 64 bit integer math.
 
Flynnstone said:
There are (at least) three sizes of floating point: 32 bit, 64 bit and 80 bit.

A 64 bit processor would generally have 64 bit data and address paths.
It will have 64 bit address registers and 64 bit data registers.
Large data sets (>32bit) work well with 64 bit processors. Others have described this well.
If you are doing heavy integer math (not floating point), then the register size can matter.
If you are doing 32 bit integer math on a 64 bitter in 64 bit mode, then the 64 bitter will be slower because it has to move twice as much data around.
If you are doing 64 bit integer math, the 64 bitter can do this in (typically) one clock cycle. A 32 bitter needs to emulate to do 64 bit math. Adds, subtracts and multiplies I believe take 4 operations on 32 bitter versus 1 on a 64 bitter. This is the reason a 64 bitter is faster at 64 bit integer math.

Almost. At least the above is true on practically every other 64-bit architecture. On x86-64, the default operand size for the original 8 registers is 32-bit, so 32-bit integer math is as fast in 64-bit mode as in 32-bit mode. E.g.:

add eax,1 ; 32-bit operation which also zeroes out the upper 32 bits of rax
add rax,1 ; 64-bit operation
 
Flynnstone said:
A 64 bit processor would generally have 64 bit data and address paths.
Lots of 32-bit processors have this as well. The Pentium (and all its successors) have 64-bit data paths. The Xeon series has a 36-bit address path.
Flynnstone said:
If you are doing 32 bit integer math on a 64 bitter in 64 bit mode, then the 64 bitter will be slower because it has to move twice as much data around.
Maybe, maybe not. The chip might perform 32-bit mov operations by transferring 64 bits and discarding half. Or it might have microcode to transfer only 32 bits. Depending on the architecture, either way might be faster.
Flynnstone said:
If you are doing 64 bit integer math, the 64 bitter can do this in (typically) one clock cycle. A 32 bitter needs to emulate to do 64 bit math. Adds, subtracts and multiplies I believe take 4 operations on 32 bitter versus 1 on a 64 bitter. This is the reason a 64 bitter is faster at 64 bit integer math.
Again, this depends on the implementation. There's no reason why a chip can't have single operations that act on two registers. This was done in the 8-bit days on chips like the 6809 to allow 16-bit operations. There is no reason why a 32-bit chip can't do the same thing to allow 64-bit operations.

The real problem is that there is no clear definition of what it means to be a "64 bit processor". There are so many different aspects of this, and there are plenty of chips that implement some of these aspects without implementing others. Ultimately, the term is little more than marketing lingo.

There is no reason why a 32-bit chip can't access more than 4G of RAM (the Xeon does). There is no reason why a 32-bit chip can't do 64-bit math. There is no reason why a future 32-bit x86 chip can't increase the number of registers (one of the key reasons why AMD-64 chips can run apps faster in 64-bit mode) without going 64-bit. There is no reason why a 32-bit chip can't run with faster clocks, faster busses and larger caches (the key reason why a G5 is faster than a G4 for most applications.)
 
steve_hill4 said:
I think the maximum you can get into a single slot is 1GB with 32-bit processors. The maximum that consumer computers can handle is 4GB, but spread across four ram slots. I amy be wrong, (as there would currently be no need for so many 2GB sticks of ram), but that's what I recall.

If you actually check the tech specs of both Core Duo offerings from Apple, they both state a maximum of 2GB in total anyway.

A single slot can handle up to 2GB, assuming that enough address control lines are routed. It's not related to the bus width of the CPU. In fact, DDR and DDR2 both provide 64 bit data paths to and from the memory controller which, for 32 bit processors, eases the data bus bottleneck somewhat.

The 1GB per slot limit is almost always imposed by the memory controller not having enough address lines routed to support more than that amount. Besides, 2GB modules are pretty fabulously expensive right now. It's probably a purely economic thing.
 
jhu said:
add eax,1 ; 32-bit operation which also zeroes out the upper 32-bits of rax
add rax,1 ; 64-bit operation

That's not 32/64 bit math. That's incrementing.

On a 64 bitter, incrementing a 32 bit or 64 bit integer, the assembler is like:
add rax,1 ; same speed

On a 32 bitter, incrementing a 32 bit integer:
add eax,1 ; same speed
Incrementing a 64 bit integer:
add eax,1
adc edx,0 ; or something like that (add with carry into the high half)

That is two instructions, or half the speed.
This is all disregarding system issues like bus speed ...
 
I personally think it'll be quite a while before Apple starts offering 64-bit Intel Macs. I think they'll start with the Xserve, since those were where performance was particularly weak with the G5 procs (performance went down to a fourth of Dell servers when running Apache with high requests).
 
Flynnstone said:
That's not 32/64 bit math. That's incrementing.

On a 64 bitter, incrementing a 32 bit or 64 bit integer, the assembler is like:
add rax,1 ; same speed

On a 32 bitter, incrementing a 32 bit integer:
add eax,1 ; same speed
Incrementing a 64 bit integer:
add eax,1
adc edx,0 ; or something like that (add with carry into the high half)

That is two instructions, or half the speed.
This is all disregarding system issues like bus speed ...

I was referring to your statement:
Flynnstone said:
If you are doing 32 bit integer math on a 64 bitter in 64 bit mode, then the 64 bitter will be slower because it has to move twice as much data around.

which is not true on x86-64, because in 64-bit mode instructions operate on the lower 32 bits of the original 8 general-purpose registers unless the 64-bit form is explicitly selected with a REX.W prefix (instructions involving the new registers always need a REX prefix). Example in 64-bit long mode:

mov eax, 0 ;encodes to b8 00 00 00 00 - a 5-byte instruction
mov rax, 0 ;encodes to 48 b8 00 00 00 00 00 00 00 00 - a 10-byte instruction

So even for 32-bit operations, the encoding is smaller. Additionally, the first instruction zeroes out the top 32 bits of rax (although a simple "xor rax,rax" (encodes 48 31 c0) is more efficient at only 3 bytes, and "xor eax,eax" (31 c0) is smaller still at 2 bytes while also zeroing all of rax).
 
Mr. Mister said:
I personally think it'll be quite a while before Apple starts offering 64-bit Intel Macs. I think they'll start with the Xserve, since those were where performance was particularly weak with the G5 procs (performance went down to a fourth of Dell servers when running Apache with high requests)
If you're referring to the benchmarks I think you're referring to, then you are drawing incorrect conclusions. Those tests, when run on an Xserve running Linux, outperformed the Dells, indicating a problem with Mac OS, not the hardware.

Based on this, moving to Intel probably won't solve the problem.
 
shamino said:
If you're referring to the benchmarks I think you're referring to, then you are drawing incorrect conclusions. Those tests, when run on an Xserve running Linux, outperformed the Dells, indicating a problem with Mac OS, not the hardware.

Based on this, moving to Intel probably won't solve the problem.

Actually the G5 still loses, but not by as much as with OS X.
 
twoodcc said:
the question is, who is going to try to put a 2GB stick of RAM in the new iMac?

I would, but I can't find any for less than 4 times the price of a 1 GB module. Sorry, I'm not willing to pay 16 times the cost for double the memory. (I'm getting 1 GB with my MacBook, and just bought a 1 GB module for $135 locally; if I was to upgrade to 2 GB modules, I would get two to keep dual-channel working. Two 2 GB modules would cost $2000, based on desktop prices.)

In fact, I can't even find 2 GB DDR2 SO-DIMMs, at any speed; much less the brand-new PC2-5300 (a.k.a. DDR2 667.) Not even DDR1. Desktop full-size DIMMs, yes, but not notebook SO-DIMMs. And the only desktop 2 GB PC2-5300 DIMMs I can find are server-quality, ECC modules, for $1000 each!

So, sorry; not any time soon.
 
64-Bit CPUs / Instruction Set

janstett said:
Not neccessarily. More bits isn't always better. First, for many applications like you mentioned (games, 3d modelling, etc) floating point operations are more useful.

In the Intel world, when they moved from 16-bit to 32-bit it was a big deal, mostly for the change from a segment-offset memory model to a flat 32-bit memory model. Here the ability to get access to > 4gb memory is the big deal, and they've been able to put in some hacks/shortcuts to work around this for several years.

Just to illustrate how 64-bit isn't always better, let's imagine doing an add for two 32-bit integers with the same values at the assembly (register) level:

00000000000000000000000000000001 +
00000000000000000000000000000010

versus 64-bit:

0000000000000000000000000000000000000000000000000000000000000001 +
0000000000000000000000000000000000000000000000000000000000000010

For this simple add, the 64-bit int is more work and doesn't yield any benefit.

Microsoft learned this lesson with Windows 95. Remember, they started with 16-bit Windows 3.1 and added 32-bit extensions to it. But the fact was that in many cases the 16-bit code was faster (partly due to years of optimization) so key parts of Windows 95's subsystem stayed 16-bit internally and used 32-bit thunking to talk to their 32-bit "other halves".


Given: 2 RISC CPUs of similar spec (speed, bus speed, cache mode, cache speed, etc.). One is a 32-bit and the other a 64-bit.

Following on from the above, a 32-bit instruction (opcode, operand) vs. a 64-bit instruction would essentially execute in a similar number of fetch-execute cycles when run on 32-bit and 64-bit CPUs respectively. However, as already pointed out, the memory that can be addressed (the operand) is greater with 64-bit instructions. And when it comes to floating-point precision calculations (as in 3D rendering etc.), a 64-bit processor will outperform its 32-bit counterpart, since the CPU can compute on a 64-bit quadword in one (or so) cycle, whereas a 32-bit CPU would need two or more cycles...

***Note: the Intel x86 family are CISC and PowerPC are RISC, so a true benchmark comparison is very difficult. If you have a certain application in mind, such as vector processing, 3D rendering or audio synthesis, then, as was the case with the G5 CPU, compiler optimisation would allow you to harness the chip's best potential. At the moment, if you are stuck with the Universal Binaries nonsense, it's best to stick to an application that harnesses one of the current CPUs. So you might have to wait for the software companies to optimise their applications to run on Intel CPUs.

In my opinion there was nothing wrong with applications written for PowerPC CPUs, as they took advantage of the fact that they were RISC processors and used them well (for vector-based processing).

My benchmark is to see how long it takes them to phase out the Power Mac G5 desktop!!!!

They won't rush on that one...
 
Mr. Mister said:
I personally think it'll be quite a while before Apple starts offering 64-bit Intel Macs. I think they'll start with the Xserve, since those were where performance was particularly weak with the G5 procs (performance went down to a fourth of Dell servers when running Apache with high requests)
I don't think so. I think it will be 4-5 months. Not quite a while. :rolleyes:
 
2 GB limit - just my 2 cents

I just want to add something about the 2 GB limit (actually, the 1 GB limit per slot). It's obviously not due to 32-bit processors, and the chipsets used in the new Macintels seem to support up to 4 GB of memory, according to some posts.

Last time I wondered about putting 2 GB of RAM in a Mac mini, all 2 GB sticks were ECC. That was a few months ago, but even now I can't seem to find 2 GB non-ECC modules on Crucial, which makes me believe that such RAM sticks still aren't available. Edit: I've found one when checking their advisor tool for the Quad G5 ... $1664 though :)

Not so long ago, PowerMac G5s also had a limit of 1 GB per slot. Support for 2 GB modules came along with support for ECC modules. As far as I know, no PowerPC Macs except the Dual-core G5 towers and the Xserve support ECC RAM. It is possible that these Intel chipsets do not support ECC RAM ...
 