Originally posted by makkystyle
Arn, are all these stories from MacB going on the front page because you have heard any kind of corroborating info, or just because they are fairly big/interesting news?

I would love it if all of these rumors are on the money, but if they were so close to the truth and coming out this early, wouldn't Apple legal be sending some fairly stern words to MacB? I mean they sent lawsuit threats to Spymac over iSync compatible phone lists being leaked. Seems to me that Apple would have tried to nip this in the bud several weeks ago if MacB were really putting out info that hit close to home.

Anybody else reading these "leaks" with a whole handful of salt?
Well, you can't be certain. Also, the site was ThinkSecret, not Spymac. Anyway, ThinkSecret had iPod pictures two weeks before the iPods were released, and the pictures were EXACTLY how the iPods turned out to be; Apple never touched them.

iJon
 
Originally posted by makkystyle
wouldn't Apple legal be sending some fairly stern words to MacB?

Maybe French law is different... maybe there this kind of thing is considered freedom of speech or something? I don't really know much about anything that has to do with law :eek:

edit: why are all the rumours coming from MacB?! Aren't there any other sites posting this much stuff?

thank you
MaT
 
These stories make me seriously doubt the possibility of a PPC 970 @ WWDC. MacB keeps coming out with this far-out stuff, which, in my eyes, kills their credibility.

:(
 
Originally posted by Chobit
You can always write a hard drive size in MB or GB, but you have to realize there is a conversion factor. Sometimes sizes are written without considering the conversion and are therefore bigger or smaller than you may think they are. Advertised hard drive capacities ARE normally written in real gigabytes (1,000,000,000 bytes per gigabyte); however, when your computer reads it, I believe (I am not positive) it calculates the size in gibibytes but still writes GB. Before the prefixes were changed (1998) there could be either 1,024 or 1,000 bytes in a kilobyte, and you rarely knew which one. Now you only know for sure what someone's talking about if they use the new base-two prefixes, as so many use the base-10 prefixes either way.

Anyway, this is all really off topic and we should probably get back to the actual thread.

There are 1024 x 1024 bytes in a Megabyte, 1024 in a Kilobyte.

32GB (or even GiB) is only 35 bits, wasting 29 bits of the addressing unless it's constrained to fewer than 64 bits. Perhaps, it is constrained to 35 bits of memory plus one bit to signal whether it's real or virtual memory--an IBM big machine habit.
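If you want to sanity-check that bit arithmetic, here's a minimal C sketch (just the powers of two involved, nothing from the rumor itself):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 2^35 bytes = 32 GiB, so a 32 GiB ceiling only needs 35 address bits. */
    uint64_t bytes = 1ULL << 35;
    printf("35 bits -> %llu bytes = %llu GiB\n",
           (unsigned long long)bytes,
           (unsigned long long)(bytes >> 30));
    /* A full 64-bit address space would be 2^64 bytes (16 EiB),
       which is why the remaining 29 bits look "wasted". */
    return 0;
}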
 
At first, I thought MB was doing a good job, but now I'm seeing more and more outrageous rumors from them.

I'm taking this whole 32GiB thing (weird to say... gibibyte :p) with a major grain of salt. I'll be happy to see a PPC970 that goes to 4GB (or GiB if you wanna get technical) this summer.

__________________
Argh...I have to wait how long for my 30 gig iPod?! (June 10th to ship)

If the PCs coming out around the time of WWDC are 970-based, why would they even bother to implement a smaller memory capacity? 4GB is the current addressing range of the G4 (this is not implemented in any Apple design due to I/O decoding), so why would you go to a 64-bit chip and use an address range that is covered by older processors? Further, the PC world has already moved to dual-channel memory systems; it is the only way to get the required memory performance.

I do not believe that the rumors are outrageous at all, considering trends in the industry. Just look at what AMD is doing, or Intel for that matter. Whether those rumors are true, or whether the board being discussed is for a PowerMac, is another issue. The reality is that there is a big pent-up demand for large-memory systems; if Apple can tap that demand it would be fantastic.

The biggest problem I have is the fear that Apple will do that tapping with a big screw. In other words, make the systems so expensive that tears will well up in our eyes and bank accounts will be bled dry. It is obvious that if you want a large-memory system today you can buy one; the issue is finding a system that runs the software you want to use.

Next month will be very interesting indeed. My guess is new server hardware, but who knows for sure.

Thanks
Dave
 
Come WWDC, Apple will introduce the next PowerMacs with the 970. They will be much faster than the current G4s and will be close to the top-end P4s. BUT everyone will be very disappointed because the new machines will not be as described by MacB. They may not have 8 RAM slots or Fibre Channel on the motherboard. They may not have built-in 5.1 audio or 6 PCI slots.

I think that these rumors are getting more and more ridiculous.

Maybe MacRumors should rename itself as the English version of MacB:D They do seem to be quoting them very often.
 
Originally posted by bousozoku
There are 1024 x 1024 bytes in a Megabyte, 1024 in a Kilobyte.

32GB (or even GiB) is only 35 bits, wasting 29 bits of the addressing unless it's constrained to fewer than 64 bits. Perhaps, it is constrained to 35 bits of memory plus one bit to signal whether it's real or virtual memory--an IBM big machine habit.

__________________
LOOP: MOVE #$01, D0
CMP #0, D0
BNE LOOP
folding@home is good for you.

Or perhaps they sized the memory subsystem around other parameters. This could be the number of physical slots they were willing to produce, or the limits of current DDR module design, which, by the way, I have no idea what that limit is.

This is still an expansion of 16 times the current limit, if I recall correctly. It's a rather large jump, but even then I'm sure many would find it limiting. Let's just hope that they designed things for a seamless upgrade in the future. Let's hope that Apple has kept all those other bits reserved, though I imagine the top 2 to 4 bits are going to I/O usage.

Actually, the sketch of a machine as described in the MacB reports could have been lifted from recent PC-world motherboard specs. So I'm not sure at all why it would be unexpected for Apple to be working on similar technology. The question comes down to when we will see it. I flip-flop between next month and late fall.

Obviously Apple needs to deliver such a machine now. Having no PowerMac sales to speak of has got to hurt.

Dave
 
Originally posted by bousozoku
There are 1024 x 1024 bytes in a Megabyte, 1024 in a Kilobyte.

32GB (or even GiB) is only 35 bits, wasting 29 bits of the addressing unless it's constrained to fewer than 64 bits. Perhaps, it is constrained to 35 bits of memory plus one bit to signal whether it's real or virtual memory--an IBM big machine habit.

The PowerPC 970 can only address 42 bits, or 4 TB of RAM. The memory controller wouldn't get sent signals indicating real or virtual memory either way, as it only knows about real memory. Most likely (if this rumor is true) the controller is not constrained specifically to 35 bits total, but the RAM technology it can interface with implies this limitation.
 
The Return Of The RAM Disk?

With a theoretical maximum of 32 GB I wonder if we will see the return of RAM disks in the Mac operating system. For hi-res video capture the RAM disk could be used to capture tapes in 5-10 minute segments and then moved to a serial ATA drive for playback and editing purposes. Of course the price of RAM will have to be lowered to the point where it would not be cheaper to buy a SCSI RAID array instead.
 
Re: what's "truth"

Originally posted by AidenShaw
What's the difference between doing 4 transfers per cycle on a 200 MHz bus, and 1 transfer per cycle on an 800 MHz bus? Effectively none, right?

And check your PPC970 facts - it has 2 unidirectional double-pumped (DDR) 32-bit busses, each at 1/4 of the CPU speed. You get 900MHz by double-pumping (DDR) the 450MHz bus on a 1.8GHz part. This is 3.2GB/sec read and 3.2GB/sec write.

The P4 is 64-bit bidirectional at 800MHz (200MHz quad-pumped), so the full bandwidth can be either read or write.

The practical difference between a quad-pumped 200 MHz bus and a double-pumped 450 MHz bus is one of latency - a request and response will return to the CPU faster on the 450 MHz bus than on the 200 MHz bus simply because the 450 MHz bus transfers data faster. So the 450 MHz double-pumped bus will transfer a data chunk just over twice as fast as the 200 MHz quad-pumped bus, although they may have the same theoretical peak bandwidth.

The Pentium will have the advantage (though an unlikely-used one) of being able to completely focus its bus on either reading or writing, but will have a lower actual throughput. The 970 will have the (likely rarely used) advantage of being able to read and write to memory at the same time.
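To put rough numbers on the peak-bandwidth side of it, here's a back-of-the-envelope C sketch using the widths and clocks quoted above; it ignores protocol and command overhead, which is presumably why the raw 3.6 GB/sec per direction comes out higher than the 3.2 GB/sec figure quoted for the 970:

#include <stdio.h>

/* Raw peak bandwidth = width (bytes) * transfers per clock * clock (MHz), in MB/s. */
static double peak_mb_per_s(int width_bits, int pumps, double clock_mhz) {
    return (width_bits / 8.0) * pumps * clock_mhz;
}

int main(void) {
    /* P4-style FSB: 64-bit, quad-pumped 200 MHz, shared between reads and writes. */
    printf("P4 FSB:  %.0f MB/s total\n", peak_mb_per_s(64, 4, 200.0));
    /* PPC970-style bus: 32-bit, double-pumped 450 MHz, per direction. */
    printf("970 bus: %.0f MB/s per direction\n", peak_mb_per_s(32, 2, 450.0));
    return 0;
}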
 
Re: GiB

Originally posted by Chobit
I'm just glad that someone's starting to use the correct GiB instead of GB. It's 32 gibibytes because a gibibyte is 2^30 bytes, not 10^9. I don't know... it really doesn't matter too much, but sometimes it's annoying not knowing whether things (especially hard drives) are being measured in giga- or gibibytes. People use giga (which is supposed to be pronounced "jiga", even though I still say it with a hard g out of habit and because people wouldn't understand me otherwise) when they mean gibi, and it all makes my head hurt!

I can't stand the new abbreviations personally - it just seems silly that we are creating new abbreviations because one segment of the computer industry has consistently failed to use the old ones the same way the rest of the industry does. So instead of K=1024 for computers and K=1000 elsewhere, it became K=1024 for computers unless you are talking about storage devices, and some other value completely when talking about the high-density floppy... blah =p

PS: Like your handle - I'm waiting for the 3rd DVD myself right now :D
 
Re: The Return Of The RAM Disk?

Originally posted by Sol
With a theoretical maximum of 32 GB I wonder if we will see the return of RAM disks in the Mac operating system. For hi-res video capture the RAM disk could be used to capture tapes in 5-10 minute segments and then moved to a serial ATA drive for playback and editing purposes. Of course the price of RAM will have to be lowered to the point where it would not be cheaper to buy a SCSI RAID array instead.

Well, the general reason why RAM disks went away with demand paged operating systems (Unix, Windows, MacOS X) is because the operating system should be caching as much as possible in RAM including your huge capture sessions. The actual disk writes happen in a DMA process that allows the capture device to continue to write to memory and that memory write to be sent to disk without a huge performance hit. I did once actually setup a RAM disk under Win 2k and not only was it a pain in the *** to figure out how to do, it didn't end up helping performance any.
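For what it's worth, this is also why a plain buffered write already behaves a lot like a RAM disk: write() normally returns once the data is sitting in the kernel's page cache, and the disk I/O happens later. A minimal POSIX sketch (the file path is just made up for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* Hypothetical capture file; any writable path will do. */
    int fd = open("/tmp/capture.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char frame[4096];
    memset(frame, 0xAB, sizeof frame);

    /* write() usually returns as soon as the data is in the page cache;
       the kernel flushes it to disk in the background. */
    if (write(fd, frame, sizeof frame) != (ssize_t)sizeof frame)
        perror("write");

    /* Only fsync() (or opening with O_SYNC) forces it out to the physical disk. */
    if (fsync(fd) != 0)
        perror("fsync");

    close(fd);
    return 0;
}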
 
Re: re:re:re:GiB

Originally posted by Chobit
Harddrives and RAM and other data storage devices should be measured in Kibi/Mebi/Gibi/Tebi etc. bytes as they are always in some grouping of a power of 2, not a power of 10.

Kilo=1000
Kilobyte=1024 bytes
1000 is approximately equal to 1024.
It is an approximation, not an error to refer to kilobytes as 1024 bytes, megabytes as 1024k, gigabytes as 1024 megs, etc.
It's a non-issue.
 
Reading & writing at the same time

Originally posted by Rincewind42
The 970 will also have the (likely rarely used) advantage of being able to read and write to memory at the same time.

That sounds like a technology waiting for an application. Software written from the ground up to utilize this feature would be able to do things that ported software never could. Unfortunately, most non-Apple software for the Mac market seems to be ports from Windows.
 
Re: Re: Re: Re: PowerMac 970 memory architecture from MacBidouille.

Two years ago I ordered an HP Netserver LH3000r. It has two 1GB modules. 1GB modules have been out for a while now.


Originally posted by EponymousCow

I believe that 1GB modules have just become available. Apple has done this before, making their motherboards compatible with a memory size not yet available. In fact, the MDD PowerMacs have 4 slots that can each take a 1GB DIMM, but the motherboard can only handle 2GB total. Must be limited to 31 bits of addressing. :rolleyes:
 
Kilo=1000
Kilobyte=1024 bytes
1000 is approximately equal to 1024.
It is an approximation, not an error to refer to kilobytes as 1024 bytes, megabytes as 1024k, gigabytes as 1024 megs, etc.
It's a non-issue.

Actually, it was a convention for kilo to equal 1000 or 1024 depending on the context, not an approximation. Now the SI standard is kibi as 1024 and kilo as 1000, and when you get to higher prefixes, as we will soon, the differences become larger and larger. A terabyte (1 000 000 000 000 bytes) shares only one significant digit with a tebibyte (1 099 511 627 776 bytes).
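For reference, here's a quick sketch of how the gap between the decimal and binary prefixes grows as the units get bigger (pure arithmetic, nothing vendor-specific):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    const char *names[] = { "kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi" };
    uint64_t dec = 1, bin = 1;
    for (int i = 0; i < 4; i++) {
        dec *= 1000;   /* SI prefixes: powers of 10 */
        bin *= 1024;   /* binary prefixes: powers of 2 */
        printf("%-9s %20llu vs %20llu  (binary is %.1f%% larger)\n",
               names[i],
               (unsigned long long)dec,
               (unsigned long long)bin,
               100.0 * ((double)bin / (double)dec - 1.0));
    }
    return 0;
}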

I'm not saying you can't use Terabytes, as you most certainly can, I'm saying there's a distinction that should be made and things get confusing when people don't accept standards.

Anyway, I'm getting tired and don't care. 32 GB or 32GiB would be great to have in a computer. If only I had more money.

PS: Sorry if I misunderstood what you meant to say.
 
Originally posted by makkystyle
Arn, are all these stories from MacB going on the front page because you have heard any kind of corroborating info, or just because they are fairly big/interesting news?

They are front page because they are of significant interest... and people are following it.

I don't have any corroborating info. I've heard varying things regarding the timeframe of these things... with the slightly more reliable sources leaning towards them not being available immediately.

arn
 
Re: Reading & writing at the same time

Originally posted by Sol
That sounds like a technology waiting for an application. Software written from the ground up to utilize this feature would be able to do things that ported software never could. Unfortunately, most non-Apple software for the Mac market seems to be ports from Windows.

I'm not sure about games, but most of the best audio, video and graphics software started life on the Mac.
Can I have a witness or an amen:D
daniel
 
Re: Re: The Return Of The RAM Disk?

Originally posted by Rincewind42
Well, the general reason why RAM disks went away with demand paged operating systems (Unix, Windows, MacOS X) is because the operating system should be caching as much as possible in RAM including your huge capture sessions. The actual disk writes happen in a DMA process that allows the capture device to continue to write to memory and that memory write to be sent to disk without a huge performance hit. I did once actually setup a RAM disk under Win 2k and not only was it a pain in the *** to figure out how to do, it didn't end up helping performance any.

OK, so for the slower ones in our viewing audience (me): you're saying that the RAM disk would no longer be needed because the OS is doing the same thing behind the scenes?
Setting one up in OS 9 and earlier was easy, but 1 gig isn't enough to matter, at least for audio and video.
So I guess my next questions would be: how much RAM can an app use under OS X, and can you set up a RAM disk?
I would still love to have 12 gigs or so of RAM disk to capture to; seems like it would have to speed up renders? peace
daniel
 
Well, the dual Opteron motherboard from Tyan already supports 12GB. One of the biggest wins for most users with a 64-bit chip is having lots of memory more easily addressable. So not only is 32GB pretty normal (and not really a huge amount in the server world), it's damn handy for lots of video work. Try using Discreet Combustion and you would know why many of us would dig 32GB of memory.

Still, the MacB folks do seem pretty over the top with the rumors.
 
Re: Re: what's "truth"

Originally posted by Rincewind42
The practical difference between a quad pumped 200Mhz bus and a double pumped 450 Mhz bus is one of latency - request and response will return to the CPU faster on the 450Mhz bus than on the 200Mhz bus simply because the 450Mhz bus transfers data faster. So the 450Mhz double pumped bus will transfer a data chunk just over twice as fast as the 200Mhz quad pumped bus, although they may have the same theoretical peak bandwidth.

Huh?? Just over twice as fast?

First of all, the P4 bus is 64-bits wide, not 32-bits like the 970 bus. So, "one transfer" is 64-bits at 800 MHz, vs 32-bits at 900 MHz (assuming 1.8GHz CPU).

I don't think we know enough about the 970 bus and memory controller to compare latencies. Does the 970 have to use 2 transfers on the 32-bit bus to pass the 64-bit memory address? That will hurt latency.

Does the 970 (or the P4) immediately forward the first chunk of data up through the cache levels to the registers? Or does it wait to fill a cache line (32 to 64 bytes) before making the data available - big latency issue here.
 
G4 can address 64GiB of RAM

Originally posted by wizard
4GB is the current addressing range of the G4 (this is not implemented in any Apple design due to I/O decoding)

Actually, the G4 (and the P4) both implement 36-bit physical addressing, supporting up to 64GiB.

You are correct, no Apple design goes above 2GiB.

Many P4 systems are available with up to 32GiB today (e.g. Dell 6650) or even 64GiB (e.g. IBM x440).

If the PPC970 board supports 32GiB, that's just an implementation limit for that board - most likely influenced by the maximum DIMMs available and the maximum number of DIMM slots that are permitted (apparently 8). Future systems could support more, but it's not unusual for a particular mobo/chipset combo to support less than the maximum supported by the chip. For example, Xeon workstations (e.g. IBM zPro) support up to 8 GiB, even though the Xeon chip could support 64GiB.
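In other words there are two separate ceilings: what the chip's physical address bits allow, and what the board's DIMM slots allow. A small sketch of that logic; the 36-bit figure is the G4/P4 number from above, and 8 slots with 4GiB DIMMs are just assumptions that happen to reproduce the rumored 32GiB:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Chip-level ceiling: 36 physical address bits -> 64 GiB. */
    uint64_t chip_limit = 1ULL << 36;

    /* Board-level ceiling: number of slots * largest supported DIMM.
       8 slots and 4 GiB DIMMs are assumed values for illustration. */
    uint64_t slots = 8, dimm = 4ULL << 30;
    uint64_t board_limit = slots * dimm;

    printf("chip limit:  %llu GiB\n", (unsigned long long)(chip_limit >> 30));
    printf("board limit: %llu GiB\n", (unsigned long long)(board_limit >> 30));

    /* Whichever ceiling is lower is what you can actually install. */
    uint64_t usable = board_limit < chip_limit ? board_limit : chip_limit;
    printf("usable:      %llu GiB\n", (unsigned long long)(usable >> 30));
    return 0;
}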
 
Re: Reading & writing at the same time

Originally posted by Sol
That sounds like a technology waiting for an application. Software written from the ground up to utilize this feature would be able to do things that ported software never could. Unfortunately, most non-Apple software for the Mac market seems to be ports from Windows.
This is nothing special; the way caches work already mimics this behaviour: once they are full they have to write a cache line back to memory to make room for a new one.

In the code you'll find load and store instructions mixed together; the problem I would see is that there are often more loads than stores (how many more depends on the kind of computation, of course), and fetching an instruction itself generates a load (this should have minimal impact). Anyway, it's not that easy to tell the effect outside the CPU, since what basically enters and leaves the CPU are whole cache lines (usually 32 bytes, possibly 128 bytes on the 970); a single cache line could take many hits from the Load/Store Unit's point of view but would only require a single memory transaction on the outside bus.

And even if the PowerPC 970 is able to read and write at the same time, there is no memory technology that allows this kind of access: memory chips have a single interface, and you cannot address two memory cells at the same time at this level since there is only one place to put the address of the cell. This will lead to some sort of serialisation, which the memory controller will handle, maybe with a few Apple tricks. How it will cope with 16 open data streams (8 per PPC 970) would be even more interesting to know.
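One way to see the "many load/store hits, one bus transaction" point is to count how many distinct cache lines a run of small accesses actually touches. A toy sketch, assuming a 128-byte line only because that is the size floated above for the 970:

#include <stdio.h>
#include <stdint.h>

#define LINE_SIZE 128u   /* assumed cache-line size, per the 970 guess above */

/* Roughly how many cache-line fills (bus transactions) does a run of
   byte accesses covering [start, start+len) cause? */
static uint64_t lines_touched(uint64_t start, uint64_t len) {
    uint64_t first = start / LINE_SIZE;
    uint64_t last  = (start + len - 1) / LINE_SIZE;
    return last - first + 1;
}

int main(void) {
    /* 1000 consecutive single-byte loads collapse into just a few line fills. */
    printf("1000 individual loads -> %llu cache-line fills\n",
           (unsigned long long)lines_touched(0x1000, 1000));
    return 0;
}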

Compared to the G4e, the 970 has overkill capacity to handle more memory, and much faster; the problem is what kind of memory can follow its pace. AMD put the memory controller inside the Opteron to get the best out of DDR2700 and reduce latencies, and IBM will probably take the same approach with the 980; what Apple managed to do with the 970 is an open question.


Actually, a lot more software comes from the GNU/Linux and *BSD sphere than from the Windows world.
 
Re: Re: Reading & writing at the same time

Originally posted by mathiasr
Actually, a lot more software comes from the GNU/Linux and *BSD sphere than from the Windows world.

Granted, OS X has more in common with Unix, Linux, Irix, etc. than it does with Windows. The problem is that most of the companies that create software for both Mac and Windows seem to concentrate all their efforts on the Windows applications and treat the Mac versions as a lower priority. This would explain why software is usually released for Windows first, then Mac. As examples, Corel comes to mind and so does Adobe (since the introduction of OS X). This is why Apple needs more developers like The Omni Group who make the effort to build software that takes advantage of OS X's features. Their port of Giant: Citizen Kabuto was the first ever Mac game to utilize dual CPUs. More of that please!
 
Originally posted by tjwett
And I find it very hard to believe that a 4GB module would cost $191; I've spent more than that on a 512MB SDRAM module. Please prove me wrong, that would be exciting stuff.

Recently? Sounds like you got ripped off; 512MB of RAM should be in the $100-150 range for high-quality parts...

But, yes, 4GiB for just $50 more... yeah, that sounds a bit off...

I do have a link for you, although it is specific to Sun servers (not likely the same as in our 970), and it seems the previous poster lost a digit on his estimates:

4GiB @ $1984 from Dataram, $3800 from Sun.

http://biz.yahoo.com/bw/030318/185022_1.html

Here's another for Compaq SDRAM at $2199.99 per stick:

http://www.chicagocomputersupply.com/com4gbsdramm.html

I doubt that 4GB of memory on a stick went down in price by a factor of 10 in two weeks.
 