aegisdesign said:
However, if you're tooling along at native speeds on your new Intel Mac and then come up against an emulated app, it'll be like driving a Porsche into treacle. Suggesting that that doesn't matter because people will be used to slow speeds on their old computer is naive.
I never suggested that--I said it made Rosetta "OK" as a transitional step, and this is true: if I get an Intel Mac this week, it will run even Photoshop about the same speed or faster than my 2-year old PowerBook. That's acceptable--it's enough to get me to buy, and it gives me something perfectly useable while Universals arrive over time.

From all reports, though, I think your "Porsche into treacle" analogy is exaggerated: people won't be running native Photoshop and going to Rosetta Photoshop, they'll be switching between DIFFERENT tasks. And even Rosetta apps have a faster UI because of the native OS.

And although Photoshop is a great example to keep an eye on, it's not the main consumer app. Office runs great in Rosetta, and iLife is native.


BakedBeans said:
This is one post from a person testing the 17-inch Intel iMac vs. the iSight version.
I don't know which G5 he was testing on, but those Rosetta results sound great! We'll need to see a variety of reports before a really solid trend can emerge, but if Rosetta is doing ANYTHING faster than (or even equal to) a G5, that's good in my book.
 
Two thoughts regarding these real-world benchmarks:

1. There is little reason for current iMac G5 owners (myself included) to upgrade at this time.

2. Single-core Yonahs would not be sufficient to run most apps at a reasonable speed under Rosetta, so I think the iBooks and Mac Minis will still get dual core chips, at the lowest clock speed available (1.66 GHz if I remember correctly), since the price difference between single and dual core Yonah chips is negligible.
 
agreenster said:
That's BS. Total and utter BS. Laptops AREN'T a waste of time for content creation. I take a train every single day to work, so that's 2.5 hours round trip of commute time for me with my laptop using Maya. I get tons of work done.
Agreed. Flexibility (and yes, noise) trumps raw speed for me. Raw render speed isn't the only measure of productivity in a creative field. Desktops are the way to go for top speed, but laptops are great as well--especially with the MacBook Pro.


agreenster said:
Rosetta is a pathetic solution. Apple is so secretive that they had to wait until the public release to reveal Universal Binaries. Why couldn't they have clued Adobe or Autodesk or (your fav software name here) in a year earlier? Why can't I get a universal binary version of Maya in March? Because Apple dropped the ball, that's why.

Autodesk, like Adobe and everyone else, HAS known for a year. They may CHOOSE to wait for their next major release (which they surely have had in progress for a while) instead of porting the old release and then RE-porting the next one. Makes sense to me. Or, they may simply be delayed by the complexity of their products. That's life. Rosetta pathetic? It's an outstanding solution--but it's only a transitional measure. This transition WILL have some rough edges. Nothing could prevent that--but Apple's good at smoothing the way as best they can. They (and Transitive) have done better than expected in my view.

Some things to note:

* Apple does developers no favors by making them start programming for a transition too early: the details aren't final enough, the tools aren't final enough... it leaves the developer working towards a moving target and wasting tons of cash and time. Once the details of the transition were mature enough to reveal, and Universal Xcode was ready to deliver, Apple did so.

* Letting developers know, say 2 years ago in "general" terms, "this transition is probably coming but we don't have details yet" doesn't really help push apps out the door a year earlier. All it does is clue Microsoft in on Apple's plans and harm sales of highly productive G5 Macs for 2 years.

* But Apple DID clue developers in MORE than 2 years ago in the one USEFUL way: they told developers that Xcode is the future, in no uncertain terms, and to get their apps transferred to Xcode. They've repeated that all along--long before last June. That is key to delivering Intel support, and developers who listened to what Apple told them could get a head start and be in much better shape to deliver Universal apps.

I know you're frustrated that IBM and Freescale didn't deliver on the promise of PowerPC. It was a great architecture. But those companies don't want to BE in the personal computer processor business. It's not where their profits come from. So that leaves us where we are today--and if you ask me, Apple and Intel have really come through in response.

The transition is NECESSARY, it is WORTH it, and it is being handled very WELL. It's still a transition, and we have no choice but to try to survive the temporary inconveniences that can't be avoided.


opq said:
Has anyone brought up the fact that Universal binaries (even apps on the PPC platform are affected) take up almost twice the space now? iWork went from ~800 MB to well over 1.7 GB.
A necessary evil, but note that only the executable binary needs to double, not the support files (even if they're within the app package): that increase in iWork is probably more due to new features and templates than anything else--you're comparing iWork '05 to '06.

Extreme example: My Unreal Tournament app is 17.5 GB!!! But if you Show Package Contents, the actual executable binary is 1/1000 of that. Making UT Universal won't double it to 35 GB. (Plus a lot of that 17.5 is add-on maps and things.)

Excluding games, my apps folder is 6 GB--including templates and tutorials and the works. That includes Final Cut Pro, LightWave 3D, Photoshop, and dozens of others. Double ALL my apps to Universal and I've only lost 6 GB of space. But only the binaries will double, not the rest of the data--and many of my apps I'll keep running in Rosetta because they just aren't speed-intensive anyway. So I'll lose less than 6 GB in fact.

As necessary evils go, this one's not that bad. Download times for updates will increase--but you don't update your apps on a daily basis anyway.
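To make the arithmetic concrete, here's a toy C sketch (the 17.5 GB app size and the ~1/1000 executable fraction are just my UT guesses from above, not measurements):

Code:
/* Toy arithmetic only -- a universal build roughly doubles the
 * executable slice, not the resources around it. */
#include <stdio.h>

int main(void)
{
    double app_gb    = 17.5;            /* whole app, resources included */
    double binary_gb = app_gb / 1000.0; /* executable slice, ~1/1000 guess */

    /* Going universal adds roughly one more architecture slice. */
    double universal_gb = app_gb + binary_gb;

    printf("PPC-only:  %.2f GB\n", app_gb);
    printf("Universal: %.2f GB (about %.0f MB more)\n",
           universal_gb, binary_gb * 1000.0);
    return 0;
}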

MacinDoc said:
Two thoughts regarding these real-world benchmarks:

1. There is little reason for current iMac G5 owners (myself included) to upgrade at this time.

2. Single-core Yonahs would not be sufficient to run most apps at a reasonable speed under Rosetta, so I think the iBooks and Mac Minis will still get dual core chips, at the lowest clock speed available (1.66 GHz if I remember correctly), since the price difference between single and dual core Yonah chips is negligible.
1. Agreed for sure. Your machine's still new :)

2. A lot of apps don't make heavy use of dual CPUs, and if they run OK in Rosetta with a Duo, they'll run OK with a Solo--especially if low-end Mac buyers are less demanding, and since Rosetta is merely a transitional measure. Most importantly, vital apps ARE native: iLife, Safari, Mail, and the rest of OS X. That's enough to make a Core Solo Mac very useable as a low-end consumer system for a LOT of people's needs.

All-duos would be cool, I agree, but every little bit helps in keeping low-end machines cheaper. I'm expecting Core Solo on the low-end. If I'm pleasantly surprised, then all the better.
 
opq said:
Has anyone brought up the fact that Universal binaries (even apps on the PPC platform are affected) take up almost twice the space now? iWork went from ~800 MB to well over 1.7 GB.

Are you comparing iWork '05 with iWork '06? I assume so, as there is no Universal version of '05. Of the 900 MB increase, almost all is taken up by more/new templates that are shared between the Intel and PPC versions (i.e. there is a single copy). The only bit that gets bigger is the executable itself. On most apps this is a very small proportion of the total size.

I don't have iWork installed but using iWeb as an example:
My install of iWeb is 98 MB (after stripping out all the non-English languages). The executable part of iWeb is 3.6 MB. For a PPC-only build it would be around half of that (1.8 MB). So a PPC-only build of iWeb would still be around 96 MB, basically no saving at all.
 
nagromme said:
Remember the reason why Rosetta running slower than native is perfectly OK as a transitional measure:

Because most people buying an Intel Mac are not sidegrading from a G5 Mac... they're upgrading from a G4!

Anybody who is working now on a recent G5 and then switches to Intel with non-multiprocessing Rosetta apps and is "disappointed" by the speed is in the minority. Most of those people are keeping their G5 until a later date when it makes more sense to get a new machine.

Likewise switchers are probably not switching from a very recent fast PC. Normally you buy a new machine when your old one gets, well... old :) (But for people who do switch from a fast PC, hopefully they aren't running out in droves to buy non-Universal speed-intensive apps like Photoshop. Best to wait a bit on that.)

I agree. I just bought an Intel iMac and I was switching from a P3 PC as I'm looking to take some graphic design classes . . . I'm sure even in Rosetta, any apps will be faster than what I'm used to, especially when I load it with RAM . . .
 
digitalbiker said:
So the average app receives a 20 to 30% increase using the Duo Core Intel even though it is being compared to a single core G5. The very best app received an 80% boost.

So where is the 2x faster boast that SJ and Apple are claiming?

A 100% boost is 2x faster. 80% is 20 points shy. There is no such thing as 1x faster.

All this talk of X% boosts and 2x, 3x faster are meaningless without an agreed upon baseline. Most of the benchmarking is coming from different "baselines"
 
I have not heard of Autodesk coming out with Mac versions, and why would they if Microsoft can provide an emulator for Windows? I think this is a major reason some of the big players aren't porting: just no customer base yet! If anything I predict a couple of years before you see Autodesk and SolidWorks ported (at least 2 years). Until then, no reason to jump into Intel Macs as yet.
 
nagromme: Good points.

By the way, render speed isn't what I'm concerned about. It's straight-up interactive animation speed in the view panel. Renders are a nighttime thing and I don't care about 'em really.
 
agreenster said:
I just think Apple probably knew long before WWDC, and should have gotten developers on board long before June of 2005. Because NOW, there are very very few options for digital content creators.
As I see it, Apple did get developers on board long before June 2005:
https://forums.macrumors.com/posts/2072764/


917press said:
A 100% boost is 2x faster. 80% is 20 points shy. There is no such thing as 1x faster.

All this talk of X% boosts and 2x, 3x faster are meaningless without an agreed upon baseline. Most of the benchmarking is coming from different "baselines"
Well technically, "1x faster" IS a 100% boost, the same as "2x as fast." Which is what Apple claims, "up to." Apple doesn't claim "up to 2x faster"--that would mean "up to 3x as fast." Clear? ;)

And yes, all pretty meaningless. Just marketing--which any company has to do. Real-world tests of what YOU do are all that matter.


skunkworks said:
If anything I predict a couple of years before you see Autodesk and SolidWorks ported (at least 2 years). Until then, no reason to jump into Intel Macs as yet.
No reason if those particular apps are the main use you have for a computer ;) But yes, a growing market will bring the apps!

BTW, in discussing Autodesk above I assumed he was really talking about Maya, which IS on Mac (Alias is being bought by Autodesk IIRC).
 
guez said:
A lot of contributors to this site have pointed out how attractive the Intel iMac is to programmers and to people who primarily use iApps. But how many people fall into one or both of these categories AND are willing to shell out $1299?
Waves hand ... that's me ...
 
I say cheers to Apple for getting an emulation layer running so smoothly. Anyone remember the headaches of using Classic?

That said, I think I will stick with my G4 1.5 PB for another couple of years.. I love it too much.

(P.S., how do the iMacs look when they boot up? Is it the same grey Apple screen, with a chime? Or do you see a character-cell POST?)
 
Yes... Yes...

plinden said:
guez said:
A lot of contributors to this site have pointed out how attractive the Intel iMac is to programmers and to people who primarily use iApps. But how many people fall into one or both of these categories AND are willing to shell out $1299?
Waves hand ... that's me ...

Okay: you want to buy a new shiny Mac. My point was that contributors to this site are not necessarily representative of the market as a whole. A bunch of people who are more or less committed to buying the latest, greatest Mac product would make for a very bad focus group.
 
nagromme said:
Well technically, "1x faster" IS a 100% boost, the same as "2x as fast." Which is what Apple claims, "up to." Apple doesn't claim "up to 2x faster"--that would mean "up to 3x as fast." Clear? ;)

And yes, all pretty meaningless. Just marketing--which any company has to do. Real-world tests of what YOU do are all that matter.


1x is not faster, just as 1x3 is not larger than 3; it is 3.

Without an accepted benchmark not only is the marketing useless, but so is the chatter here and elsewhere.
 
Well, I'd like some silence. My laptop (Win XP :( ) is running all night and the vents and the hard drive are pretty loud. I'm used to it now (actually, I can't sleep so well without it when I'm alone :eek: ). But I expect an iMac to be silent so I can watch movies at low volume at night without the neighbor wanting to kill me. I got a nice radio station in iTunes (/electronica/Beatblender) that is perfect for working and for active bed time. But when streaming it live the computer is just too loud for my taste. Had to burn it to MP3 CDs; my girlfriend likes that better, too, btw.

Can you play the iTunes radio stations with front row? I heard you can't play network libraries...

Actually, I'd love an option where I can switch between "slow and silent" and "fast and loud". So when I'm home, I can set it to silent, and when I'm away, it goes to max speed. Even cooler would be some intelligent software that knows when you're in the room (movement detection with iSight), sets the Mac to silent mode, and switches on the display when you walk in. So I wouldn't have to change settings all the time.

How's that huh? :)
 
opq said:
Has anyone brought up the fact that Universal binaries (even apps on the PPC platform are affected) take up almost twice the space now? iWork went from ~800 MB to well over 1.7 GB.
This has a lot more to do with the size of the new HD themes in Keynote than the size of the Universal Binary.
 
revfife said:
...I guarantee that 80% of software will be universal binary ...with free or discount upgrade to the universal binary

I most certainly would not bet on the "free" upgrade... :(

MrCrowbar said:
Well, I'd like some silence. My laptop (Win XP :( ) is running all night and the vents and the hard drive are pretty loud. I'm used to it now (actually, I can't sleep so well without it when I'm alone :eek: ). But I expect an iMac to be silent so I can watch movies at low volume at night without the neighbor wanting to kill me.

Well you sure got a point. I was referring to working more than leisure time ;) :)

Nice idea about the movement detection...
 
*LOL* Do you believe your own BS?

AidenShaw said:
The FPU is 64-bit, and has been on every Pentium. Where does this crap come from?

And it's not SSE3 - SSE (before SSE2 and way before SSE3) had 64-bit integer support (http://arstechnica.com/articles/paedia/cpu/pentium-2.ars/3) It was improved in SSE2 and SSE3, but it was present in SSE. (And to a limited extent in MMX, but too limited for me to claim that it really supported 64-bit integers.)

The SIMD (SSE) unit is not an FPU, you freak. :D Your link is referring to the SIMD multimedia extension, MMX, which I figured you were talking about in the original post I replied to. Holy CRAP!!! :D

Now once again, an FPU is completely different than a SIMD unit. I apologize about forgetting the "S," but at least I know the difference between the two, so maybe you should apologize for confusing them. :p

The Pentium's FPU does share some registers with it, but like the Pentium's integer unit, it is "NOT" 64-bit.

Read; :)
http://en.wikipedia.org/wiki/Streaming_SIMD_Extensions

So is SSE. Your point?
I guess you forgot that AltiVec is 128-bit.
I made this comment because, once again, you were referring to the Pentium's 64-bit SIMD.

I was unable to find any posted benchmarks comparing Maya 32-bit to Maya 64-bit on the same hardware.

I would be very surprised to see a 60-fold improvement (hours to minutes), however. :eek:

I'm making assumptions about MR's 64-bit performance, since it's still new, but I'm basing my assumptions on other 64-bit renderers and I was only exaggerating a little-tiny-incy-bit. :p

If I'm rendering 24 frames, and it takes 10 minutes to render a frame, it will take 4 hours to render. 64-bit rendering on average is about 20% (No, I didn't get this from you.) faster, and for the much larger scenes, which are extremely complex, the performance gap will grow. Anyway, at a 20% saving, it would take 3.2 hours. Now increase from 24 frames to hundreds of frames, and the gap gets even wider. Blah. See, I was pretty close with my guesstimate.
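Spelled out as a toy C calculation (again, the 20% saving is only my assumption, not a benchmark):

Code:
/* The render-time guesstimate above, spelled out. */
#include <stdio.h>

int main(void)
{
    int    frames        = 24;
    double min_per_frame = 10.0;
    double saving        = 0.20;  /* assumed average 64-bit speedup */

    double hours32 = frames * min_per_frame / 60.0;  /* 4.0 hours */
    double hours64 = hours32 * (1.0 - saving);       /* 3.2 hours */

    printf("32-bit: %.1f hours, 64-bit: %.1f hours\n", hours32, hours64);
    return 0;
}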

Go here and read their blurb about 64-bit;
http://www.alias.com/glb/eng/produc...MLD3JNY5QCLCWSSM44AJMK0IJVC?productId=1900011

Go here to see what's possible with 64-bit rendering now;
(Note that this app requires a 64-bit GUI, so no OS X version is available. They could've taken the same approach as Mental Ray/Maya, but I think Maxon is trying to make a play for market share.)
http://www.maxon.de/pages/products/c4d/64bit/64edition_e.html


You should try Windows 64-bit then; all the old Windows 32-bit applications run just fine.

F*** no!!! Windows 64-bit for the longest time lacked even basic driver support for peripherals, which is another reason why OS X shouldn't move to a 64-bit GUI now. Telling me 32-bit apps work fine under it is complete BS. My friend "was" running Win 64-bit and he ended up uninstalling it, due to the fact that most 32-bit apps don't run fine, and this was last year.

The 32-bit applications run as fast as on a 32-bit system, and 64-bit applications run even faster (typically 20% faster than 32-bit applications on the same hardware).

LOL. That doesn't even make sense. If your statement were true, then walking would be faster than running. :CRAZY:

If the task at hand requires integers larger than 32 bits, this is where a 64-bit proc excels. A 32-bit proc will need to cycle many times to do what a 64-bit proc can do in one pass.

64-bit addressing can be slow, but not as slow as a 32-bit system going swap crazy, or even worse, not being able to complete the task because it exceeds its limits.

Umm, CS2 is a GUI app - it's completely 32-bit. Same with Apple's apps.

(Don't you think that someone would have noticed that an Apple security update broke Photoshop and all of Apple's own apps?)

This claim is 100% BS, maybe 200% if you use 64-bit. :rolleyes:

:rolleyes: Dude, you're truly ignorant sometimes. Yes the GUI is 32-bit, but that doesn't mean CS2 doesn't have access to 64-bit addressing. The GUI does not confine the application's memory limit.

CS2 supports over 3 Gigs of RAM in the preferences (3072 MB). But if needed, it can and will use all of your available RAM, so more than the preference limit. Apple's pro apps, as already mentioned, do this.

These peeps noticed it first;
http://www.barefeats.com/cscs2.html

Just like any other 32-bit virtual memory OS. (Windows 32-bit supports up to 64 GiB of RAM - 32 applications can each have their own 2 GiB of physical memory.)

WRONG!!!

OS X can give each 32-bit app 2 Gigs of "real" memory, not just virtual memory. It has been able to do so since Panther. I own PCs and have used them longer than Macs, so don't try and feed me this BS!!!

Windows does have the capability to give one app 3 gigs, leaving 1 gig for the system, which only After Effects supports that I know of, but this is different and part of a different discussion.

Look above for the links - they definitely screwed up and posted an updater that killed all 64-bit apps.

I saw a later post that mentioned there was a screw up, so you're correct, but I also saw the post stating Apple fixed it promptly. And it was an issue with applications, not the system.

You're really making something of nothing in this case.

But, like a Windows 32-bit system that can support 64 GiB, your 32-bit OSX system could still support 5 GiB of RAM.

A 32-bit 10.3 system, which nobody would claim had any 64-bit addressing support, could also support your 5 GiB. 'nuf said about needing 64-bit to support more than 4 GiB in a system.

You are truly the most "willingly" ignorant peep I've encountered lately.

Let me fill you in on facts, not speculations and assumptions; I was an ADC developer and was given lots of material to read and watch. I paid my fee, so that I could use the developer store. :)

Panther could see larger-than-32-bit memory for the system since day one. Both the system and apps had access to 64-bit computations. Tiger of course added support for the full 64-bit address range and application support for 64-bit memory addressing.

If your BS were true, then why bother moving to 64-bit at all? According to you, 32-bit is faster when running 64-bit apps and 32-bit OS's can address just as much memory.

And just to be anal like you earlier, it's "GIG," not GIB. :p

Do you even own a Macintosh?

<]=)
 
Mossberg just wrote an article stating that the new iMac Core Duo ran Doom 3 with acceptable results... unless it is universal, I am calling a big fat BS on that one.

link
 
JackAxe said:
:rolleyes: Dude, you're truly ignorant sometimes.

Careful now...

JackAxe said:
Yes the GUI is 32-bit, but that doesn't mean CS2 doesn't have access to 64-bit addressing. The GUI does not confine the application's memory limit.

CS2 supports over 3 Gigs of RAM in the preferences (3072 MB). But if needed, it can and will use all of your available RAM, so more than the preference limit. Apple's pro apps, as already mentioned, do this.

The CS2 application uses 32 bit virtual addressing. This gives it a 4 GiB virtual memory space of which something around 3 GiB should be available for use by the application (likely largest contiguous allocation possible is around 2 GiB, assuming a clean virtual memory space... recently launched CS2 basically).

The CS2 application (and any application using standard file system APIs) uses 64 bit offsets to index into files. The use of 64 bit offsets is VERY different than having a 64 bit virtual address space. The use of 64 bit offsets has been around since Mac OS 9.
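Here is a minimal C sketch of that distinction (the /tmp/bigfile path is hypothetical; fseeko()/ftello() are the standard 64 bit offset calls a 32 bit process can use):

Code:
/* 64-bit FILE OFFSETS from a 32-bit process -- not the same thing
 * as a 64-bit virtual address space. */
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    FILE *f = fopen("/tmp/bigfile", "rb");  /* hypothetical big file */
    if (f == NULL)
        return 1;

    /* Seek 5 GiB in: far beyond what a 32-bit pointer could address,
     * yet legal here because off_t is 64 bits wide on Mac OS X. */
    off_t pos = 5LL * 1024 * 1024 * 1024;
    if (fseeko(f, pos, SEEK_SET) == 0)
        printf("now at offset %lld\n", (long long)ftello(f));

    fclose(f);
    return 0;
}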

Mac OS X from its early beginnings has had a thing called the Unified Buffer Cache (UBC). This is a cooperative arrangement between the vfs layer (file systems) and the virtual memory system that allows them to share the physical page pool (RAM). The intent of the UBC is to cache all data read from (locally owned) disks in any physical page that is not needed by processes running on the system. In Mac OS X 10.2.8 and later, when running on a PowerMac G5, the UBC can and will utilize all available physical memory, which can be larger than 4 GiB; in other words, the virtual memory system (and UBC) uses 64 bit physical addressing when on the G5.

So an application will gain a performance boost by being able to transparently load data from RAM instead of disk when the file data needed happens to be cached in the UBC. This caching happens for free and isn't anything special to CS2 (however CS2 may do some operations to pre-heat the UBC).

This however is not 64 bit virtual addressing; it is a purposeful effect of the UBC and happens without any work on the part of an application.

As a reference I suggest Apple's 64-Bit Transition Guide; in particular, review the "Alternative to 64-bit Computing" section, which CS2 is possibly indirectly or directly following.

As an example...

Code:
--- create 500 MiB file ---

[serickson@serickson-pmg5:~]
[0:505] > dd bs=1m count=500 if=/dev/random of=/tmp/500mbFile
500+0 records in
500+0 records out
524288000 bytes transferred in 44.254794 secs (11847033 bytes/sec)

[serickson@serickson-pmg5:~]
[0:510] > ls -lh /tmp/500mbFile
-rw-r--r--   1 serickso  wheel       500M Jan 19 15:33 /tmp/500mbFile

--- rebooted system to clear UBC ---

--- read the file the first time ---

[serickson@serickson-pmg5:~]
[0:507] > dd bs=1m count=500 if=/tmp/500mbFile of=/dev/null         
500+0 records in
500+0 records out
524288000 bytes transferred in 9.315147 secs (56283383 bytes/sec)

--- read the file a second time ---

[serickson@serickson-pmg5:~]
[0:508] > dd bs=1m count=500 if=/tmp/500mbFile of=/dev/null
500+0 records in
500+0 records out
524288000 bytes transferred in 0.647901 secs (809209976 bytes/sec)

So in the above example the first time I read the file it was read at a rate of 53.68 MiB/s (just about as fast as the drive could support). The second time I attempted to read the file it was read at a rate of 771.72 MiB/s. The difference in read performance is a result of the file being cached in the UBC (my inactive memory increased by about 500 MiB and free memory dropped by about the same, this reflects the UBC caching of the files data). To be clear in the second attempt to load the file no IO requests went out to the disk.


...as to the rest of your post, you are intermixing so many orthogonal and parallel concepts/issues that it really doesn't make much sense... so I won't even attempt to respond, but I will say it contains many inaccuracies... and mischaracterizations of what AidenShaw was saying.
 
JackAxe said:
The Pentium's FPU does share some registers with it, but like the Pentium's integer unit, it is "NOT" 64-bit.

The Pentium's FPU only shares registers with the MMX unit, not SSE.

JackAxe said:
Go here to see what's possible with 64-bit rendering now;
(Note that this app requires a 64-bit GUI, so no OS X version is available. They could've taken the same approach as Mental Ray/Maya, but I think Maxon is trying to make a play for market share.)
http://www.maxon.de/pages/products/c4d/64bit/64edition_e.html
Programs shouldn't need 64-bit GUIs, so this program could be designed to run on OS X. No GUI these days needs 64-bit. All the 64-bit calculation can be done in a separate thread in the background. If the program does actually need a 64-bit GUI then it's done wrong.
JackAxe said:
If the task at hand requires integers larger than 32 bits, this is where a 64-bit proc excels. A 32-bit proc will need to cycle many times to do what a 64-bit proc can do in one pass.

A 32-bit processor will only need 2 passes to do 64-bit calculations.

JackAxe said:
WRONG!!!

OS X can give each 32-bit app 2 Gigs of "real" memory, not just virtual memory. It has been able to do so since Panther. I own PCs and have used them longer than Macs, so don't try and feed me this BS!!!

Windows does have the capability to give one app 3 gigs, leaving 1 gig for the system, which only After Effects supports that I know of, but this is different and part of a different discussion.

All memory allocated by the system is "virtual" memory. The program will think it has access to 2 GB of RAM but there might only be 512 MB available. The other 1.5 GiB will be handled in the page file. Using real-mode memory is dangerous and hasn't been done widely since the 386 days.



JackAxe said:
Panther could see larger-than-32-bit memory for the system since day one. Both the system and apps had access to 64-bit computations. Tiger of course added support for the full 64-bit address range and application support for 64-bit memory addressing.

If your BS were true, then why bother moving to 64-bit at all? According to you, 32-bit is faster when running 64-bit apps and 32-bit OS's can address just as much memory.

And just to be anal like you earlier, it's "GIG," not GIB. :p
<]=)

Also, all new 32-bit Intel processors can support more than 4 GB; however, the max memory allocated to each program can only be 2 GB. This is why we need 64-bit.

Oh and it is GiB :)

You should read up properly before you slate others who are actually right :)
 
JackAxe said:
And just to be anal like you earlier, it's "GIG," not GIB. :p
Actually, no, he means GiB, which specifically means 2^30 (1,073,741,824), while GB means 10^9 (1,000,000,000) in most fields (it's the SI unit); however, in the computer software/hardware fields GB is also used to mean 2^30.

Review http://en.wikipedia.org/wiki/Gibibyte and learn...

In reality you buy memory in mebibyte (MiB) / gibibyte (GiB) units and disks in megabyte (MB) / gigabyte (GB) units... crazy world.
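A trivial C snippet makes the difference concrete:

Code:
/* GiB vs GB in plain numbers. */
#include <stdio.h>

int main(void)
{
    long long gib = 1LL << 30;     /* 2^30 bytes */
    long long gb  = 1000000000LL;  /* 10^9 bytes (SI) */

    printf("1 GiB = %lld bytes\n", gib);
    printf("1 GB  = %lld bytes\n", gb);
    printf("a GiB is %.1f%% bigger\n", 100.0 * (gib - gb) / gb);
    return 0;
}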

TBi said:
A 32-bit processor will only need 2 passes to do 64-bit calculations.
To be clear, when operating on a 64 bit number on a CPU that only has 32 bit wide registers for that type of number, the CPU will have to do 2 (or more) 32 bit operations to simulate the 64 bit operation (the compiler generates the needed instruction stream).

Additionally the instructions in that instruction stream have data dependencies between them, this prevents them from being in flight at the same time. So not only does the CPU have to do more but it has to execute them in a less efficient fashion.
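As a sketch of what the compiler effectively generates (illustrative C only, not actual compiler output), here is a 64 bit add built from 32 bit halves; note the high half cannot be computed until the low half's carry is known:

Code:
/* Simulating a 64-bit add with 32-bit registers. */
#include <stdio.h>

typedef struct { unsigned int lo, hi; } u64sim;

u64sim add64(u64sim a, u64sim b)
{
    u64sim r;
    r.lo = a.lo + b.lo;                 /* first 32-bit operation */
    unsigned int carry = (r.lo < a.lo); /* did the low half wrap? */
    r.hi = a.hi + b.hi + carry;         /* depends on the low half */
    return r;
}

int main(void)
{
    u64sim a = { 0xFFFFFFFFu, 0 };  /* 2^32 - 1 */
    u64sim b = { 1, 0 };
    u64sim r = add64(a, b);
    printf("result = 0x%08X%08X\n", r.hi, r.lo);  /* 0x0000000100000000 */
    return 0;
}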

To summarize "G#" capabilities...

  • The G3/G4/G5 family of processors have 64 bit wide registers for floating point numbers.
  • The G3/G4/G5 family of processors have 32 bit wide registers for integer numbers and addresses.
  • The G5 family of processors have 64 bit wide registers for integer numbers (this doesn't require the use of 64 bit addresses for an application to use them internally).
  • The G5 family of processors have 64 bit wide registers for addresses (requires 64 bit addressing support and 64 bit clean API).
  • The G4/G5 family of processors have vector registers (AltiVec) that are 128 bits wide that can be operated on as an array of 128 bits, an array of sixteen 8 bit integers, an array of eight 16 bit integers, an array of four 32 bit integers or floats. (see following diagram)

[Diagram: a 128-bit AltiVec register partitioned as sixteen 8-bit, eight 16-bit, or four 32-bit elements]


One nice addition that SSE has over AltiVec is the ability to operate on two 64 bit integers or floats in a 128 bit register.
 
whooo lots of geek talk here...

Oh yeah, in French, a "Mega Bit" means a huge wiener. :p
 
JackAxe said:
AidenShaw said:
Just like any other 32-bit virtual memory OS. (Windows 32-bit supports up to 64 GiB of RAM - 32 applications can each have their own 2 GiB of physical memory.)
OS X can give each 32-bit app 2 Gigs of "real" memory, not just virtual memory. It has been able to do so since Panther. I own PCs and have used them longer than Macs, so don't try and feed me this BS!!!

God... sorry just couldn't let this stupid statement slide... :(

OK, you are wrong... on Mac OS X, applications get a virtual memory space that is 4 GiB in size (addresses are 32 bits wide)... and have since the dawn of Mac OS X. They do not get access to "real" memory; they operate in a virtual memory sandbox that the Mac OS X virtual memory subsystem maps into physical memory (RAM) as needed. Only when running inside the kernel can you potentially get access to physical memory addresses (usually only when you need to DMA to hardware). Windows acts the same way.

Additionally, since Mac OS X 10.4 (when used on a G5) you can implement applications (ones without a GUI, using only libSystem) that have access to a 16 TiB virtual memory space (addresses are 64 bits wide).

Windows XP "32-bit" applications get a virtual memory space that is 2 GiB in size (addresses are 31 bits wide; the high-order bit is used for kernel mapping). Windows XP "64-bit" applications get a virtual memory space that is 16 TiB in size (actually 16 TiB minus 2 GiB IIRC).

Both Windows and Mac OS X support 64 bit addresses for physical memory (RAM)... of course, the chipset and CPU in the system will often limit the physical address size to something smaller than 64 bits wide. So some portion of the 64 bit physical address will go unused.
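A tiny C sketch of the point (nothing OS-specific assumed): every address a user process sees is virtual, and its width comes from how the binary was compiled, not from the RAM installed:

Code:
/* Stack and heap addresses printed here are virtual addresses. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int on_stack = 0;
    void *on_heap = malloc(1024);

    printf("pointer width: %d bits\n", (int)(sizeof(void *) * 8));
    printf("stack address: %p (virtual)\n", (void *)&on_stack);
    printf("heap address:  %p (virtual)\n", on_heap);

    free(on_heap);
    return 0;
}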
 
More chatter

917press said:
1x is not faster, just as 1x3 is not larger than 3; it is 3.

Without an accepted benchmark not only is the marketing useless, but so is the chatter here and elsewhere.
No need to sling insults :D I never said "1x" is faster. But "1x faster" IS. Read my statement carefully re the difference between "as fast" and "faster" :) It's a minor point, but one I'm correct about. Remember that 1x is the same as 100%. 100% of something is no improvement. But 100% ADDED (key word) to something is--it's double. That's what 1x faster means--added speed--while 1x as fast does not :D Luckily Apple makes it clear: "up to 2x as fast." 200% as fast. Same thing as saying 100% faster.
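Or in code form, if that helps (toy numbers only):

Code:
/* "Nx faster" ADDS to the baseline; "Nx as fast" replaces it. */
#include <stdio.h>

int main(void)
{
    double baseline = 1.0;

    printf("1x faster = %.0fx as fast\n", baseline + 1.0 * baseline);
    printf("2x faster = %.0fx as fast\n", baseline + 2.0 * baseline);
    return 0;
}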

20/20 now? :p
 