The benefit here is staggering. Without a true 64 bit GUI in Leopard you couldn't have a 64 bit Final Cut Studio. Under Tiger the only way to develop a 64 bit application is to build your application as a 64 bit Unix app with no Aqua and then communicate with that faceless engine using a traditional 32-bit Aqua user interface application. You can't link the 32 bit Aqua libraries into a 64 bit application at present.

Moving the GUI to 64 bit means that application developers can now develop 64 bit applications (like databases, video editing software, photo editing software, etc...) that have GUI interfaces.
Seriously? So it has nothing to do with GUI performance, but with the ability to build integrated apps... Well that is a big deal.

Thanks.
 
(Hoping the new screens are all multitouch displays, and that Leopard's secret feature is also multitouch)

I have actually been thinking about that recently, and I agree, even though I dismissed touchscreens as useless a while ago. I would just LOVE to see Apple move Multitouch to their entire product line! Why leave it at just the iPhone/iPod? Why not put it in the iMac, laptops and Cinema Displays as well?

They couldn't have demonstrated Multitouch in Leopard yet, since it would also mean announcing new hardware that supports that feature. They are not yet ready to do that, so they kept Multitouch under wraps for now. Soon they will release new displays and laptops, and they could at the same time announce the "top secret" feature of Leopard: Multitouch-UI.

Hey, one can dream, right?
 
I can't believe the 64bit debate is happening again. The primary benefit of the current 64bit processors is memory depth. This is good for huge volumes of video data and enormous databases but doesn't mean squat for spreadsheets, word processors, or most of what people do today.
In the case of x86-64 you also can get better performance for applications that don't need such a large memory space. The primary reasons are the following: 2x the number of registers under programmer/compiler control, using registers to pass parameters in function calls, better support of PIC, and of course native support for 64 bit integer types.

As an example (notice how few instructions x86_64 takes and how many parameters are passed via registers... i.e. no need for the called function to read them back off the stack)...

Code:
#include <stdio.h>

/* The argument values are never used; they are deliberately left
   uninitialized because only the calling convention matters here. */
void foo(int i1, float f1, double d1, short s1, double d2,
        unsigned char c1, unsigned short s2, float  f2, int i2)
{
   printf("");
}

int main (int argc, const char * argv[]) {
   int i1;
   float  f1;
   double d1;
   short s1;
   double d2;
   unsigned char c1;
   unsigned short s2;
   float f2;
   int i2;

   foo(i1,f1,d1,s1,d2,c1,s2,f2,i2);

   return 0;
}

x86
	pushl	%ebp
	movl	%esp, %ebp
	pushl	%esi
	subl	$100, %esp
	movzwl	-18(%ebp), %edx
	movzbl	-19(%ebp), %ecx
	movswl	-34(%ebp),%esi
	movl	-12(%ebp), %eax
	movl	%eax, 40(%esp) >> i2 (on stack)
	movl	-16(%ebp), %eax
	movl	%eax, 36(%esp) >> f2 (on stack)
	movl	%edx, 32(%esp) >> s2 (on stack)
	movl	%ecx, 28(%esp) >> c1 (on stack)
	movsd	-32(%ebp), %xmm0
	movsd	%xmm0, 20(%esp) >> d2 (on stack)
	movl	%esi, 16(%esp) >> s1 (on stack)
	movsd	-48(%ebp), %xmm0
	movsd	%xmm0, 8(%esp) >> d1 (on stack)
	movl	-52(%ebp), %eax
	movl	%eax, 4(%esp) >> f1 (on stack)
	movl	-56(%ebp), %eax
	movl	%eax, (%esp) >> i1 (on stack)
	call	_foo

PowerPC
	mflr r0
	stmw r30,-8(r1)
	stw r0,8(r1)
	stwu r1,-144(r1)
	mr r30,r1
	stw r3,168(r30)
	stw r4,172(r30)
	lhz r0,96(r30)
	extsh r2,r0
	lbz r0,82(r30)
	rlwinm r9,r0,0,0xff
	lhz r0,80(r30)
	rlwinm r0,r0,0,0xffff
	stw r0,56(r1) >> s2 (on stack)
	lwz r0,72(r30)
	stw r0,64(r1) >> i2 (on stack)
	lwz r3,116(r30) >> i1
	lfs f1,112(r30) >> f1
	lfd f2,104(r30) >> d1
	mr r7,r2 >> s1
	lfd f3,88(r30) >> d2
	mr r10,r9 >> c1
	lfs f4,76(r30) >> f2 
	bl _foo

x86-64
	movzwl	-10(%rbp), %ecx >> s2
	movzbl	-11(%rbp), %edx >> c1
	movswl	-26(%rbp),%esi >> s1
	movl	-4(%rbp), %eax
	movss	-8(%rbp), %xmm0
	movsd	-24(%rbp), %xmm1
	movsd	-40(%rbp), %xmm4
	movss	-44(%rbp), %xmm5
	movl	-48(%rbp), %edi >> i1
	movl	%eax, %r8d >> i2 (in %r8d, a register, not the stack)
	movaps	%xmm0, %xmm3 >> f2
	movapd	%xmm1, %xmm2 >> d2
	movapd	%xmm4, %xmm1 >> d1
	movaps	%xmm5, %xmm0 >> f1
	call	_foo
 
Seriously? So it has nothing to do with GUI performance, but with the ability to build integrated apps... Well that is a big deal.
No, it has to do with performance as well... and with allowing developers to more easily make applications that need to work with large data sets (modern media editing applications are such things now... they are working with large numbers of large images, streams of video with rendered effects, etc.)
 
The benefit here is staggering. Without a true 64 bit GUI in Leopard you couldn't have a 64 bit Final Cut Studio. Under Tiger the only way to develop a 64 bit application is to build your application as a 64 bit Unix app with no Aqua and then communicate with that faceless engine using a traditional 32-bit Aqua user interface application. You can't link the 32 bit Aqua libraries into a 64 bit application at present.

Moving the GUI to 64 bit means that application developers can now develop 64 bit applications (like databases, video editing software, photo editing software, etc...) that have GUI interfaces.

What speed bumps are we talking about here with FCP 6 rendering? 10X, 20X, 100X?

64bit FCP seems like such a leap! I've been looking forward to this for years.
 
4K - comes from the film industry, not computers

plus? The biggest horizontal resolution listed for "4k" on wikipedia is 4096, and the biggest vertical is 2664. So assuming a 16:9 form factor, a display capable of handling all these resolutions would have to be at least 4736x2664.

On a side note, what's with the designation "4k"? Sometimes these industry terms seem designed to deliberately confuse...

These terms actually come from film scanning formats, nothing to do with computers or any existing video format.

4K is considered the scanning resolution (4096x3072) at which there is no visible loss of quality when scanning 35 mm film. However, the industry generally works at half that rez -yes, it's called 2K- basically because visual quality will normally be the same but at a fraction of the storage capacity needed.

And yes, both 4K and 2K (you're right, 2K is exactly 4K/2) are 4:3 formats, simply because so is 35 mm film negative. There are many variants on this, but basically what happens is that you crop the top and bottom of the frame to get to 16:9-ish formats (cropping 4096x3072 to 16:9, for instance, leaves 4096x2304). This allows for what's called "re-framing", that is, moving the frame behind the 16:9-ish mask to get rid of boom mics and things like that.

So basically a 4096x3072 screen will be able to display a full frame at 4K ("full" meaning before masking) - more than enough.

About the ability to actually play back 2K or 4K frames in real time... well, of course that needs some serious disk systems, although I was happy to learn at a recent expo about some companies offering RAID systems fit for the task for not such a huge amount of cash, considering what we are talking about. :D



"Detras de nuestras mascaras estamos vosotros"
 
You also just said that 10.4's support for 64-bit wasn't useful.
But that didn't stop the Apple Marketing Machine from full-court 64-bit hype, did it?

And, I said that 64-bit in 10.4 was "next to useless". There are some cases where there's at least a theoretical benefit. Even that was lost in the Intel migration, however.

You said that the iMac was the only system that lost 64-bit capability. It was the only system that lost a 64-bit processor, but not the only system to lose 64-bit support.
 
I'm sure that many Merom (AKA "Core 2") buyers were very aware that they were future-proofing a bit by buying a 64-bit CPU.

Most people look a bit to the future when making a large purchase. A corporate user spending someone else's money - maybe not. But most do....

I am an IT Manager. I decide what we buy, but not always when we buy it. We definitely approach the decision from a strategic perspective. But, you are right. Many of my contemporaries do not apply much 'forward thinking' to their purchases. Years of observation have led me to hypothesize that the more layers of approval a purchasing decision requires, the worse it becomes.
 
I can't understand how any 32-bit MacBook buyer could possibly complain. Everybody and their brother on this forum, and just about every other forum on the net, has striven to point out the direction the computing world is moving in. 64-bit multi-core processing is no longer the future; it is today's reality. What's more, it has been a reality for a year now!

So to put it bluntly, anybody complaining about their "32 bit Pro" machine should really just close their mouths and admit that they weren't listening. In any event, as others have pointed out, you won't be doing much pro editing on any of Apple's current portables, Pro or not.

Thanks
Dave

I remember similar discussions moving from 16-bit to 32-bit (circa 386/486). When the 486 was introduced, software had not really even exploited the capabilities of the 386. Many technical journalists basically said the new technology was a waste of money for the vast user community. My sense was that 32-bit OSes and apps would not arrive until there was a sufficient financial incentive for developers to create them. Thus, the software would always follow the hardware. Certainly this is self-evident and not particularly original.
 
Regarding Displays

I think what the pro line needs are TFT screens that can show and hold the AdobeRGB colour space. The current colour gamut on the Mac screens is fine for home use but poor for the pro market. You can get true sRGB monitors, but they are very expensive. I would like to see Apple push into this area, giving designers etc. a tool to help do their job properly. After all, everyone can be fussy over colour, especially if it is on their company logo.
 
well

I know one thing: 64-bit should be the standard by now.

HD displays are nice.

Video editing on the MacBook Pro I am not sure about. Maybe Santa Rosa will be able to handle it. The MacBook Pro is going to need a lot of help to push those kinds of graphics with the app churning away. I hope this works out.

I am waiting for an iMac that is 30" with a better resolution, C2D, a TV-in card, and a Blu-ray/HD DVD drive. Anything else would be uncivilized. :eek:

-Jesse
 
With the current FCP 5.x.x, when you import .mxf files onto your hard drive, FCP has to convert the .mxf to QuickTime. This defeats the purpose of a tapeless P2 workflow. Currently FCP 5.x.x does not support native .mxf files.

DVCPRO HD is 100 Mbps, so you do the math for the time needed to convert, say, a 60-minute .mxf to QT just so you can edit it in FCP.

Avid, Liquid and EDIUS now support .mxf natively, and Premiere Pro with AXIO also supports it natively. There is no reason why Apple should not support .mxf natively.

How exactly does re-wrapping an .mxf as Quicktime, while keeping the essence the same, defeat the tapeless workflow? Especially considering how slick the P2 interface in FCP is, or how fast the import/conversion process is on a modern Mac? I'd love to hear you explain it, if you could. Maybe you meant working in acquisition-native format? Which FCP is more or less the king of? Remember again, FCP deals with DVCPROHD 100% natively, Kona even accelerates the performance -- you are simply re-wrapping the essences of the file. It's not a long, drawn-out conversion process that degrades the quality of the video in any way -- it is 100% native on the codec level.
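To make the point concrete, here's a toy sketch of what a container re-wrap amounts to. The two header structs are invented for illustration (real MXF and QuickTime structures are far more involved), but it shows why re-wrapping is fast and lossless: the essence bytes are copied through untouched, and only the wrapper metadata changes.

Code:
/* A toy illustration of container re-wrapping: only the wrapper
   metadata changes, while the essence bytes are copied through
   untouched -- no decode, no re-encode, no generation loss. The two
   header structs are invented for illustration; real MXF and
   QuickTime structures are far more involved. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

struct mxf_hdr { char magic[4]; uint32_t essence_len; };  /* invented */
struct mov_hdr { char type[4];  uint32_t essence_len; };  /* invented */

int main(void) {
    uint8_t essence[16] = "DVCPROHD-frames";  /* stand-in for video */
    struct mxf_hdr in = { "MXF!", sizeof essence };

    /* Re-wrap: build a new header, keep the essence bit-identical. */
    struct mov_hdr out = { "moov", in.essence_len };
    uint8_t rewrapped[16];
    memcpy(rewrapped, essence, sizeof essence);

    printf("wrapper: %.4s -> %.4s, essence unchanged: %s\n",
           in.magic, out.type,
           memcmp(essence, rewrapped, sizeof essence) == 0 ? "yes" : "no");
    return 0;
}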
 
Does anyone have any "intelligence" on the real speed gains of FCP in its speculated 64bit outfit? :cool:
 
In the case of x86-64 you also can get better performance for applications that don't need such a large memory space. The primary reasons are the following: 2x the number of registers under programmer/compiler control, using registers to pass parameters in function calls, better support of PIC, and of course native support for 64 bit integer types.

As an example (notice how few instructions x86_64 takes and how many parameters are passed via registers... i.e. no need for the called function to read them back off the stack)...
Ok, so we saved 8 instructions before a function call, in a function that likely has hundreds or thousands of them. A benefit, yes, but not a revolutionary one, and not one that really has anything to do with word size.

The main benefit of 64bit CPUs is still memory depth-- that's inherent in the increased pointer width. 64bit integer math is another inherent benefit, but not one that has much applicability (how often do you code long longs?).
No, it has to do with performance as well... and with allowing developers to more easily make applications that need to work with large data sets (modern media editing applications are such things now... they are working with large numbers of large images, streams of video with rendered effects, etc.)
"allowing developers to more easily make applications that need to work with large data sets"-- that's what I said. It's not that the GUI operates faster, it just makes it easier to build integrated applications where the 64bit portion isn't running as a separate process...
 
The x86 world saw up to a 20% speed increase from moving to 64-bit capable processors. None of it came from the 64-bitness itself, but from the fact that such processors had twice the number of registers (small memory slots deep inside the very core of the processor).

The other benefit came from using large amounts of RAM, more than 4 GB. This is not just physical RAM but also page files in the operating system's virtual memory.

In these two cases a 64-bit x86 processor will get some speed increase even if the computer doesn't have more than 4 GB installed. In fact, the supporting chipsets of some 64-bit x86 processors are so bad architecture-wise that operating systems can hardly use more than 3 GB anyway. Such is the case in iMacs and MacBook Pros, though not the Mac Pro. That's why you can't buy such systems with more than 3 GB RAM; if you put in 4 GB, they wouldn't use more than 3 anyway. PowerPC systems don't have this limitation: the operating system can use up to 4 GB RAM on 32-bit systems.

So... what speeds can we expect from a 64-bit version of FCP? At least 20% for starters. With a true 64-bit operating system (Leopard) we'll see additional speedups due to a new memory subsystem with a more efficient paging model (+5-10%). If you then have the RAM to match the new paging model (>4 GB), you'll reap the real benefits of 64-bitness, where you could probably see some serious acceleration.

If you study benchmarks of 32- and 64-bit application versions running under 32-bit and 64-bit Windows on 64-bit hardware, you hardly see any speedups. I think this is because developers really haven't started building applications for 64-bit architectures yet. Really optimizing things would mean some serious rewriting of some very low-level stuff in applications, so they simply haven't done it just yet. Apple and Mac developers rely quite heavily on external frameworks for some of this (at least they should), so we might get to see better utilization of the 64-bit features. I wouldn't expect more than a 20% gain, though.

Take Photoshop, for example: Adobe won't go 64-bit until CS4 at the earliest. Their virtual memory structure is so archaic and so deeply integrated into the product that they just can't replace it with one that can access more than 2 GB of RAM. Too bad.
 
the supporting chipsets of some 64-bit x86 processors are so bad architecture-wise that operating systems can hardly use more than 3 GB anyway. Such is the case in iMacs and MacBook Pros...
The term "bad" is a bit strong - it's just that the low end mobile-based chipsets have 32-bit physical addressing. Some address space has to be reserved for OS uses (such as I/O pages, mapping the video ram, ...) - therefore these mobile chipsets can't utilize all of 4 GiB of RAM.

It's not "bad" (as in defective) design - it was a tradeoff that was made for mobile systems.

The next generation (Santa Rosa) of mobile chipsets has additional memory addressing, and can support more than 4 GiB.

PowerPC systems don't have this limitation: the operating system can use up to 4 GB RAM on 32-bit systems.
Really? Please tell me which PowerPC G4 systems (32-bit) supported more than 2 GiB of RAM?

Only the 64-bit PPC970 (G5) broke the 2 GiB barrier, as far as I can tell looking at Apple History. http://www.apple-history.com/body.php?page=gallery&model=g4_800&performa=off&sort=date&order=ASC

(By the way, 32-bit Windows and Linux operating systems on 32-bit CPUs support up to 64 GiB of RAM when used with a chipset that supports the 36-bit addressing extension (PAE). 32-bit PowerPC has a similar 36-bit addressing extension, but Apple never supported the feature.)
 
How exactly does re-wrapping an .mxf as Quicktime, while keeping the essence the same, defeat the tapeless workflow? Especially considering how slick the P2 interface in FCP is, or how fast the import/conversion process is on a modern Mac? I'd love to hear you explain it, if you could. Maybe you meant working in acquisition-native format? Which FCP is more or less the king of? Remember again, FCP deals with DVCPROHD 100% natively, Kona even accelerates the performance -- you are simply re-wrapping the essences of the file. It's not a long, drawn-out conversion process that degrades the quality of the video in any way -- it is 100% native on the codec level.

Yeah, this Alpinism guy is nuts; Panasonic's own web site even touts FCP as being usable with P2 tech. I would bet my whole edit suite that copying the video files from the P2 card to my edit drives and then importing them into FCP, whether it re-wraps them into QT or not, is still a lot faster than doing a real-time digitize in FCP over FireWire from the camera reading the P2 cards.

-mark
 
Ok, so we saved 8 instructions before a function call, in a function that likely has hundreds or thousands of them. A benefit, yes, but not a revolutionary one, and not one that really has anything to do with word size.

...and those parameters are passed in registers, which can reduce pressure on the memory system (usually L1). It doesn't have anything to do with word size, but it is a capability enabled when x86-64 is operating in 64-bit mode.

The main benefit of 64bit CPUs is still memory depth-- that's inherent in the increased pointer width. 64bit integer math is another inherent benefit, but not one that has much applicability (how often do you code long longs?).
64 bit ints are used a lot more than you may think these days.

My point is that while x86-64 allows a large memory space, it also enables secondary capabilities that can improve performance (in specific real-world cases 2x, if not a little more).

"allowing developers to more easily make applications that need to work with large data sets"-- that's what I said. It's not that the GUI operates faster, it just makes it easier to build integrated applications where the 64bit portion isn't running as a separate process...
Which is wrong. It is both (in the case of x86-64).... better performing and easier to implement... which is my point.
 
64 bit ints are used a lot more than you may think these days.
True; for example, most filesystems are 64-bit these days (you can tell the 32-bit filesystems - they're the ones that can't support files greater than 2 GiB or 4 GiB).

On the other hand, doing an occasional 64-bit integer calculation on a superscalar superpipelined 32-bit CPU isn't significantly slower than doing it on a 64-bit CPU. (The key word is "occasional".)

There are examples of programs that make heavy use of 64-bit integers, but they're not important to most users.
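As a concrete illustration of both points, here's a trivial sketch (the image size below is a made-up example value): file offsets are natural 64-bit quantities even in otherwise 32-bit code, and on a 32-bit x86 build the 64-bit addition compiles to an addl/adcl pair - two cheap instructions, which is why the occasional 64-bit calculation costs almost nothing.

Code:
/* File offsets are natural 64-bit quantities even in otherwise 32-bit
   code. On a 32-bit x86 build, the 64-bit addition below compiles to
   an addl/adcl pair -- two cheap instructions -- which is why the
   occasional 64-bit calculation costs almost nothing. The image size
   is a made-up example value (roughly a dual-layer DVD). */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    uint64_t image_size = UINT64_C(8500000000);  /* ~8.5 GB, > 4 GiB */
    uint64_t chunk      = UINT64_C(1) << 20;     /* 1 MiB read size */

    uint64_t end = image_size + chunk;           /* 64-bit add */
    printf("offsets past 4 GiB need 64-bit math: %" PRIu64 "\n", end);
    printf("number of 1 MiB chunks: %" PRIu64 "\n", image_size / chunk);
    return 0;
}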
 
Yeah, this Alpinism guy is nuts; Panasonic's own web site even touts FCP as being usable with P2 tech. I would bet my whole edit suite that copying the video files from the P2 card to my edit drives and then importing them into FCP, whether it re-wraps them into QT or not, is still a lot faster than doing a real-time digitize in FCP over FireWire from the camera reading the P2 cards.

-mark

Obviously you guys have never worked with the DVCPRO HD codec at all.

I never mentioned editing straight from a P2 card or a Firestore. As of right now, importing .mxf files from a P2 card into FCP requires wrapping the .mxf files with a QT wrapper. THAT TAKES CPU TIME.

Whereas with native support, you don't need a wrapper - like in EDIUS, Premiere Pro with Matrox Axio, and Avid, where you simply drag the downloaded footage in and start EDITING.

That is also why current Firestore owners have to pay $49 for the firmware 3.0 upgrade to enable native support - meaning the Firestore will wrap all .mxf files with the QT wrapper so you won't waste your time doing it in FCP.

If you have never worked with the DVCPRO HD codec, please don't mouth off based on some brochures.

AND GO BACK TO YOUR 4:2:0 25 Mbps HDV!! :p
 
4k explained....

plus? The biggest horizontal resolution listed for "4k" on wikipedia is 4096, and the biggest vertical is 2664. So assuming a 16:9 form factor, a display capable of handling all these resolutions would have to be at least 4736x2664.

On a side note, what's with the designation "4k"? Sometimes these industry terms seem designed to deliberately confuse...

The term 4K originated from the scanning of film negative on data scanners such as the Oxberry and Cineon systems. Arguably the highest 4K scanning resolution referred to Super 35mm OCN (also known as full-aperture 35mm), which at 4K is 4096x3112.

The rumor says uncompressed, but it doesn't say at what bit depth. Most people working in DI assume that 10 bits per component, as in the DPX or Cineon standard, is the norm. Each full-aperture 4K uncompressed 10-bit file is around 40 megs a pop. Arguably you want to play 24-30 of them a second, which leads to around 1 gigabyte per second from disk into memory. This is most likely where the 64-bit requirement comes into play. You're going to need a ton of RAM available to the system in order to do anything useful with all of those frames.
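As a sanity check on those numbers, here's the back-of-the-envelope arithmetic, assuming DPX-style packing where three 10-bit RGB components fit in one 32-bit word. Packing conventions vary, so treat these as ballpark figures; they land in the same neighborhood as the figures above.

Code:
/* Back-of-the-envelope numbers for uncompressed full-aperture 4K,
   assuming DPX-style packing: three 10-bit RGB components per 32-bit
   word, i.e. 4 bytes per pixel. Packing conventions vary, so treat
   these as ballpark figures. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t width  = 4096, height = 3112;   /* full-aperture 4K scan */
    uint64_t bytes_per_pixel = 4;            /* packed 10-bit RGB */
    uint64_t frame  = width * height * bytes_per_pixel;
    uint64_t rate24 = frame * 24;            /* 24 fps playback */

    printf("bytes per frame: %llu (~%.0f MB)\n",
           (unsigned long long)frame, frame / 1e6);
    printf("bytes per second at 24 fps: %llu (~%.2f GB/s)\n",
           (unsigned long long)rate24, rate24 / 1e9);
    return 0;
}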

That is hard core, and there are only a handful (as in fewer than five) of systems in the world that can do it. There are even fewer true 4K monitoring options. This rumor - if true - is ****ing momentous.

Chris

--
Chris Noellert
Senior Flame / Digital Post Technical Director

Nordiskfilm Post Production Stockholm
(Formerly Filmteknik/Frithiof Film to Video AB)
Tulvaktsvagen 2
115 40 Stockholm

Tel: +46 8 450 450 00
Fax: +46 8 450 450 01
Dir: +46 8 450 450 17
Mob: +46 7 024 616 31
AIM: cmnoellert

Reel: http://se.nordiskfilm-postproduction.com/movieviewer.aspx?movie=Video_250018.mov
Web: http://www.nordiskfilm-postproduction.com
 
Ouch, those are some strict system requirements. If my MacBook Pro can't run Final Cut Pro 6, I'll be pretty damn upset.

Yep, was thinking the same.

The main Mac we use for editing our video projects is an iMac Core Duo, plus a MacBook when in the field.

Whilst I do have a Mac Pro, it is never used for video editing, only the occasional Motion project.

Video editing generally doesn't need such high technical specifications, so I'm very skeptical about this requirement for a 64-bit processor. Surely Apple wouldn't be so silly as to cut out a huge market.

If they are, then they have lost me - I will simply stick with the current Final Cut Studio.
 