aegisdesign said:
Sorry, but you're completely mad.

Sorry, but do you work for Adobe?

My point is, they already have the code for x86/SSE used on the Windows platform.

That code (the functions that handle the rendering of effects etc.) ought to be separate from the code that represents the 'gui layer' of the software (for the most part).

When did I say that x86 code = OS X x86 code?

I don't think I did.
 
please accept my apologies...

I'm sorry, I didn't realize how large that photo was....

The IBM web page scales it with width="420" height="369", so it looked normal-sized.

I've scaled it, and made it a link.

Sorry...
 
BGil said:
aegisdesign said:
Sorry, but you're completely mad.

<snip>

Oh yeah, Adobe software for Vista requires no modifications or porting. Nearly everything from XP will work on Vista. The only possible porting will be to 64-bit (After Effects, Premiere Pro, Audition).

How did I know you'd start up with the Vista fanboy stuff?

Why in the world did you have to bring up Vista? :confused:
 
it wasn't meant to be literal...

longofest said:
You can't just precision-cut a dual-core processor and get 2 separate processors. The two have connections to each other that would be severed, connections that are used for cache "snooping" (which is a nice advancement of dual-core systems over dual-processor systems).

While each core might have separate caches depending on the CPU type (different companies have different implementations), you still cannot just split them and have 2 CPUs. The cores are totally reliant on each other.
Are you sure?

I've made single-CPU systems out of dual processor systems before - just used the bandsaw to cut through the case, motherboard, power supply and disk drives.

Works fine.... :D
 
AidenShaw said:
Are you sure?

I've made single-CPU systems out of dual processor systems before - just used the bandsaw to cut through the case, motherboard, power supply and disk drives.

Works fine.... :D
Yeah, but only if you use a fine, diamond-tipped blade for precision cutting.
 
MacTruck said:
Nicely done, inspector. So again, and even better, it's a fake.
A fake created with the intention of distracting rumormongers from more speculation about tomorrow's event? ;)
 
The press will decide the issue

AidenShaw said:
IMO the smallest unit that fits the criteria is the core. Look at the PPC970MP picture. It's pretty clear that this chip has two complete "thingies" on it - cut it down the middle and you'd have two CPUs, so wouldn't you have 2 CPUs even if you don't cut it?
Actually, in this case, no. From IBM's statements, the 970 uses interchip communication, which means that unlike the Pentium D, it's not just two processors on the same die. When looking at the more advanced multi-core chips (the IBM Power 5+ comes to mind), there is a great deal more logic and wiring linking the cores together.

The other place where this logic fails is when referring to chips like the "Cell". The "Cell" as implemented for the PS3 will have 9 cores. One of them is a stripped-down general-purpose core, which could act as a stand-alone processor. The 8 SPEs could probably not be separated to work as stand-alone units without reworking them.

Generally I refer to the die as the CPU.
AidenShaw said:
Before too long, maybe new clearer terminology will be settled upon, so that the characteristics of how many "thingies" in how many "whatchamacallits" in your system will be clear.
I agree with this; unfortunately, media outlets and marketing departments in general don't seem to care in most instances.
Intel said:
http://www.intel.com/personal/desktopcomputer/dual_core/index.htm
Dual-Core Processing: Intel® Pentium® processor Extreme Edition
Apparently they use "Processor" to refer to a single piece of silicon regardless of the number of cores.

AMD said:
http://www.amd.com/us-en/Processors/ProductInformation/0,,30_118_9485_13041,00.html
Do more in less time with the AMD Athlon 64 X2 dual-core processor.
Apparently AMD agrees with Intel.
ZDNet said:
http://reviews.zdnet.co.uk/hardware/processorsmemory/0,39024015,39193811,00.htm
Intel's first dual-core CPU benchmarked...

...Simply put, dual-core technology places two independent execution units onto the same processor die -- think of it as two processors in one.
Ziff Davis agrees with me on "CPU", but also with Intel on "Processor"...

BTW: I googled for cpu core. The results, while not agreeing with me, were fairly decisive.

Ultimately it'll be the marketing departments and the press that will decide this one, and it won't be based on logic. This is the dumbed-down, catch-phrase-laden world we live in these days. I barely have the energy to attempt to stop users from calling Word "Microsoft" and Internet Explorer "The Internet". People are willfully ignorant; you can rant about it (and I do), but it doesn't seem to change anything. The sad thing is that by using their pet names and referring to things using incorrect terminology, they're putting a big "I'm a sucker, take my money" sign on themselves when dealing with computer salesmen and techs.
 
igetbanned said:
Sorry, but do you work for Adobe?

My point is, they already have the code for x86/SSE used on the Windows platform.

That code (the functions that handle the rendering of effects etc.) ought to be separate from the code that represents the 'gui layer' of the software (for the most part).

When did I say that x86 code = OS X x86 code?

I don't think I did.

Much of what is referred to as x86 code isn't. It's Win32 code. Very important difference. The code makes Windows system calls, calls on Windows libraries, and is built with a completely different compiler (Visual Studio, perhaps?). The only parts that would be x86 specific are the parts written in x86 assembly, which is probably a very very small percentage of the total program.

While it can be rather simple to write an x86 FreeBSD program using standard libraries and make it run on OS X PPC, it's much harder to write an x86 FreeBSD program and make it run on x86 Windows.
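To make that concrete, here's a rough C++ sketch (purely illustrative, nothing from any real codebase): both branches run on the same x86 CPU, but the code is tied to the OS API (Win32 vs. POSIX), not to the instruction set.

// Illustrative only: the "porting" work here is about the OS API,
// not about x86. Both paths execute the same x86 instructions underneath.
#ifdef _WIN32
  #include <windows.h>
#else
  #include <fcntl.h>
  #include <unistd.h>
#endif

bool touch_file(const char *path)
{
#ifdef _WIN32
    // Win32 system call: this is what makes the code "Windows code"
    HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, NULL,
                           OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return false;
    CloseHandle(h);
#else
    // POSIX equivalent: same CPU, different OS interface
    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd < 0)
        return false;
    close(fd);
#endif
    return true;
}

int main()
{
    return touch_file("example.txt") ? 0 : 1;
}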
 
igetbanned said:
My point is, they already have the code for x86/SSE used on the Windows platform.

That code (the functions that handle the rendering of effects etc.) ought to be separate from the code that represents the 'gui layer' of the software (for the most part).
Yeah, they ought to be separate. But since when do corporations do things the way they ought to?

A large application that's built with portability in mind should be properly segmented into purely portable code, CPU-specific sections (x86, PPC, etc.), OS-specific sections (locking, networking, etc.) and GUI-specific sections (Win32, Quartz, X11, etc.). This way, a move like this becomes simple - undefine the macros for PPC and define the macros for x86. Leave the OS and GUI macros set for Mac OS/Quartz. Recompile and test.
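As a rough illustration (the MYAPP_* macro names are made up here, not anyone's actual build system), the CPU-specific code sits behind a compile-time switch with a plain portable fallback next to it:

#include <cstddef>

#if defined(MYAPP_CPU_X86_SSE)
#include <xmmintrin.h>   // SSE intrinsics - only compiled in the x86 build
void scale(float *data, std::size_t n, float factor)
{
    __m128 f = _mm_set1_ps(factor);
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)
        _mm_storeu_ps(data + i, _mm_mul_ps(_mm_loadu_ps(data + i), f));
    for (; i < n; ++i)   // scalar tail
        data[i] *= factor;
}
#else
// Portable fallback (also where a PPC/AltiVec branch would slot in)
void scale(float *data, std::size_t n, float factor)
{
    for (std::size_t i = 0; i < n; ++i)
        data[i] *= factor;
}
#endif

int main()
{
    float buf[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    scale(buf, 8, 0.5f);
    return 0;
}

Flip which macro the build defines and the rest of the source doesn't change - assuming the code was segmented that way in the first place.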

I suspect organization like this is why Mathematica was ported so quickly. The "10 lines" of code they had to change are probably the macros that identify the platform.

I suspect Adobe's code is not even close to being organized like this. So they're going to have to review/audit everything, even if the actual changes end up being minimal. Which is why they're saying it will take a long time.
 
MagnusDredd said:
Much of what is referred to as x86 code isn't. It's Win32 code. Very important difference.

Now that makes perfect sense.

The only parts that would be x86 specific are the parts written in x86 assembly, which is probably a very very small percentage of the total program.

You say probably.

I would have guessed that all of the effects-rendering code, format-conversion code, etc. would make up the bulk of the program (or at least the more technically difficult part that I would dread redoing from 'scratch' or near scratch).

The rest of the 'fluff' comes from the Windows API.
 
shamino said:
Yeah, they ought to be separate. But since when do corporations do things the way they ought to?

A large application that's built with portability in mind should be properly segmented into purely portable code, CPU-specific sections (x86, PPC, etc.), OS-specific sections (locking, networking, etc.) and GUI-specific sections (Win32, Quartz, X11, etc.). This way, a move like this becomes simple - undefine the macros for PPC and define the macros for x86. Leave the OS and GUI macros set for Mac OS/Quartz. Recompile and test.

I suspect organization like this is why Mathematica was ported so quickly. The "10 lines" of code they had to change are probably the macros that identify the platform.

I suspect Adobe's code is not even close to being organized like this. So they're going to have to review/audit everything, even if the actual changes end up being minimal. Which is why they're saying it will take a long time.

You also make perfect sense.
 
shamino said:
I suspect organization like this is why Mathematica was ported so quickly. The "10 lines" of code they had to change are probably the macros that identify the platform.
Mathematica was written using Project Builder/Xcode...

Project Builder/Xcode was originally designed to be cross-platform. Before Apple bought it, Mac OS X's programming environment (the OS was called NeXTStep then) reportedly supported multi-platform binaries with a click of the mouse. NeXTStep ran on x86, PA-RISC, SPARC, and m68k CPUs. I've heard that it was nearly effortless to create a single application that would seamlessly run natively on all four CPUs.

In effect, you could have a program on a file server on the network. A person on an HP 9000 series machine (not x86-based) could run the program; a person on a SPARCstation (microSPARC CPU) could open the same application; a person on a "PC" (an x86-based machine) could likewise open the program. This would also be true for applications on CD: one version for all four platforms. The OS was designed from the ground up to support multiple architectures in such a way that the user would never know the difference.

I'm not exceptional with C++, but in reading the documentation on making universal binaries from Cocoa-based programs, I completely understood what changes would have to be made (damned few) and how to make them. It took me all of 5 minutes to understand what to do, and I've learned C++ from an outdated "C++ in 21 Days" book. The cheesy command-line stuff I've written requires no changes at all. The reason I don't have to do much is that everything I've written uses standard libraries and Xcode for development.
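For what it's worth, the sort of change the universal binary docs call out is mostly byte order: data a PPC build wrote big-endian has to be swapped when an x86 build reads it. A rough, hand-rolled sketch (real Mac code would use the system's byte-swapping routines instead):

#include <cstdint>
#include <cstdio>

// Hypothetical example of a typical PPC -> x86 fix: convert a big-endian
// 32-bit value (as written by a PPC build) to whatever the host CPU uses.
static uint32_t big_to_host(uint32_t v)
{
    const uint16_t probe = 1;
    const bool host_is_little = (*(const unsigned char *)&probe == 1);
    if (!host_is_little)
        return v;   // big-endian host (PPC): nothing to do
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

int main()
{
    // value read byte-for-byte from a big-endian (PPC-written) file;
    // on x86 it needs swapping before use
    uint32_t from_file = 0x12345678u;
    std::printf("0x%08X\n", (unsigned)big_to_host(from_file));
    return 0;
}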

For more on NeXT:
http://en.wikipedia.org/wiki/Nextstep
shamino said:
I suspect Adobe's code is not even close to being organized like this. So they're going to have to review/audit everything, even if the actual changes end up being minimal. Which is why they're saying it will take a long time.
I'd guess that you're at least half correct. I'm pretty sure that it'll take some substantial work to move the major Adobe apps to Xcode. My reason is as follows:

Carbon is a completely different story. It's derived from the "Macintosh Toolbox", which is the programming interface that was developed for the original Mac. The Macintosh Toolbox was originally partially implemented in the ROM (hardware) of the machine. A terrible practice, but at the time it was faster. While it worked for Apple for a while, it was horribly tied to the hardware, and not very expandable or easy to update. The virtual memory system was unbelievably bad, the ability to keep programs from invading each other's space (usually crashing the machine) was non-existent, etc. From my experience, it was all UI with nothing underneath. The Mac Toolbox was not designed to be portable, quite the opposite. This is why Apple's move to PPC from 68k, which seemed to occur seamlessly, amazed me. It may not have been that easy a transition, but it seemed like it from the outside. I wasn't a "Mac user" at the time.

Carbon is basically 90% of the old Mac API (Application Programming Interface) ported to a new OS + some new instructions. 10% of the old Mac API was considered bad enough to be thrown out. New instructions and methods were added to replace the 10% that sucked badly, as well as to bring some new functionality in. The problem is that companies who had written software for the Mac Toolbox still had all of this old code in their programs, and it had to be changed.

The reason this is so hard for applications that have been around a long time is because, most of the time, programmers/companies don't replace the old code. They simply add to it. It's not always the best way, but sometimes it's the only affordable and fast-enough way to release new programs. Windows XP is a grand example of the "Winchester Mansion" mentality, in that it is Windows 2000 with about 4 or 5 million new lines of code added to the old. Since I don't work for M$, I cannot say how much of the old code was replaced, but I'm guessing it was a very small percentage.

There's more to it than that, of course, but you should be able to see why some developers will be able to port their programs in a day or so, and other companies will have a much harder time. The reason Adobe will have a hard time almost certainly has nothing to do with x86 at all. It'll be moving an application with millions of lines of code to a new programming kit that doesn't work the same way, and looking through tons and tons of old code. I'm not thinking it's rewrite-level nastiness, but it's a serious pain nonetheless.

The point is that the more of the old code (Mac toolbox) still lurking in the recesses of a program, the harder it'll be to convert. The programs that have been on the Mac the longest without a major overhaul will probably be among the hardest.
 
SPUY767 said:
Yup, HT is pretty well useless, save for the fact that it speeds up multitasking, if only very slightly. Hyperthreading makes the computer processor think that it has a siamese twin, and allows the execution of two process threads simultaneously. This is all well and good, but half the time it makes things run slower than if they weren't running HT.
I disagree. HT shows a significant improvement (15% or more) vs a single thread on the processor. Apps that are well-threaded by design will see even greater improvement from warmer caches. To avoid getting too deep into a description of how Intel's complex instructions are actually comprised of smaller dependency "chunks", let's use this analogy:

We have only one mouth, with which we can consume a certain amount of food and drink over a period of time, resulting in an average rate of consumption; ie our eating performance. Some bites have to be chewed longer than others, meat must be cut into smaller bites, when we drink liquid it goes down quickly. Often, taking a drink helps us swallow a mouthful of food. This is a sequential process. So, God gives us two hands [dual bi-directional memory busses] with which to feed ourselves, and this speeds up the preparation of food as well as moving it from plate to face, but only one hand can stuff food or drink into the mouth at a time, and (regardless of how hard/fast we try to stuff it in there) we can't consume it any faster than we can bite/chew/swallow/drink/etc, although our cheeks [caches] store some extra food and help keep the mouth [CPU] in business while we use our arms. Now, if we had two mouths [dual-core] we could process the food faster, but with only two hands, we're still limited by how fast we can prepare and deliver the food. Luckily, we don't, but then if we really wanted to eat as fast as humanly possible, we'd dump all of the food and drink into a blender to mince it up and gulp it down as a constantly-flowing slurry... that's HyperThreading. It goes in, gets processed and comes out just the same, but it happens quicker and all at once. And we (hopefully don't want to) see that the end product isn't any different... :eek:
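From the software side, HT just shows up as extra logical processors, so any program that sizes its worker pool from the reported CPU count picks up the extra capacity automatically. A quick sketch (modern C++ threads, used purely to keep it short):

#include <cstdio>
#include <thread>
#include <vector>

static void worker(unsigned id)
{
    std::printf("worker %u running\n", id);
    // ... the actual per-thread work would go here ...
}

int main()
{
    // Logical processors reported by the OS: with HT on, this count
    // includes the "siamese twin" CPUs, so the pool grows automatically.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0)
        n = 1;   // the call may report 0 if the count is unknown

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(worker, i);
    for (std::thread &t : pool)
        t.join();
    return 0;
}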
 
MagnusDredd said:
Since I don't have access to a DevKit, I appreciate the info. The Hyperthreading display sounds like a dead giveaway - something you'd have to have a developer kit to know about.

So, almost certainly a fake for that reason alone.

I had noticed the icon. The icon can easily be explained by Apple not adding the ability for the dock icon to reflect more than one physical CPU. It's hard to say whether it'd display activity at all or show up as the screenshot would suggest, due to the lack of ability to track 4 CPUs.
On a DTK, if HT is enabled, the Activity Viewer's Dock icon shows two columns, just like a Dual-G4/G5.

MagnusDredd said:
Hmmm, I'll have to take another look at that.

I've not seen the GUI of an x86 machine running OS X, but I've logged into one. It wasn't a dev kit, though. An acquaintance I met in a chat room had me log into it via SSH. It was a Dell. So while I've seen output from system_profiler, it was via the command line.

Also, if I were going to do this and post screenshots, I'd use my VGA -> analog (TV) converter. It'd fuzz out much of the detail, but subtle (and not so subtle) watermarks would be trashed.

Thanks again for the heads up.

Glad to be of service... ;)
 
aegisdesign said:
<snip> if you're running an application that only has 2 threads then you're not going to see any advantage from a 4 CPU system.
Yes, you certainly will. The two threads that an app might have running concurrently are only for its own process. Those threads will "block" waiting for other things to happen on the system, both as part of the normal callouts into the kernel and support libraries and because of all the other processes (tools, apps, etc.) running at the same time. But the big advantage is for the Quartz Window Server (CoreGraphics), which performs the drawing, compositing, and display of everything you see on the screen.
aegisdesign said:
And now it's about to bite both Windows and Mac developers, as their inherently single-threaded or at best double-threaded applications will get beaten by applications that scale better to multi-CPU, multi-core setups.
Amen! My application happens to benefit from that exact scenario - and AltiVec, of course. BTW, no current or announced Intel CPU is going to beat the G5 in FP throughput. Poorly-written apps, simple apps, and those that use only integer registers are going to realize the biggest improvement over PPC. Multi-media apps will suffer with SSE, compared to AltiVec. Sad but true...
 
AltiVec guru said:
Poorly-written apps, simple apps, and those that use only integer registers are going to realize the biggest improvement over PPC. Multi-media apps will suffer with SSE, compared to AltiVec. Sad but true...

Simply wrong; the only problem is very FP-intensive applications.
Programs with complex control logic, like compilers, run much, much faster on a modern Intel system. And SSE3 gives you the same speed as AltiVec; while it may be a little bit harder to program, it is not slower. And normally only a very few people ever need to touch AltiVec at all (it's all about using optimized libraries).
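That "optimized libraries" point is the important one: on the Mac, most code gets its vector speed by calling something like Apple's Accelerate framework instead of writing AltiVec or SSE by hand, and the call looks identical on either architecture. A small sketch (build on a Mac with -framework Accelerate; the vectorized implementation lives inside the library):

#include <Accelerate/Accelerate.h>
#include <cstdio>

int main()
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    // Element-wise add via vDSP; AltiVec on PPC, SSE on x86 - the caller
    // never touches the vector ISA directly.
    vDSP_vadd(a, 1, b, 1, c, 1, 4);

    for (int i = 0; i < 4; ++i)
        std::printf("%g\n", c[i]);
    return 0;
}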

And I don't believe the screenshot is a fake; it is simply a Pentium D Extreme Edition running a cracked Mac OS X version. The only reason this is not usable for real-world work is that the motherboards the Pentium D EE runs on support only the 1024x768 VESA video mode.
 
me said:
I had noticed the icon. The icon can easily be explained by Apple not adding the ability for the dock icon to reflect more than one physical CPU.
AltiVec guru said:
On a DTK, if HT is enabled, the Activity Viewer's Dock icon shows two columns, just like a Dual-G4/G5.

Apologies, I mistyped; I meant to say more than two physical CPUs. What would happen to the icon if you threw 4 chips at it? Would it display activity at all, or would having 4 CPUs cause the part of the app that updates the icon to become non-functional due to unimplemented code? Would 8 CPUs crash the app, or screw with the icon's display?

I've administered OS X Server on dual-CPU boxes before, so I know what it looks like when you have a dual-processor system. At this point I'm not sure what would be displayed if you exceeded two chips. Problem is that to figure it out, you'd have to violate some licensing at the very least, and pirate, crack, and violate licensing on top of that at the worst. Well, you could shoot a guard, steal a dev kit, and run over a few nuns in the getaway car... That would be worse, but not likely, I think.

Sorry, I'm one of those "what if we did this?" type of guys. I have to screw with things. I have a CLI version of 10.2.6 that is less than 60MB as a compressed DMG. Including swap it weighs in at under 200MB of disk space when in use. I use it to reimage machines across the network.... That was one of my "what if we did this...???" things. While I tend to seriously break software while ripping it apart to figure it out, I generally can get a good idea of what can be done, regardless of whether the program was created to be able to handle what I want it to do or not. I'm very conservative about what I implement professionally, but in my personal time, I tinker with things...
 
MagnusDredd said:
Mathematica was written using Project Builder/Xcode...

Project Builder/Xcode was originally designed to be cross-platform....
On the Mac, it is. Mathematica is also sold for platforms that have nothing to do with NeXTStep - like Windows and Solaris.

Its portability comes from a lot more than just the choice of development environment.
MagnusDredd said:
Carbon is a completely different story. It's derived from the "Macintosh Toolbox", which is the programming interface that was developed for the original Mac.
Carbon was designed to use similar APIs, in order to make porting as easy as possible. But it is not the original toolbox. And it was designed to be portable across multiple platforms.
MagnusDredd said:
The Macintosh Toolbox was originally partially implemented in the ROM (hardware) of the machine.
Although these ROMs remained present for a very long time, Mac OS stopped actually using the ROM code sometime around System 7.

The toolbox was designed so that software could override firmware, to allow for updates. After several versions of the OS, nearly everything was replaced. Macs with the "new world" architecture don't have the Toolbox ROMs at all - Apple finally eliminated them after realizing that they aren't used for anything anyway.
MagnusDredd said:
A terrible practice, but at the time it was faster.
It was necessary at the time. Remember, they had to cram the entire OS and application software onto a 400K floppy disk, and it had to run in 128K of RAM.
MagnusDredd said:
The virtual memory system was unbelievably bad, the ability to keep programs from invading each other's space (usually crashing the machine) was non-existent, etc.
You're confusing your history. The original system software was single-tasking. Every app had the entire system to itself. MultiFinder was only introduced in System 6, as an optional component. It wasn't until System 7 that multitasking became standard. None of this has anything to do with the Toolbox code.
MagnusDredd said:
Carbon is basically 90% of the old Mac API (Application Programming Interface) ported to a new OS + some new instructions. 10% of the old Mac API was considered bad enough to be thrown out.
Don't discount that "ported to a new OS" bit. Although the function prototypes didn't change, the implementation was changed quite radically. The 10% that was thrown out was code where the external specification was tied to the hardware architecture - where the act of making the call portable would break apps anyway.

Carbon has been around for a very long time, and it is portable. Which is why Carbon apps can run natively on Mac OS 8 as well as the latest OS X. Porting an actual Carbon app to x86 should not be any more difficult than porting a Cocoa app, as long as the code is not using third-party toolkits (which, obviously, would have to be ported first).
MagnusDredd said:
The reason this is so hard for applications that have been around a long time is because, most of the time, programmers/companies don't replace the old code. They simply add to it.
An app that was "carbonized" no longer has any calls to the old Toolbox code. If it did, it wouldn't run in the Carbon environment.

I think you're confusing Classic with Carbon. It's easy to do, because the APIs are similar, but they are very different environments.
MagnusDredd said:
The point is that the more of the old code (Mac toolbox) still lurking in the recesses of a program, the harder it'll be to convert.
If any of this code is still being compiled in, the app won't compile for the Carbon environment at all.

Which is why Adobe took so long to make OS X versions of their apps. They had to carbonize everything - which they should've done back in the days of OS 8, but they weren't interested until market forces left them with no choice.

I stand by my assertion that Adobe will be one of the last to support x86, not because of technical issues, but because their management won't want to begin the process of porting code until after the platform is well-established in the marketplace.
 