andrebsd said:
Uhm, that technology already exists dude... www.pearpc.net

Edit: Forgot to note one thing... I use that for my main operating system (obviously Windows is behind it though), so it's pretty spiffy; I even moved the PC to a G3 case (even though it's emulating a G4 system).

You must have more patience than a basset hound, since PearPC is excruciatingly slow.
Wired spoke with the Developers of PearPC: "Biallas and Weyergraf warn PearPC is only a version 0.1 release and is still very experimental. By their admission, it is incomplete, unstable and painfully slow -- running about 500 times slower than the host system."

and

Despite the sluggish performance (one user estimated PearPC would need a 150-GHz PC to run OS X in real time) and a painfully convoluted installation procedure, the system is being enthusiastically embraced by curious geeks.

So, are you The Curious Geek? Or The Incessant Liar?
 
So all this does is emulate code written for other systems in a bubble so it can run on different hardware.


Why could they not just say effective multi-platform emulation? Confusion with the word, and the stigma that emulation is slow... big deal, so let's make up something else. It's still an emulator in a new package with a new name :rolleyes:


All this hype for nothing; one would believe you can run cross-platform applications without installing an OS and then opening an emulated OS with an emulated application.



Anyhow, great for OEMs, I guess.
 
adamjay said:
You must have more patience than a basset hound, since PearPC is excruciatingly slow.
Wired spoke with the Developers of PearPC: "Biallas and Weyergraf warn PearPC is only a version 0.1 release and is still very experimental. By their admission, it is incomplete, unstable and painfully slow -- running about 500 times slower than the host system."

and

Despite the sluggish performance (one user estimated PearPC would need a 150-GHz PC to run OS X in real time) and a painfully convoluted installation procedure, the system is being enthusiastically embraced by curious geeks.

So, are you The Curious Geek? Or The Incessant Liar?

No... because I'm running 0.4. 0.1 is older than dirt, and yeah, 0.1 is slower than dirt too... 0.4 is very much improved; most of the developers run it constantly... darkf0x for example, if you stop by the IRC chat room on Freenode.

So far, what the original developers said is being proven wrong by the newest builds. You may want to try the stuff before you decide that I'm full of ****; I've been with the PearPC developers since the day 0.1 came out.
 
andrebsd said:
No... because I'm running 0.4. 0.1 is older than dirt, and yeah, 0.1 is slower than dirt too... 0.4 is very much improved; most of the developers run it constantly... darkf0x for example, if you stop by the IRC chat room on Freenode.

So far, what the original developers said is being proven wrong by the newest builds. You may want to try the stuff before you decide that I'm full of ****; I've been with the PearPC developers since the day 0.1 came out.

How fast would you say the emulation is in 0.4?
 
Read the article:

Los Gatos, Calif.-based Transitive is pitching the technology as one that can bring new applications written on legacy hardware onto up-to-date platforms, such as IRS software that was written in the 1970s, according to Bob Wiederhold, the company's chief executive and president.​

and

On all four architectures, the QuickTransit technology can virtualize any mainframe OS, the company said. In addition, the Itanium, Opteron and X86 back-ends will virtualize the MIPS architecture. Both the Opteron and X86 products will also allow a virtualized POWER or PowerPC architecture to be run on it; likewise, a PowerPC chip can also run an X86-designed OS, such as Windows, on top of it.​

and especially

On average, translating the various instructions will result in about 80 percent of the computational performance of a native compilation, said Frank Weidel, lead solutions engineer at Transitive. The QuickTransit kernel also requires a memory penalty of about 25 percent per applications, Weidel said. The amount of memory an application uses for data is not affected. However, the multiple instances of the technology will run side-by-side; for example, the company has been unable to break the QuickTransit application running 200 instances of the technology alongside one another, he said.​

Say that you run two programs. That's a 20% performance hit on each one, plus a 25% memory hit. You're not going to be using this thing to run a whole operating system because that would mean hundreds of instances of emulated programs.
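
To put the quoted numbers in concrete terms (a back-of-the-envelope sketch; the 100 s runtime and 40 MB footprint below are invented purely for illustration):

```c
/* Back-of-the-envelope arithmetic using the figures quoted above:
 * ~80% of native computational speed and ~25% extra memory per translated
 * application. The 100 s / 40 MB baseline is invented for illustration. */
#include <stdio.h>

int main(void)
{
    double native_runtime_s = 100.0;                        /* job takes 100 s natively  */
    double translated_runtime_s = native_runtime_s / 0.80;  /* -> 125 s                  */

    double native_code_mb = 40.0;                           /* code footprint of one app */
    double translated_code_mb = native_code_mb * 1.25;      /* -> 50 MB                  */

    printf("runtime:   %.0f s translated vs %.0f s native\n",
           translated_runtime_s, native_runtime_s);
    printf("footprint: %.0f MB translated vs %.0f MB native\n",
           translated_code_mb, native_code_mb);
    return 0;
}
```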

Sorry to burst bubbles, but this doesn't mean Windows on PowerPC or OS X on x86. It does mean API compatibility the way that Wine/Darwine are trying to manage things, and I seriously doubt the claims of high performance for anything that isn't already a UNIX-derived application.
 
greenmonsterman said:
Are we sure this isn't an Early/Late April fool's joke? The CEO's name is Weinerhold after all...

The Wired article calls him Wiederhold.
 
Why don't Apple Computer sell Mac OS X for PCs at a premium and/or ship hardware with x86 inside, using this QuickTransit to run all the old PPC-based apps?
 
theBeatles said:
Why don't Apple Computer sell Mac OS X for PCs at a premium and/or ship hardware with x86 inside, using this QuickTransit to run all the old PPC-based apps?

That would not be logical. It is my understanding that this software is a lot like Classic, only instead of Mac OS 9 it's Windows.
 
Spock said:
That would not be logical. It is my understanding that this software is a lot like Classic, only instead of Mac OS 9 it's Windows.

Seems logical to me.

We all "know" (=suspect) that Apple Computer have Mac OS X running on x86 hardware deep in the Cupertino labs. What they need is a way of making everyones apps still work transparently. Come to think of it -- just like they did in the 68K to ppc switch days.
 
theBeatles said:
Seems logical to me.

We all "know" (=suspect) that Apple Computer have Mac OS X running on x86 hardware deep in the Cupertino labs. What they need is a way of making everyones apps still work transparently. Come to think of it -- just like they did in the 68K to ppc switch days.

I DON'T WANT AN INTEL INSIDE STICKER ON MY MACINTOSH!!!
 
Several messages...

BWhaler said:
People claim it does not allow OS X to run on PCs, for example, but rather lets programs run on any operating system.
So then how is it a hardware emulator? Don't programs make calls to the operating system, not the hardware layer? (I know there are exceptions, but generally speaking.)

Well, the only part of the hardware that is actually emulated is the processor; everything else is done by mapping system calls to the native API. Thus QuickTransit can only run applications, not operating systems.
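
Just to make that concrete, here's a rough sketch of what such an OS-call mapper looks like in principle - all the names and syscall numbers below are invented for illustration, this isn't anything from Transitive's actual code:

```c
/* Hypothetical sketch of an OS-call mapper: the translated (guest) code traps
 * into the runtime, which forwards the request to the equivalent native call.
 * Function names and syscall numbers are made up for illustration. */
#include <unistd.h>
#include <sys/types.h>

/* Syscall numbers as they appear in the guest binary (e.g. a Linux/PPC app). */
enum guest_syscall { GUEST_SYS_READ = 3, GUEST_SYS_WRITE = 4 };

/* Called by the translated code whenever it hits a system-call instruction. */
long map_guest_syscall(enum guest_syscall nr, long a0, long a1, long a2)
{
    switch (nr) {
    case GUEST_SYS_READ:
        /* Arguments are already in host width/byte order after translation. */
        return read((int)a0, (void *)a1, (size_t)a2);
    case GUEST_SYS_WRITE:
        return write((int)a0, (const void *)a1, (size_t)a2);
    default:
        return -1;  /* anything unmapped fails: applications only, no whole OS */
    }
}
```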

As for my more general reaction, if this lets me run any Windows app on OS X at similar speeds, but does not let OS X run on Wintel, I think this would be a HUGE boost for Apple. Fingers crossed...

Currently this seems to be for Unix/Linux-type operating systems, most likely due to their similar APIs. This means no Windows applications will run, because there is no API mapper for them.

shamino said:
The only production example of this concept (an intermediate assembly language) that I'm aware of is the GNU assembler (part of their binutils package). It's my understanding that the gcc compiler works by generating an intermediate meta-assembly language, which is then assembled into native code using this assembler.

Well, there is also the virtual processor (VP2) used by the TAO Group to dynamically generate code even on hybrid multi-processors, which the post preceding yours mentions. I almost mentioned it myself, but then I noticed a subtle but important difference: with VP2 and binutils the source is a high-level language, while in this case the source is machine code.

I suppose you could apply similar techniques to compile one processor's assembly language into another.

Not really. Even if you take very basic operations you'll find subtle but problematic differences between various instruction sets.
Some processors have all arithmetic operations generate condition flags, others do it optionally (e.g. SPARC, PowerPC, ARM), and some don't have any condition flags at all (e.g. MIPS, Alpha).
Extended subtraction with carry is handled in two ways, either as carry or as borrow (the x86 mnemonic is even SBB, subtract with borrow).
Some RISC architectures have no rotation at all (e.g. MIPS, Alpha), and at least one has no separate shift and rotation instructions but does rotation in combination with another instruction (ARM).
I already mentioned that division is particularly diverse:
Alpha and ARM have no integer division instruction at all;
MIPS and SuperH perform that calculation in a special unit and have to fetch the result and the remainder from HI/LO registers;
PowerPC doesn't calculate the remainder and has no separate remainder instruction, so the remainder has to be calculated with a multiplication and subtraction following the division;
PA-RISC and SuperH calculate division in steps, i.e. they have to use an instruction for each bit of the result (IIRC).
And last but not least of these few examples, PA-RISC and x86 are the only current architectures I'm aware of having BCD capabilities.
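
To make a couple of those cases concrete, here's a rough sketch (my own illustration, with an invented register-state struct, not anything from a real translator) of what a single "simple" source instruction turns into on a target that lacks it:

```c
/* Illustration only: two "simple" source instructions that blow up on targets
 * without a direct equivalent. The register-state struct is invented. */
#include <stdint.h>

struct cpu_state {
    uint32_t r[32];
    int carry;              /* explicit carry/borrow kept alongside the registers */
};

/* x86 "SBB dst, src": subtract with borrow, which also produces the next borrow. */
void emul_sbb(struct cpu_state *s, int dst, int src)
{
    uint64_t a = s->r[dst];
    uint64_t b = (uint64_t)s->r[src] + (s->carry ? 1 : 0);
    s->carry  = a < b;                      /* borrow out */
    s->r[dst] = (uint32_t)(a - b);
}

/* PowerPC has divwu but no remainder instruction: a source "remainder" op
 * becomes a divide, a multiply and a subtract (assuming divisor != 0). */
void emul_rem(struct cpu_state *s, int rd, int ra, int rb)
{
    uint32_t q = s->r[ra] / s->r[rb];       /* divwu */
    s->r[rd]   = s->r[ra] - q * s->r[rb];   /* mullw + subf */
}
```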

As I wrote before, there are two extremes to handle that:
Either you add every feature to the intermediate representation, thus making it so bloated that it'll be a nightmare to retarget the native code generator to another architecture, because you have to support all those features in the backend.
Or you make the intermediate representation so basic that a lot of simple source instructions end up as big sequences in the intermediate step, thereby making optimization a real nightmare.
Of course there might be the possibility of a compromise, but frankly, I haven't found a decent one after analysing over a dozen architectures in search of it. Well, maybe those guys are a whole lot brighter than I am, who knows?

I'm not saying it's impossible to do retargetable dynamic binary translation, but I don't think they are able to do it AND achieve 80% of the native performance as well.

MadMan said:
According to this, they claim it can run OS X on x86?!?!

http://www.extremetech.com/article2/0,1558,1645408,00.asp

Either the claim "Company executives said the technology could be used to run the Apple Macintosh OS on top of an X86 processor." is plain wrong, or the author of the article didn't understand them correctly.

Taken from the official website:
"First, an integration “FUSE” allows QuickTransit to be easily integrated into the target system. Second, a dynamic binary translator tackles the challenge of moving from one instruction set architecture to another. Third, an operating system mapper translates operating system calls from the source system to the target system in situations where the source and target operating systems are different."
http://www.transitive.com/technology.htm

If this information is correct, it means that they cannot run MacOS with QuickTransit, because you have to emulate more than just the CPU to run a full operating system.
Of course they COULD run MacOS applications, IF they had API mappers for Cocoa, Carbon, and QuickTime, which I doubt. Which leaves Darwin applications, and those can run on an x86 processor anyway...

BTW, the article also mentions that "likewise, a PowerPC chip can also run an X86-designed OS, such as Windows, on top of it".
But it CANNOT, because according to the Transitive technology page "QuickTransit supports operating system mapping between any two Unix/Linux-like operating systems, as well as mapping between mainframe and any Unix/Linux-like operating systems." Since Windows doesn't belong to that group of operating systems, no chance.
Of course they COULD run Windows applications on a PowerPC machine, IF they had mappers for the Windows API, but similar to MacOS that's a big "if".
 
theBeatles said:
Why don't Apple Computer sell Mac OS X for PCs at a premium and/or ship hardware with x86 inside, using this QuickTransit to run all the old PPC-based apps?
The question is why would Apple be interested in Intel? Or rather, why now when they weren't before?

I think any movement towards Intel would have to be part of an overall strategy - whether that is to switch platforms, license OS X, or some "cross platform" strategy. The Transitive compatibility would be a tool that could assist any of those strategies.

In terms of switching - maybe Intel hardware would be cheaper for Apple to produce at the same speed? Virtual PC would certainly be much faster (if MS released the product). But I think Apple is quite happy with IBM right now.

Licensing OS X? Maybe. Apple burned a lot of computer makers last time though. I guess if Apple licensed MacOS X for Intel the PC makers wouldn't have to take a risk as they could always switch back to Windows. Still, unless Apple switched to Intel too this would be a cross platform strategy.

So - what about cross platform? Releasing mac OS X for all Intel hardware is too difficult. So something less risky - just their own machines, OR working with HP/Compaq? or IBM? or just AMD/HyperTransport machines? I guess I just don't see Apple making enough money from doing it. However... iLife for Windows (sold or bundled on Compaqs)?, maybe a new Appleworks?, Safari, iChat, iCal, .Mac (all Windows versions... and/or Linux?) - they seem far easier, and more likely to make some money. And they don't need Transitive (or Mac OS X on Intel). And Apple doesn't seem interested in them either.

Cocoa + Transitive??
If Apple really wanted to be able to run Mac software on Intel without recompiling - a combination of Transitive's stuff and Cocoa running on Windows and Linux could allow developers to write one app for all 3 platforms. That MIGHT get developers interested. But Apple has had Cocoa for a while and restricted it to Mac OS X (though the technology is there to write a program in Cocoa, compile for Mac, Unix, and Windows) - has their goal changed? Cocoa allows fat binaries for multiple platforms too, so Transitive isn't needed (though if Apple wanted cross platform stuff, the transitive technology might be a good interim step so that existing cocoa apps run straight away on Windows and Linux.... but again, how useful is that?). It comes back to what does Apple want....

Apple on Intel - an interesting question with lots of possibilities, bogged down by Apple's lack of interest.
 
KingOfPain said:
Not really. Even if you take very basic operations you'll find subtle but problematic differences between various instruction sets. ...
I'm not saying it's impossible to do retargetable dynamic binary translation, but I don't think they are able to do it AND achieve 80% of the native performance as well.
You're assuming that you convert the machine language opcode-for-opcode.

Suppose you treat a source machine language as a high-level language.

For example, turn each source opcode into a sequence of statements in a C program. Then take the resulting (huge and probably impossible-to-understand) program and compile it into your native code with a good off-the-shelf optimizing C compiler.
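
Something like this, purely as an illustration (the struct and function names are invented): a tiny PowerPC basic block emitted as C against a register-file struct, which an ordinary optimizing compiler can then chew on:

```c
/* Sketch of "machine code as a high-level language": each source opcode is
 * emitted as a C statement against a register-file struct, and the generated
 * file is handed to an ordinary optimizing C compiler. Names are invented. */
#include <stdint.h>

struct ppc_state { uint32_t gpr[32]; };

/* What the generated C for a tiny PowerPC basic block might look like:
 *     addi r3, r3, 1
 *     add  r4, r4, r3
 *     slwi r4, r4, 2
 */
void block_0x1a2c(struct ppc_state *s)
{
    s->gpr[3] = s->gpr[3] + 1;              /* addi r3, r3, 1  */
    s->gpr[4] = s->gpr[4] + s->gpr[3];      /* add  r4, r4, r3 */
    s->gpr[4] = s->gpr[4] << 2;             /* slwi r4, r4, 2  */
}
```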

Obviously, this off-the-cuff idea isn't completely thought through, but I think such an implementation would not be impossibly complex, as your descriptions seem to imply. The front-end would be entirely portable code and the back-end would be a generic C compiler. If it has a good optimizer, and the C code you generate is of a form that's easy to optimize, I think you could get performance good enough to be useful.

I don't think a machine-language-to-machine-language compiler should be substantially more complex than a Java just-in-time compiler - of which there are many commercial implementations. You'd need a separate front-end for each source language, but all the other components have already been invented.
 
Your idea is perfectly sensible, shamino. I've never understood how one could map machine instructions directly, since even minor differences between architectures can prevent instructions from having identical semantics (behavior with all possible arguments). But the method you describe is still basically an emulator, with potentially many instructions for each original mapped instruction. Getting 80% of native performance that way seems to me to be a tough goal to meet.
 
adamjay said:
How fast would you say the emulation is in 0.4?

I have a 600MHz iBook (only 128MB RAM though)... programs load at about the same speed, but the video still lags in PearPC, which is kinda irritating sometimes... The host PC is 2.4GHz with 512MB RAM.

Once the NVIDIA stuff is finished I'll be glad... then video shouldn't lag :) Currently it just runs for a couple of minutes and takes a dump... so yeah.

Edit: Figured I'd say that OS X 10.3 recognises it as a 1GHz G4 with 512MB RAM... but the timing is still off, so it's running at about half that... 500MHz with 512MB RAM (which explains why it's running like my 600MHz iBook with 128MB RAM).

And check below for one of my screenshots (this is 0.3 actually, when it was just bearable as the main system, lol). I don't take all that many, since well, my desktop isn't all that interesting... (bunch more on pearpc.net showing different speeds and programs running) [and no, in case you're wondering, VPC does not start the guest system... that was just to see if it could do anything at all]

http://pearpc.net/images/screenshots/sotw_33-2004.jpg
 
GregA said:
In terms of switching - maybe Intel hardware would be cheaper for Apple to produce at the same speed? Virtual PC would certainly be much faster (if MS released the product). But I think Apple is quite happy with IBM right now.

Apple wouldn't be producing "Intel hardware." Most likely, they'd be buying third-party parts, which is what would provide any savings in cost. That being said, it would become a support nightmare when people try to put off-the-shelf parts in and they don't work. One reason that Windows is so horrible is that they have to support an infinitude of vendors that may or may not code to standards, meet specs, or have a user-friendly policy.

If you want that, just buy a PC. It's not what the mac is about.

Licensing OS X? Maybe. Apple burned a lot of computer makers last time though. I guess if Apple licensed MacOS X for Intel the PC makers wouldn't have to take a risk as they could always switch back to Windows. Still, unless Apple switched to Intel too this would be a cross platform strategy.

Wrong. Wrong, wrong, wrong.

Apple "burned" the clone manufacturers because they lied about their intentions. The original intent was for Apple to surrender the low end to people who could - potentially - build in that market more effectively, allowing people who couldn't otherwise afford macs to buy into the platform. What actually happened is that they started to take bigger and bigger bites out of the Apple market without growing the overall share, which meant that Apple started to positively hemorrhage money by the time Jobs axed the project on his return. Figures show that they were losing as much as 32-35% of their quarterly profits to clones, while not gaining in market share any more than they did on their own.

Since Apple makes most of their money on hardware and not on software, that was a horrible, losing proposition, and they did what was best for the platform and the company. There's a good reason that this hasn't been tried again since then, and why the OS X EULA specifically states that only Apple-branded hardware may be used.

So - what about cross platform? Releasing mac OS X for all Intel hardware is too difficult. So something less risky - just their own machines, OR working with HP/Compaq? or IBM? or just AMD/HyperTransport machines? I guess I just don't see Apple making enough money from doing it.

OS X for Intel, or a truly effective emulator for PowerPC, would basically kill the Mac experience as we know and love it. For one, there's next to no reason or incentive for distributors to recreate their products for yet another set of APIs and system instructions, so a native OS X on x86 would just see programmers say "Sorry, but you just need to dual boot." Likewise, a fully functional emulator on the current hardware would give an easy out to mostly PC-oriented companies, who could tell you just to buy their Windows product rather than building a native Mac version.

Maybe the Transitive technology could be applied to alleviate some of the performance issues, but the simple fact is that native code is going to be faster. This doesn't even begin to take into account things like UI and interoperability with existing software.

However... iLife for Windows (sold or bundled on Compaqs)?, maybe a new Appleworks?, Safari, iChat, iCal, .Mac (all Windows versions... and/or Linux?) - they seem far easier, and more likely to make some money. And they don't need Transitive (or Mac OS X on Intel). And Apple doesn't seem interested in them either.

In a word, no.

The iLife suite is one of Apple's big reasons to switch. If they give it away, or even sell it, on the x86 platform, they lose even more impetus when it comes to getting people to move away from Windows. The iPod is bait, iLife the lure, and the rest is just reeling in the bites that you get.

Cocoa + Transitive??
If Apple really wanted to be able to run Mac software on Intel without recompiling - a combination of Transitive's stuff and Cocoa running on Windows and Linux could allow developers to write one app for all 3 platforms.

See above. OS X becomes Windows Lite.

Apple on Intel - an interesting question with lots of possibilities, bogged down by Apple's lack of interest.

Let me sum up most of the realistic possibilities:
Apple goes out of business or becomes the next Amiga, whereupon all the amazing stuff stops happening.
 
Ohhhhh, I just realized how they're doing this. No wonder no one else has done it, there would be actual work involved in writing it.
 
Maybe a little bit too quickly!!!

If you read the corresponding announcement from Transitive Corp about QuickTransit more carefully, you'll see that this product is only for UNIX-to-UNIX translation!!!

Too bad :rolleyes:

Still only the M$oft solution (VPC) :mad:
 
shamino said:
You're assuming that you convert the machine language opcode-for-opcode.

Actually I'm not. When it comes to dynamic binary translation I'm thinking of so-called basic blocks, which is a term normally used in compiler optimization.
I'm just mentioning instructions to make clear that it isn't so simple to create a general intermediate representation.
BTW, I'm not against intermediate representations; I still think that such an additional step is needed to produce a high-quality translation. But after my own experience trying to come up with a general solution, I'm now more in favour of a predecoded intermediate code, i.e. the intermediate form still represents the original code, but in a predecoded way that makes peephole optimization etc. easier while also making it possible to translate the source code to the target machine more or less directly.
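
Roughly what I mean by "predecoded" - a sketch with invented field names, not any particular product's format - is that the instruction keeps its original identity, it just carries its already-decoded fields so later passes don't have to re-parse it:

```c
/* Sketch of a predecoded intermediate form: the instruction is not lowered to
 * a generic synthetic IR, it just carries its already-decoded fields so that
 * peephole passes and the back-end can work on it directly. All names invented. */
#include <stdint.h>

enum src_op { OP_ADD, OP_SUB, OP_LOAD32, OP_STORE32, OP_BRANCH /* ... */ };

struct predecoded_insn {
    enum src_op op;          /* the original operation, not a synthetic one       */
    uint8_t     rd, ra, rb;  /* decoded register fields                           */
    int32_t     imm;         /* sign-extended immediate, if any                   */
    uint8_t     sets_flags;  /* does this instruction update condition bits?      */
    uint32_t    src_pc;      /* address in the source binary (for block chaining) */
};

/* A basic block is then just a run of these plus its entry address. */
struct predecoded_block {
    uint32_t entry_pc;
    int ninsns;
    struct predecoded_insn insns[64];
};
```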

Suppose you treat a source machine language as a high-level language.
For example, turn each source opcode into a sequence of statements in a C program.

You are assuming that all applications are written 100% in a high-level language, which simply isn't the case!
There is no easy way to represent rotations in a high-level language (I guess Occam might be an exception, but I'm not sure), and there are some other issues as well.
If there is hand-optimized MMX, SSE, or AltiVec code, your idea runs in circles, because representing SIMD operations isn't really what most high-level languages have been designed for.
Yes, there are so-called "decompilers", but they often produce horrible code, and often they cheat by knowing that certain compilers use specific "covers" (i.e. code sequences) for certain expressions. As soon as you have code generated by a different compiler or even by hand, those tools have real problems.
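
Take the rotation case as an example (just to illustrate the point; a real translator would do this in its IR rather than in C source, and the names here are made up): one source rotate becomes several C operations, and the flag it sets has to be modelled on the side:

```c
/* Illustration of the rotation point: C has no rotate operator, so one source
 * instruction becomes several C operations, and any flag the source
 * architecture sets has to be modelled separately. */
#include <stdint.h>

uint32_t rotl32(uint32_t x, unsigned n, int *carry_out)
{
    n &= 31;
    if (n == 0)
        return x;                              /* count 0: value and flag untouched */
    uint32_t r = (x << n) | (x >> (32 - n));
    *carry_out = r & 1;                        /* x86 ROL: CF gets the bit rotated out */
    return r;
}
```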

Then take the resulting (huge and probably impossible-to-understand) program and compile it into your native code with a good off-the-shelf optimizing C compiler.

One thing you are forgetting is that apart from producing relatively good code, the main task of a dynamic binary translator is to do the translation as fast as possible. That is not to say that translating fast is more important than producing efficient code; sure, people will always want the result of the translation to run faster, but if execution is more like stop and go because the translator can't keep up, they'll certainly complain.
Remember, dynamic binary translation means that this process happens during the runtime of the application being translated, and the approach you propose probably doesn't work too well in that case.

Obviously, this off-the-cuff idea isn't completely thought through, but I think such an implementation would not be impossibly complex, as your descriptions seem to imply. The front-end would be entirely portable code and the back-end would be a generic C compiler. If it has a good optimizer, and the C code you generate is of a form that's easy to optimize, I think you could get performance good enough to be useful.

I don't think it makes much sense to bring flat code into a structured form just to make it flat again. Also, compilers sometimes care a lot about how the structures are built; every programmer who knows a little bit about compilers will confirm that.
To take an example from a different field: in some cases changing the way the parentheses are set in an SQL statement has made the difference between a few minutes and a few hours. Of course SQL operates on a lot of data, so the results of bad structuring are much more severe, but this should give you an idea that choosing the right structure isn't always that simple.

I don't think a machine-language-to-machine-language compiler should be substantially more complex than a Java just-in-time compiler

Like I said before, there is a big difference between dealing with synthetic code and dealing with real machine code; the possibility of side effects (especially when instructions set condition flags implicitly) is just one of them.

- of which there are many commercial implementations.

I probably could speak volumes about why the JVM has a very bad design for dynamic translation, but I'll mention just a few things:

* Making the JVM a stack machine when basically all current processors are universal register machines (with the exception of IA-32, which still feels like an extended accumulator machine) is just a bad idea, especially since that way the rather costly register allocation isn't done at compile time (i.e. from Java to byte-code) but rather at runtime (in the JIT, from byte-code to machine code).
Since decent register allocation is so costly, there are few JVM implementations that do it right, and this is also the reason why JVMs run faster on x86: that architecture has been optimized to run fast with few registers, unlike RISC architectures, which rely on registers being used effectively.

* I mentioned basic blocks in the beginning and that these are important for compiler optimization. The problem is that the Java compiler knows the location of the basic blocks, because it generates the byte-code for them, but it doesn't mark these blocks in the byte-code, which means that the JIT compiler has to find the basic blocks again at runtime before it can optimize the code. Sun's HotSpot technology is to a large part just an attempt to reduce the damage of that mistake.
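
To illustrate the stack-machine point (everything below is schematic and invented, not real JVM bytecode handling): the same expression d = a + b * c evaluated the stack way, where nothing in the code says which host register holds what, versus the register-style form a JIT has to recover first:

```c
/* Schematic contrast between the two models; the "bytecode" and the tiny stack
 * interpreter are invented for illustration, this is not real JVM handling. */
#include <stdio.h>

enum op { ILOAD_A, ILOAD_B, ILOAD_C, IMUL, IADD, END };

/* Stack-machine evaluation of d = a + b * c: every value moves through an
 * operand stack, and nothing in the code says which host register to use. */
int eval_stack(int a, int b, int c)
{
    enum op code[] = { ILOAD_A, ILOAD_B, ILOAD_C, IMUL, IADD, END };
    int stack[8], sp = 0;
    for (int i = 0; code[i] != END; i++) {
        switch (code[i]) {
        case ILOAD_A: stack[sp++] = a; break;
        case ILOAD_B: stack[sp++] = b; break;
        case ILOAD_C: stack[sp++] = c; break;
        case IMUL:    sp--; stack[sp - 1] *= stack[sp]; break;
        case IADD:    sp--; stack[sp - 1] += stack[sp]; break;
        default: break;
        }
    }
    return stack[0];
}

/* Register-style form: values live in named temporaries, which is what a JIT
 * has to recover (along with the basic-block boundaries) before it can map
 * them onto host registers. */
int eval_registers(int a, int b, int c)
{
    int t1 = b * c;
    return a + t1;
}

int main(void)
{
    printf("%d %d\n", eval_stack(2, 3, 4), eval_registers(2, 3, 4));
    return 0;
}
```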

You'd need a separate front-end for each source language, but all the other components have already been invented.

Well, try it and prove me wrong. Until then I won't believe that this method is fast enough to run transparently to the user (i.e. without slow-downs during runtime) and that it also works on real-life code and not just toy applications generated by one well-analysed compiler.
 
gekko513 said:
Me, too ... Have these people implemented mappers for the whole Windows API, the whole Cocoa, CoreImage, Quicktime, Carbon ++ and the whole GTK-whatever that is used in Gnome?? That doesn't seem feasible.

I'm guessing it's severely limited when it comes to GUI applications. The Quake example from Linux to Mac was probably only possible because it uses just GLU/GLUT, which exist on both platforms anyway.

If this makes it possible to run iPhoto, iMovie and iDVD on Windows XP, I'll be very, very surprised.


It makes it possible to run Safari on Win XP.
 
Hello thatwendigo,
I'm not sure if you're responding to me or what... but maybe you missed me saying "I think Apple is quite happy with IBM right now". I also wasn't arguing why Apple ended their clone deals, just that they did, which would affect ever getting involved again (I was responding to theBeatles's comment, maybe you were too).

I also agree that Transitive's translation code would still not be fast enough for a big change. If Apple ever decided to do OS X on Intel (or similar), it'll be as part of some large strategic move; Transitive would be, at most, an interesting tool to help.

GregA said:
However... iLife for Windows (sold or bundled on Compaqs)?, maybe a new Appleworks?, Safari, iChat, iCal, .Mac (all Windows versions... and/or Linux?) - they seem far easier, and more likely to make some money. And they don't need Transitive (or Mac OS X on Intel). And Apple doesn't seem interested in them either.
thatwendigo said:
The iLife suite is one of Apple's big reasons to switch. If they give it away, or even sell it, on the x86 platform, they lose even more impetus when it comes to getting people to move away from Windows. The iPod is bait, iLife the lure, and the rest is just reeling in the bites that you get.
That is one argument. I still think IF Apple decided to make more money from Intel, they're better off using the existing Intel OSes than releasing OSX for Intel. You may disagree of course.

GregA said:
Cocoa + Transitive??
If Apple really wanted to be able to run Mac software on Intel without recompiling - a combination of Transitive's stuff and Cocoa running on Windows and Linux could allow developers to write one app for all 3 platforms.
thatwendigo said:
See above. OS X becomes Windows Lite.
Can't see how Apple releasing a quality programming environment allowing a developer to compile for Mac, Linux, & Windows would turn OS X into Windows Lite. It may make many OS X programs become available for Windows and Linux with little effort - whether that brings new developers to Xcode, or means people just buy Windows to run a Cocoa app, is hard to say. I think it's more in Apple's favour, and I would love to see Cocoa for Windows and Linux.
 
GregA said:
Hello thatwendigo,
I'm not sure if you're responding to me or what... but maybe you missed me saying "I think Apple is quite happy with IBM right now". I also wasn't arguing why Apple ended their clone deals, just that they did, which would affect ever getting involved again (I was responding to theBeatles's comment, maybe you were too).

I quoted you, so I was responding to you. Even if you said that you thought Apple was happy with IBM, you said a number of things I consider to be remarkably short-sighted or to lack a grasp of Apple's strengths and weaknesses.

The reason that OS X is as amazing as it is has a huge amount to do with the fact that Apple controls the hardware platform to the largest extent possible while still allowing some choice. Many of the things we take for granted are precisely because of that tight integration, something that you just can't do without a lot of specialized coding (or at all, more likely) on the Windows side of things. Cloning (and the issues that come with it) cheapened the Apple brand in a bad way, brought in the headache of third-party drivers, cut Apple's margins and profits, and otherwise hurt the company. If you want to see what happens when you go down that road, look at eMachines and Dell.

That is one argument. I still think IF Apple decided to make more money from Intel, they're better off using the existing Intel OSes than releasing OSX for Intel. You may disagree of course.

:confused:

I never advocated Apple even developing applications for Windows, I hope you realize. I think that having iTunes on PCs is a mistake. I think that iLife on Windows would be an even bigger one, since it would gouge another of the flashy things that catch the punter's attention. When you're an underdog like Apple and competing against a technologically inferior product like Windows, sometimes image and entrenched acceptance does more than the value of your system.

Macs have far fewer security issues, lower TCO, and tend to hold their value longer. They run most software a home user could ever really need, do it fast enough that people won't really be hindered, and generally break most of the complaints that used to apply.

There is less reason to cross from mac to PC now than there has been in a decade.

Can't see how Apple releasing a quality programming environment allowing a developer to compile for Mac, Linux, & Windows would turn OS X into Windows Lite. It may make many OS X programs become available for Windows and Linux with little effort - whether that brings new developers to Xcode, or means people just buy Windows to run a Cocoa app, is hard to say. I think it's more in Apple's favour, and I would love to see Cocoa for Windows and Linux.

Then you're not thinking at all about what this will mean for the UI and other concerns of software design. What reason would a programmer have to stick to the Apple strictures when he can just do what he knows, slam out a Windows-style interface, and call it a day? It's the same problem with a truly effective cross-platform emulator. If the mac can run Windows software, there is next to no reason for a developer to code a separate version that would be native and behave properly.

Is it clear now?
 
the_mole1314 said:
*vaporware*cough*vaporware*

In Solaris 10, Sun is touting its ability to run Linux applications natively at near-native speeds.

Transitive also says that one OEM will be releasing a product in 2004 and more in 2005. Solaris 10 will be out fourth quarter 2004. I think Sun is that OEM.
 
thatwendigo,
Your opinion is clear, mine is just different. True, perhaps one of us is short-sighted, whatever. While I agree that SOME of Apple's quality comes from its control over hardware and software, I believe that Apple should partner and work more often with others, to Apple's benefit (and ours). Opinion only - and boring for other readers, so I'll leave it there.
 