Here are a few possible theories I have been mulling over.

Apple has already started eliminating 32-bit applications and OS-level 32-bit support in iOS. A future custom ARM processor could in turn drop 32-bit support entirely, freeing up some of the all-important silicon (the cost of manufacturing chips goes up with die size, and yield drops more or less in lock step, on a roughly exponential scale).

The core OS is shared between iOS and macOS, though I am not sure whether they branched from each other long ago and merge changes back and forth, or whether there is one trunk with custom deviations for each platform. If they drop 32-bit support on ARM, it would be logical to drop it from macOS as well to keep the two reasonably close.

Another possibility is that they plan to run a mix of ARM and Intel processors (ARM at the lower end; Intel at the upper end). Part of this is already in place: apps submitted to the App Store can, I believe, include bitcode, which can be "optimized" -- in this case compiled down to an ARM or Intel binary -- before download (an alternative to fat binaries, along with app thinning). Applications distributed directly might still cause confusion, but it is possible they could change that distribution model to fat binaries and/or bitcode that is translated to ARM or Intel at installation.

Now, if they have eliminated 32-bit support on ARM but not on Intel, this would make transparent support for ARM vs. Intel impossible.
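For 64-bit-only code, though, the "transparent" part is at least plausible at the source level: the same Swift (or C/Objective-C) source can be compiled for either architecture, with any architecture-specific bits isolated behind conditional compilation. A minimal, hypothetical sketch (nothing Apple has announced, just standard Swift):

```swift
// Hypothetical sketch: one source file, two possible 64-bit targets.
// The compiler keeps whichever branch matches the architecture it is
// generating code for; a fat binary would simply contain both results.
#if arch(x86_64)
let targetDescription = "Intel 64-bit (x86_64)"
#elseif arch(arm64)
let targetDescription = "ARM 64-bit (arm64)"
#else
let targetDescription = "some other architecture"
#endif

print("Compiled for \(targetDescription)")
```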

Obviously they would be unwilling to divulge this sort of strategic information... so they would have to come up with some sort of vague storyline.

Interesting to note: some reports I have seen say the iPad Pro's A10X can hit a Geekbench score of around 9100, which is on par with the top of the MacBook Pro 13" (dual-core) lineup... (a 5-7 watt chip vs. a 15 watt chip, I believe, with the associated thermals).

-----

One 32-bit app, at least, is no longer a problem. MS Office 2011 will not work or be supported under High Sierra (according to Microsoft). The problem apparently originates within Microsoft's Xamarin middleware layer. Microsoft has had a habit of straying from Apple's guidelines in the past... so it's not that surprising.
Obfuscating what hardware you are running on would potentially put a serious dent in the budding VR / (high-end) gaming market that Apple seems finally willing to court. I would be curious whether things like SteamVR and the HTC Vive would even work on ARM processors, let alone all of the currently available Mac Steam games.
 
Obfuscating what hardware you are running on would potentially put a serious dent in the budding VR / (high-end) gaming market that Apple seems finally willing to court. I would be curious whether things like SteamVR and the HTC Vive would even work on ARM processors, let alone all of the currently available Mac Steam games.
True, but then the only Mac that supports VR out of the box is the top model of the iMac.

If they went through the normal compiler (clang) and used the macOS guidelines and APIs... recompiling the software for an ARM computer should be no problem (assuming they did not do something like embed Intel assembly language). The outstanding questions are drivers -- are any custom ones necessary, what were they written in, or are they just standard display and input drivers -- and whether the hardware is powerful enough (graphics more than CPU, but CPU is still a factor).

Other (upper-end) MacBook Pros will support it only with a bulky external GPU (eGPU).

The MacBook -- not even close.

The vast majority of Windows hardware does not support VR either, and getting it to means major expenditures on higher-end GPUs, and probably not the cheapest Intel CPUs.

The budding "VR support on the Mac" story is all aimed at content creators... and that is the end of the lineup that would not be on an ARM processor (at this time). I get the feeling it is currently about previewing output from Final Cut Pro (etc.). For VR to become mass market will take a long time and rather large improvements in CPU/GPU power, so that you are not tethered to huge external boxes. It might be why they decided to drop Imagination Technologies and bring GPU design in-house. It will come down to the size of the installed base that is capable of VR... and right now that is a very, very small niche of gaming hardware. In other words, the technology is still in its infancy, and it will likely be the better part of a decade before we start seeing adoption worth noting.

In other words, the hardware that is going to "win" the VR battle.... does not yet exist....
 
MS Office 2011 will not work or be supported under High Sierra (according to Microsoft). The problem apparently originates within Microsoft's Xamarin middleware layer. Microsoft has had a habit of straying from Apple's guidelines in the past... so it's not that surprising.
Microsoft had long planned to end support for Office 2011 in October 2017. While I'm sure they figured a new version of macOS would be shipping around then, it doesn't necessarily have anything to do with the phasing out of 32-bit apps.
 
The x86-64 processors used in the Mac don't execute Swift code. They execute x86-64 binaries, which were compiled from C, C++, Objective-C, Swift, x86/x86-64 assembly language, etc. Rewriting their frameworks to take advantage of the benefits of Swift (benefits they could also add to their other compilers, by the way) doesn't mean the frameworks will only run on 64-bit processors. It just requires setting the compiler switch to generate 32-bit binaries, which the build settings for the frameworks obviously already do.
No, it doesn't. Swift is not compatible with the legacy (32-bit) Objective-C runtime, so Swift is unavailable for i386 on macOS. Again: 32-bit i386 cannot be a compilation target for Swift without legacy runtime support.
 
You're joking, right? Machines with things like Windows 7, and OS X 10.9... they shouldn't be connected to the internet, thus have a good ROI? Did you read what you wrote??
If, for example, you are a small company that uses 10-year-old CNC equipment running on Windows 7 or even XP, and it still works, then yes, your return on investment was great. The equipment is air-gapped and therefore pretty secure.

I've seen plenty of non-internet-connected equipment using older versions of Windows. You can do the same with older versions of any OS, again, as long as the equipment is not connected to the internet.

The original issue I was responding to was a statement that they had professional equipment that is not normally directly connected to the internet, happily running an older application on an older version of macOS. I questioned (1) why that equipment needed to be updated at all, and (2) why they felt that Apple dropping the 32-bit architecture from a future version of its OS affected them.

I'm not sure you have read the entire thread of conversation I was having.
 
You do realize that the "historic battle" of RISC vs. CISC is largely over, and RISC won? Even the x86-64 CISC instruction set is translated down to RISC-like micro-operations before execution (basically an on-CPU compatibility layer of sorts). Having more instructions (a bloated instruction set) does not automatically translate to being more advanced.

Now it has been almost 30 years since I last wrote any assembly code directly... in those days it was the 8088/8086/80x86, before the move to RISC-style chip architectures, and the "FLOPs" (floating point operations) actually executed in a math co-processor (the 80x87) while the main processor was integer-only. Oh, and the more "complex" instructions took up to 12 clock cycles, versus 1 clock cycle for a RISC chip (or less, when parallel execution was taking place). On the Mac there is now a standard library to use if you are doing lots of FLOPs: the Metal 2 compute library, which uses the GPU to do large numbers of floating point calculations (and gives you a real bump in performance). There are rumours that Apple may even add specialized "machine learning" chips in future products. There is a reason why Nvidia has pivoted to this business: they get to repurpose the same floating point compute units they use in their GPUs for a growing business. The Tensor Processing Unit that Google has is basically the same technology, producing petaflops of compute power in a rack (more efficiently than a tower full of Xeon processors used for the same purpose).
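For the curious, here is roughly what pushing floating point work onto the GPU through Metal compute looks like in Swift -- a toy kernel and toy numbers of my own, just to show the shape of the API, not anything from Apple's pro apps:

```swift
import Metal

// Toy Metal compute kernel: out[i] = in[i] * factor + 1.0, run over a million floats.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;
kernel void scale_add(device const float *input   [[buffer(0)]],
                      device float       *output  [[buffer(1)]],
                      constant float     &factor  [[buffer(2)]],
                      uint index [[thread_position_in_grid]]) {
    output[index] = input[index] * factor + 1.0f;
}
"""

guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("Metal is not available on this machine")
}

let library  = try! device.makeLibrary(source: kernelSource, options: nil)
let function = library.makeFunction(name: "scale_add")!
let pipeline = try! device.makeComputePipelineState(function: function)

let count = 1_000_000                       // a multiple of 64, so the grid divides evenly
var input  = [Float](repeating: 2.0, count: count)
var factor: Float = 3.0

let inputBuffer  = device.makeBuffer(bytes: &input, length: count * MemoryLayout<Float>.stride, options: [])!
let outputBuffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride, options: [])!

let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(inputBuffer,  offset: 0, index: 0)
encoder.setBuffer(outputBuffer, offset: 0, index: 1)
encoder.setBytes(&factor, length: MemoryLayout<Float>.stride, index: 2)
encoder.dispatchThreadgroups(MTLSize(width: count / 64, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

let results = outputBuffer.contents().bindMemory(to: Float.self, capacity: count)
print(results[0])                           // 2.0 * 3.0 + 1.0 = 7.0
```

The same pattern is what the higher-level frameworks (and the machine learning libraries mentioned above) build on: keep the data in GPU-visible buffers and let thousands of threads chew through the floating point math in parallel.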

Now, a lot of standard banking/brokerage back-office financial applications would never ever use "FLOPs", since "approximate" scientific floating point arithmetic will inevitably lead to errors from repeating binary fractions (just as 1/3 repeats in decimal) and rounding... leading to calculation errors when it comes to money (out by a cent: 1,000,000.01 on one side of the ledger and 1,000,000.00 on the other). I.e., fixed-decimal calculations should be done with integer instructions, with the decimal point applied after the calculation. I believe even Java's BigDecimal stores the value as an arbitrary-precision integer (an array of ints when the value does not fit in 64 bits) plus a separate scale... all wrapped up in an immutable object.
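To make the point concrete, here is a toy Swift illustration (my own example, not banking code): repeatedly adding ten cents as a Double drifts, while integer cents -- or Foundation's base-10 Decimal, roughly the equivalent of Java's BigDecimal -- stay exact.

```swift
import Foundation

// Add $0.10 one thousand times, three different ways.

// 1) Binary floating point: 0.10 has no exact binary representation, so the sum drifts.
var doubleTotal = 0.0
for _ in 0..<1_000 { doubleTotal += 0.10 }
print(doubleTotal)                 // very close to, but not exactly, 100.0

// 2) Integer cents: exact, with the decimal point applied only when displaying.
var cents = 0
for _ in 0..<1_000 { cents += 10 }
print("\(cents / 100).\(String(format: "%02d", cents % 100))")   // 100.00

// 3) Foundation's Decimal (a base-10 type, similar in spirit to Java's BigDecimal): exact.
var decimalTotal = Decimal(0)
let dime = Decimal(string: "0.10")!
for _ in 0..<1_000 { decimalTotal += dime }
print(decimalTotal)                // 100
```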

FLOPs are primarily used for video, machine learning, and scientific calculations... and all of those are better executed on specialized hardware, not on the Intel CPU itself.

So you are saying that ancient history matters for computing, even from Google's junk?
What passes for finance and banking in America today was superseded by an e-commerce design widespread in New Zealand 20 years ago. What is current today handles currency, stocks and other tokens without any problem across the world.
As a mathematician, I find your grasp of the details lacking rigour, and certainly a dated viewpoint. Sounds like you could have been a lecturer in comp sci, though.
 
So you are saying that ancient history matters for computing, even from Google's junk?
What passes for finance and banking in America today was superseded by an e-commerce design widespread in New Zealand 20 years ago. What is current today handles currency, stocks and other tokens without any problem across the world.
As a mathematician, I find your grasp of the details lacking rigour, and certainly a dated viewpoint. Sounds like you could have been a lecturer in comp sci, though.

As a person who spent a night (actually overnight) in a vault with two bank employees trying to track down a reconciliation problem after an upgrade to their system -- being out one bloody cent... (and having to find out whether the old version was correct, the new version was correct, or neither...) -- I take this matter very seriously because of real-world observations like that. I am not the best at explaining it, however, so I am quoting a response from Stack Overflow that does a better job of explaining why it matters. I can only offer the real-world observation that using floating point for exact decimal calculations can go horribly wrong. Maybe decimal math no longer matters these days, along with other things like accounting.

But you, being a "mathematician", might offer a "mathematical" proof of how using a format that cannot exactly represent decimal values is not a problem in this "new" real world. Maybe theoretical proof trumps real-world observations these days.

This is how an IEEE-754 floating-point number works: it dedicates a bit for the sign, a few bits to store an exponent for the base, and the rest for a multiple of that elevated base. This leads to numbers like 10.25 being represented in a form similar to 1025 * 10^-2; except that instead of the base being 10, for floats and doubles it's two, so that would be 164 * 2^-4. (That's still not exactly how they are represented in hardware, but this is simple enough and the math holds the same way.)

Even in base 10, this notation cannot accurately represent most simple fractions. For instance, with most calculators, 1/3 results in a repeating 0.333333333333, with as many 3's as the digital display allows, because you just can't write 1/3 in decimal notation. However, for the purpose of money (at least for countries whose money value is within an order of magnitude of the US dollar), in most scenarios all you need is to be able to store multiples of 10^-2, so we don't really care if 1/3 doesn't have an exact representation as an integer times a power of 10, and even the cheapest calculators handle cents just fine.

The problem with floats and doubles is that the vast majority of money-like numbers don't have an exact representation as an integer times a power of two. In fact, the only fractions of a hundred between 0/100 and 100/100 (which are significant when dealing with money because they're integer cents) that can be represented exactly as an IEEE-754 binary floating-point number are 0, 0.25, 0.5, 0.75 and 1. All the others are off by a small amount.

Representing money as a double or float will probably look good at first as the software rounds off the tiny errors, but as you perform more additions, subtractions, multiplications and divisions on inexact numbers, you'll lose more and more precision as the errors add up. This makes floats and doubles inadequate for dealing with money, where perfect accuracy for multiples of base 10 powers is required.

A solution that works in just about any language is to use integers instead, and count cents. For instance, 1025 would be $10.25. Several languages also have built-in types to deal with money. Among others, Java has the BigDecimal class, and C# has the decimal type.

https://stackoverflow.com/questions/3730019/why-not-use-double-or-float-to-represent-currency
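As a quick sanity check of that "only 0, 0.25, 0.5, 0.75 and 1" claim, here is a small Swift sketch of my own that uses plain number theory instead of floating point: i/100 is exactly representable in binary only when the denominator of the reduced fraction is a power of two.

```swift
// Which hundredths i/100 (for i in 0...100) are exact in binary floating point?
// i/100 is exact precisely when, after reducing the fraction, the denominator
// is a power of two.
func gcd(_ a: Int, _ b: Int) -> Int {
    return b == 0 ? a : gcd(b, a % b)
}

let exactHundredths = (0...100).filter { i in
    let denominator = 100 / gcd(i, 100)              // reduced denominator of i/100
    return (denominator & (denominator - 1)) == 0    // power-of-two test
}
print(exactHundredths)   // [0, 25, 50, 75, 100] -> 0, 0.25, 0.5, 0.75 and 1.0
```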
 
Getting back to the origin of this: the benchmarking.

Geekbench, to the best of its ability, exercises the CPU in isolation to measure the average workload a CPU can handle and to provide a cross-architecture baseline for comparison. While not perfect, and not specific to particular uses... it provides a reasonable benchmark for comparison. You have dismissed these results out of hand because Apple "could have" or "possibly could have" boosted the processor for the test (even though that would take up precious silicon, and the processor takes up 30% to 40% less silicon than the Intel processors in question)... Then, of course, this skepticism is completely missing on the Intel side of the equation -- which to me is indicative of bias, given that Apple has no history of designing CPUs to hit numbers on a specific benchmark. It is actually easier to cheat on benchmarks using custom compilers... but the actual compilers and the resulting binary code are audited to make sure there is no funny business going on.

Then you go on to the rant that it does not measure "FLOPs", even though the benchmark is a mix of integer, floating point, and memory workloads. Obviously the overall number could mean that Apple's processor has better integer, worse floating point, and better memory performance...

In the same post you make sure to spell out the RISC/CISC acronyms, as if the C meaning Complex (in the instruction set -- not the hardware) were some kind of scientific proof that using that instruction set automatically makes Intel more advanced... Then in the same post you point to this magical "FLOPs" thing that the benchmark supposedly does not measure -- even though it is actually part of the benchmark -- as the reason the benchmark does not matter...

I actually thought science was supposed to be a bastion of more rigor than you have demonstrated in your posts. Most science geeks I know would come at this with "oh, that is interesting" and "let's find out why this is the case", rather than with shorter and shorter condescending retorts containing less and less worthwhile content. And yes, maybe the first sentence of a few of my posts was egging on your ego a bit -- but if small emotional pricks cause you to lose your scientific balance... well...

Tell me, how does being a mathematician provide you with a scientific foundation for claiming that a "complex instruction set" (i.e., a programming language) makes a CPU more advanced? Or a foundation in CPU design at all? Or at least enough of a foundation to dismiss observations out of hand?

I realize that you might be the smartest person in the room at this point -- and only have a problem when it comes to conveying your brilliance unto us lesser beings.... (by virtue of being "a mathematician") .... but then you are probably replying from a room in your home :rolleyes:
 
To make their customers happy so they stay with Apple and don't switch to Windows? (And it's not "a few old applications" -- many applications TODAY are being built with the Carbon libraries, which can only be 32-bit, and converting to Cocoa would be a massive change -- so also: "To keep your developers making software for the Mac, which means your customers can get the software they need and don't switch over to Windows.")
But Carbon is basically classic Mac OS from the '80s; the writing has been on the wall for a long time, and you don't want to keep old stuff around forever. Applications that have been around for decades would surely benefit from a major rewrite or refactoring anyway.
 
BTW, Intel is not known to be competitive on the integrated GPU side... or in GPUs in general... which might be indicative that it is not that great when it comes to "FLOPs"...
 
Geekbench, to the best of its ability, exercises the CPU in isolation to measure the average workload a CPU can handle and to provide a cross-architecture baseline for comparison. [...] You have dismissed these results out of hand because Apple "could have" or "possibly could have" boosted the processor for the test (even though that would take up precious silicon, and the processor takes up 30% to 40% less silicon than the Intel processors in question)...
Is Intel's L3 cache on-die? It seems like they include more cache than Apple does, and that takes up space...
 
Is Intel's L3 cache on-die? It seems like they include more cache than Apple does, and that takes up space...
I cannot figure out whether it is on-die or not. I believe Apple has smaller cores as well, but I am not sure of the proportions, since they do devote a lot of the die to graphics -- which is why it has done very well in that area. The reason I mentioned the smaller size is not only to explain why Apple is typically able to churn chips out earlier for each die shrink, when yields are not yet great... Obviously, the larger the die, the more the yield drops and the higher the cost to manufacture. It is conceivable that Apple could add more cores, or more L3 cache, fairly easily if it were an issue. Intel will typically still have a higher single-core benchmark -- it is something they have focused on in the past, primarily because the legacy nature of Windows apps makes single-core performance more important than it should be overall. AMD is now offering better price/performance for very performant x86 chips... but since their single-core performance is not as great, legacy applications -- especially games where the developers have not focused on utilizing all the available cores -- make it seem as though Intel is generally the better choice for publications that focus on gaming. Video production, though, is a better fit for the newer AMD chips than for Intel, because it typically scales across cores more easily. This is adding more competition to the x86 market; but if it were not for one specific app, I would like to see the x86 instruction set consigned to the history of computing. The required compatibility is holding us back ever so slightly from the absolute top performance that could be eked out of chips now. Intel's Itanium was a nice try, but it failed to take off because, of all things, Linux gave x86 a second lease on life.

Intel has typically excelled at the fabbing -- but not always at the chip design. When Apple transitioned to Intel, Intel was the superior choice for laptop-like devices, where there were no real alternatives. I don't know whether it is laziness, or whether Intel is purposely hampering the performance of their mobile processors... but they have lost all of that ground.

When IBM chose the Intel 8088 to run their initial PC, there were other chips around that time that would have been a better choice... If I remember right, I liked the National Semiconductor 32xxx chips better than the Intel chips of that era (one of the 3 or 4 architectures I have written assembler code for)... but it all gets fuzzier with age.

Anyways, I am just glad to see some competition and balance starting to return to the consumer CPU market.

Apple now has enough cash on hand that they could buy AMD with pocket change, or easily swallow Intel (break it up and sell it off).... if it were not for those pesky monopoly laws :eek:

I have been watching Apple ship incrementally better chips each generation -- recently improving around 30% per generation. I was expecting this to slow somewhat. I was expecting Apple to get to this point, comparatively... but not for a few more generations. It has completely floored me that they have advanced this far so quickly. I have no doubt that their ARM processor (or related designs with more cores) could easily handle their mobile computing lineup... if they were able to get past the massive issues around those who need to run Windows or x86 Linux, and around having a lineup with a mix of processor architectures and the inherent confusion that would create.

Next year, in all likelihood, the A11X chip will move from 16nm to 10nm...
 
I have multiple computers running different versions of the operating system hooked up to the same monitor. Effectively the same solution -- really... they don't have adapters that allow you to stick a VHS tape in a Blu-ray player :eek: (I love using imagery that is not applicable...)

When did a Blu-ray player enter into the conversation? Your anti-32-bit-compatibility position is so flawed, it's spilled over into your English comprehension.

My point is there are plenty of adapters to convert the SD composite or component output of VHS players to the HDMI 1.4 inputs of most HD (and greater) displays, if the display lacks direct SD inputs. So the analogy of preserving a customer's old videotape collection has a parallel in the 32-bit compatibility subsystem in macOS, which is the topic of this thread.
 
Is Intel's L3 cache on-die? It seems like they include more cache than Apple does, and that takes up space...

According to some more searching, the A10X Fusion (APL1071) has:
64 KB + 64 KB of L1 cache per core
3072 KB (3 MB) of L2 cache
4096 KB (4 MB) of L3 cache

The A9X (APL1021) did not have an L3 cache.

So it is conceivable that an equivalent MacBook 12" or 14" could have the power of a mid-level MacBook Pro 13" in a similarly thin, fanless package. The question is how the processor handles long-running tasks in comparison... The MacBook 12" will throttle considerably after 30 seconds at load... using a processor (the Core m) that cost $250+ per unit.

Now, although I think Apple is exploring the idea, I don't think they are close to pulling the trigger on it, because having a mix of Intel and AxxX chips at different ends of the lineup has inherent risks of confusion. I actually do use VMware Fusion / Linux on my MacBook for the times I need a small test Oracle Enterprise database server... which I use every once in a while when I go out of town. Having 5 damaged vertebrae, I really appreciate feeling like I am carrying nothing... and I really do like having my UNIX bash shell, etc. (I still prefer laptops over tablets)... I might get an iPad Pro, as a tablet with a pencil is tempting... and I think my old tablet is obsolete, so it is not good for testing going forward...

My toy shopping list keeps on growing... I need to replace everything soon (except the MacBook).
-- maybe a Mac Mini or Mac Mini Pro if they update it with a quad core.... or a 2018 version of the Mac Pro ... to replace the Mac Pro 2008 [it has been running at "800%" CPU utilization for 6 months almost 24 hours a day]
-- An iPad Pro to replace my old iPad
-- An iPhone SE to replace the iPhone 4S which .... I primarily just use as a modem and test device
 
Ahem, no? macOS doesn't have the silly subsystem crap that Windows NT has. The 32-bit parts of macOS are just 32-bit versions of the relevant frameworks.

You're asserting a dependency can only be classified as a subsystem if it matches the integration details of Windows NT? lolwut?

Pick up a dictionary and some common sense if you're triggered by the word "subsystem".
 
As a person who spent a night (actually overnight) in a vault with two bank employees trying to track down a reconciliation problem after an upgrade to their system -- being out one bloody cent -- I take this matter very seriously. [...] I can only offer the real-world observation that using floating point for exact decimal calculations can go horribly wrong.

https://stackoverflow.com/questions/3730019/why-not-use-double-or-float-to-represent-currency

Thank you for posting something I've known how to work around for 34 years.
Is there anything else you want to cover from when I was in primary school?
Perhaps you would like to start with Turing machines, and read a basic computing paper from 80 years ago, before you try to explain something to someone who knows how it works?
Clearly you lack a rudimentary understanding of number theory in relation to the original paper on computing.
Geekbench, to the best of its ability, exercises the CPU in isolation to measure the average workload a CPU can handle and to provide a cross-architecture baseline for comparison. [...] You have dismissed these results out of hand because Apple "could have" or "possibly could have" boosted the processor for the test (even though that would take up precious silicon, and the processor takes up 30% to 40% less silicon than the Intel processors in question)...

Tell me, how does being a mathematician provide you with a scientific foundation for claiming that a "complex instruction set" (i.e., a programming language) makes a CPU more advanced? Or a foundation in CPU design at all? Or at least enough of a foundation to dismiss observations out of hand?

I realize that you might be the smartest person in the room at this point -- and only have a problem when it comes to conveying your brilliance unto us lesser beings.... (by virtue of being "a mathematician") .... but then you are probably replying from a room in your home :rolleyes:


Because: a) I am a second-generation chip designer and have known how CPU designs work since 1983.
b) Yes, in our second home I have designed a new type of chip, just as my father built a new type of chip in another room of this second home back in 1972. [Still in use by factory production lines today, decades after the patent expired.]
c) I have been dealing with an entirely different type of mathematics I discovered over the last decade, and have about 8 gigabytes of results memorised outlining an entire sequence of thousands of chemicals, so that I can analyse the results and cross-check them manually. [My calculation method is faster and more accurate to check on a handheld calculator than the traditional method is on a top-500 supercomputer.]
d) It is well known and frequently reported that industry tests are subverted by companies designing for the test conditions. Samsung does it, Volkswagen does it; countless companies identify test conditions and boost performance for the duration of the test.
e) As for taking up "precious silicon": the question you need to ask is how many transistors in a 3.3-billion-transistor chip would need to be dedicated to gaming the test conditions, not what percentage smaller the chip is. [Do you even recognise how irrelevant that question is in the context of transistor-size reduction at each processor generation?]

f) What do you want me to mention next, the times I've worked on projects at Subway or Burger King after a meal? Does that even matter? And why would it matter in the slightest where I do tasks? The only thing I care about is the results, and the location does not change the results at all.
 
What do you not understand about a Reduced Instruction Set Chip being less sophisticated than a Complex Instruction Set Chip?

Huh? I've designed RISC and CISC chips (RISC: RPI F-RISC/G, Exponential PowerPC x704, Sun UltraSparc V. CISC: AMD K6-II/III, AMD Opteron, etc.). I don't understand this statement/question.

Also, are you suggesting we gamed the designs for specific benchmarks or something? I'm very confused.
 
What do you not understand about a Reduced Instruction Set Chip being less sophisticated than a Complex Instruction Set Chip?

Well, that's no longer true. While the theoretical basis for RISC architectures was a simpler, streamlined design, it turns out that the performance gains weren't substantial without adding complex logic like out-of-order execution, branch prediction and superscalar operation. CISC architectures, on the other hand, adopted some of the RISC elements to improve the performance of their chips as well.

Ars Technica did a great article on this about 20 years ago.
 
many applications TODAY are being built with the Carbon libraries, which can only be 32-bit, and converting to Cocoa would be a massive change
If people are still building apps today with a 32-bit API from the 1980s -- Carbon is basically a slightly cleaned-up version of the classic Mac OS APIs -- then it's really their fault. Apple not updating Carbon to 64-bit was a clear sign, years ago, that Carbon was on the way out.
 
If people are still building apps today with a 32-bit API from the 1980s -- Carbon is basically a slightly cleaned-up version of the classic Mac OS APIs -- then it's really their fault. Apple not updating Carbon to 64-bit was a clear sign, years ago, that Carbon was on the way out.
An even clearer sign... they deprecated Carbon..... 4 versions ago.....
 