This is a statement, not a question:

"If there are no real benefits to using RISC..."

Man, you are as dishonest as you are obtuse. Here is the exact comment I made, which you PARTIALLY referenced:

"If there are no real benefits to using RISC - why didn't Apple base their SOC on x86? Wouldn't they get the same benefit if it's the SOC approach that is yielding all the benefits? It seems it would have been a simpler transition, one which would not require software to be recompiled/refactored for a new architecture?"

Did you miss the word "If" and ALL the question marks???

Most of your arguments in this thread have been that the M1's performance has nothing to do with RISC but rather with the benefits of the SOC design. You have repeatedly said that RISC was not worthwhile and pointed to Apple's move from RISC to Intel (CISC).

So I turned the question around on you - IF there is no real advantage to RISC, then why didn't Apple implement their SOC with x86/CISC?
 
This is such a strange argument, and your question is a good one. The M1 design makes perfect sense. Why would there be an argument for using CISC in these use cases? Apple did use CISC, lower-power Intel chips in their MacBooks, and they were... less than stellar, overheating much of the time. Why not use smaller instructions for what is needed, to keep power consumption low? Apple controls the software and the requirements. They can design the OS to use exactly the instructions that best utilize a simple architecture, rather than generic, over-complicated instructions that likely go unused.
 
"If" does not change what followed into anything but a statement.
 
Is anyone making such an argument?
 
Just for the record -- you did read my comment as a question -- correct?
 
"...Intel and AMD are in a tough spot because of the limitations of the CISC instruction set..."

Lord knows how many times I've heard this in previous RISC versus CISC discussions.
I'm confused, having never worked on CPUs outside of school. I'd been told that RISC vs. CISC is an outdated battle and that the two have converged somewhere in the middle. Turns out there's a detailed article reaffirming that the distinction is still very relevant.

It says CISC vs RISC is less about the number of instructions supported, since even RISC instruction sets are quite large now, and more about the design philosophy that grew from that.
 
@cmaier made it clear that the process of parallelism in CISC is quite different and more complicated than RISC. He laid out some clear facts and examples.

What can you provide to counter his statement? Are you an engineer who has designed chips?
I think the answer is clear: @m1mavrerick is simply a troll and is entertained by how much he has gotten people to “feed him”. Unless he responds with some actual technical points, and provides some background on his expertise, I see no benefit to feeding him. :cool:
 
Hey, he's worked in CPU design for a long time.
 
What a joke: "Apple resale values are high," my asx (speaking in the tone of Phil Schiller). My 2020 MBP 13" 2.0GHz with 16GB RAM/512GB storage goes for $950 at Apple trade-in. Four months old; what a laughing joke that trade-in value is.
And most people on Swappa are selling it for $1300-$1400. Trade-in offers are usually low because the trade-in partner needs to earn a margin.
 
It has to be true for your question to be valid.

What? Now I am thinking you must be drinking.

Fact: my comment was clearly a question challenging the weak arguments you have been posting here. You have not made one single logical statement, and I am not sure you can read, write, or comprehend English.

Or you are just a troll.
 

Agreed; he seems to be out of his intellectual depth here.
 
Just to be clear (and re-reading my post, it was not): I think that @m1mavrerick is a troll. He never provides any actual technical responses, ignores or misconstrues every post, provides no evidence of any expertise, etc.

@cmaier has demonstrated expertise. He is not a troll, just a lawyer (now). :)
 
Unified memory can definitely be a significant factor in accelerating processor performance. Apple designed their memory subsystem to be fast. A fast processor is useless if it can't retrieve data quickly. Why else do you think processors contain cache? To expedite the retrieval of data. The same goes for main memory: the faster the memory subsystem, the faster a processor can process data.
To repeat myself, the unified memory will help in the event of a cache miss, but these are quite rare in Geekbench workloads (0.2%). You'll have to convince me the difference in memory latency and throughput make the 50% difference in that particular benchmark.
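To put a rough number on why rare misses matter so much, the standard average-memory-access-time (AMAT) formula is useful. Here is a minimal sketch; the hit time and miss penalties below are made-up illustrative values, not M1 measurements:

```python
def amat(hit_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time: hit time plus the expected miss cost."""
    return hit_ns + miss_rate * miss_penalty_ns

# With a 0.2% miss rate, even halving the miss penalty barely moves the average:
slow_ram = amat(1.0, 0.002, 100.0)  # 1.2 ns average
fast_ram = amat(1.0, 0.002, 50.0)   # 1.1 ns average
```

With misses that rare, memory latency would have to change enormously to account for anything like a 50% difference in a cache-friendly benchmark.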
 

Seems to me there are two things related to RAM going on here. UMA and in-package RAM. People use “UMA” to refer to both, but I think of them as two different issues.

With respect to the in-package RAM, it is true that the advantage there comes when you have an L2 cache miss. You add around 6 ps/mm of latency when your signals have to travel to distant RAM (plus you have to use bigger drivers, which take more power and get hotter). You typically get around that by using more cache (or more cache levels). This smart guy has something to say about all this: https://www.ecse.rpi.edu/frisc/theses/MaierThesis/

The other advantage of UMA is that it avoids time- (and power-) consuming memory transfers. If you have the CPU calculating information that the GPU needs to see, it can just write it into the shared memory, and there is no need to copy it from CPU memory to GPU memory over a bus.

Note that the information the GPU and CPU share may never even make it into the RAM in the package - it may be entirely within the caches (depending on how much information there is and how much other stuff is going on).
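The zero-copy point can be sketched in a few lines. This is an illustrative model only, not real driver code; `gpu_sum_discrete` and `gpu_sum_unified` are hypothetical names standing in for a GPU consuming CPU-produced data:

```python
def gpu_sum_discrete(cpu_buf):
    # Discrete-memory model: the data must first be copied into a
    # separate "GPU" allocation (the transfer over the bus).
    gpu_buf = list(cpu_buf)
    return sum(gpu_buf)

def gpu_sum_unified(shared_buf):
    # Unified-memory model: the GPU reads the very allocation the CPU
    # wrote; no copy, and the data may never leave the caches.
    return sum(shared_buf)
```

Both paths produce the same result, but the discrete path pays for an extra allocation and a full copy on every CPU-to-GPU hand-off.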
 
Not impressed with your resume. Your experience is with dead end, almost dead end, and poorly (according to you) processors.
Which is near-infinitely more education and professional experience than you, or anyone else on this forum, has claimed. I took courses from the people who wrote the textbooks, so I can tell who could pass the exams and who couldn't.
 
Motorola couldn’t deliver chips that didn’t suck even for desktops. They were bad at chip design.
Not as bad at chip design as at system architecture. HP took a look at them, but decided the HPPA architecture team knew a ton more about how to make an ISA work with compilers and operating systems. Same with the Intel 860, designed by engineers who knew how to implement fast FPUs, but clueless about the needs of decent OS support.
 
PowerPC was an Apple, IBM, and Motorola joint venture. If the PowerPC was such a superior processor why was it abandoned?
It wasn't. There are several IBM Power machines (PPC family) on the current Top 500 supercomputer list. Unlike with the new M1 MBA, cooling these IBM systems is non-trivial. But they are *fast*.
 
If there are no real benefits to using RISC ...
Having seen the difference in the number of transistors (and layout wire lengths) it takes to decode variable-length instructions compared to fixed-length ones, this comment seems to come from someone who has never designed or even synthesized any decoder logic circuits and had to optimize the results for timing.
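The asymmetry is easy to see even in a toy model: with a fixed-width ISA, every instruction boundary is known up front, so many decoders can work in parallel, while with variable-length encoding each boundary depends on the lengths of all earlier instructions. The one-byte length prefix below is a made-up toy encoding, not real x86:

```python
def insn_length(first_byte: int) -> int:
    # Toy rule (hypothetical): the first byte determines the
    # instruction's length, 1 to 4 bytes.
    return (first_byte % 4) + 1

def fixed_boundaries(n: int, width: int = 4) -> list:
    # Fixed-width ISA: boundary i is simply i * width,
    # computable for all i in parallel.
    return [i * width for i in range(n)]

def variable_boundaries(code: bytes) -> list:
    # Variable-length ISA: boundaries must be discovered sequentially,
    # which is what makes wide parallel decode expensive in hardware.
    offsets, off = [], 0
    while off < len(code):
        offsets.append(off)
        off += insn_length(code[off])
    return offsets
```

Real wide x86 decoders work around this with speculative boundary detection and predecode bits, which is exactly the extra transistor and wiring cost the post above is describing.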
 
No one is arguing that Intel is better, I think all of us who aren't impulse-buying this thing are simply arguing "it can't run the (Windows) software that I need".

Let's see how things progress. Hopefully in a year or two there will be serious development with Windows on ARM that allows it to run on these Macs, or M2 Macs, as well as supporting x86 programs through emulation.
I think that's one of the reasons those in the market for a new Mac wouldn't want to buy the new AS devices, but it's probably a minority reason.

The M1 chip is an extraordinary technological achievement. Nevertheless, there are several reasons people in the market for a new Mac would want to hold off:

I. Software reasons to wait before buying an AS Mac

A. Low-version-no. OS bugs: The M1 devices require installation of an OS that is version x.1. IME, Apple's software QC hasn't been the best of late, so it's best to wait until an OS reaches higher version numbers before installing. This isn't an option if you buy an M1, since you have to install Big Sur.

B. Application compatibility: Some mission-critical applications don't yet work with Big Sur (and, again, with the M1 chip, Big Sur is your only choice). E.g., my university says not to upgrade to Big Sur, because (a) it has incompatibilities with On Guard, which they use to verify your device's encryption status when using their VPN; and (b) Macs with Big Sur can't log in to Box Drive.

C. Application performance: This will obviously depend on which applications you run, and how important they are to you. Here's an example of one whose performance on the M1 is currently mediocre: No native build is yet available for Mathematica, and it runs relatively slowly on AS under Rosetta 2 (at least relative to expectations--essentially, with Mathematica, a 2020 M1 appears to perform about as well as a 2014 MBP). Here are the WolframMark benchmarks for Mathematica 12.1 (note that this benchmark only uses two cores):
2019 iMac (i9-9900K), native build: 4.48
2014 MacBook Pro (i7-4980HQ), native build: 2.98
2020 MacBook Air (M1), Rosetta 2: 2.97

D. Current inability to run Windows (if needed).

II. Hardware reasons to wait before buying an AS Mac

A. New hardware. This isn't problematic by itself—some of Apple's new hardware has performed fine. The issue is rather that there's no track record. This can be addressed by waiting a couple of months, by which time any obvious problems will have revealed themselves.

B. Insufficient connectivity and/or performance. These buyers will need to wait for the higher-end AS Macs expected in 2021.

C. You want to replace an iMac.
 
6502 was CISC, not RISC.

Also, all RISC processors have multiply instructions, and they all take multiple cycles to execute, just like on CISC machines.

The concept of RISC is not what you say it is.
To be pedantic, the multiply instructions might take more than one cycle to complete, but, barring dependencies, they can be issued as often as once per cycle on many modern implementations (both RISC and x86).
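The latency-versus-throughput distinction can be made concrete with a little cycle arithmetic. The 3-cycle multiply latency below is an assumed, illustrative number, not any specific chip's figure:

```python
LATENCY = 3  # assumed cycles for one multiply to complete

def pipelined_cycles(n: int) -> int:
    # n INDEPENDENT multiplies: one issues per cycle, so the last
    # issues at cycle n and completes at cycle n + LATENCY - 1.
    return n + LATENCY - 1

def serial_cycles(n: int) -> int:
    # n DEPENDENT multiplies: each must wait for the previous result,
    # so the latencies add up.
    return n * LATENCY
```

Eight independent multiplies finish in 10 cycles; eight dependent ones take 24. That is why issue rate, not raw instruction latency, usually dominates throughput on pipelined cores.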
 
Contrary to your belief, the apple brand has an advantage. I am not in the business of buying and selling; I buy a product to use and if I don't want it I can sell it privately. But to have some strange notion apple products are worth more is full of bologna.
Didn't you contradict yourself in those two sentences?
1) "the apple brand has an advantage"
2) " strange notion Apple products are worth more is full of bologna".

The Apple brand does indeed have an advantage, and yes, the products are generally worth more on the used market than other brand machines with the same original purchase price.
 