LOL! Typical response on this site. Claim someone has no clue. I've heard all of this before, when Apple switched from 68K to PPC. Then they abandoned a RISC design for a CISC design. Perhaps you can explain why?

Maybe because the CISC architecture was a relatively cheap commoditized standard architecture that could hold Apple over until they could develop the processor they really wanted?
 
Not impressed with your resume. Your experience is with dead-end, almost-dead-end, and (according to you) poorly performing processors.
Ignore the resume and review the post history. You can tell a lot more about someone from what they say than from who they claim to be. For example, having read back through your dozen posts bickering back and forth, I notice you haven't yet made a true technical argument.

There are a few people in these forums who speak sense. @cmaier is one of them. You don't have to agree with every word someone says to recognize that they know what they're talking about. Whereas your reliance on snark and ad hominem attacks suggests the opinion you're defending is probably not well supported.


One doesn't need to be the captain of a ship to say the captain of the Titanic didn't do a good job.
But one can learn much more about commanding a ship from the captain of the Titanic than from some guy on the dock talking smack about him.
 
And yet that was not my question. I knew about CISC running parallel instructions - I was inquiring about the same process on a RISC architecture.
It’s also not quite the same process in CISC. In CISC, imagine you have an instruction:

ADD [Contents of memory A], [Contents of memory B] -> Memory C + value in R2

This gets decoded into a set of instructions:

1) LOAD A -> temp register 1
2) LOAD B -> temp register 2
3) ADD temp register 1, temp register 2 -> temp register 3
4) LOAD C -> temp register 4
5) ADD temp register 4, R2 -> temp register 5
6) STORE temp register 3 -> [address corresponding to temp register 5]

Now imagine this occurs right after an instruction like “if x==4 then...”

So do we issue these instructions or not? We don’t want to wait to figure out if x is 4, so we guess. If we guess wrong, we have to unwind all the work done in 1-6. Which means we need to keep track of the fact that 1-6 all correspond to a single “add” instruction.

We also have to deal with the temp register assignments. In instruction (4), can we use temp register 1 instead of 4? Maybe. It depends on whether (3) has already issued. How do we decide which temp registers to use? There are a finite number, and we want to be as clever as possible so we don’t stall while waiting for them to free up.

It gets *very* complicated. Much more so than in RISC.
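To make the bookkeeping concrete, here's a toy sketch in Python (all names here are illustrative - this is nothing like how a real reorder buffer is built, just the idea of tagging micro-ops with their parent ISA instruction so a mispredicted branch can unwind all of them together):

```python
# Toy model: crack one CISC instruction into micro-ops, tag each with
# its parent ISA instruction, and roll them all back together on a
# branch mispredict. Illustrative only, not any real microarchitecture.

from dataclasses import dataclass, field

@dataclass
class MicroOp:
    op: str            # e.g. "LOAD", "ADD", "STORE"
    parent_id: int     # which ISA instruction this micro-op came from

@dataclass
class ReorderBuffer:
    entries: list = field(default_factory=list)

    def issue(self, uops):
        self.entries.extend(uops)

    def flush_after_mispredict(self, bad_parent_id):
        # A mispredict must discard *every* micro-op belonging to the
        # speculated ISA instruction -- this is why the parent tag exists.
        self.entries = [u for u in self.entries
                        if u.parent_id < bad_parent_id]

# The six micro-ops from the ADD example above, all tagged as parent 1:
rob = ReorderBuffer()
rob.issue([MicroOp("LOAD", 1), MicroOp("LOAD", 1), MicroOp("ADD", 1),
           MicroOp("LOAD", 1), MicroOp("ADD", 1), MicroOp("STORE", 1)])

rob.flush_after_mispredict(1)   # the "if x == 4" guess was wrong
print(len(rob.entries))         # 0 -- all six go away together
```

In RISC the parent tag is unnecessary because each entry *is* one ISA instruction; that is the bookkeeping CISC has to add.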
 
I don’t think so. Before M1 I would have considered a used Mac to save some money. Now, I’d rather pony up more money for a base new system with multiples of the performance, plus warranty and future integration and support. While that’s just me, I suspect that will bear out in the consumer market. It may take a while, but it’s coming.
Yup, I agree. People can say what they want, but the resale value of Intel-based MacBooks is going to decline. Anyone who knows anything about seeking out the best value for their money is going to do some research before they open their wallet to buy a new machine. That research is going to show the Intel-based Macs are soon to be "old school tech," and that won't be a good thing for the peeps trying to unload their Intel-based Macs so they themselves can upgrade to the latest and greatest M1 machines.

I for one can 100% say I don't miss my 2015 15" MBP at all now that I have my 13" M1 MBA. I have it hooked up to a 28" 4K monitor for when I'm at my desk. Otherwise I'm perfectly okay with the 13" screen all while getting excellent battery life.
 
Nothing obtuse about my statement. You said they decided to focus on developing processors which consumed more and more power.

I never said that. I said IBM did not want to focus on the consumer PC/laptop market, period. It is that market where power consumption is a critical issue. In the commercial space - particularly in data centers - it is a factor, but not as much as processing power. Additionally, IBM saw more volume in corporate/industry sales than in the consumer market.
 
It’s also not quite the same process in CISC. In CISC, imagine you have an instruction:

ADD [Contents of memory A], [Contents of memory B] -> Memory C + value in R2

This gets decoded into a set of instructions:

1) LOAD A -> temp register 1
2) LOAD B -> temp register 2
3) ADD temp register 1, temp register 2 -> temp register 3
4) LOAD C -> temp register 4
5) ADD temp register 4, R2 -> temp register 5
6) STORE temp register 3 -> [address corresponding to temp register 5]

Now imagine this occurs right after an instruction like “if x==4 then...”

So do we issue these instructions or not? We don’t want to wait to figure out if x is 4, so we guess. If we guess wrong, we have to unwind all the work done in 1-6. Which means we need to keep track of the fact that 1-6 all correspond to a single “add” instruction.

We also have to deal with the temp register assignments. In instruction (4), can we use temp register 1 instead of 4? Maybe. It depends on whether (3) has already issued. How do we decide which temp registers to use? There are a finite number, and we want to be as clever as possible so we don’t stall while waiting for them to free up.

It gets *very* complicated. Much more so than in RISC.
Now that makes it clear, and I understand what you mentioned about needing longer pipelines in a CISC architecture versus RISC.

Thank you.
 
It won't matter because the PC will still remain dominant within 5 years, especially with the <5nm Ryzen processors forthcoming. It all comes down to price point. Sure, you may have a really ASIC-like Apple processor, but the reality is that if the prices are not comparable then it won't matter. BUT Apple will take the niche market share and make the most profit from it, just like they are doing with smartphones.

You can now officially state that for all things multimedia, Apple M1 and up will be the one to use. But again, price is the limiting factor here. PCs are just too awesome to give up; if you don't understand this then you are better off with an Apple system. PC folk don't like everything closed, and Apple is the epitome of a closed system.

I can't even buy an enclosure in a color of my choosing, and that's the most trivial selection a customer can have. I have plenty of Apple products and I like them for what they are, but I also have plenty of powerful PCs for gaming and encoding. Now, though, I will use an Apple processor for encoding work instead.

For gaming, Apple's platform is a joke. In fact a PS5/XSX will be an outstanding gaming machine at a fraction of the price of any Apple computer and/or PC.

Laptops, and especially ultrabooks (which, along with servers, are basically the most-sold type of PCs), are literally as closed as Apple MacBooks. People just buy something to work/browse with, something mobile; they couldn't care less about swapping the RAM or the SSD (only a few geeks do it). And the few content creators (compared to the mass of "general purpose" computer users) just buy a tower or get an iMac and forget about the rest. Yeah, gaming exists, but as you just admitted, PC gaming is just a tiny fraction compared to consoles (as it is compared to laptops/servers in market share).

Just some brief numbers to put it into perspective: 80% of AMD's wafers go to PS5/Xbox SoCs, and 20% go to CPUs + GPUs. Epyc (higher profit margin) and then Ryzen take most of that remaining 20%. Then comes CDNA (compute cards and accelerators), which has bigger profit margins. And finally, with the leftovers, a few thousand RDNA2 gaming GPUs. That's how little PC gaming is compared to everything else. And Nvidia, which basically started as a gaming GPU company, already grosses more from AI and data centers than from gaming.
 
As stated upthread, Apple's Mac market is not people who usually buy some cheap piece of plastic Dell crap with its Celeron and a low-res screen. Their market is definitely more upscale, and even their lowest-end laptop (the MBA) is targeted in terms of materials, build quality and pricing at more of a mid-tier market. The MacBook Pros are upper tier, as are iMacs.

Also, they have historically been more USA-focused with Macs, and their US market share is not 8% but rather somewhere north of 20%. So yes, when they start showing that their MacBook Air outperforms the higher-end Dell, HP, and Lenovo machines, it will increase their market share. Especially as more and more productivity software gets native Mac versions (like Office).
 
We are reading what you are posting, but you are not very clear. You have been beating the drum that RISC is not better than CISC and pointing to past history. The point I was making, and which you clearly missed, was that Apple's success with their custom chips comes from their approach to implementing the ARM standard and from what they have packed onto their SoC, which includes other customized processors that together provide more upside performance with lower power draw than the x86 alternative.

I clearly stated that the ability to ramp up clock speeds while shrinking manufacturing processes covered up some of the inherent weaknesses in the CISC/x86 architecture. Now that those options are winding down, Apple has found a way to leverage the strengths of RISC/ARM in this current environment to some effect.
Apparently you are reading into what I've posted and not reading what I have actually written, because nowhere did I write that.
 
And yet that was not my question. I knew about CISC running parallel instructions - I was inquiring about the same process on a RISC architecture.
I mentioned it because the two are more alike than not. It is because of this that I have said the M1 being RISC is not as important as people make it out to be. IMO there are other factors, such as unified memory, specialized processors, and on board GPU which are far more important. RISC is a benefit but it's a slight benefit compared to other design decisions which went into the M1.
 
I don’t think so. Before M1 I would have considered a used Mac to save some money. Now, I’d rather pony up more money for a base new system with multiples of the performance, plus warranty and future integration and support. While that’s just me, I suspect that will bear out in the consumer market. It may take a while, but it’s coming.
Yes, but there are always people trying to get a used Mac, and currently they won't get an M1. Down the line the Intel ones will phase out, but people are also still buying new Intel machines.
 
It's not Apple's first try though. The A Series has been doing really well for a while now.
They said that their M1 processor is a totally new design. It is optimised and designed for the desktop, but of course it's built on the same foundation as the A14, as the S6 is as well.
 
I mentioned it because the two are more alike than not. It is because of this that I have said the M1 being RISC is not as important as people make it out to be. IMO there are other factors, such as unified memory, specialized processors, and on board GPU which are far more important. RISC is a benefit but it's a slight benefit compared to other design decisions which went into the M1.
@cmaier made it clear that the process of parallelism in CISC is quite different from, and more complicated than, RISC. He laid out some clear facts and examples.

What can you provide to counter his statement? Are you an engineer who has designed chips?
 
Man, I feel like this thread is getting toxic and the attacks are getting personal. I have no skin in the game between Intel vs Apple Silicon - I welcome open competition and at the end of the day, I take the stand that these are businesses and not some sports game where we want our team to "win". As long as these businesses are competing and making better products, the customer wins.

Even though I (unluckily) just bought an early 2020 13" MBP before Apple announced their switch and admittedly do have a bit of FOMO, I'm happy that M1 is blowing it out of the water, since I know that my next MBP will be that much better!
 
That's for the entire computer, not the CPU alone.
Total system power was up to 31W. The M1 is a single system - why are we breaking it down into individual parts? It wouldn’t be as fast without being a single unit. I just want accurate figures; it’s misleading to claim it’s 10W. I’m not denying it’s great, as most other SFF PCs are 60-70W+ at peak.
 
Man, I feel like this thread is getting toxic and the attacks are getting personal. I have no skin in the game between Intel vs Apple Silicon - I welcome open competition and at the end of the day, I take the stand that these are businesses and not some sports game where we want our team to "win". As long as these businesses are competing and making better products, the customer wins.

Even though I (unluckily) just bought an early 2020 13" MBP before Apple announced their switch and admittedly do have a bit of FOMO, I'm happy that M1 is blowing it out of the water, since I know that my next MBP will be that much better!
Yeah, I never got why people take this stuff so personally. I remember one time I was a guest at a game design school lecture being given by Tim Schafer. I was simply listening to him speak like everyone else. A couple of the students found out I worked at AMD on the Opteron design. They came up to me and asked if I could get them a poster signed by the design team, and spent 10 minutes dissing Intel as if AMD were their favorite sports team. At Exponential we would literally get people sending us love letters and begging to help us in any way they could, even sweeping the floors. Very strange how people get personally invested in a 100 mm² slab of silicon.
 
RISC can absolutely run more than one operation at a time. In fact, it was RISC processors that pioneered superscalar functionality, and it’s far easier to do parallelism with RISC - that’s something addressed in the article linked in the original post in this thread. This is because there’s a 1:1 mapping of micro-ops to ISA instructions in RISC, unlike in CISC, and RISC instructions are almost always decoded in a single pipe stage.

So when you create your reorder buffer/reservation stations/superscalar mapper, it’s far easier to keep track of things, so that when a branch prediction is missed and you need to unwind, or a cache miss occurs, or whatever, you don’t have to figure out which set of micro-ops corresponds to a single ISA instruction. It makes life easier in a lot of ways, and also means it’s much faster to get instructions *into* the reorder buffer/reservation stations. In CISC you can be limited by the bandwidth getting data into the buffer. In RISC you are more typically limited by the time it takes to identify dependencies (i.e., that one instruction depends on the result of another, so they have to issue in a certain order).
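That dependency check - deciding whether an instruction must wait for an earlier instruction's result before it can issue - can be sketched with a toy model. (The in-order, single-cycle framing and the register names are my own simplification, not any real design.)

```python
# Toy read-after-write (RAW) hazard check for a RISC-style stream where
# each instruction is (dest_register, (source_registers...)).
# Illustrative only -- real schedulers are far more sophisticated.

def can_issue_together(window):
    """Return the leading run of instructions that could issue in one
    cycle, stopping at the first read-after-write dependency."""
    pending_writes = set()
    issued = []
    for dst, srcs in window:
        if any(s in pending_writes for s in srcs):
            break                      # RAW hazard: must wait a cycle
        issued.append((dst, srcs))
        pending_writes.add(dst)
    return issued

# r3 = r1 + r2 and r4 = r1 + r5 are independent and can dual-issue,
# but r6 = r3 + r4 reads both results and has to wait.
window = [("r3", ("r1", "r2")),
          ("r4", ("r1", "r5")),
          ("r6", ("r3", "r4"))]
print(len(can_issue_together(window)))   # 2
```

With a 1:1 micro-op mapping, this scan over architectural registers is all the dependency analysis needs to look at; in CISC the same scan also has to see through the temp registers created by cracking each instruction.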

So if RISC is so superior why did anyone go with CISC in the first place?

Plus - Apple's unique implementation of ARM, with their SoC design and the integration of software, also makes the M1 competitive. It's not just a RISC vs CISC discussion.

Apple has the advantage of tweaking software to hardware, but overall the industry seems to be implying that RISC is the winner and the future

I worked at IBM and yes - that is an accurate statement. IBM did not want to play in the consumer desktop/laptop market at that time. They were more interested in supporting data centers and the corporate server market. What Apple wanted to do with the processors supporting their products was vastly different from what IBM/Motorola had in mind.

PowerPC was used in servers because it was more powerful than Intel/AMD? What OS ran on it?
 
PowerPC was used in servers because it was more powerful than Intel/AMD? What OS ran on it?
When did I say that? What I said was that IBM was not really focusing on the needs of Apple with regards to the PowerPC chip because their focus was on the corporate market (servers, high end workstations, data centers, etc) than the consumer market. As a result, the development priorities and schedule did not align with Apple's.

That was the primary cause for Apple's transition to Intel. However, at the time even Apple indicated this would be a transition period and not a permanent move.

And they were correct.
 
Apple’s flavor of RISC is also something very different. It’s not just the processor blocks and unified memory: the CPU cores themselves are something different, able to process in parallel at minimum double the instructions of anything else out there, and frequently more than that. I think people keep playing down the microarchitecture too much when it is probably the main driver of the speed and PPW.
 
Apple’s flavor of RISC is also something very different. It’s not just the processor blocks and unified memory: the CPU cores themselves are something different, able to process in parallel at minimum double the instructions of anything else out there, and frequently more than that. I think people keep playing down the microarchitecture too much when it is probably the main driver of the speed and PPW.

Could be - how many ALUs per core? I’m not sure I’ve seen that reported. Three or four are quite common in other designs. I see references to the ROB accepting 8, but I didn’t notice whether that means the ROB also feeds 8 into the ALUs. (Typically you might have, say, 4 ALUs, with only 2 or 3 having multipliers/dividers - those are very big - and then you also have a load/store unit, floating point unit, etc., so that, in theory, you can perform 6 or 8 instructions per cycle if they happen to be the right kinds of instructions, which they seldom are. Some processors more, some less.)
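The "right kinds of instructions" constraint can be illustrated with a toy model. (The port counts below are invented for illustration - they are not Apple's numbers, which, as noted, aren't published.)

```python
# Toy model: even with a wide front end, per-cycle throughput is capped
# by how many instructions of each kind the execution ports can accept.
# Port counts are made up for illustration, not any real chip.

from collections import Counter

PORTS = {"alu": 4, "mul": 2, "load_store": 2, "fp": 2}

def issued_this_cycle(window):
    """Count how many instructions issue before a port runs out,
    assuming simple in-order issue (a deliberate simplification)."""
    used = Counter()
    count = 0
    for kind in window:
        if used[kind] < PORTS[kind]:
            used[kind] += 1
            count += 1
        else:
            break                       # structural hazard: stall
    return count

# Eight ALU ops fetched, but only four ALU ports -> only 4 issue.
print(issued_this_cycle(["alu"] * 8))                                # 4
# A mixed bundle spreads across different ports and fares better.
print(issued_this_cycle(["alu", "load_store", "mul", "fp", "alu"]))  # 5
```

This is why peak instructions-per-cycle figures assume a favorable instruction mix that real code rarely sustains.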
 
When did I say that? What I said was that IBM was not really focusing on the needs of Apple with regards to the PowerPC chip because their focus was on the corporate market (servers, high end workstations, data centers, etc) than the consumer market. As a result, the development priorities and schedule did not align with Apple's.

That was the primary cause for Apple's transition to Intel. However, at the time even Apple indicated this would be a transition period and not a permanent move.

And they were correct.
So if RISC is so superior why did anyone go with CISC in the first place?



Apple has the advantage of tweaking software to hardware, but overall the industry seems to be implying that RISC is the winner and the future



PowerPC was used in servers because it was more powerful than Intel/AMD? What OS ran on it?
POWER was what ran on the server side; PowerPC was the low-end line. They ran AIX and Linux, depending on the system.
 
Could be - how many ALUs per core? I’m not sure I’ve seen that reported. Three or four are quite common in other designs. I see references to the ROB accepting 8, but I didn’t notice whether that means the ROB also feeds 8 into the ALUs. (Typically you might have, say, 4 ALUs, with only 2 or 3 having multipliers/dividers - those are very big - and then you also have a load/store unit, floating point unit, etc., so that, in theory, you can perform 6 or 8 instructions per cycle if they happen to be the right kinds of instructions, which they seldom are. Some processors more, some less.)
Alas, Apple does not publish the specifics of their architecture. AnandTech seems to have a couple of edges (one being that site founder Anand Lal Shimpi is part of the Apple Silicon team) and has used specialized tests to suss out an overview of the architecture:


It looks like 4 simple ALUs plus 2 complex ones, and then another unit with different stuff on top of that.
 