Well, I tried to simplify the issue, but you too are wrong!

The level is at the machine-code level, not at the word level. The bit order (not the numeric order) is the issue here. How the CPU digests the input at the nibble or byte level is the issue, even on newer 32-bit and 64-bit processors, which work off words and double words. The issue is at the lower level.
This is gibberish. Endianness is the order of bytes in a word. You are saying nonsense. You aren’t oversimplifying. You are just plain wrong.


 
Almost! Re-read Wikipedia: "storage or during transmission." It's not at the word level within the CPU's machine (assembler) code, it's at the nibble level: https://forums.macrumors.com/thread...-wwdc-2020-what-we-know.2209517/post-28558517
No it’s not.

Wikipedia: In computing, endianness is the ordering or sequencing of bytes of a word of digital data in computer memory storage or during transmission.

TechTerms.com: Endianness is a computer science term that describes how data is stored. Specifically, it defines which end of a multi-byte data type contains the most significant values. The two types of endianness are big-endian and little-endian.

I could do this all day. I’ve also actually designed CPUs. I designed the integer ALUs in several AMD chips. I know how data is represented. You don’t.
Here are more:


What is Endianness?
In almost all modern embedded systems, memory is organized into bytes. CPUs, however, process data as 8-, 16- or 32-bit words. As soon as this word size is larger than a byte, a decision needs to be made with regard to how the bytes in a word are stored in memory. There are two obvious options and a number of other variations. The property that describes this byte ordering is called “endianness” (or, sometimes, “endianity”).

And another:

Endianness

The idea of "endianness" refers to byte order.
And another:


What Is an Endian?
It turns out, this is not the right question to ask. An "endian" is not a standalone term when discussing data. Rather, the terms "big-endian" and "little-endian" refer to formats of byte arrangement.

Every single definition says "bytes."
 

You are still stuck on the damn word WORD. Get off it, as it's confusing you! You even stated it here: 8-, 16- or 32-bit.

So if I have eight bits, 11111111, how you read them is the issue of endianness. Going back to what I stated at the very beginning: 11110000 vs. 00001111 are not seen as the same value.

The problem here is you are so caught up in more modern CPU design that you have either forgotten or never learned how the original Intel 4004 and the Motorola 6800 worked! Go dig out your books and read them!

BTW, I'm an ex-IBMer who helped design the original IBM PC's BIOS.
 
WTF are you talking about? The whole point of this conversation is that you claimed the big problem with switching between RISC and CISC is endianness (it's not; it's irrelevant). Then you claimed that endianness has something to do with the order of nibbles in a word (it does not). Now you are saying it has nothing to do with words, because "8-, 16- or 32-bit" (huh?).

If you have a word size greater than a byte, endianness refers to the order of the bytes in the word (which I've said over and over again, which you keep denying, and with which every single definition in every textbook and on the internet agrees).

Now you are talking about the 4004 and the 8600, as if they matter. They don't. Neither has a word size of more than a byte, both had the most significant bit where it belongs, and so neither can have anything to do with endianness.

11110000 and 00001111 are obviously not "seen as the same value" on *any* machine, because on ALL machines 11110000 is decimal 240 (or -16 in 2's complement) and on ALL machines 00001111 is decimal 15. Every single digital CPU represents 240 as 11110000 (assuming an unsigned int) and 15 as 00001111, regardless of endianness. This includes how they are stored internally: they are stored in registers like this, they are transmitted to ALUs like this. There is NO variation in any machine (at least not any machine that uses binary representation) in how you represent these integers.
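A minimal C sketch (an illustration of my own, not taken from any of the sources quoted above) makes both points concrete: the bit pattern of a value is the same on every machine, and only the in-memory byte order of a multi-byte value depends on endianness:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    uint8_t b = 0xF0;            /* 11110000 is 240 on every machine */
    uint32_t word = 0x11223344;  /* a multi-byte value */
    unsigned char bytes[4];

    memcpy(bytes, &word, sizeof word);  /* inspect how the bytes land in memory */

    printf("0xF0 as an unsigned byte = %u\n", (unsigned)b);  /* always 240 */
    printf("bytes of 0x11223344 in memory: %02x %02x %02x %02x\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);
    /* little-endian machines print: 44 33 22 11
       big-endian machines print:    11 22 33 44 */
    return 0;
}

Either way the value of word is 0x11223344; endianness only becomes visible when you look at its individual bytes in memory.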

You are continuing to make things up.
 
So if I have eight bits, 11111111, how you read them is the issue of endianness. Going back to what I stated at the very beginning: 11110000 vs. 00001111 are not seen as the same value.

I think you are having some recollection of machines where "bit 0" is the most significant bit. The IBM 370 is one such example. From Wikipedia: "Note: IBM documentation numbers the bits in reverse order to that shown above, i.e., the most significant (leftmost) bit is designated as bit number 0."

On the other hand, "bit 0" is usually the least significant bit in microprocessors.

Suffice it to say that this is merely a documentation issue. The hardware designers need to be aware that the LSB will appear on this wire over here, not that one over there. But the machine doesn't run slower because of what is written in the paper manual.

The other place where this bit numbering may show up is in bit-field instructions. The assembly-language programmer needs to be aware of how the parameter to the instruction gets used. (As far as I can remember, the IBM 370 doesn't have bit-field ops. A long time ago, I wrote one short program in 370 assembler; I had extra mainframe CPU time available after completing all my class assignments.)
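To make the "documentation issue" concrete, here is a small C sketch (purely illustrative; the helper names are made up): the same 32-bit register value, with "bit i" interpreted either with bit 0 as the least significant bit (the usual microprocessor convention) or with bit 0 as the most significant bit (the IBM/PowerPC documentation convention):

#include <stdio.h>
#include <stdint.h>

/* Conventional numbering: bit 0 is the least significant bit. */
static unsigned bit_lsb0(uint32_t x, unsigned i) {
    return (x >> i) & 1u;
}

/* IBM/PowerPC-style numbering: bit 0 is the most significant bit. */
static unsigned bit_msb0(uint32_t x, unsigned i) {
    return (x >> (31u - i)) & 1u;
}

int main(void) {
    uint32_t r = 0xF0000000u;  /* the value in the register is the same either way */
    /* The leftmost set bit is "bit 31" in one convention and "bit 0" in the other,
       but it is the same physical bit; both lines print 1. */
    printf("lsb0: bit 31 = %u\n", bit_lsb0(r, 31));
    printf("msb0: bit 0  = %u\n", bit_msb0(r, 0));
    return 0;
}

Nothing about the stored value changes; only the label attached to each bit position does.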

The problem here is you are so caught up in more modern CPU design that you have either forgotten or never learned how the original Intel 4004 and the Motorola 8600 worked! Go dig out your books and read them!

I think you mean the Motorola 6800...
 

If he means that, that’s even less relevant. Yes, even in PowerPC, bit “0” is the most significant bit. I took over the design of the PowerPC x704 floating point unit and worked on the follow-up one as well. Bit 0 being the MSB really sucked; there was some new number format (can’t remember what it was), and bit 0 was sometimes 2^N and sometimes 2^(2N) or something like that. Really sucked.

Of course, it made NO difference whatsoever to any programmer; it was entirely a documentation issue. I had to name the flip-flop that held the MSB "0" so that anyone in the future would know what, in the PowerPC spec, it referred to. But any programmer simply put "11110000" in the register same as always, and didn't need to know that we called the left bit "0".
 
Seems to me that true all-day battery life and simultaneously improved performance are at least two benefits to the user.

Not to mention the new form factors and functionality that will be possible.

I would like that... but I have my reservations. Will Apple be able to support hardware virtualization with ARM CPUs? I'd need (want) to run multiple containerized apps on my Mac. Right now, for example, Docker Desktop for the Mac (and its Kubernetes implementation) is tightly coupled with Intel and HyperKit.

Do you think Apple can/will address that?

Plus, do you think Apple will allow devs to easily install "non-App Store" third-party apps?
 

Rumors are that they will add hooks for that a year after the initial ARM macOS release. We'll see.
 
...it’s not like anyone will be developing for macOS in the future, they’ll be developing for iOS and then porting over.
Developing for one platform and porting it (in reality, rewriting it) to another is a massively inefficient way of developing software. You need to be able to guarantee a lot of sales to make it worthwhile.

Creating separate and basically incompatible iOS and macOS platforms was an extraordinarily bad long-term strategy for Apple, one that is taking them years to fix.
 
A lot has already been written about Mac Catalyst; here's a good example.

This isn't developing for a PS5 and then porting to an Xbox 360; this is using Apple's Xcode to target an app for one of Apple's platforms and then retargeting it, IN Xcode, for another of Apple's platforms. Rewriting and other inefficiencies are drastically reduced, such that "many of the developers building the first third-party Catalyst apps managed to get an acceptable build running on the Mac within 24 hours. But each faced some challenges unique to each app." Not EVERY dev will have it easy... if they're using deprecated APIs or older versions of Swift, they will have their work cut out for them. But the good developers mentioned by the OP ARE keeping everything updated :)

What Apple has done is lower the bar to creating a native Mac app. Twitter's macOS app doesn't make Twitter ANY more money than the iOS app (it's free on both, so zero guaranteed sales, and the macOS ad views are TINY compared to iPadOS), but it wouldn't exist if Apple hadn't created a way for Twitter to take their iPadOS effort and easily direct it to macOS. They've taken a "why should we?" question and turned it into a "why wouldn't we?"
 
WTF are you talking about? The whole point of this conversation is that you claimed the big problem with switching between RISC and CISC is endianness (it's not; it's irrelevant). Then you claimed that endianness has something to do with the order of nibbles in a word (it does not). Now you are saying it has nothing to do with words, because "8-, 16- or 32-bit" (huh?).

If you have a word size greater than a byte, endianness refers to the order of the bytes in the word (which I've said over and over again, which you keep denying, and with which every single definition in every textbook and on the internet agrees).

Now you are talking about the 4004 and the 6800, as if they matter. They don't. Neither has a word size of more than a byte, both had the most significant bit where it belongs, and so neither can have anything to do with endianness.

11110000 and 00001111 are obviously not "seen as the same value" on *any* machine, because on ALL machines 11110000 is decimal 240 (or -16 in 2's complement) and on ALL machines 00001111 is decimal 15. Every single digital CPU represents 240 as 11110000 (assuming an unsigned int) and 15 as 00001111, regardless of endianness. This includes how they are stored internally: they are stored in registers like this, they are transmitted to ALUs like this. There is NO variation in any machine (at least not any machine that uses binary representation) in how you represent these integers.

You are continuing to make things up.

You clearly are messed up, as you keep getting lost! This is at the assembly-code level, not what you program at: the very basic level of how a processor works.

You're just like my ex, who had no idea how the car's engine worked, trying to tell me the key wasn't working when turned and that she needed my car keys to fix the car, not realizing the issue was a dead battery! No matter how hard I tried to explain it to her, she was still stuck on the key being the problem.

So I get it! You are stuck on the key as the broken part. It couldn't be something deeper, as you just can't get your head under the hood to learn a bit about how CPUs worked back in the day and how that same design is still used today.
I think you mean the Motorola 6800...

Yes, Typo! 6800!
 
Your dream came true!

Well, not exactly. For me, the Magic Keyboard is too expensive for what it is made out of. I hate that material, so I will wait and see for now. Ideally, though, I'd like a little 11-inch clamshell iPadOS form factor: essentially two current iPad Pros joined with a hinge, with magnets used to keep the screens from touching. The bottom half would be a keyboard with improved tactile feedback that adapts to whatever app you are using. I think we will see such a device eventually; all the pieces are there.
 
You clearly are messed up, as you keep getting lost! This is at the assembly-code level, not what you program at: the very basic level of how a processor works.

You're just like my ex, who had no idea how the car's engine worked, trying to tell me the key wasn't working when turned and that she needed my car keys to fix the car, not realizing the issue was a dead battery! No matter how hard I tried to explain it to her, she was still stuck on the key being the problem.

So I get it! You are stuck on the key as the broken part. It couldn't be something deeper, as you just can't get your head under the hood to learn a bit about how CPUs worked back in the day and how that same design is still used today.


Yes, Typo! 6800!
Assembly IS programming. People actually program in assembly.
All you have is garbage assertions. I have cited document after document. Everybody agrees I am right. You have no idea what you are talking about. Bit order is the same on every computer: the most significant bit is always in the same place. Endianness varies on different computers, but endianness involves the order of bytes in a word, and there is no correlation between RISC/CISC and endianness. Your premise was wrong, is wrong, and will always be wrong. And now you won't even say what you claim I am wrong about, because every time you say anything specific and subject to verification I produce 10 documents that prove you are incorrect.

There is no inherent difficulty switching between RISC and CISC caused by the order of bits or nibbles. The order of bytes could be an issue (a minor one), but it is irrelevant here because ARM supports either endianness. Don't try to confuse people.
 

99.99% of programmers use a higher-level programming language like C or Swift. Very few people get down to assembler anymore.

Yep, it's the "the key is bad, not the battery" conundrum here; you just can't see it!
 

What is your point? The bit order doesn't change if I do ADD eax, 15 versus if I do x = x + 10.

The language doesn't matter. The bit ordering is the same.

And now you are talking about batteries for some reason. Your arguments show no knowledge of CPUs, let alone software. You aren't even on topic anymore.
 
I have no doubt an ARM-based Mac will allow Apple to provide a better experience overall, but I have two concerns. The first is running Windows/Parallels: how will that work, if at all? And second, will this mean that Apple will move to a controlled-app ecosystem like iOS?
Exactly. I think it looks to Apple like only a small part of the user base depends on being able to run some Windows-based stuff (to be honest, I am one of them). Ditching them is an attractive option for Apple.

There is an interesting difference between Microsoft and Apple. Microsoft has long been trying to remain 'backwards compatible', to the point that it was seriously holding them back in innovation. Apple is the opposite. Apple has a history of dropping/ditching (small) portions of their user base, making the platform somewhat risky for power users. I've been hit by Apple's lack of loyalty more than once in the twenty years since I started to use their products (starting with OS X), most recently when they ditched most of macOS Server and never delivered the promised migration to open-source products. Microsoft was the company that followed its users. Apple was the company whose users followed the company. I really hate Apple when it is so disloyal towards its customers.

I think in this case, the population they are ditching might be a lot more important than they think. For example, I am thinking of a university physics student I know, who uses a MacBook Pro but needs to run VirtualBox with Linux to use certain software (e.g. data-analysis packages). Linux on x86 is extremely mature and reliable, so this is easy and performance is excellent. In an ARM setup, the student would need an ARM hypervisor, an ARM Linux, and ARM builds of all the scientific software on that Linux VM. There is such a volume of existing x86-based stuff (a lot of code, too) that while moving from PPC to x86 opened up opportunities, going from x86 to ARM would seriously shrink them in a practical sense.

Minority users may not seem that important to the whole. But ditching them could be the equivalent of not getting your vitamins and trace elements. They only seem unimportant.

Luckily, ARM is bi-endian, so it will be easier to port stuff over from x86 than to a big-endian architecture (assuming ARM on macOS runs in little-endian mode).
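As an aside, byte order rarely has to be a porting problem at all if the code is written defensively. A small C sketch (illustrative only, not from any particular project) of the usual trick: assemble multi-byte values from individual bytes with shifts, so the result is identical on little-endian and big-endian hosts:

#include <stdint.h>

/* Read a 32-bit little-endian value out of a byte buffer.
   Because only shifts and ORs are used (never reinterpretation of
   memory), this behaves identically on any host endianness. */
uint32_t read_le32(const uint8_t *p) {
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}

Code that instead casts a byte pointer to a uint32_t* and dereferences it is the kind that breaks when the host byte order changes.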

The story about CISC versus RISC in the article is over 20 years out of date. Yes, that was the idea, and in the beginning, with the rise of RISC architectures (DEC Alpha, HP PA-RISC, MIPS, even Intel's own 860/960), it was true, but already in the 486/Pentium era Intel was borrowing RISC tricks for its x86 architecture, and the difference has been much smaller since that time than the article suggests. Others did the same. PowerPC is a descendant of IBM's POWER (which is a RISC architecture) married to m68k CISC. The difference was only meaningful at the very start. As St. Wikipedia writes:

It is also the case that since the Pentium Pro (P6), Intel has been using an internal RISC processor core for its processors.
 

There is absolutely nothing CISC-like in PowerPC. (I was the FP and FP-interface designer on the PowerPC x704, and the out-of-order-issue/integer ALU/FP ALU/cache interface designer on various x86 CPUs).

There are very meaningful differences that remain in x86 vs. RISC. For example, writeable instruction pages, complex addressing modes, variable-length instructions, etc.

This requires a much more complicated instruction decoder in order to convert that muck into RISC-like instructions. But it also requires extra hardware throughout the processor to cope with weird CISC anachronisms, including complications in the instruction fetch unit, many extra pipe stages, etc.

It's true that to a certain extent the x86 microarchitecture looks like a RISC engine surrounded by a translator, but the translator infests many other parts of the design and results in compromises that RISC machines don't need to make.
 
What is your point? The bit order doesn't change if I do ADD eax, 15 versus if I do x = x + 10.

The language doesn't matter. The bit ordering is the same.

And now you are talking about batteries for some reason. Your arguments show no knowledge of CPUs, let alone software. You aren't even on topic anymore.

If you read the full entry:
"You're just like my ex who has no idea how the cars engine worked trying to tell me the key wasn't working when turned and needed my car keys to fix the car, not realizing the issue was a dead battery! No matter how hard I tried to explain it to her she was still stuck on the key was the problem.

So I get it! You are stuck on the key as the broken part couldn't be something deeper as you just can't get your head under the hood to learn a bit on how CPU's work back in the day and how that same design is still used today."

It would have made sense!
 
You can say goodbye to Windows compatibility if the Mac goes ARM. That would be a deal breaker for a large number of Mac users who are still tied to the Windows platform. This article does not mention this very significant downside.
That kind of depends a bit on MS, doesn't it? If they find it worthwhile to make the necessary changes to the Windows on ARM source code needed to make it run on Axx chips from Apple, and Apple ports the necessary drivers, it could happen. The question is how much third-party software would run on Windows on Axx SoCs.
What processor is in the 4K? The A13 goes a long way towards doing a LOT of work with the more efficient LITTLE cores.
the A10X
 
Apple could bring back the iBook name and even produce a 2-in-1.

But I object to replacing Intel completely. They need to ADD to their lineup, not subtract.

Yea, I know... Apple is obsessed with subtraction.
It's really annoying; my 2015 MacBook is well overdue for an upgrade. I love the form factor but hate the processor: it chugs when connected to my external display, for crying out loud. The iPad Pro is the next best thing, but it's so limited. I want
"Apple could bring back the iBook name and even produce a 2-in-1."

Great point
This is literally all I want from this transition. If Apple still refuses to make the iPad on par with the Mac, then give us a touchscreen Mac for crying out loud! I really don't understand why they continue to make the iPad so limited, yet make the Mac better in literally every way bar a touchscreen/tablet mode. With an ARM chip, the only thing missing to make the Mac the perfect machine in my opinion would be this.
 

They are clearly heading for touchscreen Macs. Probably a 2022 thing. The signs are all there in the details of how they are integrating UIKit into the Mac, the new UI for the Mac, etc. And when they do it, they will do it right, with touch targets automatically being big enough, etc.
 
Not only that, starting with macOS Big Sur, PencilKit support comes to the Mac.
 
Yep, another good point. It actually wouldn't surprise me if they add Apple Pencil support first (i.e. include the hardware in new form-factor Macs) next year, and touch the year after.
 