However, something like Rosetta 2 potentially has even more information to use in making decisions about dependencies, out of order and speculative execution, etc. It would not be surprising if they are trying to extract even more parallelism at that layer.

Rosetta2 has less information about dynamic dependencies (i.e. it doesn't really know the call graph, or which loads miss the cache and which hit, which can make hundreds of cycles of difference in how long a load takes...er...ok...the memory system in the M1 is fast, so maybe high double digits rather than triple digits, but same basic deal).

Rosetta2 has the exact same information about static dependencies as the CPU and more time to do something about it...but dramatically less information about static dependencies than the compiler had, and arguably less time than the compiler had too (people will complain a bit if the first launch of some x86-64 app like Photoshop takes an extra ten seconds, but they will go ballistic if it takes an extra ten minutes...developers will whine a bit if the highest level of optimization takes an extra ten minutes, but as long as they don't need to use it during most of the development process they will be basically ok with it).

So I highly doubt Rosetta2 is extracting any extra parallelism out of anything.

Leaving the theoretical behind, the M1 has a more relaxed memory ordering than the x86; that helps it run faster, but it exposes latent data races. The M1 also has a mode where the memory ordering model matches the x86 (basically it inserts memory barriers where the x86 would have had an implicit memory barrier). Rosetta2 opts into this stricter memory ordering model so it doesn't expose bugs in the programs it is transcompiling (I'm not sure machine code A to machine code B is strictly transcompiling, but I think of it that way). So I would highly doubt anything in Rosetta2 would attempt to extract more parallelism when it is already going out of its way to reduce some of the speculative execution because it wants to be "bug for bug" compatible with the memory environment that the x86 presents.
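
To make the "latent data race" point concrete, here is a little C++ sketch of my own (purely illustrative, nothing to do with how Rosetta2 is actually built): the classic flag/data hand-off. Code written this way often happens to survive on x86-64 because of its strong store ordering, but on a weakly ordered core the consumer can see the flag before the data unless release/acquire (or explicit fences) are used.

```cpp
// Illustrative only: the classic message-passing pattern.
#include <atomic>
#include <cassert>
#include <thread>

int data = 0;
std::atomic<bool> ready{false};

void producer() {
    data = 42;
    // A relaxed (or plain, non-atomic) flag store here is the kind of latent
    // bug that x86-64's strong ordering tends to hide; release makes it
    // correct everywhere.
    ready.store(true, std::memory_order_release);
}

void consumer() {
    // The acquire load pairs with the release store above.
    while (!ready.load(std::memory_order_acquire)) { /* spin */ }
    assert(data == 42);  // guaranteed only because of release/acquire
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```

Translated code running under the M1's x86-style ordering mode is, roughly speaking, how programs that silently relied on the "hides the bug" behavior keep working.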

Hmmm, or maybe I didn't leave the theoretical, just stepped aside into a different theoretical ;-)
 
The UltraSPARC V I worked on was actually sold as the UltraSPARC IV. Code name was Millennium. And both the Athlon 64 and Opteron also had deep reorder buffers.

How big was the UltraSPARC IV's reorder buffer? (yes, I attempted to do my own research, which amounted to a few Google searches and reading two entire Wikipedia articles)

I'm assuming that despite having a larger dollar budget the transistor budget wasn't letting the reorder buffer run into the 600+ entry range, and given the vast gulf in time separating the US-IV and the M1 that would be understandable. I'm just trying to get a rough feel for what used to be "large" v. now.
 
Not exactly surprising given what the last generation of Rosetta was capable of

Hmmm, that's really not how I remember the last generation of Rosetta.

It was "fast (enough)", but not "fast". Photoshop d r a g e d. It was way way faster then an emulated PowerPC would have been on the CORE1, but oh wow was it not fast.

Also I hated debugging any OS bugs that involved Rosetta. Fortunately only a tiny handful came my way. Even fewer once I found out I was getting them in part because one of the many Macs in my office was the same dual mirror-door 400MHz (I think!) model that Rosetta attempted to present itself as (well, maybe not 400MHz). So I gave it to another group for their debugging, and then I got to send most of the bugs to them ;-)

(I did miss that machine because a pair of slow CPUs caught different race conditions from fast ones, so it was a valuable part of my test environment)

Anyway it looks to me like Rosetta2 is way, way faster than Rosetta1 was. Which is awesome. One less reason to drag my feet about replacing any of the Macs in the house. Hopefully they do a 15" or 16" MacBook of some sort soon.
 
Same.

I already do, because 13" MacBooks (and laptops in general) make bad virtualisation hosts anyway.
I have an ESXi machine in the next room in my home office, but I'm missing Fusion Pro right now because of the features they added a couple of versions ago to allow remote control of the VMs on my server.

Speaking of VMs, now that macOS has a virtualisation framework I wonder if they just used code from the FreeBSD bhyve project as a basis for it.
 
Speaking of VMs, now that macOS has a virtualisation framework I wonder if they just used code from the FreeBSD bhyve project as a basis for it.

I suspect they did.

But yeah, on my desk here I have an i7-6700 with 64 GB in it running Linux for virtual machines, a vSphere cluster in the next room. At home, an r7 2700x for VMs.

Setting up an EPYC Hyper-V cluster (6x 32 cores, 512GB RAM each) at work at the moment.

Plus azure :D
 
It is a great machine. I got my B/F’s hand-me-down when he moved to the Mac Pro.

Seems like an opportunity for someone (I mean that seriously). One of two solutions: Reverse engineer the software and port it to something modern or figure out how to support them via something like QEMU on modern hardware that is still supported.

I am curious what you use this fab gear for. Given its age (anything supporting Windows 3 is ancient), you are not researching fab techniques, and it seems that teaching people how to fab chips with ancient tech would have issues. Given the cost to maintain this gear and all the hazmat issues, I wonder why you do not outsource the fab to somewhere else rather than do it in house. One would have access to more modern processes, etc., with fewer issues.
I'm a materials scientist who develops novel materials using semiconductor fabrication techniques. I spend a lot of time teaching principles & techniques, which absolutely doesn't require the latest equipment. For example, something like a profilometer or wafer bow tester doesn't need the latest hardware - a tool from 1985 operates according to the same physical principles as a modern tool. The software sucks, but the physics are the important thing. In fact, many times newer software will abstract a lot of the technical considerations from the user (think auto-calibration, auto alignment, etc.), while older software gets the user closer to the bare metal. For teaching I prefer older, more manual tools, because the operator needs to spend a lot more time thinking about what they're doing, facilitating the learning process.
 
How the heck is Apple so far ahead in performance? It's incredible how much of a lead they have it's like alien technology.
It’s because of the SoC. Like the A-series chips in their mobile products, Apple is the only one developing both the OS and the SoC (software & hardware). ...Surely they have been working on this for years.

And, one thing to note that may not be common knowledge is that iOS and all the other Apple OSes are based on the same core as macOS.
 
[...]
1) This is the MacBook Air and thus does not need 32GB RAM

2) These are ARM machines and as such are more efficient than x86, which is a notorious memory hog!

3) The clue is in the name, Unified Memory Architecture!

4) I am sure that other Macs such as the 16" MacBook Pro and iMacs etc will all have more RAM available to soothe the fevered brows of those who either do not understand these things or maybe do need a bit more memory.

First off, it is fantastic to have enthusiasm, but we should try to make sure it doesn't interfere with facts.

Point one, you are absolutely right, the old MBA topped out at 16G, the new one doing the same is no big deal.

Point two, not so much. Most of the memory used in Mac apps is data, not instructions, and the data will be the same. The ARM instruction encoding tends to be less compact than the x86's; that is in general true of any RISC v. CISC CPUs. Fortunately, as I noted, most of the memory is used by data, not instructions, so total memory needs should be similar...but not less.

Point three just means the graphics subsystem uses the same memory as the main CPU, so you don't get a "free 4G" by having the GPU use its own pool of VRAM. Fortunately, comparing the MacBook Air M1 v. x86, neither did the x86 (integrated graphics parts tend to also be unified memory systems). "Unified Memory Architecture" is really more following the marketing maxim "if you can't fix it, flaunt it!"....except not quite. UMA systems are common (look at the C=64, or original Amiga...or anything before VRAM became vaguely affordable) and generally low-ish performance. Apple's UMA is actually pretty high performance.

Point four, that is the big one, and I'm with you. Macs that traditionally have >16G of RAM (at least as an option) will have it when they make the move to ARM.

One thing you left out though is the M1's flash interface (SSD) is about twice as fast as the Intel MBA (I don't know how that compares to the outgoing Intel 13" MBP, or the Mac mini). That is important because for the most part macOS trades off "disk" I/O for RAM when it runs out. So the M1 MBA pays half the price when it is forced to page, or when it is forced to reduce the size of the disk buffer cache.

But I think the biggest point is one we totally agree on, the 16" MBP replacement will have options for more RAM. The iMac and iMac Pro replacement will have options for more RAM. The Mac Pro replacement (if they keep that product in the line up) will have options for more RAM.

The bigger open issue is will they support an external GPU, or add more GPU cores, or have we seen the best graphics performance we are gonna see until the M2 comes out. I expect we won't know until we see the next round of new ARM Macs.
 
[...] but can't other ARM manufacturers like Qualcomm and Samsung just open it up and copy parts of it or would Apple say " I see what you did there" and sue them? I mean others will reach 5nm and Qualcomm and Samsung also make ARM. I guess for Apple it all comes down to better designers, innovations and patents in the future in order to keep their lead. :)

It is kind of hard to figure out what made something fast by reverse engineering. I mean you can tell for example the rough size of the A14 reorder buffer by feeding it work that needs X stations and see when it gets slow. Or you can saw off the top and poke around with a microscope for a few weeks until you find something that looks like it might be a reorder buffer and simply count the number of regular structures and guess they are each entries in the ROB. What you can't do so easily is determine all the other tradeoffs to make that many entries a good idea. Like if you take an existing ARM design that has 64 reorder buffer entries and say "the A14 seems to have between 617 and 650 entries, so I will change mine to have 664 entries!" you are likely to discover it doesn't help much. The rest of the chip has to actually decode instructions fast enough to fill the buffer in enough cases that it is useful to spend the transistors on reorder buffers and not more L2 cache. The rest of the chip needs to be able to drain entries when dependent data is discovered, otherwise you just fill it once and it stays full forever and spending the transistors on anything else would have been better.
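
Here is a rough C++ sketch of that "feed it work that needs X stations" probe (my own illustration; the chain size, filler counts, and iteration counts are made-up numbers, and a serious measurement would use inline-assembly fillers and careful calibration). Two independent cache-missing loads are separated by N cheap filler operations: while everything between the misses fits in the reorder buffer the misses overlap, and once N exceeds the ROB capacity the second miss has to wait, so the time per iteration jumps. The N where the jump happens approximates the ROB size.

```cpp
// Sketch of a reorder-buffer probe: look for the filler count where the
// per-iteration time jumps because the two misses stop overlapping.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

int main() {
    // Pointer-chasing chain much larger than the caches, built as a single
    // cycle (Sattolo's algorithm) so each step is a dependent, likely-missing load.
    const size_t kNodes = size_t{1} << 24;  // ~128 MB of size_t entries
    std::vector<size_t> next(kNodes);
    std::iota(next.begin(), next.end(), size_t{0});
    std::mt19937_64 rng{42};
    for (size_t i = kNodes - 1; i > 0; --i) {
        std::uniform_int_distribution<size_t> d(0, i - 1);
        std::swap(next[i], next[d(rng)]);
    }

    volatile long sink = 0;
    for (int fillers = 16; fillers <= 1024; fillers *= 2) {
        size_t a = 0, b = kNodes / 2;  // two independent walks of the chain
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < 200000; ++i) {
            a = next[a];  // miss #1
            long x = 0;
            for (int f = 0; f < fillers; ++f) x += sink + f;  // filler work
            sink = x;
            b = next[b];  // miss #2: overlaps with #1 only while the window
                          // of in-flight work still fits in the reorder buffer
        }
        auto t1 = std::chrono::steady_clock::now();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
        // Printing a and b keeps the chases from being optimized away.
        std::printf("fillers=%5d  %9lld us  (%zu %zu)\n", fillers, (long long)us, a, b);
    }
}
```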

Other things just aren't obvious from looking at all the transistors anyway. It is all the same gates, but they are placed in different locations so you can use a skewed clock; how do you figure out why a particular thing got placed in a particular location? Was it needed in order to let the clock run a little faster, or is it there because it needed to be somewhere (close to three other things), and that spot was available?

Think of it like some company makes a graphics engine for video games. You can disassemble it, but it is hard to tell why it is doing things. Not impossible, but hard. It may take you say 3 years of looking over it in fine detail to tell what is a big deal and what isn't (and you will only be partly successful), and then when you start to incorporate those lessons into your product the other company has had three more years working on it and improving it. Also some of the lessons aren't easy to apply to your graphics engine. Like if they decided "screw triangles, quads are where it's at!" you have a whole engine based on triangles; you would need to throw it out and start again from scratch.

You _could_ just lift an entire CPU/GPU core out and use it unmodified, but yeah, you will probably be caught -- if it is obvious enough TSMC will probably notify Apple. If it is less obvious Apple may not notice until the product ships, then they just sue you later. Plus it still takes time to do anything useful with that CPU core without the I/O subsystem and the cache and all the GPUs and such. Even if you copy all that, then you need to figure out the boot sequence and make a product.

Most companies do some actual competitive research, and they will buy other companies' CPUs and surgically saw off the top of the chip and poke around. They do a half-hearted look for "did they steal any of our design?", and also see if anything obvious is there. They don't tend to find a whole lot. They _do_ tend to generate reports about what market segments the competitors are going to be strong and weak in, and make guesses at actual costs and such. Like "the Raspberry Pi isn't really a finished product and will have no impact on the Mac's video editing or audio editing or general product use, but it WILL impact the hobby market where we sell Macs because they are an accessible Unix; the RPi isn't actually anywhere near as accessible, but with a $35 to $100 price point the rough edges will tend to be overlooked; in the educational market....blah blah blah"

A real report might also estimate how much money the company made in each market last year/Q/whatever, and an estimated size of the market (and yeah, this isn't exactly the same thing because the RPi isn't a CPU, it is a system, the CPU is some Broadcom part...but Apple is actually a system company so they do more competitive analysis on systems than chips; also Apple tends to be _less_ interested in what other companies are doing than any other large company I worked for)
 
This is crazy performance. Too bad there's only 16 GB RAM max for now ...
Actually, it is too bad there is no RAM EXPANSION CAPABILITY.... What you need today may very well be different a year or two down the road and it can sometimes be hard to predict your future needs... and needs to change. Changing a WHOLE computer just for extra RAM (when one with more RAM gets released) is ludicrous.
 
To be more specific, they’re optimized for running apps native to Apple Silicon. They’ll still run apps developed for Intel via Rosetta 2 but those aren’t optimized.

Interestingly that isn't exactly true. The M1's normal memory ordering model is "more relaxed" than the x86/x86-64. That is to say, if a programmer desires other threads to see loads and stores in the order it perceives them, it absolutely needs to add more memory barriers (which cost performance...as do the implicit memory barriers that the x86-64 inserts for you). The M1 has an additional optional memory ordering model that matches x86-64. Which Rosetta2 uses. That is definitely the M1 being optimized to run Rosetta2 apps. Not as a major design goal maybe, but definitely as a real goal.
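
As a concrete (and very simplified) C++ illustration of my own of the difference between ordering you have to pay for with barriers and ordering you get implicitly (this is just the general idea, not how Rosetta2 is actually implemented):

```cpp
// Illustration only: publishing a value behind a flag.
#include <atomic>

std::atomic<int> data{0};
std::atomic<int> flag{0};

// What you do when targeting a weakly ordered CPU and you want x86-like
// store ordering for code that never asked for it: add explicit barriers
// (and pay for them).
void publish_with_explicit_fence() {
    data.store(42, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_release);
    flag.store(1, std::memory_order_relaxed);
}

// On x86-64 (and, for translated code, under the M1's optional x86-style
// ordering mode) ordinary stores are already seen by other cores in program
// order, so nothing extra is needed at the machine level. Portable C++ still
// has to state the intent, which is what the release store does here.
void publish_release() {
    data.store(42, std::memory_order_relaxed);
    flag.store(1, std::memory_order_release);
}

int main() {
    publish_with_explicit_fence();
    publish_release();
}
```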

I'm not sure that was the only thing, as alluded to by Joe Groff (an Apple employee, although I think he is off in Swift land/compilers, not CPU design) on Twitter:




Do you think it’s possible for Apple to move the memory out of the package and allow user replaceable memory?

Yes, but given that it has been roughly a decade since Apple decided the laptop line would be better served by soldered in non-user-upgradable RAM (for reliability purposes) that would be a reversal of a long standing trend.

I'm not saying it isn't technically possible, but chances of it happening in reality seem very low.

Actually, it is too bad there is no RAM EXPANSION CAPABILITY.... What you need today may very well be different a year or two down the road and it can sometimes be hard to predict your future needs... and needs to change. Changing a WHOLE computer just for extra RAM (when one with more RAM gets released) is ludicrous.

Granted, yes, it is way nicer to be able to extend the lifetime of a computer by being able to upgrade it, and RAM and mass storage are two places where traditionally it has been possible to do so, and people tend to actually do it from time to time.

However Mac laptops have not supported that for quite some time. So this isn't a regression in the M1 MacBook Air nor the M1 MacBook Pro. It is a regression for the M1 Mac mini (and if it comes to the iMac it is also a regression there as well).

I personally won't worry about this until I see what happens on the iMac (and I assume there will be more Mac mini models as Apple is continuing to sell Intel models...although for the most part I'm not in the market for a Mac mini, and it mostly doesn't matter personally to me if they screw those up...but it does actually matter for the health of the ecosystem, so yeah, if they don't eventually address this it will bug me).

(...although maybe this shouldn't bother me, my personal 2012 iMac ran out of the supported-by-macOS window without me upgrading its RAM, and also ran out of me really caring about it in general as I only use it as an external display for my faster MacBook Pro...and I never upgraded its RAM despite that only needing a screwdriver; the iMac Pro my work purchased for me has 96G of RAM and while it too is upgradable I don't imagine I'll actually need more RAM before that machine ought to be replaced by something else...yet despite some evidence that I shouldn't care if a future iMac/iMac Pro has expandable RAM, I do)
 
Yes, M1 is an exciting step forward for personal computers. But again, it doesn't make what we have today and still have to use less beneficial.

Well strictly speaking, it does make today's Intel Macs slightly worse. Normally Apple supports (with new OS upgrades and such) old Macs for around a decade. They don't promise any specific span of years though. Across CPU transitions this number drops a lot. So for a new MacBook Pro 16" purchased in 2019 you could have expected to still be running the current OS in 2027 to maybe 2029. I expect it will actually not run a new macOS in 2025 now.

That may or may not be a big deal. Any given customer may only keep those Macs for a much shorter time period and replace them as newer better ones become available anyway. Or be on a "new every two" plan or something. Or the particular workload might be fine running on older versions of macOS for a few years anyway (lack of security fixes might be an issue though).

Overall though, yeah, the Intel computers purchased last year run exactly as fast this week as they did two weeks ago (except for a statistically small percentage that broke in that time). They still do the same jobs they did before at the same speed.

It'll be interesting to see how passive cooling stacks up with continuous heavy load (e.g. iMovie export).

Apparently it takes a bit over ten minutes of....mmmm....it was a video export (I didn't do this test, I saw it on one of Rene Ritchie's YouTube videos), but not iMovie, for the M1 MacBook Pro to spin up its fans (and at that point they were apparently still very quiet, easiest to tell by putting a mic near the exhaust as opposed to listening with human ears). At that point it started having about a 15% performance advantage over the M1 MacBook Air. The Mac mini was still passively cooling and holding the same speed.

So I think we need a load heavier than video export to see a big difference! Maybe my next personal laptop purchase will not end up being a Pro; I might be happy with a 15"+ Air (if they make them, and if the M1 16" Pro doesn't have some other compelling things going on).

I mean, just think about Apple TV with an m1-style chip in it - will Apple finally try (post-Pippin) to compete with consoles for real?

Yeah, that would be cool...but Apple's real problem with video games isn't technical. Or maybe "isn't just technical". They don't put in the work to build the kind of relationship with the big video game makers. They don't push for big AAA exclusive windows/DLC. They focus on the "casual" market. Which might actually be a good strategy, but it isn't what you need if you want to be the next Xbox or PS(some-number). It also won't get you to be the next Nintendo Switch either (that needs a lot of focus on in-house game talent, and probably a decade or two to generate valuable in-house entertainment IP). Apple has the money to do that kind of thing, but I don't know that it has the interest in doing that kind of work. If you see a new SVP of gaming (or any euphemism for that), then maybe. Especially if they hire someone with that kind of role away from MS or Nintendo.

At the technical level I have no doubt Apple could make something that does a great job, and even has the same sort of price point as the Xbox Series X or PS5. Although if they haven't been working on it since 2017 they won't be launching something like that in 2021.

I hope the CEO of Intel threw up when they read this. Hopefully this is enough to stop Intel from continually sandbagging. Very exciting time in processors; this is going to push things ahead by leaps and bounds.

I doubt the M1 is making Intel's CEO happy, but the vomiting is coming from looking at AMD's performance numbers. There are a lot of moats between Apple and making significant dents in Intel's business (past the non-trivial, but surely not more than 7%, of CPUs that Apple has been buying from Intel to put in Macs). AMD on the other hand is a direct competitor to basically all of Intel's CPU business. They aren't socket compatible, but from the software layer they are a direct replacement, and even at the hardware level they are close enough for many many many of Intel's customers (basically any that actually design their own motherboards, as opposed to ones that just use Intel's reference designs, and even there AMD provides reference designs).

In other words the Apple M1 is like Intel's CEO accidentally hammering his thumb and not the nail. It is more than noticeable. It hurts. AMD is like a bandsaw accident. Probably not fatal, but it could be. It'll also be life altering.

Steve Jobs never wanted Macs to be user-upgradeable or serviceable.

Yeah, on the other hand Steve Jobs changes his mind (well, changed). He was just as against having 3rd party apps on the iPhone. That wasn't a front, it was real. Likewise he changed his mind about how upgradable Macs should/could be. Although a large part of that was thinking that it could be done at Apple Stores for people that wouldn't otherwise get any benefit from having a Mac with expandable memory because it is too hard for most people to do. Of course the expandable memory went away because it was an AppleCare call driver/major source of service repairs (not just the result of doing a bad upgrade, but also just RAM working loose from sockets in portable devices, which is why the desktops kept it long after the laptops got rid of it).

Now, I understand that some folks want to know what the behavior of the systems will be when pushed to the edge, but “pushed to the edge” should in no way be confused with “day to day usage”!

Well some people have day to day usage that pushes machines to the edge. My common work load on the MacBook Pro hurts it for about an hour. If I had a newer MBP with 32G of RAM it is actually more like 15 minutes (I have coworkers with the newer model). Don't feel bad for me, I also have an iMac Pro, and normally arrange to do that sort of work there. I absolutely won't be attempting to get a new work laptop until an ARM system with 32+G is available.

I also have very little doubt that one is coming (not no doubt mind you, just very little doubt).

I can also afford to wait on that because I'm WFH for at least six more months, so I really don't do much of anything on that MBP.

iPad Office is nothing but a gimped, pointless version with 90% of its features missing. If you really believe what you have written here then you have no idea how to use a spreadsheet or word processor.

I think a vast majority of people really do know how to use word processors and spreadsheets and really have no need for the vast majority of the features.

I write a lot of design docs in a word processor. I really don't need to do much more than import a template and brain dump. Copy and paste my word soup into more organized clumps. Rewrite to group related thoughts. Remove any "supporting facts" that actually lead nowhere. Maybe make some process graphs or some UML-ish diagrams and slap them in (we have some great tools to do that, so even if the word processor can make UML diagrams I would rather generate them from my actual data definitions). Occasionally I make some directed acyclic graphs and shove them in, and again I have nice tools to do that...although here I sometimes don't drive them from live data, so if a word processor really could do that directly I would have a modest benefit.

Oh, and spell check. Spell check is great for me.

That is a pretty small percentage of what modern word processors do.

Likewise my use of spreadsheets is simple. Mostly because the data I feed into them is simple, and my desires for them are simple. If they had much more advanced graphing I might sometimes get some use out of that. If they could handle vast data sets (like a half billion entries) I might get some value out of that, but that is debatable because I have other tools that are built into the company's data flow. So if I wanted the weighted moving average of some field, and the spreadsheet didn't die when I dumped that much data into it, I would need to wait for the WMA, while the tools we have compute things like that as they extract, "in case" they are needed (they also live update some of those as data comes in because they drive alarms and other live production things).

I do know how to use a spreadsheet, I just really don't need much from them. Likewise most people who use spreadsheets don't need very many of the vast quantity of features they have.

Lots of people may need different subsets of the word processor and spreadsheet features, but only a tiny minority of people need very many actual features from them.

[sorry double quoted this]
Linux has ARM support and the M1 can easily be supported. Microsoft could choose to support Windows on it. The T2 allows disabling its boot volume signature checks.

The M1 doesn't support disabling boot signature checks. However you can sign your own boot images (self-signed cert). I'm not sure that would work well for MS, nor for someone wanting to make an M1 Linux distro. However Apple did show off a fully Apple-supported hypervisor with some Linux distros on the M1 & also with Docker.

I think that would work well for a whole lot of Linux use. I'm not sure it is as valuable for Windows (i.e. it is more of a Parallels/VMware solution for a hypothetical ARM M1 Windows image than it is a Boot Camp for M1 ARM Windows).

I hope something like Virtual PC comes along and lets me keep running my Windows XP code.

Hmm, I think you might want to look at Bochs. It should have no problem doing what you want, although it probably won't make for a fast virtual XP machine.

A LOT of companies use Macs for development and a lot of them are dependent on VMs and virtualization like Docker.

Apple did show off their own virtualization system including running ARM Docker containers (or are they called instances? I'm not a Docker user, sorry). The downside here being if you really wanted x86 Docker instances this isn't going to help (but isn't Docker a Big Deal because it is easy to run instances on various cloud services and in-house clouds? I'm not sure how important this is...then again almost all my Linux experience has been treating it as a generic Unix-ish system & I probably use it on the RPi more than on x86 as it is!).
 
yup, it's hard to get memory-starved when you can at most run only two applications simultaneously, isn't it :p
[...]

Not really. I nominate "Chrome (with a few tabs open), plus any app using non-negative amounts of RAM"

Or if I'm being less pejorative Chrome + Xcode sucks up a lot of RAM, esp if I'm compiling a big Swift program with LTO and I pack rat Chrome tabs like the web is going away tomorrow.
 
Interestingly that isn't exactly true. The M1's normal memory ordering model is "more relaxed" than the x86/x86-64. That is to say, if a programmer desires other threads to see loads and stores in the order it perceives them, it absolutely needs to add more memory barriers (which cost performance...as do the implicit memory barriers that the x86-64 inserts for you). The M1 has an additional optional memory ordering model that matches x86-64. Which Rosetta2 uses. That is definitely the M1 being optimized to run Rosetta2 apps. Not as a major design goal maybe, but definitely as a real goal.

I'm not sure that was the only thing, as alluded to by Joe Groff (an Apple employee, although I think he is off in Swift land/compilers, not CPU design) on Twitter:
That's interesting. Thanks for the clarification.
 
Well some people have day to day usage that pushes machines to the edge.
No, I agree completely. My own daily use case would make an M1 utterly useless. But, there’s a difference between having an edge case and recognizing it as such, and
having an edge case and inferring that MOST people will expect to use the system in the same way.

You seem to understand your “day to day” far outstrips your average person’s “day to day”, and so do I :)
 
I'm using my computers for work. Time is money.

be told that it will take 3 weeks to repair, go back to my car to find a parking ticket, binge watch Netflix series for 3 weeks while I don't earn money and lose customers

You should look into Apple's small business unit & the SBU support (they have reps at many Apple stores, at least the mid to large stores). They can generally get you things like free loaner units when you have a repair. You don't really even need to be a big customer for it to happen. I know a one-man app developer company that has an SBU rep.

I do not claim to know all the reasons for every decision Apple makes, but I also do not accept the view that every decision is just to maximize profit. To me, this is the same as many other decisions about which people on here complain. I look at the total package and consider based on it.

Well, the RAM being soldered to reduce failures is about money, it is just a longer-term view of money. It is fewer customers with a broken Mac, so more that think of it as reliable when they go to buy another computer. It is fewer customers that buy 3rd party RAM that isn't quite right and end up with a Mac that is flaky and crashy, and, well, may as well buy a crashy Windows PC since the last Mac was crashy.

Not necessarily. When parts are replaced aftermarket, the HOPE is that everyone’s not throwing the old DIMMs in the trash, but that’s wholly up to the consumer. If the replacement parts ARE thrown in the trash, then that device is adding to the landfill even while it’s still in use. This means the total environmental impact when that system is then thrown into the trash is greater than for a device that had no replaceable parts.

Yes, but the device with replaceable/upgradable parts lasts longer. So if I could upgrade a MBP from 16G to 32G and make it handle the world for four years as opposed to two then over a four year period I discard one MBP and one extra set of DIMMs as opposed to two MBP systems. OR if a system is repairable and without a repair has an avg lifespan of 6 years and with repair it lasts 9 you get one less Mac discarded over 18 years (2 v. 3) even if you discard an extra whatever widget breaks.

Of course if a repairable system _also_ breaks more frequently (like socketed DIMMs on a laptop) the math changes a lot.

If you just throw away the TV, it’s one TV. If you replace a circuit board in your TV, throw away the old circuit board, then eventually throw away the TV (it won’t last forever, or you may just want a new one), the end impact for you personally, is that 1 TV PLUS that 1 circuit board to the landfill (not only is it near, it’s more). Replace more parts? More parts to the landfill.

It’s not a lot, but, again if the person in question is a person that will throw things away instead of recycling, then it’s more in the landfill over the lifetime of that device than JUST the TV.

More things, but fewer TVs. The things are each a lot smaller than a TV, and one would assume are less harmful (as a TV already contains them).

Apple stops supporting systems with software after a period and stops repairing the hardware after a period, and when they do, they announce it fairly far in advance. As a process, it works for me.

In the US Apple supports hardware for a decade (CA law), but the software is not covered by that law, and you don't get a lot of heads up. When 10.7 comes out they don't tell you what 10.8 won't support. You can guess, but sometimes you are wrong (some releases are the exact same list as the year before...other years they drop a lot of devices...sometimes ones they don't need to, because it is easier to explain "iMac after year X" as opposed to "that GPU is a pain, so we gave up on it, the other iMacs from that year are great!"). When I used to work at Apple we didn't know what the supported list was until WWDC. In some cases we were still trying to expand it in the lead-up to the WWDC builds!

I'm really curious if Apple will ever produce a higher end iMac without a monitor. Especially because it's nearly 2021 and I think it soon won't just be enthusiasts using ultra-widescreen monitors.

I think the iMac is defined as having an integrated monitor. The Mac mini is the inexpensive no-monitor machine, the Mac Pro is the expensive one. Traditionally the Mac mini was made from laptop parts and not super high performance. It looks like in the ARM era that may not be as big a deal (or maybe not happening at all). It might be a good machine to watch.
 
Yes, but the device with replaceable/upgradable parts lasts longer. So if I could upgrade a MBP from 16G to 32G and make it handle the world for four years as opposed to two then over a four year period I discard one MBP and one extra set of DIMMs as opposed to two MBP systems. OR if a system is repairable and without a repair has an avg lifespan of 6 years and with repair it lasts 9 you get one less Mac discarded over 18 years (2 v. 3) even if you discard an extra whatever widget breaks.
Lasts longer, but not forever. When you’re thinking on the scale of the lifetime of a person you can say “I only ever threw 8 memory sticks in the trash, but I kept the computer the WHOLE TIME...” But you’re eventually going to die. That laptop gets added to the landfill IN ADDITION to the 8 memory sticks. So, over, say, a 200 year span, you’ve contributed that laptop plus an additional 8 memory sticks. Which is more.

More things, but fewer TVs. The things are each a lot smaller than a TV, and one would assume are less harmful (as a TV already contains them).
I believe the thought is that repairing keeps things out of the landfill longer. AND that is ABSOLUTELY true! But still, in the hands of most people, either before OR after those people die, that stuff is STILL going to the landfill and the TV will add to all the “things” that are already there. UNLESS you’re a person that recycles everything (and if you are, then none of this matters because your stuff isn’t going to the landfill) the total impact on the environment is greater when repairing things.
 
Well, the RAM being soldered to reduce failures is about money, it is just a longer-term view of money. It is fewer customers with a broken Mac, so more that think of it as reliable when they go to buy another computer. It is fewer customers that buy 3rd party RAM that isn't quite right and end up with a Mac that is flaky and crashy, and, well, may as well buy a crashy Windows PC since the last Mac was crashy.

First, I did choose my words carefully; I said I did not believe it was about maximizing profit. If your argument is that they want to make better and more reliable products to maintain your brand, I agree with you. That is not what most people mean when they say it "is about money" or the other common phrase: "it's just greed".

Yes, but the device with replaceable/upgradable parts lasts longer. So if I could upgrade a MBP from 16G to 32G and make it handle the world for four years as opposed to two then over a four year period I discard one MBP and one extra set of DIMMs as opposed to two MBP systems. OR if a system is repairable and without a repair has an avg lifespan of 6 years and with repair it lasts 9 you get one less Mac discarded over 18 years (2 v. 3) even if you discard an extra whatever widget breaks.

Of course if a repairable system _also_ breaks more frequently (like socketed DIMMs on a laptop) the math changes a lot.

That is one of two arguments, the other being that it results in smaller, lighter units that consumers prefer.

In the US Apple supports hardware for a decade (CA law), but the software is not covered by that law, and you don't get a lot of heads up. When 10.7 comes out they don't tell you what 10.8 won't support.

No, but Apple announces which machines will be supported when the release is announced, typically 3-5 months before it happens. Further, they continue to provide security updates for several years after they stop supporting new features on a release. I am not sure what "heads up" you would expect? One always has several years from when a system ceases to be supported by new OS releases until it stops receiving security updates. How long do you think that should be?
 
Lasts longer, but not forever. When you’re thinking on the scale of the lifetime of a person you can say “I only ever threw 8 memory sticks in the trash, but I kept the computer the WHOLE TIME...” But you’re eventually going to die. That laptop gets added to the landfill IN ADDITION to the 8 memory sticks. So, over, say, a 200 year span, you’ve contributed that laptop plus an additional 8 memory sticks. Which is more.

(I'm assuming you mean 20 year span, not 200...I mean if I manage to live 200 years I'll take it, but I'm not expecting to manage it....)

Ok, so case 1: you can slap in new DIMMs and make a laptop "good enough" to last your entire life (this is not going to actually be true, as more things need upgrading than RAM), so at the end of life you have one laptop and 8 memory sticks. I think that is your argument; if not, I'm sorry, I seem to have misunderstood you.

My assertion is if you can't make the computer last those 20 years because you can't upgrade it, you toss the entire computer and get a new one. At a minimum once. Maybe more like every five to ten years. So the "no upgrade" case is two entire laptops, but no extra DIMMs. Or maybe it is 3 or 4 laptops, but still no extra DIMMs. That is a lot more than one laptop plus 8 sticks.

If your argument is that they want to make better and more reliable products to maintain your brand, I agree with you.

I think we are pretty much in agreement then ;-)

That is one of two arguments, the other being that it results in smaller, lighter units that consumers prefer.

Yeah, I expect those things are also true. I was at Apple when we made this change, and the presentation I saw was that bad RAM upgrades were a major AppleCare call driver, and in addition a major root cause of dead and flaky Macs brought to Genius Bars (or repair centers Apple had any insight into, at any rate). Fixing that was the major motivator. Trimming the weight of the sockets out of the system was probably not a big deal at the time. Saving the physical space, and no longer needing to figure out a way to make them easy to access but protected, was likely also a minor motivator. The headliner though was absolutely "breaks less, so people don't have broken Macs" ("AppleCare call drivers" were a big deal to fix, not because we saved the COST of handling the AppleCare call, but because customer sat goes up when they don't have to call about a problem...or actually it goes down when they need to call).

Further, they continue to provide security updates for several years after they stop supporting new features on a release. I am not sure what "heads up" you would expect? One always has several years from when a system ceases to be supported by new OS releases until it stops receiving security updates. How long do you think that should be?

Expect? I don't really expect much, because I tend to imagine Apple will do in the future what they have done in the past, so Macs tend to get updates over five years, but under ten, with exceptions on both ends.

I would _like_ an explicit policy, even if the policy is "OS upgrades for five years or more, security upgrades for 7 years or more" (i.e. no actual change except my good guess goes from guess to something one can count on).

Having worked at Apple I understand it can be a gigantic pain to support older hardware, and also that sometimes you don't know how much work it is until you actually go do it. So I don't expect them to suddenly decide "screw it, we will support everything for a decade!"

Besides Apple has been making a ton of money doing exactly what it has been doing, they don't need my help to set policy. (it isn't like that was my job even when I was there, I merely carried out policy, and gave feedback back up the management chain if I thought things were stupid, or could use a minor nudge...occasionally it moved things, normally it didn't)
 
I'm assuming you mean 20 year span, not 200...I mean if I manage to live 200 years I'll take it, but I'm not expecting to manage it....)
No, I meant 200. If we’re really talking about the environment, the total impact of our actions will be felt long after our short little blip of a life is gone! :)

Ok, so case 1: you can slap in new DIMMs and make a laptop "good enough" to last your entire life (this is not going to actually be true, as more things need upgrading than RAM), so at the end of life you have one laptop and 8 memory sticks. I think that is your argument; if not, I'm sorry, I seem to have misunderstood you.
I believe you understand me ok. For the case of the “landfill dumper”, that device is eventually going into the landfill, no doubt, either before or after they’re dead.

My assertion is if you can't make the computer last those 20 years because you can't upgrade it, you toss the entire computer and get a new one. At a minimum once. Maybe more like every five to ten years. So the "no upgrade" case is two entire laptops, but no extra DIMMs. Or maybe it is 3 or 4 laptops, but still no extra DIMMs. That is a lot more than one laptop plus 8 sticks.
I see your point from the human side of things. While recycling is best over a lifetime, if less mass is thrown away (a laptop and a few DIMMs vs. multiple laptops), then repairing or upgrading seems to be better.
 
Actually, it is too bad there is no RAM EXPANSION CAPABILITY.... What you need today may very well be different a year or two down the road and it can sometimes be hard to predict your future needs... and needs to change. Changing a WHOLE computer just for extra RAM (when one with more RAM gets released) is ludicrous.
Think of it as an appliance like a phone, TV, or PVR. It will work very well within the parameters it was designed for, for many years. You'll get 5-10 years of useful life before software advances require more RAM. By this stage, you would probably want to upgrade anyway to take advantage of better performance. All laptop manufacturers are likely to follow this pattern; it's cheaper for them to make, and they get a new sale every few years.

These are not hobbyist or enthusiast machines to tinker with; if you enjoy that, then a modular desktop is your best bet.
 
The performance is remarkable in single core mode, especially if you consider the performance / power consumption ratio.
Now in the multithreaded area, it is still good, but it is worth noting that a simple AMD Ryzen 5800x will outperform it (about 8K for M1 to 11K for a 5800x).

Still, I would be interested to see how this M1 performs under Logic Pro or another audio sequencer with a heavy AU/VST plugin load?

I am impressed not because of the performance only, but also because for a first gen in this category, it already makes the competition flush red in single core tests, and who knows maybe next year the multicore performances will make us forget about intel architecture once and for all :)
 
The performance is remarkable in single core mode, especially if you consider the performance / power consumption ratio.
Now in the multithreaded area, it is still good, but it is worth noting that a simple AMD Ryzen 5800x will outperform it (about 8K for M1 to 11K for a 5800x).

Still, I would be interested to see how this M1 performs under Logic Pro or another audio sequencer with a heavy AU/VST plugin load?

I am impressed not because of the performance only, but also because for a first gen in this category, it already makes the competition flush red in single core tests, and who knows maybe next year the multicore performances will make us forget about intel architecture once and for all :)
This is true, but it made me chuckle. The Apple M1 is certainly not a supercomputer, but think about your comparison for a minute.

You are comparing an ultrabook chip that consumes 10-15W under maximum load to an 8-core/16-thread desktop CPU with a TDP of 105W. https://www.pctechreviews.com.au/2020/11/06/reviewed-amd-ryzen-7-5800x/

It is astonishing that we are even talking about these two categories of chips in the same sentence :)

A fairer comparison might be with the AMD 4900HS, as shown by MaxTech. Note that while the AMD 8-core/16-thread chip does beat the M1 by 30% in multi-core results, it does so using about 3-4 times the power (13W vs 35-55W).
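
Back-of-the-envelope, using those same (approximate) figures: about 1.3x the multi-core score for about 3-4x the power works out to roughly 1.3/3.5 ≈ 0.37, i.e. the M1 is delivering somewhere around 2.5-3x the multi-core performance per watt of the 4900HS, and it is much further ahead of the 105W 5800X on that metric.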

So yes, the M1 with 4 performance cores is not going to beat the heavyweight gaming/workstation laptops or a lot of desktop computers, but this is not its role. It's offering a fantastic balance of performance and battery life that is unprecedented. It will be interesting to see how Apple scales this up to compete with the 12-16 core Intel and AMD desktop CPUs. Exciting times!
 
Hi Guys, I don't know if anyone reads this anymore, but thanks for all the stuff you posted - I learned a lot. We are indeed in exciting times.

I have a 2015 iMac that I wanted to modernise to a 2020 one... Geekbench said my machine had a single core score of about 1100, the new iMac one of about 1200.... what the heck? Upgrading? Why? (ok, there is also multi core scores....)

Now I see the Mac mini has a single core score of about 1700... I think I know what to do. Get a Mac mini for everyday use, a decent monitor, and build a zen3/3080 gaming PC :cool:
 
Hi Guys, I don't know if anyone reads this anymore, but thanks for all the stuff you posted - I learned a lot. We are indeed in exciting times.

I have a 2015 iMac that I wanted to modernise to a 2020 one... Geekbench said my machine had a single core score of about 1100, the new iMac one of about 1200.... what the heck? Upgrading? Why? (ok, there is also multi core scores....)

Yeah — Intel's desktop CPUs aren't really out of their slump yet. The early-2021 Rocket Lake-S generation should bump that to about 1500, but that's still pretty poor given that Apple can achieve higher scores while consuming much less power (meaning, among other things: once Apple does make a CPU that draws more power, it'll be even faster yet).

 