Perhaps they're embracing Trump's "chaos" model - no clear strategy, just do stuff.
Even if they won't say it (and they won't, because they're Apple), the sales weren't as good as the cMP's. So they are going to go back and ask themselves what happened to tank the sales since the cMP.
If the tcMP was selling well and the only problem was thermals, they never would have considered replacing it with the iMac Pro, and I don't think they would have released the iMac Pro. But the tcMP was a failure by all of Apple's metrics, not just thermally.
[that the iMac Pro couldn't be the permanent top end solution ]
I've heard that most definitely was the plan.
They've backtracked for some reason, and I think upgradability is probably one of those reasons. Apple honestly would have been perfectly happy to leave a down-clocked Vega 64 as the top end, but something happened to tell them that wasn't acceptable. My guess continues to be that they demoed the iMac Pro to a few VIP customers who probably threw a fit.
Which one of these two was the plan? Because they are relatively inconsistent. If the Mac Pro 2013 volume was too small to be viable, then there is little rational reason to believe that the iMac Pro volume is any more viable. It has basically the same market limitations.
Wanted to mention that Apple has an officially supported eGPU card list. I think there is a possibility we could see something similar for a Mac Pro. Apple seems uninterested in doing Mac edition GPUs any more. All the cards mentioned are stock PC cards, even with specific vendors recommended.
https://support.apple.com/en-us/HT208544
That is a pretty big leap of an inference. Every single GPU on that list has previously appeared soldered to the motherboard of some Mac in one configuration or another (some mobile parts are clocked a bit lower, but it's the same baseline GPU implementation, just a different functionality setting).
There is little here in terms of "new card" coverage other than a bigger thermal window. Apple made the same GPUs they already had drivers for work. The still-open question is whether they can get something they haven't already done supported.
Similarly, switching to Boot Camp Windows appears to still have problems. Apple may clean that up by 10.14, but is the coverage going to get wider?
You miss:
a. Thermal corner. Same approximate thermal constraints. A bit better because there is only one GPU and the coolers are separated, but the GPU is hobbled by a significant percentage, and the CPU base clock is set a bit lower than normal (it turbos up to almost the range cap, so not too big of a hit).
b. No GPU upgrades. Same.
c. Singular SSD storage drive. Same.
d. RAM upgrade. Worse.
e. Bundled monitor. Worse for those who don't want it.
f. Angry Nvidia CUDA mob. Just as pissed off.
g. No std PCIe slots. Same.
The iMac Pro is pretty much the successor to the 2013 Mac Pro. But I think one reason Apple probably got negative feedback is because it fixed none of the issues from the 2013, and even doubled down on them. (There is no way at all the iMac Pro GPU could be upgraded, as opposed to the "in theory" of the 2013 Mac Pro.)
Ive is on a completely different page. In his mind, the 2013 Mac Pro is a failure because it wasn't elegant enough. The iMac Pro is his version of a dream workstation, and there will be some people who get on board with that. But the customers that never got on board with the 2013 Mac Pro probably let Apple know that the iMac Pro wasn't going to bring them back.
Apple really wants to keep their high end customers. I don't know why, considering how eagerly they've been dropping everything else. But they really want the high end creatives to stick around.
You miss:
h. Despite being five years old, the hated old tcMP still outperforms it, twice as fast at FP64 calculations (with D700s).
It's clear the iMP didn't sell as well as the tcMP did in its first quarter; anyone touting the iMac Pro as a "success" is naive at best.
Why didn't Apple launch an mMP alongside the iMP? One (or both) of two reasons:
- They want to migrate as many PRO users as possible into a new Mac category, the AIO PRO workstation: the iMac Pro...
- The mMP uses a meaningfully different hardware architecture that was not available at the time the iMP was conceived (as an interim product, IMHO), and the actual reason there is no mMP: switching to AMD CPUs.
....
It's clear at this point that the iMac Pro isn't necessarily being positioned as a "let's segment the market for our pro desktops even further" move.
While the ultimate proof will be what form the Mac Pro takes, the way they've positioned the iMac Pro and their more recent moves with iOS devices make it pretty clear to me the iMac Pro is about goosing ASP by selling iMac Pros to some of the regular iMac users who wanted more power, not necessarily about cannibalizing the sales of the people who want Mac Pros.
After all, if they thought they could convert most of that latter category, they wouldn't have announced another Mac Pro (and there are undoubtedly plenty of people, like myself, who would have gotten an iMac Pro, begrudgingly or otherwise, if no Mac Pro was announced... they didn't set out to hurt their own sales.)
Right, so how could the iMac Pro outsell the MP 2013? Part of the decline from the 2009 (and earlier) era of classic MP sales was folks transitioning to other machines. If folks left the Mac Pro for a top-of-the-line iMac, then those same folks probably aren't going to buy an iMac Pro, especially after the iMac probably picks up 6-core capability this year with its update. (Similar issue for the MBP 15" if it picks up 6 cores also.)
I also forgot to mention that the base price of the iMac Pro is also $2,000 higher than the original base price of the Mac Pro 2013. There is no way it is going to outsell the MP 2013. It is priced not to interfere with that exodus to the iMac.
The goosed-up price may help with the initial buyer-bubble demand curve, but the year-forward prospects are likely to decline faster than the MP 2013's did. (And the MP 2013's initial demand bubble was likely better.)
3. Not only the customers, but any competent folks in marketing should have also known that the iMac Pro wasn't getting those folks back either.
Open the Mac App Store and look at the Top Charts. Go to the "top paid" section. See those two Apple programs in the top five? They are important to growing Apple's services revenue. Analysts are looking at that more than Mac Pro sales trends. The Mac Pro isn't completely necessary for Logic and Final Cut Pro, but it is useful in keeping those in the top 10-20.
2. The high-end customers are not as price sensitive, so they buy more RAM, SSD, and CPU, with larger profits than the MBA customers generate. PC vendors partially balance off their loss-leader "race to the bottom" systems with "Pro" line systems that have much fatter profit margins. Apple's margins are pretty high across the lineup. I think the higher margins are going to be used to cover lower run rates. I suspect they are going to play a game of "chicken" with a "death spiral": betting they won't raise the price so high that they lose a critical mass.
[ I think they are wrong. Another substantive contributor to the "Mac Pro" sales decline has been the increase in the entry base price. If they hadn't established a track record of doing a whole lot of nothing for 3-4 years at a time, that might work. Higher prices plus the likelihood of disappearing down the Rip Van Winkle hole again probably won't work. ]
3. It isn't just the "high end customers"; it is the high-end customers who are left. Two 3-4 year cycles of Mac Pro updates, and there are a bunch of folks who have just left for Windows (or Linux, or maybe Hackintosh). The only upside is that the ones who are left have a high "update pain" tolerance. If Apple goes down the Rip Van Winkle hole for another 3 years after the next Mac Pro update, they will probably wait it out.
I think one thing that put the Mac Pro back into the development resource allocation is that Apple was surprised by how many folks stuck around during their long naps and are willing to give them another chance.
4. It isn't the somewhat narcissistic small-scale threat ("if I don't get a new Mac Pro, I'm not buying an iPhone (or MBP) anymore"), or the Guy Kawasaki-era "evangelists" holding their family members hostage ("get me a Mac Pro or I'll make my family drink the Jim Jones Kool-Aid"). It's high end in terms of yearly spend. If your $XX-million-a-year customer says they would like a Mac Pro in the mix, and there are a number of sales folks whose commissions/jobs depend on keeping that customer, then it bubbles up to the top.
Apple has never supported Crossfire/SLI in previous versions of macOS. I doubt they are going to start now, solely for the Mac Pro.
If two GPUs are sharing the same x4 link, then how do you drag over twice as much data to fill up both GPUs faster? (Consumer PCs will split the x16 into two x8s coupled with back-channel GPU links, not x4s.)
I'm not sure the people in charge knew why they had those folks to begin with.
That's my worst nightmare. Apple comes out with a Mac Pro, promises there will be updates, I buy one and... nothing.
They really thought after they failed to update the Mac Pro that they could ship out an iMac Pro and everyone would be just as happy.
I don't know the specific reason why they decided to put it back into development, other than that it was so sudden people didn't even know the announcement was coming. It just seemed like a big surprise. All I can figure is that something happened that surprised execs, and the best guess I can make is they showed it to big customers who said if this is the replacement, we're all moving to Windows.
Doesn't that sound strange from the richest corporation in the world?
It is far more likely that it is a lack of resource allocation inside Apple that is the hold-up, rather than technology outside of Apple being unavailable.
The tcMP is restricted to PCIe 2 x4 SSD, notwithstanding that there are 3rd-party solutions with NVMe or even M.2 adapters; I think Apple is aware of this.
Apple couldn't even bring themselves to sell new SSDs for the 2013, even though it was a user-serviceable part.
I mentioned NVLink because that way the VRAM of two cards could be pooled together (instead of duplicated across two or more cards via SLI/Crossfire, a pointless exercise). Two cards with an NVLink type of bridge would instantly double the VRAM available for rendering. If the nMP could use two internal Vega 64s with 16GB each (and Navi for the next iteration of the nMP... if it comes, that is) connected by some type of NVLink bridge, it would double the capacity of the cards to fit data onboard (instead of having to stream it from RAM/SSD when it cannot fit in VRAM).
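The capacity argument in that post is just arithmetic; here is a toy sketch (the 16 GB card size comes from the post, the function itself is hypothetical and not any real driver API):

```python
# Rough arithmetic: mirrored VRAM (SLI/Crossfire style) vs pooled VRAM
# (an NVLink-style bridge). Hypothetical model, not a real driver API.

def usable_vram_gb(cards_gb, pooled):
    """Usable memory for one scene: mirrored setups duplicate the data on
    every card, so capacity is limited to the smallest card; pooled setups
    can spread the data across all cards."""
    return sum(cards_gb) if pooled else min(cards_gb)

two_vega64 = [16, 16]                                 # two 16 GB cards
mirrored = usable_vram_gb(two_vega64, pooled=False)   # 16 GB usable
pooled = usable_vram_gb(two_vega64, pooled=True)      # 32 GB usable
```

Under this model, a pooled pair doubles the scene size that fits on-board, which is the whole point being made about streaming from RAM/SSD.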
SLI/Crossfire are one thing, and NVLink/Infinity Fabric (AMD's equivalent) is a totally different thing; do not mix apples and pears.
Do you even read?
With Nvidia CUDA you actually don't need NVLink for your apps to handle multiple GPUs as a single logical unit,
It's not always financial reasons that cause a mistake. Doesn't that sound strange from the richest corporation in the world?
The tcMP is restricted to PCIe 2 x4 SSD, notwithstanding that there are 3rd-party solutions with NVMe or even M.2 adapters; I think Apple is aware of this.
It's not always financial reasons that cause a mistake.
And again, Apple doesn't make enough money off of the Mac Pro to justify spending large amounts of money on it.
Wrong, it requires re-definition of the PCIe std (a 16-lane serial bus). If you need to take two of them for DP video you are bottlenecking the system, not to mention the complexity of adjusting bus timings, etc. A better solution would be to open up the tcMP GPU connector (a custom PCIe x16 interface + 6x DP interfaces + a GPU interconnect interface; CrossFire => NVLink/Infinity).
Apple would sit down both Intel and AMD and say "We want the next generation Thunderbolt controllers to accept DisplayPort over the PCI bus, and we want the next generation Radeon GPUs to support directly addressing a Thunderbolt controller over the PCI bus."
Intel's next gen Thunderbolt controllers would work with DisplayPort signals routed over the PCI bus (or some other similar thing), all the new Radeon GPUs available to everyone would support it. Mac Pro customers could buy retail AMD GPUs off the shelf as long as they were of the latest generation or newer. Nvidia could get in on the party if they want to. And it would be an industry standard because, just like Thunderbolt, Intel will push it as a standard industry wide.
Mago's prediction, and his idea of the technical constraints, is limited because he's trying to make everything work with off the shelf Thunderbolt controllers and off the shelf GPU components. What is being missed is that Apple has a lot of influence over the next generation designs of both those things.
Actually I see two possible reasons: after Jobs' death, oversight became very relaxed at Apple, with Cook being more into a liberal agenda than management. Apple's R&D then only cared about what was priority (laptops, iPhones), leaving behind all the rest (even macOS). Possible plans to ditch Intel for AMD CPUs added more drag to Mac Pro development. Then the VR/ML trends exploded and surprised them with nothing in the ecosystem for macOS/iOS developers to build products on, and with an actual pro stampede underway they realized they needed an emergency solution: the iMac Pro, with an actual ML/VR workstation to be added later, in lieu of the tcMP, which was never aimed at these kinds of workloads.
Maybe it didn't sell enough to warrant caring. Apple clearly hoped it would sell better than the 2012 model, and maybe it did for a time, but if they were expecting a ten-fold increase and it was only two or three-fold (in its best quarters)...
And now, years on without an update, it sells under 30,000 units a quarter per the April 2017 summit, compared to around a million units a quarter for the iMac line. No real surprise they made an iMac Pro before a new Mac Pro.
VRAM, FYI, is an obsolete concept. Modern GPUs use framebuffers to render display output; VRAM (actually multi-port RAM) is not used anymore. Framebuffers are read asynchronously by the DP serializer (it's like another GPU core, one that at one end has access to the GPU RAM, and at the other end has a serial signal output).
Not the VRAM of those multiple GPUs.
...
It would be entirely possible Apple would sit down both Intel and AMD and say "We want the next generation Thunderbolt controllers to accept DisplayPort over the PCI bus, and we want the next generation Radeon GPUs to support directly addressing a Thunderbolt controller over the PCI bus. Oh, and in since this is an industry wide problem we're not going to pay for it, but you'll make all your customers happy."
VRAM, FYI, is an obsolete concept. Modern GPUs use framebuffers to render display output; VRAM (actually multi-port RAM) is not used anymore. Framebuffers are read asynchronously by the DP serializer (it's like another GPU core, one that at one end has access to the GPU RAM, and at the other end has a serial signal output).
Actually I see two possible reasons: after Jobs' death, oversight became very relaxed at Apple, with Cook being more into a liberal agenda than management. Apple's R&D then only cared about what was priority (laptops, iPhones), leaving behind all the rest (even macOS)...
Dubious.
x4 PCI-e v3 --> 32Gb/s https://en.wikipedia.org/wiki/List_of_interface_bit_rates#Main_buses
DisplayPort 8K 60Hz --> 49Gb/s
DisplayPort 5K 120Hz ---> ~45Gb/s https://en.wikipedia.org/wiki/DisplayPort#Resolution_and_refresh_frequency_limits_for_DisplayPort
That's DisplayPort 1.4. The next iteration, 1.5 or 2.0, should be higher. I think those figures are also for normal, non-HDR color. Add HDR and the overhead goes up too.
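Those figures are easy to sanity-check with back-of-the-envelope arithmetic. A sketch, assuming raw 24-bit pixels and ignoring blanking and protocol overhead (which is why the results land slightly under the quoted link rates):

```python
def raw_video_gbps(width, height, hz, bits_per_pixel=24):
    """Raw pixel data rate in Gb/s, ignoring blanking/HDR/protocol overhead."""
    return width * height * hz * bits_per_pixel / 1e9

def pcie_gbps(gt_per_s, lanes, encoding=128 / 130):
    """Usable PCIe bandwidth per direction in Gb/s (v3 uses 128b/130b)."""
    return gt_per_s * lanes * encoding

eightk_60 = raw_video_gbps(7680, 4320, 60)   # ~47.8 Gb/s (vs ~49 quoted)
fivek_120 = raw_video_gbps(5120, 2880, 120)  # ~42.5 Gb/s (vs ~45 quoted)
x4_v3 = pcie_gbps(8, 4)                      # ~31.5 Gb/s (vs 32 quoted)
```

Either display mode alone already exceeds what an x4 PCIe 3.0 link can carry, which is the mismatch the list above is pointing at.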
One problem with that is that you are not only sucking up PCIe bandwidth through the switch in the TB controller, you are also sucking up very substantive bandwidth in the PCIe switches in the host system too.
Your problem is that with Type-C alt-mode DisplayPort, the TB controller would have to tap-dance that PCIe-encoded DisplayPort signal back into DisplayPort. Your notion is that some hocus-pocus game could be played to mutate the PCIe data as it is encoded into the TBv3 protocol. That doesn't solve all of the requirements; alt-mode DisplayPort is also required.
Other than lossless or visually lossless encoding, the DisplayPort signal should be raw until just before it hits the TB transport. There is no reason to encode and decode on the same host system; it will suck up more power for relatively corner-case benefits.
Wrong, it requires re-definition of the PCIe std (a 16-lane serial bus).
If you need to take two of them for DP video you are bottlenecking the system, not to mention the complexity of adjusting bus timings, etc. A better solution would be to open up the tcMP GPU connector (a custom PCIe x16 interface + 6x DP interfaces + a GPU interconnect interface; CrossFire => NVLink/Infinity).
FYI, Thunderbolt didn't come from Intel or Apple magicians; what they actually did was find an application for the latest Maxim RS-485 transceivers (then capable of 10MBps, currently 40MBps, with next-gen RS-485 said to be capable of 100MBps). It's not a trivial development, and actually all of the data I/O industry relies on modified MAX RS-485 transceivers, from Intel's TB to Mellanox's InfiniBand.
No. FYI, you are picking at some odd issue never implied in the posts. This is what I meant by an NVLink-style dual-GPU system, as a way to differentiate between single-card systems like the iMac Pro or multiple eGPUs versus a high-bandwidth, high-throughput design like the nMP: https://www.chaosgroup.com/blog/v-ray-gpu-benchmarks-on-top-of-the-line-nvidia-gpus
That's it!
While the solution doesn't have to be over PCIe, PCIe is bidirectional, which means that an x8 card has 64Gb/s of upstream bandwidth basically unused.
Thunderbolt 3 also can't do 5K 120Hz or 8K 60Hz, so the scenario is irrelevant. You're talking about a dual Thunderbolt cable solution, which is basically the maximum a Thunderbolt Mac can do right now anyway. So it all still fits, and there would be more than enough bandwidth off an x8 or x16 card.
But again, this is a straw man I'm simply supposing to say that it's a solvable problem if you control all the pieces, which Apple, Intel and AMD do.
We're also on the verge of PCIe 4.
Sure, but it's nothing the switch couldn't handle. I mean, you act like it's a lot of bandwidth. But if I'm doing full screen updates on the display every frame, it's basically the same thing, and if your PCIe bus can't handle that, you have no business driving that display anyway.
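The "full screen updates every frame" comparison can be put in rough numbers. A sketch assuming a hypothetical uncompressed 5K 120Hz panel at 32 bits per pixel; illustrative arithmetic, not measurements:

```python
def full_frame_gb_per_s(width, height, hz, bytes_per_pixel=4):
    """Host bandwidth needed to push a full uncompressed frame every refresh."""
    return width * height * hz * bytes_per_pixel / 1e9

updates = full_frame_gb_per_s(5120, 2880, 120)  # ~7.1 GB/s of frame traffic
x16_v3_gb = 15.75                               # PCIe 3.0 x16, GB/s per direction
headroom = x16_v3_gb / updates                  # roughly 2x headroom left
```

So even the worst case (repainting every pixel every frame) fits inside an x16 PCIe 3.0 link with room to spare, which is the point: a bus that can't carry this traffic has no business driving that display.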
PCI-E encoded? I'm not sure what you mean. The data would be packeted, but there wouldn't be any encoding or decoding going on. And it's not much of a problem because DisplayPort is packeted anyway.
I think you're missing something here. I mean, this is all just a straw man. But the DisplayPort data would never be encoded or decoded. It's just wrapped in PCIe packets, just like it would normally be wrapped in DisplayPort packets.
Basically what I'm saying is that AMD could modify their GPUs to support sending a raw DisplayPort signal to a peer PCI-E device. In this case, the Thunderbolt controller. The Thunderbolt controller would have to be changed to expect a raw DisplayPort signal over the PCIe bus from a peer device. But again, Apple has their hands in both those suppliers.
I also think we might see Apple, while supporting Thunderbolt displays, move back to pushing direct DisplayPort output for displays as well. It's not like there are any existing 8K Thunderbolt or USB-C displays out there they need to support anyway. And it gets them away from Thunderbolt and USB-C bandwidth being the gating factor for professional displays.
Needs citation. Devices that share a PCIe bus can send any info they want to each other. DisplayPort signals are 1s and 0s just like any other data. So send the DisplayPort signal over PCIe transport just like any other data sent between two PCIe devices.
Bottlenecking what? GPUs don't usually send very much data upstream (or downstream, but that's a different conversation.)
So you're saying Intel has no idea how to alter Thunderbolt controllers? Mmmmmkay.
1st, you don't know a thing about how PCIe works. Having unused upstream bandwidth doesn't mean you have a separate open channel available for downstream, as the same copper line is used concurrently for upstream and downstream using different signaling schemes.
Speed, for single-lane (×1) and 16-lane (×16) links, in each direction:
- v. 1.x (2.5 GT/s):
- 250 MB/s (×1)
- 4 GB/s (×16)
- v. 2.x (5 GT/s):
- 500 MB/s (×1)
- 8 GB/s (×16)
- v. 3.x (8 GT/s):
- 985 MB/s (×1)
- 15.75 GB/s (×16)
- v. 4.x (16 GT/s):
- 1.969 GB/s (×1)
- 31.51 GB/s (×16)
- v. 5.x (32 GT/s):
- 3.938 GB/s (×1)
- 63 GB/s (×16)
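Those per-lane figures follow directly from the transfer rate and the line encoding (8b/10b through v2.x, 128b/130b from v3.x on). A quick sanity check:

```python
def lane_mb_per_s(gt_per_s, encoding):
    """Per-lane PCIe throughput in MB/s: GT/s x encoding efficiency / 8 bits."""
    return gt_per_s * 1e3 * encoding / 8

v1 = lane_mb_per_s(2.5, 8 / 10)                    # 250 MB/s per lane
v3 = lane_mb_per_s(8, 128 / 130)                   # ~985 MB/s per lane
v5_x16 = 16 * lane_mb_per_s(32, 128 / 130) / 1e3   # ~63 GB/s for x16
```

The computed values match the table above, including the jump in efficiency when v3.x switched from 8b/10b (20% overhead) to 128b/130b (~1.5% overhead).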
All the stuff you say still makes no practical sense; it just extends PCIe functionality to justify PCIe slots in the Mac Pro. TB3 imperatively needs DP signals, and you know the mMP won't happen without TB3. The solution comes from the hated tcMP: a custom GPU interface carrying DP, PCIe, and maybe a GPU fabric interconnect. You don't like that solution because it means non-std PCIe slots, but Apple could make the design public domain, and thus other hardware manufacturers could build their own workstations or gaming rigs with these TB-enabled GPUs, without messing with the PCIe interface.
You need to get outside gamers' forums and try reading about digital computer science; there are lots of errors and self-serving assumptions here from an uneducated, biased approach.