Status
Not open for further replies.

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,676
The Peninsula
Aiden, can you explain what Nvidia mean by: "mixed precision performance"
Many very important GPGPU apps, especially in the "deep learning" field, deal with recognizing repeating patterns in mind-numbing quantities of data. (That's why the number of CUDA cores you have, whether thousands or millions, is important.) The CUDA cores in your Tesla are doing this as they're optimizing your battery life and helping you to park.

You don't need full single or double precision floating point when looking coarsely at huge datasets. Nvidia's CUDA supports "half precision" floating point (16-bit floats; see https://en.wikipedia.org/wiki/Half-precision_floating-point_format ), so the memory and/or bandwidth requirements are 50% of standard precision floating point, and if the processing is faster, that's an additional benefit.

(And MathPunk is right, but short floats make it a three-tiered game. Use short floats for the coarse work, elevate to 32-bit floats for the next level, and go to 64-bit for the critical stuff.)

Probably not important for wedding videos (you don't want the bride's gown to be "approximately white" unless she's "approximately a virgin"), but very important for apps like Siri that are using GPGPU programming to respond in nearly real time to fuzzy input.
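A quick way to see the trade-off described above, using only the Python standard library (struct's 'e' format is IEEE 754 half precision; the array size is illustrative, not from any real workload):

```python
import struct

# Byte cost of storing 1 million values at each precision tier.
n = 1_000_000
print(8 * n, 4 * n, 2 * n)   # double, single, half: 8000000 4000000 2000000 bytes

# The price is resolution: half floats step by ~2^-10 (0.0009765625) near 1.0,
# so increments smaller than half a step simply vanish.
def to_half_and_back(x):
    """Round x to the nearest IEEE 754 half-precision value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(to_half_and_back(1.0 + 0.0004) == 1.0)   # True: the 0.0004 is rounded away
print(to_half_and_back(1.0 + 0.001))           # 1.0009765625, the nearest half float
```

That loss is harmless when coarsely scanning huge datasets, and it is exactly why the three-tier scheme promotes the critical passes to 32-bit and 64-bit floats.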


Also look at the second link from golem.de. In LuxMark 3 the R9 290X is faster, much faster, than the Titan X.
And I could post a link to benchmarks that show a MacBook Air destroying a twelve-core MP6,1 in H.264 encoding, and claim that the Air is faster, much faster, than the MP6,1.

But I won't, because we both know that a single benchmark is irrelevant unless it is exactly what you do every day to bring home the bread.

And the koala cub is cute....
 

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
MVC, do you believe that HBM on Fury X has 450 GB/s bandwidth?
sapphire-radeon-r9-furyx.jpg

Official BOX of Sapphire AMD Fury X. It plainly says 450GB/s.

Also, http://mundo.pccomponentes.com/wp-content/uploads/2015/06/Fury-X-Full-Specs.png Official AMD slide for Fury X. It plainly says 275W TDP. None of the reviews showed higher than 280W average power draw. Also, the 275W TDP from that AMD slide is the BIOS TDP. If we turn to Stacc's post from a few pages ago, we see that people experimented with the GPU by running it at 60% of nominal TDP. In those circumstances the real TDP was around 175W and the core clock was 1035 MHz. All the links are in the last pages of this thread.

Would it be hard for the full Fiji chip to run at 125W if it can already run at 175W? Would it be hard for Fiji to sustain 900 MHz on the core at 125W if it can maintain 1035 MHz at 175W?
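As a rough sanity check of that question, here is a first-order CMOS power-scaling sketch. The 175 W / 1035 MHz figures are the ones quoted in this thread; the voltage-scaling assumptions are mine, not AMD's:

```python
# First-order dynamic power model: P ~ f * V^2. If voltage is lowered
# along with frequency, power falls roughly with the cube of the clock
# ratio; with voltage held fixed, it falls only linearly.
def scaled_power(p0_watts, f0_mhz, f1_mhz, voltage_tracks_freq=True):
    ratio = f1_mhz / f0_mhz
    return p0_watts * ratio**3 if voltage_tracks_freq else p0_watts * ratio

print(round(scaled_power(175, 1035, 900), 1))   # ~115.1 W with a voltage drop
print(round(scaled_power(175, 1035, 900, voltage_tracks_freq=False), 1))  # ~152.2 W
```

On this crude model, 900 MHz at 125 W sits between the two bounds, i.e. plausible with a modest voltage reduction.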

I guess, only Apple knows the answer...

P.S. If Fury X uses water cooling because it has to, then explain to me why the Fury Nano is rumored to be a full Fiji chip with air cooling? ;)

you can keep ignoring obvious realities if you choose, the beauty of a free internet

Amazing that AMD is taking such a beating in the reviews for the enormous power draw of Fiji if they could just turn the clocks down 100 MHz and get it down to 125 watts, which would instead make Maxwell look like the power hog. I am starting to think you might actually believe this, pretty astounding.

My Sapphire box has a sticker with the correct numbers; I wouldn't be surprised if under it is the misprint you posted above. Perhaps they had the clocks lower to bring power down, but the lower performance made the Maxwell cards even more unreachable in speed, so they had to up them at the last minute to avoid looking like idiots? This would explain them claiming it was an "overclocker's dream" when in fact nobody has been able to get more than 8-10% out of it. (Not even in the same neighborhood as "dream.")

You should write Asus and ask them why they tell their customers that the card draws 375 Watts. Make sure to tell them that the hat you are wearing is made of nothing but the finest Aluminum so they know that you are an expert in materials and energy consumption.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
you can keep ignoring obvious realities if you choose, the beauty of a free internet

Amazing that AMD is taking such a beating in the reviews for the enormous power draw of Fiji if they could just turn the clocks down 100 MHz and get it down to 125 watts, which would instead make Maxwell look like the power hog. I am starting to think you might actually believe this, pretty astounding.
Those are only your assumptions. You have been given proof, and you are still refusing to acknowledge it. I agree to disagree with you. And let's end it now.

P.S. It is not nice in any conversation or discussion to make assumptions about the other person. Regards.
 

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
Those are only your assumptions. You have been given proof, and you are still refusing to acknowledge it. I agree to disagree with you. And let's end it now.

P.S. It is not nice in any conversation or discussion to make assumptions about the other person. Regards.

I completely have no idea what you are trying to say here. I'm going to guess that something has been lost in translation.

I did find something you were right about: it appears that either Sapphire did bad math, or Fury clocks were upped at the last minute to seem less pathetic in tests against the 980 Ti. My Sapphire box did in fact have the same number under the sticker.

And as to how heat from a computer is the same as heat from a space heater, here is a link that might be interesting.

https://www.pugetsystems.com/labs/articles/Gaming-PC-vs-Space-Heater-Efficiency-511/

Of course, to get a system that drew 1000 watts they needed a high-power PC AND 3× GTX Titans.

Good news with Fury X: you would only need 2 of them. The power used by a system comes out as heat; very simple math. If a GPU pulls 375 watts, as Asus claims for their Fury, then the cooling system has to exhaust 375 watts into the room. People like to imagine that sophisticated copper and/or water cooling somehow makes the heat "disappear", but in fact it just gets moved from one place to another.

I think humans only believe what they see and can easily comprehend. A good example is someone thinking that their kitchen is hot. They open the freezer, believing that the machine is "creating" cold. In fact, the end result would be a slightly WARMER kitchen, as all the machine does is move heat from inside to outside. The energy used to move that heat is a net addition to the room.
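The watts-in-equals-heat-out point is easy to put in numbers. The 375 W figure is the Asus claim quoted above; the 1500 W heater is just a typical plug-in unit:

```python
# Every watt a system draws is eventually exhausted into the room as heat,
# exactly like a resistive space heater. Convert to BTU/h to compare with
# heater and air-conditioner ratings (1 W = 3.412 BTU/h).
def btu_per_hour(watts):
    return watts * 3.412

print(btu_per_hour(375))    # about 1280 BTU/h for the claimed Fury draw
print(btu_per_hour(1500))   # about 5118 BTU/h for a typical space heater
```

So two such GPUs under load approach the heat output of a small space heater on its low setting, which is the point of the Puget Systems article.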
 

Attachments

  • IMG_6017.JPG

filmak

macrumors 65816
Jun 21, 2012
1,418
777
between earth and heaven
Apple is dedicated to their proprietary designs and lock-ins; they like everything built in, done their way, and for their profit.
They have refused to let people upgrade their computers, using locked firmware, proprietary connectors, proprietary designs, and many more methods/inventions.

But imho, in the case of putting a new GPU in the next iteration of the nMP, there is nothing to stop them following a middle way: a more powerful GPU, downclocked, and a more powerful PSU, perhaps with a little redesign to accommodate it.

Of course there is also the other way: keep everything intact, put "new" Apple-edition "FirePros" inside with new names, and leave us wondering again what kind of AMD tech, or part of it, has been used.
 

t0mat0

macrumors 603
Aug 29, 2006
5,473
284
Home
So what happens when Intel goes Purley for 2S and up: do 1S Xeons get marginalised? Could the Mac Pro still use one 2S Xeon E5?

Seems quiet on the Xeon E5 v4 news front. Is it seen as just a bump compared to the v5 Skylake Xeons?
 

ManuelGomes

macrumors 68000
Dec 4, 2014
1,617
354
Aveiro, Portugal
koyoot, since half precision has already been explained, I would only state that the koala baby is awfully cute :)
Aiden, great "approximately a virgin" comment :)

Is it so hard to imagine that a downclocked Fiji would fit in the nMP? I don't really think so either, but hey...
t0mat0, it seems 1S and 2S will be completely different architectures; I guess they'll have to decide which way to go.
 

ixxx69

macrumors 65816
Jul 31, 2009
1,294
878
United States
So, after more than 50 pages, are we looking now at a 2016 update instead with SkyLake?
After 50 pages, we don't know when or what we're looking at. Don't lose sight of the fact that at this point it's not only total speculation when Apple will release an updated MP; there aren't even the usual signs to go on (e.g., we're 9 months past Haswell-E and possibly 9 or so months away from Skylake-E).

Some aspects of the speculation have more merit than others, but with the spotty MP release track record the last five years, a niche-use Mac market, Intel's botched CPU release schedule, AMD's performance per watt issues, and the unknown whereabouts of TB3 & DP1.4, there's really not much to go on.

Don't get me wrong, it can be fun to speculate, but anyone who suggests anything concrete is really just guessing like everyone else.

If I had to guess, I'd agree with the 2Q/3Q 2016 speculation, only because if they were going to release a Haswell MP, I think they would have done it already. If they wait until the fall events to announce it, it would be too close to Skylake (and potentially other component updates) to then release another update in 2016, when they could instead offer a more significantly updated MP.

I don't like that they wait so long between releases... it really sucks for those who need a new MP now and are forced to purchase old tech at new tech prices... but that's just the way the ball rolls with Apple in this niche market.
 

ManuelGomes

macrumors 68000
Dec 4, 2014
1,617
354
Aveiro, Portugal
TB, right now everything is indeed pure speculation.
The odd thing is, there is no solid info (leaked) that will provide any insight into the specs or release date.
Still, we keep on trying to guess and wishing...
I'd say TB3 (but no DP1.4) is a safe bet, and Haswell-EP seems to be one too; I don't see them waiting almost another two years for Skylake-EP, if in fact Broadwell-EP is dead, which I'm not entirely convinced of. I still believe we might see it in a couple of months.
Regarding GPUs, if I had to guess I would go with Grenada right now. Fiji seems to be mentioned in El Cap, but it's quite limited when it comes to differentiation between SKUs for Apple to offer several Dxx0 cards. Memory capacity is fixed, unless they capped the bandwidth for differentiation, which is unlikely.

koyoot, I know you don't like Fudzilla, but have you read about the AMD Exascale Heterogeneous Processor with 32 Zen cores and an Arctic Islands 32GB HBM2 Greenland GPU? DP/SP is 1/2, finally. What a beast!! It's posted on an IEEE page, so it could be legit. 2016/17 timeframe, probably.
This is pretty much speculation still.

Would Apple be waiting on this? That would make some people happy: 2 16-core CPUs and one 32-core GPU :)
On a single package, which means you could possibly even put 2 or even 3 of them (on the 3 sides of the nMP thermal unit).
If each cluster had a max TDP of 140W or so, there'd be no need to up the power supply.
But will it come in affordable processors that fit in the nMP? Nah..
 

ManuelGomes

macrumors 68000
Dec 4, 2014
1,617
354
Aveiro, Portugal
I imagine someone :) will be commenting on the power consumption, blah blah blah,
and how the nMP is power limited and you can't fit any of this in that power envelope.
This was just me pointing to a rumor or possible announcement of something to come; each will draw his own conclusions, of course.
 

throAU

macrumors G3
Feb 13, 2012
8,827
6,987
Perth, Western Australia
Apple is dedicated to their proprietary designs and lock-ins; they like everything built in, done their way, and for their profit.
They have refused to let people upgrade their computers, using locked firmware, proprietary connectors, proprietary designs, and many more methods/inventions.

People keep saying this, but the big bottlenecks these days for most things are as follows:

- storage bandwidth
- memory capacity
- GPU

The big changes in storage bandwidth lately require wider buses to the SSD, i.e., a new motherboard/chipset.

The Mac Pro can upgrade RAM.
In theory the GPUs are upgradable.

Yes, in theory most Apple machines are less expandable than they used to be. In reality, plugging new bits into your old motherboard gets you a small fraction of the performance of upgrading the machine these days.
 
Jul 4, 2015
4,487
2,551
Paris
I still believe there is a strong possibility Apple will release OSX as a free OS for any x86 system in about two years, especially if Windows 10 slows or shrinks Apple sales.

When that happens Apple will still provide desktops but claim they have the best designs. They could offer barebones towers: one for gamers with two PCIe slots, one for pros with four slots and more ports. Bring your own graphics card. Because OS X could officially be installed on any PC hardware, Nvidia and AMD would keep updating Mac drivers for all new hardware.

That's the only way Apple can make Metal look impressive. Otherwise it's going to look like a joke running on mobile or underclocked GPUs in sealed up systems.

Windows 10 is the best thing that has happened to Apple in years. It will make them wake up and realise that iPhones and that eyesight-destroying watch aren't enough to impress anymore.
 

throAU

macrumors G3
Feb 13, 2012
8,827
6,987
Perth, Western Australia
Apple tried dealing with clones before. It nearly bankrupted the company.

Not going to happen.

Apple's entire business model is being vertically integrated and having intimate control over the entire experience. If they try to do the same thing as windows, they'll have all the same problems.
 
Jul 4, 2015
4,487
2,551
Paris
Apple tried dealing with clones before. It nearly bankrupted the company.

You can't compare that. The era, the user base, the OS platform, and the hardware were completely different, and too niche. Today Apple is mainstream, and its users expect cross-platform usage, compatibility, and comparison.

Excuse my edits. Auto type is ****.
 

filmak

macrumors 65816
Jun 21, 2012
1,418
777
between earth and heaven
The Mac Pro can upgrade RAM.
Yes, in theory most Apple machines are less expandable than they used to be. In reality, plugging new bits into your old motherboard gets you a small fraction of the performance of upgrading the machine these days.


I don't know, maybe I'm completely wrong, but I have seen all of the PC/Mac evolution from its early days.

I would like to express some thoughts.

People keep saying this because many of us tested some new components in 5,1 Mac Pros and saw that:
1. the new blade SSDs perform about the same mounted via adapters on the PCIe bus;
2. the newest GPUs like the GTX 970/980 and others are better performers than the current Dx00 in the nMP;
3. many 5,1 MPs with upgraded components perform really well for their age, with higher Geekbench scores (over 33K).
I don't think these are a small fraction of performance. :)

Anyway, I really believe the nMP is a step ahead; not a big one in terms of performance, but a really different/fresh approach in modern computing.

Their cooling method, for example, is extremely good and efficient. The Thunderbolt ports, the footprint, and the sound emissions are strong points, but there are also many compromises.

As you say, the GPUs are, in theory, upgradable. But in practice they're not. The practice matters.

We cannot install new GPUs in our nMPs because they have proprietary connectors, one of them has the SSD connector built in, and their firmware is not on the GPUs but locked inside the nMP's firmware. So it seems that even in theory they're not upgradable, especially by third parties. And Apple certainly prefers that we buy a new edition of their offerings rather than upgrade an older one.

I already own one 6-core D500 nMP and two days ago I ordered a second one (4-core D500), so as you can understand, I appreciate them. But I'm also probably old school and still like internal upgrades, additional cards, disks, etc. :)

Also I don't think that the current design of the nMP is here to accommodate:
- storage bandwidth
- memory capacity
- GPU
because if it were, it has failed at most of them (GPUs, memory capacity). Maybe the next iteration under discussion will make it better.

Anyway we really enjoy using these machines and the stability of OS X.;):):)
Cheers.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,298
3,893
So what happens when Intel goes Purley for 2S and up- do 1S Xeon's get marginalised? Could the Mac Pro still use one 2S Xeon E5?

It is extremely likely 1S won't get marginalized. 1S is the same foundation the upper-end Core i7 "Extreme" is deployed on, and it is doubtful Intel wants to make that more marginal than it already is. The mainstream Core iX solutions are highly likely to stay capped at just 4 cores for the foreseeable future (the additional transistors are more likely going toward removing AMD and Nvidia GPUs from systems, not competing with the higher-end x86-core-only products). So the folks who "need" more than 4 cores have to go upscale to Core i7 "Extreme" (i.e., Core i7 x9xx).

It is also highly likely that a Purley 2S Xeon E5 will not work for the Mac Pro on price at the entry/mid options. The 2S parts are going to have built-in 10GbE, super-high-bandwidth buses, etc., and Intel is going to make folks pay for that. Unless they are heavily kneecapped with very slow clock speeds, you are not going to see $290-300 (E5 1620) or $580-600 (E5 1650) prices.

http://ark.intel.com/products/series/81065/Intel-Xeon-Processor-E5-2600-v3-Product-Family#@All

The lowest 4-core part there is $100+ more expensive and slower. The stuff in the sub-$580 range is also a relative 'dog' in the 1S workstation workload space. With the built-in 10GbE, I'd be surprised to see Intel keep v3-like pricing levels. If AMD has something super competitive, maybe. If not, then base-level pricing would likely be incrementally higher, which works even less well for a Mac Pro.


And no, Apple probably won't turn to overclocked mainstream 4-core Core i7s as a solution for the Mac Pro.


Seems quiet on the Xeon E5 v4 news front - is it seen as just a bump compared to v5 Skylake Xeon's?


Not sure what else they can leak about E5 v4 and still have anything left. Intel has to have something to talk about at the upcoming IDF conference; if they leaked absolutely everything, there would be no reason to cover it in the tech news. I don't think E5 v4 (and v5) will have the prime spotlight at IDF SF 2015, but more timeline info should come from either Intel or the system vendors around that time. That usually leads to another round of leaks.

Every Xeon E5 v(n) to v(n+1) move is an evolutionary bump; that's why they just add one to the version. It is beyond puzzling why folks think the next (n+1) increment is going to be some radical, revolutionary change. It isn't. If it were revolutionary, they would change the product's major name, "E5", not the trailing version number.
 

flat five

macrumors 603
Feb 6, 2007
5,580
2,657
newyorkcity
People keep saying this, but the big bottlenecks these days for most things are as follows:

- storage bandwidth
- memory capacity
- GPU

the big bottlenecks these days for most things have nothing to do with the innards of the computers.. they're incredibly fast nowadays.. faster than necessary for most things.

the bottlenecks are in the realm of user interfaces, input/output devices, etc.

for example.. i can think a sentence in a fraction of a second but it takes much much longer to make those words appear on a screen.. the keyboard itself is a huge bottleneck.

the mouse is also incredibly inefficient.. the most versatile part of our bodies are our hands (or maybe mouths) yet when using hands to talk to computers, we're limited to pointing and clicking.. pretty much the extent of what an infant can do with their hands.

i can think of a shape in a second.. say a 6' wide strip through the middle of a sphere with a 1' wide thickness and 1" filleted edges.. to actually model it, i spend about 90secs.. it takes me 100x longer to do a simple task than it does to think it.


Screen Shot 2015-08-02 at 10.30.50 AM.png

things like this are the major (major!) bottlenecks in modern computing in my opinion. (not talking about benchmarks.. talking about actually getting things done using the tools)
 

deconstruct60

macrumors G5
Mar 10, 2009
12,298
3,893
You can't compare that. The era, user base, and the OS platform and the hardware was completely different and too niche. Today Apple is mainstream and its users except many cross platform usage, compatibility and comparison.

Riddled with flaws in just a couple of sentences... In the current context, clones make even less sense than they might have back then.

1. Apple is mainstream. OS X is not. Relative to the overall personal computer market (tablets, smartphones, laptops, desktops... computers that people personally own), OS X is an even smaller player now than it was back then. If you whittle it down to just the classic PC form-factor market over $800, OS X has marginally higher traction, but that is not where the trend lines are going in terms of growth.

You are flip flopping in scope. iOS makes Apple a bigger player now in the overall PC market. But that doesn't necessarily improve OS X's share.

2. The core hardware being more of a commodity is actually a bigger negative in terms of possible blowback harming Apple's margins. That was the whole magic voodoo the "clones" never delivered: they were supposed to expand the Mac into areas Apple didn't want to cover. But to be viable, the clones needed to compete in areas Apple did want to cover. At that point it doesn't "buy" Apple much to add them.

How many quality PC vendors don't have systems that overlap at all with Apple's? So if they overlap with Windows, why wouldn't they try to overlap with OS X at higher price points than Windows (for more profit)? Even more commodity hardware means even more dubious-quality "race to the bottom" shops are going to pop up.

Clones only make sense if Apple were to almost completely pull out of the Mac system market and retreat to being an OS X software vendor. That makes about zero sense for them now. Apple doesn't sell OS X independently anymore, so where's the huge revenue flow? In a market where PC OS prices are heading toward zero, the "software only" approach is far riskier if you don't have an almost monopolistic share of the market (e.g., Windows' 90+% of the classic PC market). With that high a share, there is a decent churn of new hardware (some fraction of folks have to upgrade because their PC is at end-of-life), so there is a steady flow of bundled OS charges with new systems. If 15% "have to" upgrade their systems, you're talking 0.15 × 0.90 = 13.5% of the market. If it's 15% of 7% of the market, it is just about 1.1%. Operating down in the low single digits is typically not viable over the long term for a major corporation.
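For what it's worth, the churn arithmetic above checks out. The 15% churn and the 90%/7% OS shares are the post's hypothetical figures, not measured data:

```python
# Fraction of the whole market buying a bundled OS in a year =
# (yearly hardware churn) x (that OS's share of the installed base).
churn = 0.15
windows_share = 0.90
osx_share = 0.07

print(round(churn * windows_share, 3))   # 0.135 -> about 13.5% of the market
print(round(churn * osx_share, 4))       # 0.0105 -> about 1.1%
```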

They could offer barebones towers.

Even in the Apple I / II era, Apple had about zero interest in being a barebones vendor. They have never been one and never will be; the culture and the folks working there aren't going to do that.

The whole point of the initial Apple computer was not to sell a kit for folks to complete. They were about a substantively, if not completely, working system straight from the box. The Mac just upped that to another level of completed system.

There is only a highly limited opportunity to add substantive value to basic commodity parts as a core business.

It wasn't just the clones that drove Apple toward bankruptcy. The hodgepodge collection of boxes with slots and the directionless ego projects Apple was turning out were off the Apple DNA track too. When Jobs came back he killed the hodgepodge approach, the relatively independent OS software meanderings, and the clones. Both were flawed strategies; pursuing everything at the same time was the problem. [Frankly, long term there are not many good examples of that working either.]

There was a subset of folks who bought the Mac Pro / PowerMac as a "barebones" system (some parts filled in, but with the intent to yank them out sooner rather than later). I don't think Apple ever intended those folks to be the primary targets of the Mac Pro, and they are certainly not being targeted now. Being all possible things computing to all people never was Apple's objective.
 
Jul 4, 2015
4,487
2,551
Paris
OS X is mainstream. There's no arguing with that when Apple has become the biggest corporation in the world alongside Exxon and owns more computer stores globally than any other computer manufacturer.

They are essentially a mainstream 'PC' manufacturer now, unlike the very overpriced niche crap they produced when Steve Jobs was away. You keep talking about the 80s and 90s like it means something in today's world, where gaming, GPUs, app stores, and soon VR are driving the ecosystem. It's not just a Mac ecosystem either. It's an Internet of Things ecosystem driven by apps, gadgets, and shared content. The OS is just a background service today; it's easily given away for free to create a larger user base. You couldn't do that 20 years ago, when Apple allowed people to make Macs, because they didn't have an ecosystem running on top of the operating system to make money from.

End of story. Don't compare the two eras. It's a waste.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,298
3,893
Also I don't think that the current design of the nMP is here to accommodate:

because if it were, it has failed at most of them (GPUs, memory capacity). Maybe the next iteration under discussion will make it better.

Memory capacity is pragmatically about the same at the top end, in terms of the maximum possible with some configuration.

http://eshop.macsales.com/shop/memory/Mac-Pro-Memory

Mac Pro 2013 tops out at 128GB. The previous Mac Pro single-CPU systems don't. There is parity if you add a second CPU, but how much more does a second CPU cost? (It is an incremental amount because those older Intel CPUs are almost at end-of-life and can possibly be bought used. With new systems and modern CPUs, it is a bigger jump.)

However, memory capacity using 4 DIMMs did go up. The previous designs were not "maximum DIMM slot" kings either; addressing the problem for the Mac Pro is pragmatically more about higher density in the same number of slots. So you're basically left with the GPU, which means "failed on most" of a list of 3 elements isn't particularly accurate.

The GPUs drag in a lot of Nvidia-fanboy-versus-AMD-fanboy drama, which probably has relatively little impact on Mac Pro design decisions. Apple probably did not think long enough about what the operational lifetime of the new Mac Pro systems was likely to be, or what demand/pricing on upgrades could be. There is probably a target for what they wish those to be, but reality is likely somewhat different.

I don't know, maybe I'm completely wrong, but I have seen all of the PC/Mac evolution from its early days.

The current Mac Pro design isn't about the "early days". The older designs had bigger gaps in I/O: SATA I and SATA II storage I/O. The current Mac Pro has tossed SATA out completely for internals, so you are not going to see a big jump over 'old' SATA because there isn't any old SATA.

The older Mac Pros are stuck on PCIe v2. The new Mac Pro is stuck on PCIe v2 in terms of expansion. There is a difference in price, but in terms of bandwidth the older systems are "stuck in time" also. It is more a factor of how much of the bandwidth was in use when the system shipped; again, the older systems shipped with bigger gaps. I don't think that will be quite as true for current and future systems as it was in the past.

The 2009-2012 Mac Pro's two x4 PCIe v2 slots share bandwidth. That doesn't matter much if you plug two de facto x2 cards into them. It will matter if you plug in two cards that actually need the full x4 bandwidth and run them concurrently. If you do, then you probably need a new system, because the older one doesn't cut it anymore. There were gobs of headroom when common storage devices were relatively slow.
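The bandwidth behind that shared-slot point is simple lane arithmetic, using PCIe 2.0's published 5 GT/s rate and 8b/10b encoding (the x4 slot layout is the 2009-2012 Mac Pro's as described above):

```python
# Usable bandwidth of a PCIe 2.0 link: 5 GT/s per lane, with 8b/10b
# encoding leaving 8 payload bits per 10 bits transferred,
# i.e. 500 MB/s per lane.
def pcie2_bandwidth_mb_s(lanes):
    gt_per_s = 5.0e9              # transfers per second per lane
    encoding_efficiency = 8 / 10  # 8b/10b line code
    bits_per_byte = 8
    return lanes * gt_per_s * encoding_efficiency / bits_per_byte / 1e6

print(pcie2_bandwidth_mb_s(4))   # 2000.0 MB/s for an x4 slot
# Two x2-class cards fit under that ceiling; two genuinely x4 cards
# running concurrently is where the shared uplink becomes the limit.
```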
 

ixxx69

macrumors 65816
Jul 31, 2009
1,294
878
United States
Apple tried dealing with clones before. It nearly bankrupted the company.
The "clones" had nothing to do with Apple almost going under... they were a blip in an already-low market share. The clones were just a symptom of everything that was ailing Apple at the time.

Mac sales were cratering because of a changing industry, an indecipherable lineup of models, bad designs, configurations based on marketing rather than user needs, overpricing, etc. Clones were a "last ditch" effort to increase Mac OS market share; Mac developers were abandoning the platform because the market wasn't large enough compared to DOS/Windows. But in reality there weren't enough clones sold to significantly affect genuine Apple Mac sales.
 

PunkNugget

macrumors regular
Apr 6, 2010
213
11
Apple tried dealing with clones before. It nearly bankrupted the company.

I know that was over 20 years ago, but who gives a crap about clones when building Hackintoshes has been around for over 6 years now and we have dedicated websites showing you how to build one? Hundreds of thousands of people are building them on their own. Apple knows this, can't do anything about it, and doesn't care anymore. Their main focus is the iPad and iPhone. That's why they went "small" with their TrashCan Pro, with only one CPU and a crappy GPU.

The only drawback (as I mentioned before) is the 32-core "cap" written into their OS. IF they ever decide to expand and allow a dual-CPU config (obviously along with an OS update that supports a new dual-CPU setup), then (and only then) will they start to catch up to Windows. Until then we're stuck with it.

At least we can use 980 Tis and Titan X GPUs for our Hackintosh builds. My new setup is going to use two 980 Tis with (as on my other current system, the Hackinbeast) Mac OS X on one SSD and Windows on the other. If Windows 10 is more tolerable for me, I will switch altogether, because there is NO core "cap" written into its code and (again) Apple doesn't really care anyway. They haven't proven otherwise. It's too bad, as I like their OS. Oh well, things change and that's life...
 

filmak

macrumors 65816
Jun 21, 2012
1,418
777
between earth and heaven
Mac Pro 2013 tops out at 128GB.

At first I would like to thank you for your very detailed and of high quality posts, we have all learned a lot from them, I'm very grateful.

In the Mac Pro's specs, as of today, it is still reported that max RAM is 64 GB.
This also happened with the older models, which usually could handle more RAM, within the same specs, than Apple quoted.

I know about the third-party RAM from OWC, but if I remember correctly it has lower specifications/speed (1066 vs 1866?). I'm not very sure; there was an old thread about this subject here, from a scientist who was interested in, and needed, 128 GB, but in the end he went for another solution because he was not satisfied with the compromises.

I think there was something about limitations in Intel's chipset when addressing the higher capacity of RAM; this was not a problem for the normal lower-capacity RAM kits (up to 64 GB), which could/can run at the faster 1866 MHz speeds.

I think this was the reason, along with perhaps the higher cost, that some companies offer this kind of upgrade (OWC, Transcend, and maybe others) and some bigger names don't (Crucial, Kingston, etc.).

The older Mac Pros are stuck on PCIe v2. The new Mac Pro is stuck on PCIe v2 in terms of expansion.

I cannot remember a single PowerMac / Mac Pro that didn't have limitations/restrictions. This was, still is, and will remain Apple's policy; perhaps they have other priorities/interests. They have never offered the best, and possibly untested, hardware, but they certainly offered more stability.
 

filmak

macrumors 65816
Jun 21, 2012
1,418
777
between earth and heaven
The 2009-2012 Mac Pro's two x4 PCIe v2 slots share bandwidth. That doesn't matter much if you plug two de facto x2 cards into them. It will matter if you plug in two cards that actually need the full x4 bandwidth and run them concurrently. If you do, then you probably need a new system, because the older one doesn't cut it anymore

Sure. We don't have to worry about these issues any more.
The nMP doesn't have any built-in slots, only built-in GPUs,
and if something happens to them you have to send the whole unit in.

In the old design you could find a lot of solutions on site and continue with your work.

Imho, in general we have taken some steps forward and some back.

Perhaps we're heading, in a time loop, back to the era of the first Apple computers, Amigas, and Atari STs, when everything was built in.
You had a failure? If you can't find a spare board, you have to buy a new computer, and of course it will probably be faster/better.
 