Wtf? No free software? Buy a Mac and you don't need to buy software. PCs require hundreds of dollars in software to make them useful.

Pages, Numbers, Keynote, GarageBand, Photos, iMovie, etc., all free.

Office only comes in the form of a trial; it's $99/year for Office 365. Plus antivirus, antimalware, WinZip, etc., all cost money...

Hmm, I do not seem to have free access to Pages, Keynote and Numbers. Did I miss something ?
 
My guess is that Apple hates making overly powerful machines because they last too long. But if you're willing to pay for a 15" quad-core, then that will get you a long(er) life.

Good point. You can see from my sig. that I still have a 2011 MBP. It runs Sierra, so I still use it. That's going to be a 6 year old laptop in a couple months. Though I admittedly spend most of my non-work time on an iPad.
I recently started following the site wccftech. It has a lot of good news on the upcoming AMD Zen platform and the HBM2-based AMD graphics cards coming out early next year. I think that will finally give AMD a performance edge over Intel and Nvidia for the first time in ages.
 
Hmm, I do not seem to have free access to Pages, Keynote and Numbers. Did I miss something ?
When you buy a new Mac, you get free copies of all the iWork and iLife software. Just like the free copies of iWork and iLife you get with a new iPhone.
 
I recently started following the site wccftech. It has a lot of good news on the upcoming AMD Zen platform and the HBM2-based AMD graphics cards coming out early next year. I think that will finally give AMD a performance edge over Intel and Nvidia for the first time in ages.
A word of warning. WCCFTech is a clickbait site, with absolutely no real knowledge about upcoming hardware from AMD. They post material they've made up themselves, or repost from other sites, just to bait every person who is desperately in need of good information about AMD. It was one of the sites responsible for the mindshare failure around the Polaris architecture, hyping the GPUs up to the moon.

If you educate yourself there, be prepared to be completely and utterly disappointed with AMD's real-world performance.

The current state of knowledge about Zen is that, clock for clock, the 8-core CPU will have the same level of performance as an 8-core Haswell-E CPU. This one: http://ark.intel.com/products/82930...ssor-Extreme-Edition-20M-Cache-up-to-3_50-GHz

If the silicon design has a 3.0 GHz base clock, it will be on the same level. If it clocks higher, it will be faster.

Here is a deep dive into the microarchitecture of Zen, with a comparison against Broadwell: http://www.anandtech.com/show/10591...t-2-extracting-instructionlevel-parallelism/7
Look at everything that defines single-threaded performance: load, store, decode, dispatch, INT registers, FP registers, retire queue, retire rate, ALUs, and AGUs. They are on the same level as Broadwell-E.

So the only things that can let the Zen architecture down right now are cache bandwidth and core clocks. The latest A0-revision silicon has a 3.15 GHz base clock and a 3.6 GHz boost clock at 95 W TDP. So lower than the Haswell-E CPU.

If you want to know and educate yourself about hardware, I suggest leaving WCCFTech in the garbage and going to AnandTech, the AnandTech forums, Fudzilla, and VideoCardz.
 
[QUOTE="koyoot, post: 23840513, member: 703545"]A word of warning. WCCFTech is a clickbait site, with absolutely no real knowledge about upcoming hardware from AMD. They post material they've made up themselves, or repost from other sites, just to bait every person who is desperately in need of good information about AMD. It was one of the sites responsible for the mindshare failure around the Polaris architecture, hyping the GPUs up to the moon.

If you educate yourself there, be prepared to be completely and utterly disappointed with AMD's real-world performance.

The current state of knowledge about Zen is that, clock for clock, the 8-core CPU will have the same level of performance as an 8-core Haswell-E CPU. This one: http://ark.intel.com/products/82930...ssor-Extreme-Edition-20M-Cache-up-to-3_50-GHz

If the silicon design has a 3.0 GHz base clock, it will be on the same level. If it clocks higher, it will be faster.

Here is a deep dive into the microarchitecture of Zen, with a comparison against Broadwell: http://www.anandtech.com/show/10591...t-2-extracting-instructionlevel-parallelism/7
Look at everything that defines single-threaded performance: load, store, decode, dispatch, INT registers, FP registers, retire queue, retire rate, ALUs, and AGUs. They are on the same level as Broadwell-E.

So the only things that can let the Zen architecture down right now are cache bandwidth and core clocks. The latest A0-revision silicon has a 3.15 GHz base clock and a 3.6 GHz boost clock at 95 W TDP. So lower than the Haswell-E CPU.

If you want to know and educate yourself about hardware, I suggest leaving WCCFTech in the garbage and going to AnandTech, the AnandTech forums, Fudzilla, and VideoCardz.[/QUOTE]


That's your opinion, and I don't agree with you about going to AnandTech, the AnandTech forums, Fudzilla, and the VideoCardz sites.
If you want to be well informed about the silicon industry, follow sites like EE Times daily, etc., etc...

And no lectures, please.
 
If you want to be well informed about the silicon industry, follow sites like EE Times daily, etc., etc...

And no lectures, please.
Yes, and the SemiWiki blog also posts quite a lot of information on the silicon industry, but it focuses solely on silicon design rather than GPU specifications.
 
When you buy a new Mac, you get free copies of all the iWork and iLife software. Just like the free copies of iWork and iLife you get with a new iPhone.

Hmm, interesting. My current Mac doesn't have those pieces of software, and neither did my 2011 MBP. I have no idea what happened between buying the Mac and now to lose them. I did reinstall the OS and upgrade it at some point. Bummer...
 
Hi guys, I'm coming from a mid-2014 15-inch MacBook Pro with a GT 750M.
Could anybody compare the 450/455/460 with the Nvidia 900 or 1000 series? My main task is music production, and Logic runs fine, but I wanted to see how it performs in games under Boot Camp. The last games I played were maybe Metro: Last Light and CoD: Advanced Warfare. :}

It would be great if the 450/455/460 could match a GTX 960M, or at least a 950M/1050.
And if anybody knows how a better GPU/CPU can affect music production, let me know.
And excuse my language...
 
Hi guys, I'm coming from a mid-2014 15-inch MacBook Pro with a GT 750M.
Could anybody compare the 450/455/460 with the Nvidia 900 or 1000 series? My main task is music production, and Logic runs fine, but I wanted to see how it performs in games under Boot Camp. The last games I played were maybe Metro: Last Light and CoD: Advanced Warfare. :}

It would be great if the 450/455/460 could match a GTX 960M, or at least a 950M/1050.
And if anybody knows how a better GPU/CPU can affect music production, let me know.
And excuse my language...

The 455 (1.3 TFLOPs) is slightly less powerful than a 960M (1.4 TFLOPs); the 460 (1.86 TFLOPs) is slightly more powerful than a 1050 (1.73 TFLOPs).

With that being said, a 1060, which you'll find in a couple of laptops that are significantly cheaper and 0.6" thin, will get you ~2.5x the performance of a 460 with its 4.4 TFLOPs.

If you plan on doing any gaming, the value/performance proposition says you shouldn't get a MBP compared to what else is out there.

For me, graphics are my biggest gripe about this laptop, as the best you can get is blown out of the water by mid-range GPUs from Nvidia. I don't think the MBP has ever had such a huge graphics disadvantage against other laptops with mid-range cards.
 
AMD Polaris RX480 in a tiny Z-Box, lol :rolleyes:

https://www.zotac.com/us/product/mini_pcs/magnus-erx480-windows-10

It's the MacBook Air Apple users deserve, but not the one it needs right now. So we'll hunt Tim Cook, because he can take it. Because this isn't our MacBook Pro. It's a Touch Bar MacBook Air, a bright emoji touch bar. A Dark Apple...
https://www.computerbase.de/2016-10/zotac-zbox-magnus-en1060-test/3/#abschnitt_benchmarks_in_full_hd

Look at how bottlenecked the GPU is by the thermal design of the computer. Compare a stock GTX 1060 with the scores for this computer. A similar thing will happen with the RX 480.
 
AMD Polaris RX480 in a tiny Z-Box, lol :rolleyes:

https://www.zotac.com/us/product/mini_pcs/magnus-erx480-windows-10

It's the MacBook Air Apple users deserve, but not the one it needs right now. So we'll hunt Tim Cook, because he can take it. Because this isn't our MacBook Pro. It's a Touch Bar MacBook Air, a bright emoji touch bar. A Dark Apple...

That Zotac box has a volume of 2.65 L without display or battery; the 15" MacBook Pro is 0.96 L. That's 2.76x the volume for the Zotac, without display or battery. That would be an unreasonable increase in the volume of the MacBook Pro if that extra volume is what's required to keep the GPU cooled.
 
Gigahertz isn't everything. The amount of current through a CPU goes as the frequency times the junction capacitance per transistor times the total number of transistors. The power dissipated is the current times the voltage. So, while we've stayed at roughly 3 GHz, we've gained a hell of a lot more transistors. This is possible through reduction of transistor size, which lowers the junction capacitance (and hence drops the amount of current pumped per cycle).
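As a rough numerical sketch of that scaling: the relation is roughly P ≈ N · C · V² · f. The transistor counts, capacitances, and voltages below are made-up illustrative values, not real chip specs:

```python
# Dynamic switching power: P ~ N * C * V^2 * f
# (activity factor folded into C for simplicity)

def dynamic_power(transistors, cap_per_transistor, voltage, freq_hz):
    """Approximate dynamic power in watts."""
    return transistors * cap_per_transistor * voltage**2 * freq_hz

# Same ~3 GHz clock, but a process shrink cuts per-transistor capacitance,
# so 20x the transistors can still dissipate less total power.
old_chip = dynamic_power(5e7, 1e-16, 1.3, 3e9)   # ~25 W
new_chip = dynamic_power(1e9, 4e-18, 1.0, 3e9)   # ~12 W
```

The point of the toy numbers is the trend: shrinking C (and V) is what lets the transistor count explode at a flat clock speed without the power budget exploding with it.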

The performance of a CPU goes as the clock speed times the work accomplished per clock (IPC). You see, there are other ways to improve performance than the clock speed of a CPU. We've seen massively parallel CPUs compared to the 3 GHz Pentium 4s: we've got quad-core, 8-thread mobile CPUs. We've seen the addition of instructions that greatly speed up specific tasks, AVX for example. We've seen better branch prediction, better pipelining, instruction merging, etc. All these features enhance total performance per clock and use additional transistors to accomplish the task. Hence the focus of the last decade or so has been to enhance total performance largely through parallelism and execution efficiency (better utilizing the transistors). As such, we've seen total performance increase not through much of a clock-speed increase, but rather through IPC and parallelism.
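That relation (performance ≈ cores × clock × IPC) can be put in a tiny toy model. The IPC figures here are illustrative round numbers, not measured values:

```python
# Performance ~ cores * clock * IPC (work per clock).

def relative_perf(cores, clock_ghz, ipc):
    return cores * clock_ghz * ipc

pentium4_era = relative_perf(1, 3.0, 1.0)   # single core, baseline IPC
modern_quad  = relative_perf(4, 3.0, 2.5)   # same clock, wider cores, more of them

speedup = modern_quad / pentium4_era        # 10x with zero clock-speed gain
```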

I think a lot of people would prefer it if the CPU and GPU stayed separate. However, given the huge push into heterogeneous computing, it would make sense that any CPU maker capable of making a GPU would integrate the GPU into the CPU to pave the way for this heterogeneous-compute future, where massively parallel instructions are sent to the GPU part automatically and the CPU takes care of the more serialized and loopy parts of the code. Such a future is coming. If one were to integrate massively parallel compute logic into a CPU, you might as well add ROPs, tessellation, etc., to make it a fully functional GPU, so that you are not wasting precious silicon space on parallel compute units that aren't used often. I think we'll see a lot more benefits of this heterogeneous approach very soon. I am thinking that when Zen comes out, we'll see more of this Apple-AMD partnership with OpenCL and so on bear fruit.


I am acutely aware of all of these things. I still have my AMD employee badge around here somewhere. Still, the speed increase from 1996 to 2006 is vastly different from the one from 2006 to 2016.

Also, Ars Technica had a great article explaining the AMD choice. The Nvidia solution would not support two 5K displays using DisplayPort 1.2, while the AMD one does. So Nvidia was not capable of meeting the required performance for this system. Period and full stop. When your choices are AMD or Death, you might feel strongly enough to go with "or Death," but Apple gave in and made more computers.
 
I am acutely aware of all of these things. I still have my AMD employee badge around here somewhere. Still, the speed increase from 1996 to 2006 is vastly different from the one from 2006 to 2016.

Also, Ars Technica had a great article explaining the AMD choice. The Nvidia solution would not support two 5K displays using DisplayPort 1.2, while the AMD one does. So Nvidia was not capable of meeting the required performance for this system. Period and full stop. When your choices are AMD or Death, you might feel strongly enough to go with "or Death," but Apple gave in and made more computers.
According to Ars, Nvidia does support DisplayPort 1.3 and thus would support two 5K displays. Not their fault that Intel doesn't support DP 1.3 in their TB3 spec...
 
Exactly my point, and as a production laptop this, I would say, more than justifies that price point... although I don't know much about Quadro graphics.

Quadro GPUs are workstation-grade GPUs designed with the professional in mind. They can handle 3D, editing, color grading, design, etc.
 
[QUOTE="TRDGT4Writer, post: 24012307, member: 878480"]Quadro GPUs are workstation-grade GPUs designed with the professional in mind. They can handle 3D, editing, color grading, design, etc.[/QUOTE]

Like the AMD FirePro™ for high-performance computing, or the Radeon Pro WX series video cards.
 
Secondly, you say that CUDA is better because Nvidia hardware is better. You couldn't be more wrong. Take the highest end of the last generation from both companies: Fury X vs. Titan X. One GPU has 8.6 TFLOPs of compute power; the other, slightly over 6 TFLOPs. Which one will be faster? In applications that favor compute performance, the answer is obvious.



It definitely wouldn't be obvious.

In GPUs, Gflops are a paper calculation of shader cores x frequency x operations per core per clock (2).

If one has bulkier cores that do more work per cycle, this is not reflected in a Gflops number.

This is tantamount to saying a 4GHz Pentium D would 'obviously' be faster than a 3GHz Kaby Lake dual core. There's worlds more to it.
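For what it's worth, the paper calculation described above can be reproduced directly. The shader counts and clocks below are the commonly quoted reference specs for the two cards named earlier in the thread (Fury X and the Maxwell Titan X):

```python
# "Paper" GFLOPS: shader cores * clock * 2 ops/clock (fused multiply-add).

def paper_tflops(shaders, clock_ghz, ops_per_clock=2):
    return shaders * clock_ghz * ops_per_clock / 1000.0

fury_x  = paper_tflops(4096, 1.05)   # ~8.6 TFLOPs
titan_x = paper_tflops(3072, 1.00)   # ~6.1 TFLOPs (base clock)
# The Fury X "wins" on paper, which is exactly the comparison being questioned:
# the formula says nothing about how much real work each core does per FLOP.
```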
 
It definitely wouldn't be obvious.

In GPUs, Gflops are a paper calculation of shader cores x frequency x operations per core per clock (2).

If one has bulkier cores that do more work per cycle, this is not reflected in a Gflops number.

This is tantamount to saying a 4GHz Pentium D would 'obviously' be faster than a 3GHz Kaby Lake dual core. There's worlds more to it.


Doesn't "operations per core per clock" define "more work per cycle"? I typically don't refer to the Gflops calculation, but that seems like what it'd be from a somewhat layman's perspective.

Would a dual-core Pentium D @ 4 GHz actually have higher GFLOPs based on that calculation than, say, a dual-core 2.9 GHz Skylake CPU?
 
Wtf? No free software? Buy a Mac and you don't need to buy software. PCs require hundreds of dollars in software to make them useful.

Pages, Numbers, Keynote, GarageBand, Photos, iMovie, etc., all free.

Office only comes in the form of a trial; it's $99/year for Office 365. Plus antivirus, antimalware, WinZip, etc., all cost money...

What on earth are you talking about?

What does Office 365 have to do with owning a non-Apple computer? Buy it if you like it (you might have noticed Office for Mac is payware too...), or use LibreOffice like all sane people do in 2017, or one of the other dozen packages around.

WinZip? Seriously? Windows (like OS X, Linux and... even QNX) has had built-in zip capability since XP; if you want something more, maybe use the far superior (and GPL-licensed) 7-Zip.

In fact, I think there is enough software under the GPL/MIT license around to last you a lifetime, whether you are on Mac, Windows or Linux.

I literally don't remember the last time I bought (or, for that matter, pirated) software for any platform. Was that the 90s?

I like Macs as much as the next guy, but I don't see the need for spreading senseless FUD.

I agree. PCs are more like sports cars or muscle cars - you can tinker with them as you desire and change parts on them. These Macbook Pros are more like German luxury cars - not focused around pure performance but other aspects of overall experience.

Or maybe they're more like a brand of computers... :p
 
Would a dual-core Pentium D @ 4 GHz actually have higher GFLOPs based on that calculation than, say, a dual-core 2.9 GHz Skylake CPU?

No, it wouldn't, because Skylake has wider vector ALUs.

Still, the entire FLOPS story is fairly complicated and is mostly a marketing tool. Take how GPU makers describe their cards ("640 shader cores" etc.); by that logic a modern Intel CPU would have "64 compute cores" or so. A modern GPU is basically a very wide multiprocessor built from multitudes of 512-bit or even wider vector units. If such a unit can do one addition operation per cycle, that's 16 FLOPs right there. The problem is that a vector unit can only execute one instruction at a given time. So if your task is to add 16 numbers to another 16 numbers, that matches the GPU's capability perfectly and it can do it in one clock. But if you want to add 4 numbers, then multiply 4 numbers, then divide 4 numbers, the GPU has to split this into three instructions and your FLOPs fall to 4 per clock. Modern GPUs have become very advanced at this and try to mix and match different tasks and data paths to achieve very high utilisation, but it still depends a lot on what you need to do and how the GPU is configured. Newer Intel iGPUs are very good at more complex compute tasks (e.g. raytracing), partly because they are very flexible in how they can schedule and mix computations; there are a bunch of very interesting Intel white papers describing this in detail.

The bottom line being: the GFLOPS numbers only represent a very theoretical case along the lines of "our GPU can multiply-add 100000000 numbers per second, if that happened to be exactly the thing you want to do". Using them for direct comparisons is difficult.
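The utilisation point above can be sketched like this. The 16-lane unit and 1 GHz clock are made-up round numbers matching the 512-bit example (16 x 32-bit lanes):

```python
# Effective throughput of a single 16-wide vector unit that issues one
# instruction per cycle: peak FLOPS only when every lane is filled.

VECTOR_WIDTH = 16

def effective_gflops(clock_ghz, lanes_filled_per_cycle):
    """One op per filled lane per cycle; idle lanes do no work."""
    assert lanes_filled_per_cycle <= VECTOR_WIDTH
    return clock_ghz * lanes_filled_per_cycle

peak  = effective_gflops(1.0, 16)  # 16 adds each cycle -> 16 GFLOPS
mixed = effective_gflops(1.0, 4)   # only 4 lanes busy  ->  4 GFLOPS
```

Same hardware, same marketing number, a 4x gap in delivered FLOPS depending purely on how well the workload fills the lanes.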
 
Doesn't "operations per core per clock" define "more work per cycle"? I typically don't refer to the Gflops calculation, but that seems like what it'd be from a somewhat layman's perspective.

Would a dual-core Pentium D @ 4 GHz actually have higher GFLOPs based on that calculation than, say, a dual-core 2.9 GHz Skylake CPU?


You could call it work per cycle, sure, but again that's a theoretical figure; how well the system sustains those two possible operations per ALU per clock is a different story.

If we look at modern GPU architectures, if both an AMD card and an Nvidia card looked like 10 Gflops on paper, I'd expect the Nvidia one to be a few tens of percent ahead in real-world performance. And that's not saying AMDs are bad! Nvidia simply does more per core; AMD goes for slimmer and more numerous cores. So on paper, more cores = more Gflops. On paper.

And I wasn't saying the Pentium D would have a higher Gflop rating; it doesn't, modern SIMD is way ahead of that, and that's where a CPU's Gflops come from. I was just drawing the comparison that GPU Gflops comparisons are becoming the new GHz myth of old, and driving me a bit batty :p
 
If we look at modern GPU architectures, if both an AMD card and an Nvidia card looked like 10 Gflops on paper, I'd expect the Nvidia one to be a few tens of percent ahead in real-world performance. And that's not saying AMDs are bad! Nvidia simply does more per core; AMD goes for slimmer and more numerous cores. So on paper, more cores = more Gflops. On paper.

I'd expect the opposite :) From what I've seen, AMD hardware is better at utilising its resources. Where AMD lacks is drivers, and that's IMO why we are seeing large improvements when Vulkan or DX12 is used on AMD. The simpler, closer-to-the-hardware API model partially removes the advantage Nvidia has with its driver optimisations.
 