... but i feel you're saying "well, if we can't get faster cores then we might as well get more cores so things will more-or-less even out that way"
No, I'm saying
since we can't get faster cores, they may as well give us more cores for our money. And, for sure, we'll take any speed increases they can figure out too. I don't want them to just keep selling the same 2-4 core chip for the next decade.
In the end, it's about how much computing power comes in the package (look at GPU advancements!), and then finding ways to take advantage of it. For most users, current speeds and cores are overkill. And, yes, certain functions need as much single-core speed as we can get.
It doesn't necessarily even out, depending on the software and use, but the power is there to be tapped into at least. And, afaik, we're running into physics limitations right now. In that case, give us more cores and we'll find ways to use them.
... and often with the tag of "pro's needs" added to it as a means to strengthen the myth (for whatever reason).
It's because there are certain types of computing jobs, and certain types of software, that can utilize that extra power. It's a myth in that the benefit isn't automatic and across the board. But, if I have a 4-core, maybe I can have a render or Folding@home going in the background and still effectively use xyz app without noticing much impact. If I have a 12-core, I could run Folding@home, a rendering engine, be encoding some video, be compiling some software, and still use whatever app I need while all of that is happening. That could happen on the 4-core too, I suppose, but it's less likely to be as smooth, and the background processes won't finish as quickly.
Sure, most of the time many of those cores are going to waste (in terms of pure productivity), but I can always find things to keep them busy (ex: Folding@home).
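Just to make that concrete, here's a rough sketch of what I mean by keeping spare cores busy without bogging down the foreground app: park the batch work at the lowest priority and leave a core or two free. The "job" is just made-up number crunching standing in for folding/encoding/compiling, and it assumes a POSIX system for os.nice().

```python
# Sketch: fill spare cores with low-priority background work so the
# foreground app stays snappy. POSIX-only (os.nice); the job is a
# made-up stand-in for folding/encoding/compiling.
import math
import os
from concurrent.futures import ProcessPoolExecutor

def background_job(seed):
    os.nice(19)  # drop to lowest priority so interactive work wins
    return sum(math.sqrt(i) for i in range(seed, seed + 10_000_000))

if __name__ == "__main__":
    spare = max(1, (os.cpu_count() or 2) - 2)  # leave a couple of cores for the UI
    with ProcessPoolExecutor(max_workers=spare) as pool:
        results = list(pool.map(background_job, range(spare)))
    print(len(results), "background jobs finished")
```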
most renderers have openGL previews these days which is more-or-less giving you instant feedback.. the rendering software i use is openCL and if i'm wanting ray-traced previews...
Oh sure, a lot has been off-loaded to the GPUs now (which, as noted above, are highly multi-core). But, say you're importing a model into the CAD program; that import could run on its own core so you could keep working without slowdown. Or the calculations of some plugin in the 3D program might run on another core, without slowing down the main app. Sometimes, things that appear to you as a single task are actually handled by routines broken down across multiple cores. It just depends.
Plus, as OSs and development languages/tools advance, more aspects are becoming multi-threaded at a lower level.
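As a trivial illustration of the pattern (not any real CAD API; load_model() is just a hypothetical placeholder): hand the heavy import off to a worker process and keep the main/UI loop servicing the user in the meantime.

```python
# Minimal sketch: hand a heavy "model import" off to a worker process so
# the main (UI) loop stays responsive. load_model() is a hypothetical
# stand-in, not a real CAD API.
import time
from concurrent.futures import ProcessPoolExecutor

def load_model(path):
    time.sleep(5)  # pretend this is a slow parse/import
    return f"parsed {path}"

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=1) as pool:
        future = pool.submit(load_model, "big_model.step")
        while not future.done():
            print("UI still responsive...")  # main loop keeps handling input
            time.sleep(1)
        print(future.result())
```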
what i meant by 'ridiculous' was the notion that a 'pro' should or would spend $15,000 on a 24core machine in order to get a render back in 4 hours as opposed to 12 on an 8core machine... that's a complete ripoff when compared to spending $1 to get the image back in 1 minute from a 64,000 core supercomputer via their 4 or 6core personal computer.
No argument there, though I'd be a bit surprised if that kind of cloud computing were that cheap. The general principle, yes. Still, if I'm spending a certain amount on a computer, I'd rather have a 12-core than a 4-core if the single-core speed isn't going to be that much different anyway.
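If those figures were real (and I'm skeptical of the $1 number), the back-of-envelope break-even is easy enough:

```python
# Rough break-even using the figures from the thread (treat them as
# hypothetical): a $15,000 24-core workstation vs ~$1 per cloud render.
workstation_cost = 15_000
cloud_cost_per_render = 1.0
breakeven = workstation_cost / cloud_cost_per_render
print(f"cloud spend matches the workstation after ~{breakeven:,.0f} renders")
```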
right.. for rendering
More than rendering, but rendering is a good example because the programmers took the time to make that work, and they can do the same for other aspects of the program. The rendering side just works particularly well for me, as it was designed as a stand-alone, scalable app. So, not only does it scale, it does so incredibly efficiently. Like I said, I can use nearly 100% of any hardware available. It isn't the usual dropoff of 100% efficiency on 2 cores, 80% on 4, 60% on 8, and so on.
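That dropoff is basically Amdahl's law: per-core efficiency depends on how much of the job actually parallelizes. A quick sketch (the 0.95 and 0.999 fractions are just illustrative, not measurements of any particular renderer):

```python
# Amdahl's-law sketch: per-core efficiency vs core count.
# p = fraction of the job that parallelizes (0.95 and 0.999 are
# illustrative values only).
def efficiency(p, cores):
    speedup = 1.0 / ((1.0 - p) + p / cores)
    return speedup / cores

for cores in (2, 4, 8, 12, 24):
    print(f"{cores:>2} cores: "
          f"p=0.95 -> {efficiency(0.95, cores):.0%}, "
          f"p=0.999 -> {efficiency(0.999, cores):.0%}")
```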
it's not useless at all.. but sub-$5000 setups utilizing 4-6 cores.. 8-cores tops.. is where the benefits are happening.. anything over that is a complete waste of money for nearly every single operation out there from a professional on a personal computer.
I agree in general... for many apps and many people. But, if they give us 12 or 22 or whatever cores in that same sub-$5000 box, or even if I have to pay an extra $1000 to get that, then I'd take it. And, that seems to be where Intel says they are going, unless I've misunderstood.
'bring on the cores' is bad info.. if your application can scale to 112 cores, don't recommend 12 cores vs 8.. recommend 4 or 6 faster cores (as in- you should be using the fastest single core speeds available instead of a bunch of slow ones that will sit idle 95% of the time).. if an application can scale to 112 cores then it's highly likely there are that many cores available for you to use via cloud at a cost far (FAR!) cheaper than it would cost to purchase.. and certainly far cheaper than you'll be spending to get the minuscule addition of 4 more cores to your personal computer with negligible speed increases.
But, the problem is that single-core speeds are capping out. So, say you could have 12 cores at 2.9 GHz or 4 cores at 3.0 GHz. I'd take the 12 any day. It does depend on the user, though: if you're an average computer user, or a gamer, maybe go with the 4 cores, since you'll gain a bit of speed and won't use the extra cores. I'll use them.
That's more like what we're currently facing. It's not as though the choice were 4 cores @ 3 GHz vs 12 cores @ 1 GHz.
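For a job that actually scales, the arithmetic at those clocks isn't close (the numbers are just the hypothetical ones above, not real chips):

```python
# Back-of-envelope throughput for a well-parallelized job, using the
# hypothetical clocks from the post (not real SKUs).
four_core = 4 * 3.0     # ~12.0 "core-GHz"
twelve_core = 12 * 2.9  # ~34.8 "core-GHz"
print(f"4 x 3.0 GHz  -> {four_core:.1f}")
print(f"12 x 2.9 GHz -> {twelve_core:.1f}  (~{twelve_core / four_core:.1f}x for parallel work)")
print("single-threaded work only gives up ~3% (2.9 vs 3.0 GHz)")
```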
idk, spending thousands of dollars in order to get a render back in 20 minutes instead of 40 isn't what i'd call 'quite a bit'..
That just depends on what your priorities are, though I agree that the cloud is more cost-effective for some applications. Say you're running some physics analysis that doesn't scale to the cloud; then you'll pay to cut times in half.
you're going to spend weeks designing/modeling/etc. with the computer.. then spend a whole bunch of extra money in order to complete a render package in 1/2 day instead of 1 day ???
But, what if I can cut my processing time from a day to a half day, in the background, while I'm still working on the next project? Again, that would have to be cost-analyzed for each situation. Not everyone (or every app) has cloud-computing capability.
They dropped the single-tasking audio jack because the Lightning port already does the job, and at some point in the design cycle, having the redundant function has too great an opportunity cost. Whether it's adding a speaker and Taptic engine this year, or dropping the chins next year, the analog jack becomes wasted space. There's no consumer demand for dropping the jack, but technically it's in the consumer's long term interest.
It's a matter of priorities... it needs a 3.5mm jack more than it needs 'stereo' speakers. They could drop that speaker too if they really needed the space. But, they figured that gimmick is sexier than keeping the jack. And, it's hardly in the long-term interest of the consumer unless the future is Lightning audio (which it most certainly is NOT!).
the Dell XPS is (apparently) being released with a Core i7-7500U, which is an improvement over Skylake performance-wise in both CPU and iGPU.
Yeah, it's probably more about the iGPU, bus, and other components than advancements to the CPU itself. For example, whatever chip they put in there, I want TB3/USB-C 3.1, etc. The exact details of the CPU cores aren't as important to me. But, sure, I'll always take more performance at lower power.
