You mean that the "Atom I" (Silverthorne) would never work in a phone or MP3 player.

Check out "Atom II" (Moorestown) and "Atom III" and ....

Intel's in the low power race for the long haul....

Perhaps Apple's buying the rights for PA Semi to build SoCs around Atom cores. That would make a lot more sense than trying to out-engineer Intel for low power systems.




"Designer", not "manufacturer". And, Apple would be more likely to increase the profit margin than to reduce the price.

Two mistakes in one sentence ;)


That's what I think. Apple can design their own boards and then shove an Atom inside. This would definitely keep all the stuff in-house.


aussie_geek
 
This makes a whole lot of sense. Maybe Apple will be able to further reduce the price since it now owns a manufacturer of some of the chips. At least in a future iPod release.
This almost certainly won't be cheaper for Apple, unless it allows them to reduce overall chip count. Apple's doing this for control, and so they can carefully tune performance.
Intel's in the low power race for the long haul....

Perhaps Apple's buying the rights for PA Semi to build SoCs around Atom cores. That would make a lot more sense than trying to out-engineer Intel for low power systems.
Yeah, I've heard the "long haul" promises before; then they surrendered and just bought DEC's StrongARM.

First, Intel doesn't have any history at all of licensing out their cores, so it's exceedingly unlikely that PA is building around the Atom. Has Intel decided to license this one out?

Second, it traditionally hasn't been difficult to out-engineer Intel in this space; actually, just about everyone has. Intel's engineering is good, but x86 is a huge albatross around their necks as far as portable systems go-- they have to be better than good to overcome that disadvantage.

There is no way we will see such MINOR increases in storage capacity in the next two years.
I highly doubt that we will see the first 32GB iPhone next summer.

I think it's impossible that they would go more than a full year without increasing the flash memory.
Flash density only grows so fast. So far, Samsung has been slightly ahead of Moore's law, but you're still looking at a year or more between updates.
Oh. I'm not sure that I'd agree that an asymmetric system with a relatively weak general-purpose processor and a bunch of specialized SIMD arithmetic processors is the best system on which to learn parallel programming.

I'd suggest a quad-core, perhaps with CUDA, would be a better teaching system.
I was with you until you threw CUDA in. How is that any different than a bunch of specialized SIMD processors?

There is actually some benefit to starting people out with a weak general processor-- it will really encourage them to make use of the parallelism because the benefits will be so obvious.
I pointed this out when the acquisition happened... and most people showed how little they knew about the subject by proclaiming Atom.

Therefore, "I fart in your general direction."
Actually, what you said was:
I would look to the Enterprise Markets and Middle-tier mid-to-large markets that can leverage XServe and Xsan with replacement products that do much more than XRaid.
Think larger networking iron ala data switches, multimedia streaming, embedded devices for the Health Industries, Federal industries, etc.
...
 
I was with you until you threw CUDA in. How is that any different than a bunch of specialized SIMD processors?

It's not - but it lets the student work on both traditional general-purpose code parallelism (the quad) and on mainly mathematical data parallelism (the co-processor).



There is actually some benefit to starting people out with a weak general processor-- it will really encourage them to make use of the parallelism because the benefits will be so obvious.

They certainly would learn quickly that many applications don't have data parallelism that SIMD units are built to exploit. :eek:

In all the years of PowerPC development, only a few classes of applications really benefited from AltiVec. Those were important applications, but most stuff didn't use the AltiVec unit for much.
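
To make that concrete, here's a toy C sketch (my own made-up functions, not from any real codebase): the first loop is the kind of independent, element-wise math a SIMD unit or CUDA-style co-processor eats for breakfast; the second has a serial dependency chain that no amount of SIMD width will help, and that's the shape a lot of "ordinary" code has.

#define N 1024

/* Data parallelism: each a[i] depends only on b[i] and c[i], so the
 * iterations are independent and a SIMD unit can do many at once. */
void saxpy(float *a, const float *b, const float *c, float k)
{
    for (int i = 0; i < N; i++)
        a[i] = k * b[i] + c[i];     /* vectorizes trivially */
}

/* No data parallelism: every iteration needs the previous result,
 * so the loop is stuck running one step at a time. */
float decayed_sum(const float *b)
{
    float sum = 0.0f;
    for (int i = 0; i < N; i++)
        sum = 0.99f * sum + b[i];   /* loop-carried dependency */
    return sum;
}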
 
Wussies, they need to send them to Los Alamos for a few months to do some assembly on the CM5; then they will know what SMP programming is all about. ;)

-mark
 
Ummm - 64-way SMP chip from Intel

Rumors are that Larrabee will use 16 x86 cores running up to four threads each and use a 1024-bit wide memory bus.

http://news.cnet.com/8301-13924_3-9966739-64.html and other stories

http://en.wikipedia.org/wiki/Larrabee_(GPU)

Larrabee will differ from other GPUs currently on the market such as the GeForce 8 Series and the Radeon 2/3000 series in that it will use a derivative of the x86 instruction set for its shader cores instead of a custom graphics-oriented instruction set, and is thus expected to be more flexible.

In addition to traditional 3D graphics for games, Larrabee is also being designed explicitly for general purpose GPU (GPGPU) or stream processing tasks: for example, to perform ray tracing or physics processing, in real time for games or perhaps offline as a component of a supercomputer.​
 
They certainly would learn quickly that many applications don't have data parallelism that SIMD units are built to exploit. :eek:

In all the years of PowerPC development - only a few classes of applications really benefited from AltiVec. Those were important applications, but most stuff didn't use the AltiVec unit for much.
:D True enough...
Rumors are that Larrabee will use 16 x86 cores running up to four threads each and use a 1024-bit wide memory bus.

http://news.cnet.com/8301-13924_3-9966739-64.html and other stories

http://en.wikipedia.org/wiki/Larrabee_(GPU)

Larrabee will differ from other GPUs currently on the market such as the GeForce 8 Series and the Radeon 2/3000 series in that it will use a derivative of the x86 instruction set for its shader cores instead of a custom graphics-oriented instruction set, and is thus expected to be more flexible.

In addition to traditional 3D graphics for games, Larrabee is also being designed explicitly for general purpose GPU (GPGPU) or stream processing tasks: for example, to perform ray tracing or physics processing, in real time for games or perhaps offline as a component of a supercomputer.​
Yeah, this is why I think Apple is making the right choice with Snow Leopard. Processors aren't getting any faster-- we're just getting more of them. If they can nail Grand Central (as I understand it) and OpenCL, they'll have managed to crack the performance bottleneck we're facing and give developers a platform that can really sing.
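
For what it's worth, here's roughly the kind of hand-rolled bookkeeping I'd hope Grand Central hides from developers. This is just a plain pthreads toy of mine; the four-core assumption and the function names are made up, since Apple hasn't shown the actual API yet.

#include <pthread.h>

#define N       (1 << 20)
#define NCORES  4                    /* assumed core count */

static float in[N], out[N];

struct chunk { int start, end; };    /* half-open range [start, end) */

/* Worker: scale one slice of the input array. */
static void *scale_chunk(void *arg)
{
    struct chunk *c = arg;
    for (int i = c->start; i < c->end; i++)
        out[i] = 2.0f * in[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[NCORES];
    struct chunk chunks[NCORES];

    /* Carve the loop into one contiguous chunk per core... */
    for (int t = 0; t < NCORES; t++) {
        chunks[t].start = t * (N / NCORES);
        chunks[t].end   = (t + 1) * (N / NCORES);
        pthread_create(&tid[t], NULL, scale_chunk, &chunks[t]);
    }
    /* ...and wait for all of them to finish. */
    for (int t = 0; t < NCORES; t++)
        pthread_join(tid[t], NULL);
    return 0;
}

If Grand Central ends up doing that split-and-join automatically for however many cores the machine actually has, that alone would be worth it.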

Intel is making an interesting choice here. On the one hand using x86 for graphics work brings to mind the adage "every problem looks like a nail when all you have is a hammer", but on the other this would start to reconcile the fact that half of the processing power in modern systems is connected to the monitor and goes largely unused.

If they can make this work effectively as a graphics card, then I think they'll have solved two problems. First, we can reduce the system bus bottleneck by dividing the computation between the motherboard and graphics card in such a way as to minimize the data stream between them. Second, we can much more easily move gaming (and other) computations back and forth between the CPU and GPU as systems evolve.

They have to make it work as a graphics card though, and it's going to be tough to get this kind of system to rival dedicated pipelines in raw polygons rendered.
 
think differently - no more polygons

They have to make it work as a graphics card though, and it's going to be tough to get this kind of system to rival dedicated pipelines in raw polygons rendered.

http://www.eetimes.com/news/semi/showArticle.jhtml?articleID=208403516

"SAN JOSE, Calif. — Intel Corp. wants to drive a shift to a new kind of graphics architecture, one more in line with its multicore processor roadmap.

The traditional raster graphics pipeline used by companies such as Advanced Micro Devices and Nvidia needs to go in favor of a new and better way to render graphics using ray tracing, said Justin Rattner, Intel's chief technology officer, speaking at an annual gathering of Intel researchers. Rattner also disclosed a separate effort to extend the C++ programming language for multicore processors.

"We believe a new graphics architecture will deliver vastly better visual experiences because it will fundamentally break the barrier between today's raster-based pipelines and the best visual algorithms," said Rattner. "Our long term vision is to move beyond raster graphics which will make today's GPU technology outmoded," he said.

Intel researchers will present a paper on its upcoming Larrabee chip at the Siggraph conference in August, Rattner said. The paper will provide examples of how to create superior images using ray tracing rather than a conventional raster graphics pipeline, he added."​

and later

"Ray tracing is a computational intensive method of drawing images based on following rays of light and their collisions with objects. Rasterization is a traditional method of breaking a scene into many tiny polygons, then drawing and coloring in each shape to give the scene lighting and texture effects."​
 