
digitalpencil

macrumors 6502
Jul 2, 2007
343
0
Manchester, UK
Wicked post man, i've been looking for a decent summary of info on SL for a while now..
Looks like Apple are laying the foundations for a new generation, extending 64-bit and multi-core support, leveraging the gpu for computational processes and access to 16TB of RAM!!!! Any thoughts on what type? I'm really hopeful to see DDR3 roll-out by '09.. then we'd start getting insane speeds!
 

t0mat0

macrumors 603
Original poster
Aug 29, 2006
5,473
284
Home
Cheers.

Memory
I'd imagine it's DDR3 vs FB-DIMM, with DDR3 SDRAM winning hands down in many areas. The gist being that FB-DIMM might be around for 4-socket Nehalem boards, but more servers really, with DDR3 the main memory type for Nehalem Macs i'd imagine. Unless Apple can get around the licensing problem with quad-socket boards, they'll stick to the best they can get for dual sockets. And that means DDR3 currently. DDR3-1800 from Cell Shock and DDR3-1866 from Patriot reviews here and here

I'd recommend having a look at the variant table on the wiki page here.

Using AnandTech's DDR3 vs DDR2 article heavily (plus the wiki page at http://en.wikipedia.org/wiki/DDR3_SDRAM):

JEDEC controls standards to help provide compatibility and interchangeability of computer memory. JEDEC apparently "specifies voltages, speeds, timings, communication protocols, bank addressing, and many other factors in the design and development of memory DIMMs."

DDR3
DDR2 became the de facto RAM in Q3 2006, when AMD switched to DDR2 for their AM2 platform and Intel introduced Core 2 Duo on socket 775. DDR2 itself has had improvements in both price and speed (800 vs 1066). P35 chipset (Bearlake) boards, which were available from June 2007, accepted DDR3. DDR3 has a faster rated speed, lower voltage, double the internal banks, fly-by rather than conventional T topology, an optional thermal sensor & changed driver control. April 08 info on DDR3 here.

I'd recommend reading the conclusions page here. From the initial tests, when only early on DDR3 was available - having a decent chipset made much more of a difference when comparing the old chipset to the P35(Bearlake) chipset.

The conclusion of X-bit Labs' 2007 article "DDR3 SDRAM: Revolution or Evolution"? Lower power usage, better overclocking, and availability.



Chipsets
Intel chipsets - wiki info here (predominant source used)
Core 2 chipset info [url=http://en.wikipedia.org/wiki/Intel_Core_2]here[/url].
If the 2 Nehalem chipsets so far (here) are any indication, DDR2 isn't supported, so it'll have to be DDR3.
It's a messy area even if you don't want a lot of detail: Intel itself has 6 different areas for chipsets here
  • Performance/Mainstream Desktop Chipsets
  • Server/Workstation Chipsets
  • Value Desktop Chipsets
  • Laptop Chipsets
  • Embedded Chipsets
  • Mature Chipsets

Not that this would stop Apple from going off-road and using a bespoke/non-standard Intel chipset. (There are lots of different chipsets Intel offers - I counted >40 just for desktops and >20 for notebooks.)

FB-DIMM seems to be dying, and it's been a long time coming:
http://www.theinquirer.net/en/inquirer/news/2006/08/29/fb-dimms-threatened-by-microbuffer
http://www.theinquirer.net/en/inquirer/news/2006/09/07/intel-pulls-back-from-fb-dimms.
http://www.theinquirer.net/gb/inquirer/news/2007/09/26/fb-dimms-dead-rddr3-king

FB-DIMMs remain for some current setups, and for Nehalem 4S (4-socket; as mentioned, seen less due to prohibitive software licensing costs for 4 sockets versus 2 sockets). Mainstream/high-performance/(crazy ass rigs) will get RDDR3 (registered DIMMs with ECC) at DDR3 800/1333MHz or more.

FB-DIMMs have caused some hassle - see this macrumors thread here.
Currently Xeon processor = Intel chipset = only support for FB-DIMM (hence Skulltrail using FB-DIMM, and having its ass handed to it by a Nehalem rig).

But, looking towards Nehalem chips, to date the only Nehalem part that will support FB-DIMM2 is "Beckton" - a 4-socket (i.e. 4 CPU) server chip. Each CPU can have 8 cores, giving a 32-core beast. The rest are dual/tri-channel DDR3.

FB-DIMM seems a nasty trade-off to get an Intel server chip. Maybe Xserves take Beckton, and Mac Pros take Gainestown, the 2-socket server CPU - each CPU having 4 cores, giving 8 cores.

To precis CWallace:

Beckton
- server processor
- Used in groups of 4 physical CPUs (i.e. a 4 socket board)
- Each Beckton CPU can have 8 CPU cores, giving a grand total of 32 cores (64 threads)
- 1x QPI Link per CPU
- FB-DIMM2 memory with four channels.
- Use the LGA 1567 socket.

Gainestown
- workstation & server processor
- Used in groups of 2 physical CPUs (2/dual socket)
- Each Gainestown can have 4 CPU cores, giving a grand potential total of 8 cores (16 threads)
- 1x QPI Link per CPU
- DDR3 memory with three channels.
- use the LGA 1366 socket.

Bloomfield
- High end desktop processor
- Single socket, operates as a single CPU
- Each Bloomfield can have 4 CPU cores (i.e. 8 threads)
- 1 QPI Link
- DDR3 memory with three channels.
- Use the LGA 1366 socket.

Lynnfield/Clarksfield
- "Mainstream" desktop processor
- Basically a Bloomfield but with only two DDR3 channels.

I'd imagine Apple will go with Gainestown for the Mac Pro, and see what Westmere cooks up (again, the wiki points to Quad channel DDR3 for a dual socket server Westmere chip). With a release of the chips prior to MWSF 2009 - Apple may well do a demo of Snow Leopard, on new Mac Pros - what would be a better seller than to show stats that demonstrated that
- the new Mac Pros were e.g. 30% faster than the Penryn versions, the kicker being that
- on Snow Leopard the new Mac Pros were 40% faster.

Software selling systems. Apple could go with a beasty Snow Leopard Mac Pro Nehalem combo, and get some decent bragging rights for them. Intel would surely enjoy this, as MWSF/WWDC is a great place to showcase Intel's chips, via Apple's Mac Pros...

Apple seems to have a fair few aces up its sleeve. Snow Leopard is about cranking the power out of multiple cores. Would they go for it? It makes no sense whatsoever for Apple not to showcase the power of Snow Leopard at WWDC. Seeing as it's about GPU and multi-core, i'd imagine that there will be at least a few leaks during the next ~10 months, showing some of what Apple is planning. It remains to be seen how Apple is actually going to physically do the whole GPU-as-CPU thing - whether a bespoke board, or just a high end graphics card being used additionally for GPGPU etc.


To be honest, i'd imagine that seeing as Intel will be announcing Nehalem at the San Fran Intel Developer Forum (Aug 19th-21st, 2008), we'll find out more then. http://www.theinquirer.net/gb/inquirer/news/2008/07/07/launch-nehalem-penryns seems to show that Intel's strategy is still unknown.

(As pointed to at the start, the wiki table here shows that:
"Beckton", a 4-socket server CPU, will use quad-channel FB-DIMM2*
"Gainestown" & "Bloomfield" will use dual & triple channel DDR3 800/1066/1333/1600 MHz memory.
* The wiki seems to point out that servers would support registered DDR3, so I can't quite make out whether Beckton would support both... this kind of thing would have implications for usage.


Intel's CEO is on record as saying Nehalem is on schedule to be out by the end of 2008.)
Westmere chips (the 32nm shrink of Nehalem chips) are estimated from the wiki as:

* Late 2009 or early 2010 for DP server chips.
* H1 2010 for high-end desktop chips (Bloomfield successor).
* H2 2010 for mainstream and value desktop chips, assuming Westmere is released for that segment.
* 2010 for mobile chips, assuming Westmere is released for that segment.

Intel Developers Forum
From the talk list:
"Ultra Mobility:MIDs: Platforms for Innovation"
"Digital Home:“I Love TV.”"
"Software: Developing for the Future of Computing" &
"Research and Development: Reinventing Embedded Computing" - all sound interesting. I'd imagine this will get front page in 3 weeks.

Till then, i'd imagine progress on DDR3 will roll on, just like DDR2 before it, and i'd imagine that higher-clocked DDR3 will be more available and cheaper by the time Snow Leopard is on general release.


False positive?

Just a hunch, but the recent macrumors article seems to be a spur - if Apple has the potential on the high-end kit to break through the memory, CPU and GPU barriers, wouldn't it make sense for them to be making their own specific chipsets and boards? The article is about the Montevina chipset as part of the Montevina (Centrino 2) platform. Alternatively, Apple could well have its own custom chipsets being made by Intel. Whilst the MacBook Air chip was more off the shelf than initially made out, it's within the realms of possibility that Intel would work on this with Apple if it was viable. The options and ways of plugging in large amounts of RAM are something that would be a main consideration - what connection it would use etc.


In addition to this, what about Apple's orders for GPUs? Surely by late 2008 it has to have Mac Pros and MBPs, possibly even MacBooks, with graphics cards in them suitable for Snow Leopard?


To the customer, Apple's decision to use 3rd party or custom chipsets is not of great significance, as all the chipsets should be functionally identical. However, AppleInsider speculates that Apple must believe there is some competitive advantage in pursuing alternative chipsets, such as improved power consumption.
 

Attachments

  • 7818-p45wall.jpg (68.4 KB)
  • Dunnington-core-3.FROMHARDWAREZONEjpg.jpg (142.4 KB)
  • kfest2008-01.jpg (211.7 KB)
  • Roadmap.JPG (28.5 KB)

t0mat0

macrumors 603
Original poster
Aug 29, 2006
5,473
284
Home
Sources

Several starting points on the area
- As always, roughlydrafted.com's Daniel Eran Dilger is way ahead, but it might be useful to precis his posts so far.
- Digitimes' commentary article here regarding Intel and SL.
- Other

Roughlydrafted.com - quite a selection!
 

t0mat0

macrumors 603
Original poster
Aug 29, 2006
5,473
284
Home
Sources:
- Vijay Anand's HardwareZone article Intel's CPU Roadmap: To Nehalem and Beyond from March 2008 covers a lot of interesting areas.
- Cnet's CPU Roadmap: 2008 & beyond article here (which also covers AMD)


They show how Intel's plans for platforms & processors in the near term are guided by multi-year plans extending past 2009, with Sandy Bridge & Larrabee.

In the 2nd post here "future versions" are just touched on -
  • Aliceton
  • Dunnington (the last of the Penryn generation, a single-die 6-core CPU)
  • Gainestown - Based on Nehalem microarchitecture
  • Beckton - 8 or more core Nehalem based CPU
and mentioned that Mac Pros are more likely to take on Gainestown CPUs in a dual-socket format than a Beckton 4-socket version, which would need FB-DIMM2 memory and not overclock well in comparison.


Dunnington
- last of the Penryn generation
- 6-core processor 45nm
- Works on the Caneland multi-processor (MP) platform (Intel 7300 chipset)
- Thus a big upgrade from the Tigerton X5300 processors, which also work on the Caneland MP platform (i.e. pin compatible - drop em in)
- 4 can be used in a quad socket board for server level computing.
- Ready 2H 2008

Swapping out Tigerton processors with Dunnington would thus give a move potentially from 16 processing cores to 24.

As said, Dunnington, it seems, is a stop-gap till Nehalem-based MP product comes out, for those wanting to buy new rather than upgrade Caneland-platform servers. Not really most MacRumors users then!

With full scale availability early 2009, Nehalem deserves a decent look.


Nehalem
QPI - QuickPath Interconnect
- Intel's version of AMD's HyperTransport
- Gives high speed inter-component, inter-processor communication.
- If you've got multiple CPUs, QPI will connect between them.
- Currently it's 1x QPI link per CPU socket, but this may well change.
- They can deliver a total bandwidth of ~25GB/s per link (rough sum below the list).
- Can have hot-plug capability, e.g. a processor card. Might not appear at the start.
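Rough sum for that link figure (assuming the widely reported 6.4 GT/s transfer rate and 2-byte-wide links in each direction - Intel hasn't confirmed final speeds, so treat it as ballpark): 6.4 GT/s x 2 bytes x 2 directions ≈ 25.6 GB/s aggregate per link, i.e. ~12.8 GB/s each way.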

Integrated DDR3 Memory Controller:
- Improves memory bus bandwidth (via the tri-channel controller)
- Improves the memory bandwidth handling capacity
- Supports registered & unregistered memory DIMMs
- Supports current DDR3 800/1066/1333 Standards. No doubt it'll have to scale, as DDR3 is getting overclocked to 1800 already.

Tri-channel means 3 memory channels per processor, with each channel supporting up to 3 DIMMs, so
1 CPU = 3-9 memory slots.

And here's the rub: Depending on the board used, it'll have 3,6 or 9 slots. A dual socket server board could have up to 18 DIMMs
A quad socket server board could have up to 36 DIMMs.

4GB a DIMM. $2,399.00 per 8GB. Want to max it? >$172,000 for 128GB if you don't have a discount, and use Apple RAM (which isn't competitive)...
Even if you went down the dual socket mainstream DDR3 route, you could get 72GB, which isn't anything to sniff at... I'd sure love to see a memory specialist pimp a machine out with that much.
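If anyone wants to fiddle with those slot/capacity numbers themselves, here's a quick back-of-the-envelope sketch in plain C. The 3-DIMMs-per-channel and 4GB-per-DIMM figures are just the assumptions from above, not confirmed board specs, so take the totals with a pinch of salt:

#include <stdio.h>

int main(void) {
    const int channels_per_cpu  = 3;  /* Nehalem's tri-channel memory controller */
    const int dimms_per_channel = 3;  /* assumed board maximum, per the wiki */
    const int gb_per_dimm       = 4;  /* biggest mainstream DDR3 DIMM right now */
    const int sockets[]         = {1, 2, 4};

    for (int i = 0; i < 3; i++) {
        int slots = sockets[i] * channels_per_cpu * dimms_per_channel;
        printf("%d socket(s): up to %2d DIMM slots = %3d GB max\n",
               sockets[i], slots, slots * gb_per_dimm);
    }
    return 0;
}

Which gives 9/18/36 slots and 36/72/144GB for 1/2/4 sockets - the real ceilings will obviously depend on what DIMM sizes actually ship and what Apple's boards support.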

Integration of a graphics core into Nehalem could occur - as hinted at on page 4 of the hardwarezone article here. More would presumably be heard from Intel at IDF. Nehalem multi-core chips prior to Snow Leopard, and then ones with integrated graphics core(s?) at some point after? We'll see.

The architecture has lots more
- Increased Parallelism
- Better Algorithms
- Enhanced Branch prediction
- Simultaneous Multithreading (SMT): SMT doubles the potential number of overall threads that can be run simultaneously on each core. Intel reports SMT can deliver 20-30% more performance depending on the app, at just a slight increase in power consumption. So the more threaded the workload or application, the better the gains (see the quick Amdahl's law sum after this list).
- Intel SSE 4.2
- Improved Virtualization Performance
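To put a rough number on "the more threaded the workload, the better the gains" (this is just textbook Amdahl's law, not an Intel figure): if p is the fraction of a program that can run in parallel, the speedup on n hardware threads is about 1 / ((1 - p) + p/n). An app that's 80% parallel tops out around 3.3x on 8 threads, while one that's 95% parallel gets about 5.9x - which is why SMT and extra cores pay off most on heavily threaded code.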

Beyond Nehalem? Westmere is the die-shrink to 32nm, and Sandy Bridge is the 32nm change of microarchitecture, bringing new extensions to the instruction set: Advanced Vector eXtension (AVX) - 256 bit vectors that will increase peak FLOating Point performance (FLOPs) (Up to 2x).
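The "up to 2x" figure falls straight out of the vector width (assuming the FP units are widened to match, which is the whole point of AVX): a 256-bit register holds 4 doubles or 8 singles per instruction, versus 2 doubles / 4 singles with today's 128-bit SSE - i.e. double the peak per-clock floating point throughput.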


AMD
Put down $4.2 billion and some stock to acquire ATI Technologies. Like Intel, AMD's got the visual computing buzzword too
AMD's CEO: "Visual computing is playing a larger role in what we are doing, going forward."
In the pipeline: the Socket AM3 desktop platform, the PUMA mobile platform, 45nm Opteron server/workstation chips, and the Shrike mobile platform (a unified CPU, chipset & GPU, creating one APU - Accelerated Processing Unit).

However, Intel has in that same timeframe 45nm Nehalem with QPI and then 32nm Westmere coming out, & Intel's mobile plans include Nehalem C2D - Auburndale, and Nehalem C2Q - Clarksfield, both of which incorporate an on-die GPU. Maybe Calpella, the successor to the newly introduced Centrino 2 (aka Montevina, which isn't a world mover), will actually be a bit more of a crowd pleaser, and a QPI user.

AMD has 45nm plans too - desktop chips going dual, quad and octo core, with Bulldozer apparently up to 16 cores ("hexadeca") potentially for 2010. But as Cnet points out, by 2010, a year on from Snow Leopard, dual/quad core chips will be ubiquitous, and thus Mac Pros will likely be 8-core and above, with 16 or more threads on Intel, and Intel will have bragging rights to the first native octocore. And an octocore Mac is like a unicorn burger. Mighty tasty, but a long time in coming.

2005 - Dual core Pentium Extreme 840 90nm 3.2GHz? Yours for over $1000
2008 - C2D E8500, the fastest of the Wolfdales, 3.16GHz, yours for ~£400

2005? "Unfortunately, not all applications are multithreaded, and many won't be for months or even years into the future."


Larrabee: Visual Computing
Visual computing means computation of visual information - rendering, HD audio/video processing, physics model processing.
They plan to do this by:
utilizing a programmable and readily available architecture such as several simpler Intel Architecture (IA) cores. Intel plans to add a vector computational unit to each of the cores as well as introduce a vector handling instruction set. They believe their leadership in the total computing architecture of the various platforms and a vast software engineering department will help them achieve their goal of creating Larrabee.

Intel can then scale it up as required for different market areas. Expected in the 2010 timeframe (just in time to match AMD's Fusion), so within about a year of Snow Leopard. Is it a discrete GPU? Is it more along the lines of a graphics card? Could it use QPI and drop into a socket on new Nehalem boards?
Who's to say Larrabee couldn't complement a rival graphics card, and be a slave object?
It'll support Direct3D and OpenGL, and i'd be surprised if it wasn't happy using OpenCL either.

IDF 2007 gave some information from Intel about a system-on-a-chip (SoC) design, e.g. here. It's been quiet since, so i'd imagine there's something in the skunkworks. The fetchingly titled EP80579 Integrated Processor family just sounds soooo.... intriguingly boring.

It's a CPU core (Pentium M - w00t! the predecessor to the predecessor of Core 2, but in all fairness, what my Dell runs on :(), memory controller, integrated GPU, input controller (ethernet, USB etc) and other various gubbins depending on the chip flavour.
Kinda cool you can fit all that into a chip. TDPs run from 11W up to 21W, so made for MIDs primarily.

Update for Larrabee
It looks like a GPU and acts like a GPU but actually what it's doing is introducing a large number of x86 cores into your PC
As kinda expected, SIGGRAPH will have some interesting information, and Intel has opened up some more info ahead of presenting a paper called "A Many-Core x86 Architecture for Visual Computing."
The frontpage macrumors article here comes from the extremetech.com article here, as they got a preview.

It's a stand-alone chip, based on the universal Intel x86 architecture. It's aimed at the PC market - i.e. Gaming, and being a competitor to Nvidia and AMD-ATI's discrete (stand-alone) GPU products.

Extremetech points:
- Intel is saying/hinting that "the first Larrabee-based products will be graphics cards targeted at the performance or high-end graphics segment. Those cards will also be able to perform other stream computing tasks."
- Larrabee won't be showing up integrated onto motherboards or aimed at the high performance computing mainframe market right out of the gate.

- No actual figures of performance
- No feature sets for products
- No word on core numbers for products (Core count shown on slides from the briefing went from 8 to 48)
- Extremetech has the release date as late 2009/2010
Larrabee is aiming to be a many-core CPU that's also a programmable GPU chip, with those many cores essentially based on the Pentium architecture, plus multi-threading & 64-bit instructions thrown in.

The key takeaway here is that almost the entire graphics pipeline is being rendered in software, albeit software running on specialized, high performance x86 CPUs with specialized vector units, not on the host processor in a PC.

What else does it do? Support for "full context switching and preemptive multitasking, virtual memory and page swapping, and full cache coherency. These are features developers have come to expect in modern x86 CPUs, but don't yet exist in modern GPUs."

The arrangement of the processing cores on the chip means
performing almost all parts of the graphics pipeline in the same bank of general purpose processors allows for perfectly efficient load balancing: You're never "wasting silicon" in Gears of War if you have a bunch of render back-ends that sit largely idle while they would improve performance greatly in F.E.A.R.

Intel says all this means getting near to linear scaling of power. More cores that can multi-task means more power to do that range of functions - you don't get that much of a tail-off. Intel isn't giving actual figures yet, just relative ones, but check out the speed increase as more cores are used.

Memory bandwidth is an issue, so Intel uses "binned rendering" (aka tile rendering - splitting a frame down into chunks - tiles - then efficiently sorting these out, then rendering them). Their techniques could help reduce the memory bandwidth per frame by >2x. The tile size is altered so that one processor in Larrabee can process one tile. The more the cores, the smaller the tile &/or the faster the frame rate i'd imagine.
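A crude sketch of the tile maths (purely illustrative C - Larrabee's real binning, scheduler and tile sizes aren't public, and the frame size and core counts below are just examples):

#include <stdio.h>
#include <math.h>

int main(void) {
    const int width = 1920, height = 1200;        /* example frame */
    const int core_counts[] = {8, 16, 32, 48};    /* range shown on Intel's slides */

    for (int i = 0; i < 4; i++) {
        int cores = core_counts[i];
        int cols  = (int)ceil(sqrt((double)cores)); /* tiles across */
        int rows  = (cores + cols - 1) / cols;      /* tiles down */
        printf("%2d cores -> roughly %dx%d tiles of ~%dx%d pixels each\n",
               cores, cols, rows, width / cols, height / rows);
    }
    return 0;
}

The point being: double the cores and each core's tile (and the working memory it needs) roughly halves, which is where the bandwidth saving comes from.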

We also don't know what Nvidia and ATI/AMD will have in that time frame. That's well into the next major architectural overhaul, and it's certainly possible that those companies are working on many of the same "neat on paper" ideas Intel has been with Larrabee. Certainly, a number of them make a lot of sense when you consider the increasing generalization and programmability of GPUs. So the story on Larrabee is hardly beginning, and where it fits into the competitive landscape is still entirely unclear.

Did Apple get a heads up about Intel's plans, prior to starting work on Snow Leopard?
And how small will these go? Seeing as the iPhone is only 480x320, it's a fraction of a desktop screen. Could they just make a smaller one, or a low-TDP version? It'd be interesting to get the stats vs the current successor chip to the iPhone.

SIGGRAPH
Some other interesting things:
"Advances in Real-Time Rendering in 3D Graphics and Games"
"EDT-IPT 2008 Emerging Display Technologies and Immersive Projection Technologies"
Zcam being back :)
Mocap for the masses with iPi Soft - Desktop Motion Capture (aka Shoot 3D) using a digital camera/web cam to do home mocap
Intel's paper of "Why 3D Application Development is Driving Graphics-Industry Convergence" is on August 12th.
image-metrics.com's photo-real animation
RapidMind, Inc
AMD has a few bits and bobs, including "A Unified Programming Model for Multi-Core CPUs and Many-Core Accelerators by AMD",
GPU-Accelerated Video Encoding: State of the Art
Thursday, 14 August, 1 - 2:30 pm
Hall G, Room 1

NVIDIA has several presentations, including "CUDA: The Democratization of Parallel Computing"
Interesting haptic and tactile progress also
Butterfly Haptics' Maglev haptics are showing up too, a personal fave.
Airborne Ultrasound Tactile Display - a kind of theremin (3D force fields :))
Stop motion goggles, a flat sheet communication device.
 

t0mat0

macrumors 603
Original poster
Aug 29, 2006
5,473
284
Home
LLVM: Low Level Virtual Machine

It's "a suite of carefully designed open source libraries which implement compiler components (like language front-ends, code generators, aggressive optimizers, Just-In-Time compiler support, debug support, link-time optimization, etc)."

Apple has its fingers in LLVM, as shown by Apple engineer Chris Lattner being the head of LLVM.

It's low level. Things like the Java or .NET VMs are more high level. The basic design of LLVM is an unlimited register machine (URM), familiar to most computer scientists as a universal model of computation.

The gist is it's an infrastructure for a compiler. A compiler is "a computer program (or set of programs) that translates text written in a computer language (the source language) into another computer language (the target language)". A code babelfish. It's a mid-level optimiser and code generator. Its codegen support for targets is large.

  • Written in C++
  • Optimises at multiple time points, or stages: it's designed for "compile-time, link-time, run-time, & "idle-time" optimization of programs written in arbitrary imperative programming languages."
  • Started in 2000 at the University of Illinois
  • Currently supports compiling of C, C++, Objective C, Ada, & Fortran programs.
  • A bit like a model, it's got various work happening at the front and back end.
  • clang is LLVM's frontend for C-based languages (see lower down)
  • A list of news articles here

Some 2007 article headlines:
Mac OS X 10.5 Leopard Review
Interview with Scott Petersen on the C/C++ in Flash Player Sneak
Apple Developing a New LLVM C Front-End
Apple putting LLVM to good use
Connect the dots: iPhone, OS X, LLVM, ARM, Ruby?
An iPhone Performance Secret: LLVM
What's powering the iPhone?
The ARM Backend Of LLVM

There apparently was a 2008 LLVM Developer Meeting 2 days ago, August 1, 2008, in Cupertino at the Apple campus. Wonder how that went down?

The 2007 Ars Technica article covers it well and points out that LLVM is an open-source project that Apple took under the Cupertino wing, hiring the lead and other developers, and has been actively improving the code.

Think of it as a big funnel: every sort of code you can imagine goes in the top, all ending up as LLVM IR. Then LLVM optimizes the hell out of it, using every trick in the book. Finally, LLVM produces native code from its IR. The concentration of development effort is obvious: a single optimizer that deals with a single format (LLVM IR) and a single native code generator for each target CPU. As LLVM gets faster and smarter, every single compiler that uses LLVM also gets better.
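To make the funnel a bit more concrete, here's roughly what it looks like from a developer's seat - a minimal sketch, with the flags taken from the current LLVM/clang docs (they could well change between releases):

/* sum.c - a trivial function to push through the funnel */
int sum(const int *a, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}

/* Front end: C source -> LLVM IR (the single common format everything funnels into)
 *     clang -S -emit-llvm sum.c -o sum.ll
 *
 * Back end: LLVM IR -> native assembly for the target CPU
 *     llc sum.ll -o sum.s
 *
 * Or let the driver do the whole source -> optimised object file trip in one go:
 *     clang -O2 -c sum.c -o sum.o
 */

Swap the front end for the Objective-C or Fortran one and the IR, optimiser and code generators stay exactly the same - that's the funnel.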

It's used by Apple currently in Leopard's OpenGL engine - if you look at the news list, you can see Ars Technica was pointing at LLVM for "Leopard and beyond" 2 years ago. Why OpenGL?

When a video card does not support a particular feature in hardware (e.g., a particular pixel or vertex shader operation), a software fallback must be provided. Modern programmable GPUs provide a particular challenge. OpenGL applications no longer just call fixed functions, they can also pass entire miniature programs to the GPU for execution.

LLVM helps by generating and optimising that fallback code on the fly. Why is this interesting?

Don't be misled by its humble use in Leopard; Apple has grand plans for LLVM. How grand? How about swapping out the guts of the gcc compiler Mac OS X uses now and replacing them with the LLVM equivalents? That project is well underway. Not ambitious enough? How about ditching gcc entirely, replacing it with a completely new LLVM-based (but gcc-compatible) compiler system? That project is called Clang, and it's already yielded some impressive performance results. In particular, its ability to do fast incremental compilation and provide a much richer collection of metadata is a huge boon to GUI IDEs like Xcode.

I know this LLVM subsection is quite a digression, but even if it's only used in a limited capacity in Leopard, LLVM is quite important to the future of Mac OS X. Indeed, it could also be important to the present of the iPhone and other OS X platforms.

Graphics, ARM, XCode, OpenGL... LLVM kinda links to all these.

In 2006, Arstechnica noted that the "LLVM JIT optimizations combined with the new multi-threaded OpenGL stack have yielded a doubling of the frame-rate in "a very popular MMORPG" (which is code for "WoW")." Maybe not true but interesting.

Another reason? The iPhone uses LLVM. LLVM helps optimise code for the iPhone. LLVM has ARM support and LLVM is being integrated with Apple's primary compiler (gcc) in XCode. PA Semi? Fabless ARM production seems to make more sense.

A July 2008 talk on clang, and a few PDFs.

Clang: Why a new front-end?

gcc = GNU compiler collection. It's a "set of compilers produced for various programming languages by the GNU Project." It's "been adopted as the standard compiler by most other modern Unix-like computer operating systems, including Linux, the BSD family and Mac OS X" (Also seen in Symbian, Playstation, Dreamcast so says the wiki).

GCC
- Its front-end is slow & memory hungry
- Front end is not easy to work with
– Learning curve too steep for many developers
– Implementation and politics limit innovation
– GPL License restricts some applications of the front-end
- Doesn’t service the diverse needs of an IDE

Short answer: LLVM will be quicker than gcc at compiling for OS X, and use less resources, and clang will be better.

Also, it looks like clang gives a nicer code checker - http://www.rogueamoeba.com/utm/2008/07/14/the-clang-static-analyzer/
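For the curious, this is the kind of thing the analyzer catches without ever running the program. A toy example (hypothetical code, and the exact invocation may vary - clang --analyze is the direct route, with the scan-build wrapper being the usual way to run it over a whole project):

/* leak.c - a deliberately silly bug for the static analyzer to find */
#include <stdlib.h>

int fill(int n) {
    int *buf = malloc(n * sizeof(int));
    if (buf == NULL)
        return -1;
    if (n < 1)
        return -1;      /* oops: early return without free() - buf leaks here */
    buf[0] = 42;
    free(buf);
    return 0;
}

/* Run:  clang --analyze leak.c
 * The analyzer should flag the allocation leaked on the early-return path. */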


Placeholder for LLVM, Sproutcore,
CUPS, ZFS. If only Macrumors' guides were as usable as the WYSIWYG Snow Leopard Server wikis.... :rolleyes:
 

valdore

macrumors 65816
Jan 9, 2007
1,262
0
Kansas City, Missouri. USA
My God.. yeah, t0mat0, you should be getting paid for all this knowledge and research.

and I'd like to say that I'm excited for Snow Leopard, and the trend in general to make software more for multi-core processors. I have an 8 core Mac Pro but it seems like that doesn't really matter that much if the software I'm running wasn't designed for it to begin with...
 

Cromulent

macrumors 604
Oct 2, 2006
6,801
1,096
The Land of Hope and Glory
and access to 16TB of RAM!!!!

Sounds good, but no home computer will get anywhere near that for at least 3 - 5 years. Apple are just building a foundation which they can work on for the next 10 years or so. Grand Central and OpenCL are them seeing the move to multi-core, multi-processor systems with extremely powerful graphics cards and just bringing Mac OS X in line with the rest of the industry.

I'm interested to know if SL will support SLI / Crossfire though. That would tie in with OpenCL and Grand Central pretty well and would certainly follow the premise of SL, which is to increase performance.
 

t0mat0

macrumors 603
Original poster
Aug 29, 2006
5,473
284
Home
Enthusiasts might not be able to add on masses of memory, but they can stock up on CPU power. Also, seeing as Nehalem is coming in soon, a refurbished Mac Pro with minimal settings could be a steal! - Drop some marked down CPUs and memory in and you'd have a sweet system. However, if you're looking at a Mac Pro anyhow, it'd make sense to get a low end Nehalem rig, and then add in memory, and replace the CPUs as and when.

No home computer... That's kinda why I put up the picture of the potential supercomputer of the Apple II's day - what's to stop someone, as said, fabbing a specific board? Last year's "enthusiast" rig was Skulltrail. Horrendously overpriced compared to mainstream CPUs of 2008. Apple is definitely building a foundation. Whether SL will go multi-touch overboard is another matter, but the stated intentions are to trim, tighten up, make leaner, faster, more powerful etc. Apple will actually be having some big changes (Quicktime X, iTunes 8/X). I think Apple might go with X, as then they could take QT & iTunes to 11, in parallel with the OS to XI, if they so chose. It could go to 10.7, but i'm not hot on the history of the nomenclature, it's just a thought.

The first page post shows the relative move of default memory available in a Mac. Taking the iMacs and MacBook Pros - the maximum was 2GB till 2006, 3.25GB usable till late 2007, and a max of 4GB currently due to size issues. Unless memory gets smaller, or the space for it gets bigger, the compact, condensed Macs will not be able to suddenly increase the actual amount of memory they can take on, in comparison to the Mac Pro, which has more space due to its tower structure (e.g. it can do up to 32GB currently).
As the Apple site says, the Mac is designed for "unequalled expansion", "higher capacity, more flexibility, and endless possibilities"...
Currently, the maximum memory a system could take fits onto a single board anyhow. Now that bar has been raised, it's well within the realms of possibility that a canny guy will create a stack of boards, and link all this potential power in. It's not like you need to plug 15TB of DDR3 RAM in. 128GB would be about half the memory bandwidth I think the cnet article said.

Intel surely wants to get back into the high-performance supercomputer ring (e.g. it's sided with Cray, info here). Apple is offering a kick-ass system to get into it, one that could leverage its Larrabee and other technology, whether in some custom-built boards or just by clustering some specced-out Mac Pros.

As for SLI / Crossfire - yes, quite to highly likely to be able to take multiple graphics cards. Unless Intel can provide a decent rival, they'd be shooting themselves in the foot not to open it up. I'd imagine the question is specifically whether Macs could get SLI / Crossfire support. Again, much more likely, seeing as SL is all about using GPUs for GPU work and general purpose computation. As the current Mac Pro site says: "Graphics. The next generation. Introducing all-new, high-end, blow-you-away graphics."
The spec page shows you can do a fair bit with a Mac Pro. Will Apple be more BTO for iMacs etc? Not as likely. Again it comes back to the mythical mid-size tower, which if made available would also easily be able to offer the chance to choose the graphics card you eventually have in it, like the Mac Pro.

@ Valdore - Your position is one that Apple will try and demonstrate will be improved by just an OS change. SL may well let you "unlock" the power within your multiple cores of your CPUs. Who wouldn't say yes to some of that?

E.g. the current Mac Pro's fastest CPU option is the Xeon Harpertown - the 3.2GHz is the X5482 (aka the Skulltrail C2X QX9775). So it's not like Apple held back last time. But the Nehalem upgrade will give much nicer price points, and also the option of just dropping in newer LGA771 socket (aka socket J) compatible Intel CPUs (or at least getting the 3.2GHz CPUs cheaper potentially). Hope that doesn't sound too mumbo jumbo.
 

kaiwai

macrumors 6502a
Oct 21, 2007
709
0
Christchurch
Placeholder for LLVM, Sproutcore,
CUPS, ZFS. If only Macrumors' guides were as usable as the WYSIWYG Snow Leopard Server wikis.... :rolleyes:

When one looks at what is being added to Snow Leopard, I can't work out why some assume it isn't a feature release :p

For me, less space used is a feature, lower memory usage is a feature :)
 

t0mat0

macrumors 603
Original poster
Aug 29, 2006
5,473
284
Home
I think Apple is basically able to market it how it wants to. With the Vista barrage coming, they'll be able to counter any of it with Snow Leopard's and Leopard's features i'd imagine.

Seems that the PC press (e.g. Dvorak) and also the Mac press do a pretty good job when Microsoft gets it wrong anyhow (e.g. Mojave: "not scientific, but we'd like it to be seen so, so we'll call it an experiment").

The Dilger series of articles linked above covers a lot of this, as seen in the myth series breakdown here.
 

zumpy

macrumors member
Jan 11, 2008
33
0
Discussion forum for "Snow Leopard"

Just wondering if anyone knows of a link to a discussion forum on the new OS X. There must be a developer's release available, and it would be interesting to read any first-hand experiences.

zumpy
 

t0mat0

macrumors 603
Original poster
Aug 29, 2006
5,473
284
Home
I'd imagine you're in it:
Mac Forums > Apple Software > Mac OS X Forums > Mac OS X

Developers are free to jump in on this thread if they like, or you could make your own thread specifically to get ongoing feedback from developers using SL.

An interesting article in Wired on "The One" - basically imagining what the cumulative might of the computing world would be like summed together. Cloud computing.

- For me, the most interesting figure, was that 1 human mind still had more synapses (100 trillion) than there were hyperlinks (27 trillion).

As another side point, iPhone-wise: critical mass, crucial network density - when things become useful. Surprisingly, it didn't mention Metcalfe's Law: "the value of a telecommunications network is proportional to the square of the number of users of the system (n²)". Whilst the exact number is invariably wrong, the concept that more could be good is there.
The counter argument seen by early adopters is that the signal:noise ratio can get worse as more come aboard.

As Shirky might say, more is more, but more users is different - network usefulness blossoms, sometimes

The reach of an iPhone, the density of them, and the usefulness of the apps can all be tweaked, via web use. Physical density of iPhone owners is needed for some apps, but not all.
Even if the general population may have a low density, as the article mentions, pockets will spring up. The author highlights cabbies, but i'd imagine that campuses come this fall are going to be huge. Whoever can create a hit app to get critical density and get students to pay for it or use it and get ads? I'd imagine the company would find Apple there first, welcoming them aboard!
 

iMacmatician

macrumors 601
Jul 20, 2008
4,249
55
10.6 has a very subtle UI change, more in line with the iPhone UI. I don't really see anything too big past that.
Screenshots?

In terms of UI, would they change it too much? Leopard brought in Spaces. If I was to bet, it would be multi-touch incorporation, to a greater degree. I'd imagine we'll see where that's heading by the end of the year when the MacBooks and MacBook Pros have had their refreshes (Montevina, or even later next year with Nehalem). I'd imagine that Apple would want the multi-touch to be incorporated in the hardware sold a year before Snow Leopard was released, otherwise any additional features in this area would be "new hardware customers only", which would suck. I'd imagine that Snow Leopard would be bringing in features that Windows 7 has billed thus far. Could be a spurious guess, but i'd imagine a lot of thunder is going to be stolen come WWDC 2009.
There's supposed to be multi-touch trackpad support in Snow Leopard.

Beyond Nehalem? Westmere is the die-shrink to 32nm, and Sandy Bridge is the 32nm change of microarchitecture, bringing new extensions to the instruction set: Advanced Vector eXtension (AVX) - 256 bit vectors that will increase peak FLOating Point performance (FLOPs) (Up to 2x).
Sandy Bridge is officially due in 2010 but it looks like it will be released in early 2011. Sandy Bridge will also have 6 cores for the DP server version.

AMD has Istanbul, a 6-core CPU otherwise similar to Shanghai coming in H2 2009, then a 12-core (2 Istanbuls) behemoth named Magny-Cours (sounds like "many-cores"). Bulldozer has apparently been delayed to 2011 on a 32 nm process. A Bulldozer core is at least as powerful as a Nehalem core, which should generate some interesting comparisons between 6-core Sandy Bridge and 8-core Sandtiger.

Larrabee is apparently scheduled for (Summer) 2009 on 45 nm, with a further shrink to 32 nm the following year.
 

t0mat0

macrumors 603
Original poster
Aug 29, 2006
5,473
284
Home
Dipity timeline

Larrabee gives another take on the Snow Leopard Hero banner ;)

Dipity timeline here

If you feel like adding, give me a shout :)
 

Attachments

  • Snowleopardhero2.JPG (12.8 KB)
  • Snowleopardhero.jpg (40.5 KB)
  • onemorething.jpg (53.1 KB)

t0mat0

macrumors 603
Original poster
Aug 29, 2006
5,473
284
Home
Core (i7) Innovation - August 11
Intel's branded Nehalem as Core i7 apparently. Intel's official unveiling of the branding is expected Monday the 11th, with launch in Q4 '08.
Tuesday the 12th is the SIGGRAPH paper.

Recap:
Beckton - server processor (Q2 09)
Gainestown - workstation & server processor (Q3 08). The Mac Pro wait? The updated Mac Pro is nearly here...
Bloomfield (Q3 08) - 3 coming - 2.66, 2.93, 3.2GHz (TDP of 130W; currently the MBP's is 44W, the Mac Pro's 150W)
Lynnfield/Havendale

If you want "hardcore" the Extreme edition Core i7 will get a mention too.

Interesting to see Pixar has dumped AMD, and gone Intel, citing Larrabee and Nehalem.

IDF - Summary

If you think chip making is simple, i'd recommend watching the video on the history of the making of Nehalem - and where it's coming from. Current tech articles on it do actually gloss over quite a lot of the detail. It could be seen as boring by some, but it is interesting (at least to me) - you actually see that they've put a lot of energy into going effectively back to basics, looking at fundamental issues and assumptions. It's probably analogous to Apple going from 64-bit with the G5 back to 32-bit with Intel Core, then back to mostly 64-bit with C2D, then hopefully blazing ahead with a completely 64-bit Snow Leopard, which still has the capability to deal with 32-bit apps.

Pressroom has a fair bit here
Videos are here.

I'd recommend the top ones from here:

  • Next Generation Intel® Core™ Microarchitecture (Nehalem) Family of Processors: Screaming Performance, Efficient Power - Rajesh Kumar Webcast here
  • Software: Developing for the Future of Computing - Renee James pdf here Webcast here
  • Inspiring Innovation - Craig R. Barrett
  • Digital Enterprise: IA = Embedded + Dynamic + Visual - Patrick Gelsinger
  • Mobility: Where Will "On-the-Go" Go? - Dadi Perlmutter
  • Ultra Mobility: MIDs - Platforms for Innovation - Anand Chandrasekher
  • Digital Home: "I Love TV." - Eric B. Kim pdf here
  • Research and Development: Crossing the Chasm between Humans and Machines - Justin Rattner pdf here Transcript here
  • A Conversation with Steve Wozniak, Apple Computer, Inc. Co-Founder Dr. Moira Gunn Interviews Steve Wozniak Webcast here
  • Splitting the Atom: A Peek into the Intel® Atom™ Processor - Dr. Shreekant (Ticky) Thakkar
  • Using Information Technology to Meet 21st Century Challenges and Opportunities - Eugene (Gene) Meieran, Rodney Brooks, Bran Ferren, Story Musgrave, Michael S. Blum, M.D.
Moorestown - an Atom-powered MID was running World of Warcraft hooked up to the monitor system. A note on Atom: apparently "the gritty details of the product will have to wait until IDF Taiwan for its full coming-out party."

Business Wire news:
  • Intel CTO Says Gap Between Humans, Machines Will Close by 2050
  • Intel & Yahoo! to Bring the Internet to Television
  • Intel Introduces First IA System on Chip for Consumer Electronics, Expands Internet to TV Experience
  • Wave to Demonstrate Data Protection Using Intel(R) Anti-Theft Technology at Intel Developer Forum Logo
  • VirtualLogix(TM) Delivers Virtualization Support for Intel-Based Mobile Internet Devices
  • Intel Shifts Future Core(R) Processors into Turbo Mode (Info here but it's covered better in the links at the top)
  • Rambus to Showcase High-Speed Memory Technologies and Architectures at IDF 2008
  • Hynix Demonstrates World's First 16 GB 2-Rank R-DIMM Using MetaRAM Technology
  • One Voice Announces Voice Control for Intel-based Mobile Internet Devices
  • Fresco Logic Demonstrates Industry's First SuperSpeed USB Data Transfer at Intel Developer Forum
  • Lucid HYDRA 100 Chip Series Available for Customer Validation BIG NEWS
  • NEC Electronics Announces New Device Wire Adapter Chip to Enable Connection of Wired USB Peripherals to Wireless USB-Based Host Systems
  • NetLogic Microsystems Showcases Industry's Broadest Portfolio of Content Processors Targeted at Accelerating Deep-Packet Inspection Functions on Multiple Intel(R) Architecture-based Platforms at the IDF Fall 2008 BOOOO
  • NEC Electronics America Showcases Wireless USB and Power Management IC Solutions at Intel Developer Forum Fall 2008
  • New SATA Spec Will Double Data Transfer Speeds To 6 Gb/s
  • IDT to Demonstrate DisplayPort-Compatible Receiver Solution at Intel Developers Forum San Francisco

VPO Press Kits: Lots on DisplayLink - Asus, LG, Intel chipsets, InFocus projectors, Kensington's new dual-monitor adapter using DisplayLink... 3D, SSDs, MIDs


Upcoming IDF prior to Snow Leopard:
IDF Taiwan (Taipei): October 20 – 21, 2008 (Atom launch/new info)
IDF China, Beijing April 8-9, 2009
IDF US, San Francisco September 22-24, 2009
Presumably a cross-match of the roadmap to these IDFs would show if anything else big might be announced.
 

t0mat0

macrumors 603
Original poster
Aug 29, 2006
5,473
284
Home
Calpella - The chipset beyond Montevina
Calpella will support Nehalem notebook CPUs - Clarksfield & Auburndale. Auburndale will have a graphics core integrated into the CPU package. As Engadget says: "and it's a Digitimes rumor, so expect things to significantly change by the time the first Nehalem laptops hit the street in the second half of 2009." Fits quite nicely into Apple's schedule - after Back to School, before Christmas.

Shot in the prediction dark
Just going over 2 articles at roughlydrafted from 2 years back that I hadn't touched upon previously - they seemed worthy of a re-appraisal of the concepts put forward.

I'll smarten this up later:
Taking on Exchange - no one's really blogged this - but this is going to be a huge disrupter in SMB
Apple Takes On Exchange Server
http://www.roughlydrafted.com/RD/RDM.Tech.Q1.07/8DFEC70D-ED31-46AD-B23A-558AF0473F91.html
http://www.roughlydrafted.com/RD/Home/3FE506E2-FD6D-4FC6-BC9C-055F27279DF4.html
http://www.roughlydrafted.com/Oct05.Leopard4.5.html
the Xserve mini - where Time Capsule, Apple TV Take 3 etc are heading, along with the mac mini...
http://www.roughlydrafted.com/RD/Home/3FE506E2-FD6D-4FC6-BC9C-055F27279DF4.html
http://www.roughlydrafted.com/RD/Home/72A2033E-F9A5-4EDA-991D-E1F0178C6AF5.html
http://www.roughlydrafted.com/RD/Q4.06/89A26CE6-2846-4B13-8E81-E234AB9EC561.html
Basic take is PBX killer, ultraportable Office telephony system, that can run without e.g. BT and expensive line rentals. More anon.

Some points are made in Myth #8: It's just an OS - the last in the series from Daniel Eran Dilger at roughlydrafted.com.

Snow Leopard promises to "obsolesce Entourage". Daniel sees sense in Apple bundling the Exchange friendly Snow Leopard version of Mail, iCal, and Address Book into the next version of iWork (2009/2010?). This means PowerPC users get the benefits. iWork then is a head to head against Office. Exchange is one of the major reasons it seems offices go Windows, versus Apple.

With HP's information on Vista uptake - it could well be that Enterprise is really holding off on Vista.

Could a business version of MobileMe work? Apple is marketing MobileMe as "Exchange Server for the rest of us". Apple could well make a play against Outlook, from the posts it seems. With cost an issue - the outlay for a few Macs and an SL Server might see its ROI covered within a year or two when compared with the per-user, per-year costs of Exchange. The email-as-a-file approach, versus Outlook's .pst system, means i'd imagine ZFS on a server would give an IT system running Snow Leopard Server a much easier task doing backups and recovering files for users of the system. And information retrieval via Spotlight would be quicker. Also - the ability to store files, rather than use emails as a way to store files, could be useful too.

iWork on the mobile? Office is currently weak in some places - check out SlideRocket for example - much easier to collaborate, and automatically update presentations, in a very easy to use way.

Snow Leopard's push messaging services could easily be turned to Business applications - for day to day collaboration for example.
The concept of annual updates, incremental improvements of applications via semi-regular updates (see iPhone...) does create value.


Mini Xserve
Ludicrous idea? By 2010, with ZFS potentially nestling down, wouldn't some enthusiasts want SL Server functions in their network or on their OS X machine? It seems Time Capsule, iTV, Mac Mini all skirt the area. The Mac mini could fairly easily branch out to a SL mini server - Media storage, remote access, centralised media playback, offline ripping, data crunching... Heaven forbid the words games console ;)

PBXs
Watching Scoble's view into the Office online team at Microsoft recently on Fastcompany.tv, they were demoing Sharepoint, Exchange online etc - RD's comments about the weaknesses of Exchange Server ring true - they're backing away from it. Microsoft was talking about basically getting rid of the cost of having too many plumbed-in lines. The cost of line rental alone is pretty high - it'd be cheaper to have PAYG SIMs on standby on another network, presumably. Apple is a step behind but a step ahead - if they wanted to, they've got the iPhone, they've got the ecosystem - why not offer a great deal for business?
Expensive desk phones + expensive wiring in, PBX, other boxes. A PBX for 100 users? RD has it down as $100,000, with voicemail etc. another few grand on top.
Apple could easily sweep the board. Personally, i've seen what BT offers in one regard, having had a chat with a BT engineer, and dealt with the system. The costs for getting an engineer in are ludicrous for certain things (new phone extension, adding lines, changing features etc). Asterisk shows that you can move open source in.

Haven't had time to mash the links yet, but these seem like a few under-rumored areas that Apple could hit - no word on iLife, on iWork, on other business opportunities, etc.

Was a conversion to LED for displays the investment? How could Apple make the larger ACDs go LED? Or is it something else?
 

iMacmatician

macrumors 601
Jul 20, 2008
4,249
55
NDA ;)

Next time I'm near my 10.6 installation I'll take a couple of snaps.
:apple: I'll be waiting. :apple: Thanks in advance. :)

It's interesting how we haven't seen any of these UI changes in 10A96 (do you have a later build, or are the changes in other places?).

That says that apparently mobile Nehalem is on a 32 nm process. Well, say hello to quad-core iMacs, MacBook Pros, and maybe even MacBooks. :) Combined with Snow Leopard, this will be really cool.
 

t0mat0

macrumors 603
Original poster
Aug 29, 2006
5,473
284
Home
Wiki here has Clarksfield (i.e. mobile 32nm) as Q3 2009, whereas all the others are 2010.

Ideas for why Westmere mobile versions are coming prior to desktop/server/workstation etc? Don't have the time to check now.

I have a non-sourced note saying that prototype 32nm processors will be shown at the Spring IDF in H1 2009. Production in early H2 2009, volume shipments beginning ~Q4 2009, with the first chips shipping late 2009 and reaching mainstream buyers in 2010. An Intel March press release covers 2009/2010.

If anyone wants to start up some Predictify.com questions, that'd be interesting - i've emailed the front page to see if they're thinking of getting a predictify linked page to get some rumours up (general Apple ones) and see if the macrumors.com community is a decent predictor.
 

t0mat0

macrumors 603
Original poster
Aug 29, 2006
5,473
284
Home
What can we do with cores

A SIGGRAPH tangent
Gigapixel images - think Photosynth in a flat plane. Perspective and curved projections blended.
http://www.youtube.com/watch?v=B5UUrxL_2t0

What GPU can do for you - Lightstage
Now, some amazing video you need to see if you haven't already. This is basically Bullettime v2. It's like a hi def LIDAR webcam. It's.. highly worth watching.

Jules Urbach of OTOY

It's a bit like the tech they had post-CT, once you get 7 minutes in - you can create the lighting.

Part I http://www.youtube.com/watch?v=x5yVjHaJ0PI

Part 2 http://www.youtube.com/watch?v=XnihU4zCXe8

http://www.youtube.com/watch?v=ROAJMfeRGD4&NR=1

Capture, render, manipulate - the next field of research - Image Metrics, who do capture animation software. Reminds me of Paul Ekman, who did work on facial expressions, cataloguing all the different sorts you could have, and also work on micro expressions.

You could then use these images to do lip sync, to do whatever series of facial expressions you want using this captured model. You could link this to a webcam actually capturing your face in real time if you had enough power. Kinda Wizard of Oz stylee.

Part II - 5 minutes in - Projects any 3D object - Holograph. This is actual Star Wars level tech. I've seen 3D screens in my time, but this seems to actually be making headway. No headsets, totally 360 degree view.

HDR, bullettime, mocap - it's a techie dream.

Imagine this, with SIGGRAPH http://www.geek.com/microsoft-research-unveils-unwrap-mosaic-video-editing-tool-20080813/

The link to Snow Leopard? This is what you can run on a GPU currently. Give it a year, and multi-core optimisation...

The more you dig the more you find:

Cinema 2.0 - no more need for 3D model making. You simply capture an object in 100% photo realism with Lightstage and then put it into a game. And that includes people - you can then make a model of them, and with the skeleton and facial info software you can then lip sync, manipulate etc. Lightstage is the tech to get the panoramic shots of the human - they pull different faces to sort out all the different potential faces (frown, happy, sad etc). For a non-animate object, it's much simpler.

http://www.ajax-blog.com/otoy-developing-server-side-3d-rendering-technology.html

http://www.ajax-blog.com/the-truth-...tic-3d-world-and-otoy’s-rendering-engine.html

OTOY
Jules Urbach, founder & CEO of OTOY, has been working with AMD for 2 years on server-side graphics processing for gaming and other applications. So 3D rendering could be part of the cloud. It's been demoed before - and could work on PC, Mac, iPhone presumably. Gaming as a service ftw.

Looking at Debevec - he had a hand in bullet time. He was working on HDRI (High Dynamic Range Imaging) back in the 90s

The Lightstage works with PhotoModeler Pro (which creates 3D models & measurements from photos). Could Photosynth & Panoramio do this?

http://www.cgtantra.com/forums/showpost.php?p=10393&postcount=6

Siggraph 2000 Debevec presented Light Stage 1.0
Siggraph 2002 Debevec presented Lightstage 3.0 with a 156 LED dome working on visible & IR light.

Siggraph 2005 - Debevec's team brought "Performance Relighting and Reflectance Transformation with Time-Multiplexed Illumination"

Light Stage 5 - a bigger dome with more LEDs, using very high speed cameras - 4800 frames a second. Basically the LEDs flash in different combinations at a very high rate, so they can show a lot of different lighting environments every 1/20th of a second (at 4800fps that's ~240 distinct lighting states per 1/20th of a second).

http://www.fxguide.com/article268.html&mode=nested
http://gl.ict.usc.edu/Research/3DDisplay/

http://ptonline.aip.org/journals/doc/PHTOAD-ft/vol_60/iss_11/24_1.shtml

A decent url on it - here

http://arstechnica.com/journals/har...-significant-market-share-gain-by-end-of-2008

Other things
Image Metrics, USB 3 for Macs, a FW bump, Lucid & Hydra - surely an "attempt to build a completely GPU-independent graphics scaling technology" might be warmly met. Maybe it's a technology stop-gap before Larrabee shows its true potential, and might not get taken up here. Link here. It's a SoC that links to a CPU and the GPUs (Nvidia or AMD), and would apparently sit either on a motherboard or on a graphics board. Is this another inkling of Apple potentially making more bespoke motherboards, and not going with a standard chipset?
"Lucid is claiming nearly linear scaling on up to 4 GPUs compared to 50-70% with SLI or CrossFire". Implementations available in early 2009 apparently. Chip draws 5W. Extremetech link
 

t0mat0

macrumors 603
Original poster
Aug 29, 2006
5,473
284
Home
64-Bits

Prince McLean (the pen name, at least, over at AppleInsider) aka Dan over at roughlydrafted.com sure does churn out those articles - it's been nearly a 1-a-day rate!

The article, Road to Mac OS X 10.6 Snow Leopard: 64-Bits
Article is here

Looks like this is part rehash of the myth series summarised previously, part new information/different take - this is more about features than disproving disses, real or imaginary, to Snow Leopard. I'd recommend reading the Myth 2 - No 32-bit support review here (the graphic below is useful in seeing the transition to 64-bits - the system, KEXTs, drivers and finally the kernel moving across).

64-bit computing, 64-bit support:

1980s: PCs went 8-bit to 16-bit to 32-bit architectures.

What does the bit architecture bit mean? By increasing the bit width a bit (hoho) there was more addressable memory for applications. Much better to have lots of memory available than to keep writing out to hard disc, which is slow. We still see this today.

A "CPU's architecture, memory address bus, and its data registers (used to load and store instructions) may all have different bit widths."

By the late 80s, Apple had a full 32-bit hardware setup - the Mac II's 68020 processor, and the "32-bit clean" Mac System 7 software, which together enabled applications & the system to theoretically use as much as 4GB of directly addressable memory.
"By 1995, Microsoft was shipping its own 32-bit Windows API with WinNT and Win95 to take advantage of Intel's 32-bit 80386 and 486 CPUs."

Segue to a bit later - the 4GB limit of 32-bit memory addressing could begin to be felt - see the maximum addressable levels on page 1 of this thread. It's not progressively going up; it's been stuck there for a while.

The 1994 migration Apple took to PowerPC was a move toward 64-bit computing - "PowerPC offered a scaled down version of IBM's modern 64-bit POWER architecture, with 32 individual 32-bit general purpose registers; Intel's 32-bit x86 was a scaled up version of a 16-bit processor, and only offered 8, 32-bit GPRs. The lack of registers on x86 served as a significant constraint on potential performance and complicated development."

To ease this RAM limitation problem prior to a move to 64-bit CPUs, Intel added support for "Physical Address Extension" or PAE to its 32-bit x86 chips, which provided a form of 36-bit memory addressing, raising the RAM limit from 4GB to 64GB. Using PAE, each application can still only address 4GB, but an operating system can map each app's limited allocation to the physical RAM installed in the computer.

Want to use >4GB of RAM on a 32-bit PC using Microsoft? You need support for PAE in the OS kernel, with this seen in Enterprise, Datacenter, & 64-bit versions of Windows. Standard 32-bit versions of Windows XP, Vista, & Windows Server are "all still constrained to using 4GB of physical RAM, and they can't provide full access to more than about 3.5GB of it, making the limit an increasingly serious problem for desktop Windows PC users."

Late 90s - Windows NT was ported to 64-bit architectures "such as Digital's Alpha, MIPS, PowerPC, and Intel's ill-fated Itanium, but this also only benefitted high-end workstation users."

Apple's real 64-bit hardware came with the PowerMac G5. "The G5 processor delivered 32 individual 64-bit GPRs and a 42-bit MMU (memory management unit) for directly addressing 4TB of RAM, although the PowerMac G5 hardware was limited to 8GB."

2003 - Mainstream PCs got a taste of 64-bit when AMD released its Opteron CPU using an "AMD64" architecture that turned out to be a more practical alternative for upgrading into the world of 64-bits than Intel's entirely new Itanium IA-64 design. The new 64-bit PC architecture, also called x86-64 and x64, largely caught up to PowerPC by supplying 16, 64-bit GPRs, and potentially a 64-bit memory bus to address 16EB (16 million TB) of RAM. "AMD's x64 processors can theoretically address 48-bits, or 256TB, in hardware. In practice, no PC operating system currently supports more than 44-bits, or 16TB of virtual memory, and of course considerably less physical RAM."
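If you want to sanity-check those figures yourself, the address-width arithmetic is just powers of two. A throwaway C snippet (nothing platform-specific, just the widths mentioned above):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    const int widths[] = {32, 36, 44, 48};  /* address widths discussed above */
    for (int i = 0; i < 4; i++) {
        uint64_t bytes = 1ULL << widths[i];
        printf("%2d-bit addressing: %6llu GB (%5llu TB)\n", widths[i],
               (unsigned long long)(bytes >> 30),
               (unsigned long long)(bytes >> 40));
    }
    /* A full 64 bits would be 2^64 bytes = 16EB (16 million TB) -
       one bit too many to shift into a 64-bit integer here. */
    return 0;
}

Which gives the 4GB, 64GB, 16TB and 256TB figures quoted above.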

More basic PC users might not need 16TB, but are starting to see the possibilities of going >4GB RAM, and 32-bit PCs can't help. 32-bit XP won't be going above this limit.

With GBs of RAM to hand, you need the OS to see it and be able to:
1) Address >4GB RAM for the entire system
2) Be able to give each individual application access to a decent share of it.

2003 - The 64-bit PowerMac G5 was constrained by the 32-bit Mac OS X Panther. Panther allowed the system to support >4GB of memory, but still corralled each app into its own 32-bit, 4GB space.

2005 - Mac OS X Tiger enabled more, but shipping Macs could only physically support 8GB RAM. [There's a fair bit of software jiggerpokery to help improve the general situation, see the article].

The article's graphics show the main gist (see the attached road-to-sl graphic).


2006 - The migration to Intel meant that Apple had to go back to 32-bit systems on Core Solo and Duo CPUs. Some of the benefits of the PowerPC were lost, and so the speed of the CPU was relied on a fair bit.

Late 2006 - Apple widened support to include the 64-bit x64 PC architecture in the new Mac Pro & Xserve. Subsequent desktop Macs using C2D also delivered 64-bit hardware support. Tiger got updates, and Apple got back to the same level of 64-bit support for x64 Intel processors as it had for the PowerPC G5.

So in 2006, Apple had gone from some 64-bit support, downgraded to 32-bit, but in doing so got all the Mac product lines to Intel, and started pushing its products and thus its users to 64-bits through the C2D Macs. And as the article notes, "In its spare time, the company also threw the iPhone together while also working to develop its next jump in 64-bit operating system software."

2007 - Leopard had more 64-bit support. Cocoa is now 64-bit, Carbon wasn't given full 64-bit support so developers are nudged towards Cocoa if they want to deliver full 64-bit applications with a UI.

Adobe is one company with a product in need of an update - CS4 is only going to be a 64-bit app on Windows, as its legacy code is based on Carbon.
Adobe's position is that a 64-bit Mac app from them is CS5 or beyond - they need to port the UI code of Photoshop and other apps to Cocoa. Photoshop is an app that would really benefit from porting, as big files and heavy use of filters could do with lots of accessible RAM.

So currently, Mac OS X Leopard hosts both 32-bit & 64-bit apps on top of a 32-bit kernel. Leopard uses PAE, so the 32-bit kernel can address 32GB of RAM in the Mac Pro and Xserve; Apple's consumer Macs can only support 4GB RAM, but at least, unlike 32-bit XP, they can use the entire 4GB (with appropriate hardware support). "Leopard's 32-bit kernel enabled Apple to ship 64-bit development tools to give coders the ability to build applications that can work with huge data sets in a 64-bit virtual memory space (and port over existing 64-bit code), without also requiring an immediate upgrade to all of Mac OS X's drivers and other kernel-level extensions. That transition will happen with Snow Leopard."



Snow Leopard will have 64-bit support down into the kernel, so Mac systems could accommodate more than the 32GB of RAM currently reachable via 32-bit PAE. With the kernel supporting full 64-bit memory addressing, as mentioned previously, the user can add as much RAM as they can afford and as the mainboard has space for. [Annoyingly, the question as to whether you could stack RAM on something akin to a PCI board and link it into the mainboard to boost the RAM levels even higher isn't touched on.]

Presumably you might be able to make a 2U Xserve, though, with space to put more RAM.
 

Attachments

  • road-to-sl-080826-6.gif (104.1 KB)

t0mat0

macrumors 603
Original poster
Aug 29, 2006
5,473
284
Home
Multitouch

Oh yes. Come on you beauty - Apple keeps teasing with all these patents! See the front page -

This could overlap for a touchscreen iPod, or a multitouch pad etc, or even a display.

As is pointed out in the thread (currently up to 6 pages), the question is how this would fit -
I wonder who out of Seth Weintraub, Daniel from roughlydrafted, Arn and a write-in Mac mag question spot could answer how Apple could fit in a touch framework, but have it transparently hidden for things that don't have access to multi-touch currently (e.g. a current Mac Pro).
 