ANOTHER processor change? If I have to go through this all over again a forth time I will stop programming and become a barista!!
What are you using Forth for, anyway?
If you, as a programmer, still give two craps what chip your code is running on -- well, you suck.
... instead of fetching and decoding the instruction again, the CPU directly accesses the decoded micro-ops from the trace cache, thereby saving considerable time. Moreover, the micro-ops are cached in their predicted path of execution, which means that when instructions are fetched by the CPU from the cache, they are already present in the correct order of execution.
Would it be silly to think that this project is about creating a Native LLVM processor?
Build the runtime optimiser as a block that could be tied directly to a standard core or even a bunch of cores. Similar to how Apple use LLVM to dynamically switch OpenGL code from GPU cores to CPU cores.
Can you provide a reference on your statement about OpenGL? I don't think it's correct.
The LLVM JIT compiler can optimize unneeded static branches out of a program at runtime, and thus is useful for partial evaluation in cases where a program has many options, most of which can easily be determined unneeded in a specific environment. This feature is used in the OpenGL pipeline of Mac OS X Leopard (v10.5) to provide support for missing hardware features.[5] Graphics code within the OpenGL stack was left in intermediate form, and then compiled when run on the target machine. On systems with high-end GPUs, the resulting code was quite thin, passing the instructions onto the GPU with minimal changes. On systems with low-end GPUs, LLVM would compile optional procedures that run on the local central processing unit (CPU) that emulate instructions that the GPU cannot run internally. LLVM improved performance on low-end machines using Intel GMA chipsets. A similar system was developed under the Gallium3D LLVMpipe, and incorporated into the GNOME shell to allow it to run without a GPU.[6]
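To make that concrete, here's the rough idea in plain C -- not Apple's code and not how LLVM actually does it (LLVM keeps the generic form as IR and JIT-compiles a specialised version on the target machine), just a hand-written before/after so you can see what "optimizing unneeded static branches out at runtime" buys you. All the names below are invented for the example.

```c
/* Hand-written illustration of what the LLVM JIT in the quote does
 * automatically: once the hardware "options" are known at runtime, the
 * branches testing them are dead weight and can be folded away.
 * Names (gl_options, shade_generic, ...) are made up for this sketch. */
#include <stdio.h>

struct gl_options {
    int hw_vertex_programs;   /* does the GPU handle this itself?  */
    int hw_fog;               /* ... or must the CPU emulate it?   */
};

/* Generic path: re-tests the same static options on every vertex. */
static float shade_generic(float v, const struct gl_options *opt) {
    if (!opt->hw_vertex_programs) v = v * 0.5f;  /* CPU emulation step */
    if (!opt->hw_fog)             v = v + 0.1f;  /* CPU fog fallback   */
    return v;
}

/* What a specialised version looks like for a GPU that has both
 * features: the branches are simply gone. */
static float shade_specialised_highend(float v) {
    return v;
}

int main(void) {
    struct gl_options low = {0, 0};
    printf("generic on low-end GPU: %g\n", shade_generic(1.0f, &low));
    printf("specialised, high-end:  %g\n", shade_specialised_highend(1.0f));
    return 0;
}
```

On a high-end GPU the specialised code is basically a pass-through to the hardware; on a low-end one the JIT would instead bake the CPU emulation steps in.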
This doesn't sound like a new idea at all, nor does it sound very important since modern chips do branch prediction to keep the pipelines full and do it well.
Consider, for example, the trace cache, which does the same thing but based on dynamic execution of the program, and so requires no compiler support.
http://en.wikipedia.org/wiki/NetBurst_(microarchitecture)#Execution_Trace_Cache
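If it helps, here's a toy model of what the trace cache saves -- it only counts decode work on a repeated loop body and says nothing about the "predicted path" part beyond a comment. The cache size and the fake 10-instruction loop are made up for the example.

```c
/* Toy model of the trace-cache idea from the quote above: decode each
 * instruction address once, keep the "micro-ops" around, and reuse them
 * on later visits instead of decoding again. Real trace caches store
 * whole predicted paths of micro-ops; this only models the decode saving. */
#include <stdio.h>

#define CACHE_SIZE 256

int main(void) {
    int cached[CACHE_SIZE] = {0};   /* 1 = decoded micro-ops already held */
    long decodes = 0, hits = 0;

    /* Pretend program: a 10-instruction loop body executed 1000 times. */
    for (int iter = 0; iter < 1000; iter++) {
        for (int pc = 0; pc < 10; pc++) {
            if (cached[pc % CACHE_SIZE]) {
                hits++;             /* micro-ops served from the cache */
            } else {
                decodes++;          /* had to fetch and decode         */
                cached[pc % CACHE_SIZE] = 1;
            }
        }
    }
    printf("decodes: %ld, cache hits: %ld\n", decodes, hits);
    return 0;
}
```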
Apple don't design CPUs, they design SoCs.
Why do people think this is an architecture change?
All it is is a new technology for the ARM chips already in use in Apple's iOS devices. At most there will be new APIs for developers.
Now this is the cool stuff I want to hear more about.
This certainly sounds interesting... I guess it could have performance implications when you have nested loops. Otherwise it seems the performance benefit would be minuscule, as it will only be improving the pipeline when a loop completes.
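My guess (and it is only a guess, not anything Apple has said) is that the loops that matter are the ones whose exit condition depends on data loaded inside the loop, so the trip count isn't known until the last load resolves and a conventional predictor eats a flush when the loop finally falls through. Something like:

```c
/* The sort of loop the comment is talking about: the exit condition depends
 * on data loaded inside the loop, so the trip count is unknown until the
 * final next == NULL load resolves. Whether Macroscalar actually targets
 * this pattern is pure speculation here. */
#include <stdio.h>
#include <stddef.h>

struct node { int value; struct node *next; };

static long sum_list(const struct node *n) {
    long total = 0;
    while (n != NULL) {          /* data-dependent exit branch */
        total += n->value;
        n = n->next;
    }
    return total;
}

int main(void) {
    struct node c = {3, NULL}, b = {2, &c}, a = {1, &b};
    printf("%ld\n", sum_list(&a));   /* prints 6 */
    return 0;
}
```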
If you, as a programmer, still give two craps what chip your code is running on -- well, you suck. That would also explain why barista is an acceptable move, salary-wise.
Abstract, brother, abstract. Use blocks, adapter objects, build lightweight APIs around your process-intensive work and use the core APIs wherever possible -- and advances in chips become basically free. Stop ice skating uphill.
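In plain C terms, something like this -- the "fancy" backend is a made-up stand-in, the point is just that callers only ever see the one-function API and never the chip:

```c
/* One way to read the advice above, sketched in C: put the crunchy work
 * behind a narrow API and pick the backend once, so calling code never
 * mentions the chip. scale_fancy is a named stand-in, not real SIMD code. */
#include <stdio.h>

static void scale_scalar(float *dst, const float *src, float k, int n) {
    for (int i = 0; i < n; i++) dst[i] = src[i] * k;
}

static void scale_fancy(float *dst, const float *src, float k, int n) {
    /* Imagine a vectorised or accelerator-backed version here. */
    scale_scalar(dst, src, k, n);
}

/* The lightweight API: callers only ever see this. */
static void (*scale_buffer)(float *, const float *, float, int) = scale_scalar;

static void pick_backend(int has_fast_path) {
    scale_buffer = has_fast_path ? scale_fancy : scale_scalar;
}

int main(void) {
    float in[4] = {1, 2, 3, 4}, out[4];
    pick_backend(1);
    scale_buffer(out, in, 2.0f, 4);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```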
Since Apple seems to be gearing up the trademark on the term, you could be hearing more about this very very soon.
If this is what I suspect it is, Apple is looking for multiple ways to efficiently have higher performance portable devices with longer battery life.
That's actually very interesting. I suppose they did this because the original Intel GMA lacked support for vertex programs, so that part of the OpenGL stack had to run on the CPU.
Apple trademarks everything. I recently saw them trademark the word "automagically". This is ridiculous.
Garbage. Exactly the opposite.
If you are trying to stand out by pushing the envelope in real-time crunching per second (which applies to many of the games, audio, voice processing, image processing and VR apps in the App Store, among others), then knowing precise details about the performance of your chip, and how to tweak your code for it, is absolutely necessary. It's the difference between a smooth 30 or 60 fps game or VR view, and some jerky mess with glitching sound.
Same if you are trying to absolutely minimize battery use per crunch for longer-running apps. S*cky are the programmers who don't care about this stuff, and waste the user's battery life as well as contribute to so-called global warming.
This is correct if you're using Java, but if you're coding in C then you better care about the chip your code is running on. Statements like this just make you seem like the sucky programmer.
Is it just me or is Macroscalar not really a marketable term?
I know next to nothing about all the technical stuff you've been discussing, so it was an interesting (and partially comprehensible) thread to read through. But I don't see Apple using the term Macroscalar as they have used Retina.
Will they not someday move towards a closed environment where people create their software with iProgram and publish it exclusively to the Mac App Store? I wouldn't want this to happen, but is there even the slightest chance Apple would ever want this?