But... The computer has to run Flash to be useful!
The missile nose cone targeting microprocessor I developed for DARPA ran Flash 1.0. Missiles get bored on those long flights and like to play flash hangman.
So when you run out of physical RAM, all you can do is (hopefully gracefully) die?
I wonder if that is why there haven't been any games that really push iPhone OS devices. It appears that the (later) hardware is capable of Gears of War-type graphics (purely on a technical level), but the RAM shortage can be problematic.
I'm sorry, but who are you again? (Not my words...)
<snip>
Back in the early days, adding a VM pager was considered a (somewhat) acceptable way to increase effective memory capacity without buying more RAM. But since the speed gap between hard disk and main memory has widened since then, the pager is now more of a last-ditch stability feature: if the working set of your workload exceeds physical memory, you're pretty hosed performance-wise.
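If you want to see that cliff for yourself, here's a quick toy you can compile. Every size in it is made up; you'd tune BUF_BYTES against your own machine's RAM:

```c
/* Toy demo of the working-set cliff: touch pages at random in a big
 * buffer and time it. With BUF_BYTES below your physical RAM this runs
 * at memory speed; push it above and the pager turns page touches into
 * disk I/O. All sizes are made up -- tune them for your machine. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_BYTES (2ULL * 1024 * 1024 * 1024)  /* 2 GB: tune vs. your RAM */
#define PAGE      4096ULL
#define TOUCHES   1000000ULL

int main(void)
{
    char *buf = malloc((size_t)BUF_BYTES);
    if (!buf) { perror("malloc"); return 1; }

    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);

    unsigned long long x = 12345;
    for (unsigned long long i = 0; i < TOUCHES; i++) {
        x = x * 6364136223846793005ULL + 1;                 /* cheap LCG  */
        buf[(size_t)((x % (BUF_BYTES / PAGE)) * PAGE)] = 1; /* touch page */
    }

    clock_gettime(CLOCK_MONOTONIC, &b);
    double secs = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    printf("%llu random page touches in %.2f s\n",
           (unsigned long long)TOUCHES, secs);
    free(buf);
    return 0;
}
```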
Looked plenty fast enough in the videos, not like I'm doing Handbrake rips on it.
Interesting tidbit, but is the processor really important in a product like this?
haha ok. I am honored and flattered to be both copied and quoted by such a well-respected poster.
I assume you are discussing the Access Speed, not the size gap?
I am curious to see how this does or doesn't change on desktop systems in the future as SSDs become more popular/standard. While an SSD is still slower to access than physical memory, it is much quicker than rotating storage.
Now assume that when whichIndexArray contains FALSE, indexArray1 contains large random numbers. Then the processor will try to read a random location.
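For anyone who didn't see the earlier post, here's a minimal reconstruction of the kind of code in question. whichIndexArray and indexArray1 are the names used upthread; data, indexArray2, and the function shape are my own invention:

```c
/* Sketch of the scenario: when whichIndexArray[i] is FALSE,
 * indexArray1[i] holds a large garbage value. */
#include <stddef.h>
#include <stdint.h>

int32_t gather(const int32_t *data,
               const uint8_t *whichIndexArray, /* predicate per element   */
               const size_t  *indexArray1,     /* garbage when pred FALSE */
               const size_t  *indexArray2,
               size_t n)
{
    int32_t total = 0;
    for (size_t i = 0; i < n; i++) {
        /* If the branch predictor guesses TRUE while the real value is
         * FALSE, the CPU may speculatively issue the load
         * data[indexArray1[i]] -- a read from an effectively random
         * address -- before the mispredict is detected and unwound. */
        if (whichIndexArray[i])
            total += data[indexArray1[i]];
        else
            total += data[indexArray2[i]];
    }
    return total;
}
```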
To the naïve programmer (rather than the CPU designer), it would seem to be obvious that a speculatively-executed instruction could not be permitted to cause an exception of any kind. Anything else would violate the semantics of the ISA.

Either any potential exception would be delivered later, when the code path hit that instruction, or speculative exceptions simply would not occur. (That is, if any speculatively-executed instruction would cause an exception, that speculative instruction is unwound and never happened.)
In terms of RAM usage: developers can always manually load and unload resources to stay within RAM limits, and they can usually make smarter decisions about which resources to load/unload than the OS VM pager can. A sketch of what I mean is below.
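Something like this toy LRU cache is the kind of policy I have in mind. All the names and the budget are made up; the point is just that the app picks its own eviction victims instead of leaving it to the pager:

```c
/* Toy LRU resource cache: the app, not the pager, decides what to
 * evict when a fixed memory budget is exceeded. Names/sizes are
 * illustrative only. */
#include <stdlib.h>
#include <string.h>

#define BUDGET_BYTES (32UL * 1024 * 1024)   /* pretend RAM budget */
#define MAX_RES 64

typedef struct {
    char          name[32];
    void         *data;
    size_t        size;
    unsigned long last_used;   /* logical clock for LRU ordering */
} Resource;

static Resource cache[MAX_RES];
static size_t used_bytes = 0;
static unsigned long tick = 0;

/* Free the least-recently-used entry; returns 0 if nothing to evict. */
static int evict_lru(void)
{
    int victim = -1;
    for (int i = 0; i < MAX_RES; i++)
        if (cache[i].data &&
            (victim < 0 || cache[i].last_used < cache[victim].last_used))
            victim = i;
    if (victim < 0)
        return 0;
    free(cache[victim].data);
    used_bytes -= cache[victim].size;
    cache[victim].data = NULL;
    return 1;
}

/* Load (or re-use) a resource, evicting old ones to stay in budget. */
void *get_resource(const char *name, size_t size)
{
    for (int i = 0; i < MAX_RES; i++)
        if (cache[i].data && strcmp(cache[i].name, name) == 0) {
            cache[i].last_used = ++tick;
            return cache[i].data;           /* already resident */
        }
    while (used_bytes + size > BUDGET_BYTES)
        if (!evict_lru())
            return NULL;                    /* can't fit; caller copes */
    for (int i = 0; i < MAX_RES; i++)
        if (!cache[i].data) {
            strncpy(cache[i].name, name, sizeof cache[i].name - 1);
            cache[i].name[sizeof cache[i].name - 1] = '\0';
            cache[i].data = malloc(size);   /* stand-in for real loading */
            cache[i].size = size;
            cache[i].last_used = ++tick;
            used_bytes += size;
            return cache[i].data;
        }
    return NULL;                            /* table full */
}
```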
Really? Why? If I'm speculatively executing a floating-point instruction, does all progress have to stop if it generates a divide-by-zero? If the speculative instruction is a load/store, can't I keep going if there's a cache miss? If a conditional branch is followed by another conditional branch, do I have to stall until I determine whether the first conditional branch was correctly predicted?
In reality, in most out-of-order-retire microarchitectures, these things are permitted to occur. If it turns out that the code branch should not have been taken, things are unwound and you pay any applicable time penalty. You just have to make your branch prediction correct often enough that the benefit of guessing right most of the time more than makes up for the penalty you pay when you guess wrong.
Of course, when you take power consumption into account, the whole calculus changes.
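To put rough numbers on "often enough" (every figure here is an assumption for illustration, not a measurement):

```c
/* Back-of-the-envelope: when does speculating past a branch pay off?
 * All numbers below are made up. */
#include <stdio.h>

int main(void)
{
    double hit_rate = 0.95;   /* assumed branch-predictor accuracy */
    double win      = 10.0;   /* cycles saved per correct guess    */
    double penalty  = 15.0;   /* cycles to unwind a wrong guess    */

    double avg = hit_rate * win - (1.0 - hit_rate) * penalty;
    printf("average gain per branch: %.2f cycles\n", avg);
    /* Positive (8.75 here), so guessing wins on average even though
     * each mispredict costs more than a correct guess saves. */
    return 0;
}
```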
To the naïve programmer (rather than the CPU designer), it would seem to be obvious that a speculatively-executed instruction could not be permitted to cause an exception of any kind. Anything else would violate the semantics of the ISA. Either any potential exception would be delivered later, when the code path hit that instruction, or speculative exceptions simply would not occur.
I think that we are in agreement here - as you say, "things are unwound" and the bad thing never happened.

One point though - to me, "exception" is an ISA-defined method of notifying the code stream that something did not work as expected. You don't have exceptions for cache misses, since the cache is not defined in the ISA....

By the way - how can you speculatively execute a store instruction? Do you put the store into the write queue and let it sit until the store is committed?
Store where? Into a renamed register... sure. As long as nothing outside the speculative stream depends on it, you can just throw the contents away.

Into memory? No.
It's not uncommon to have a dedicated store queue that holds speculative stores (used for short-circuiting subsequent loads). In the old days we used to write speculative stores into clean cache lines (and mark them dirty and speculative) as I described above.
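Roughly like this, in toy form. The names are mine, and real hardware does the lookup with CAMs rather than a linear scan:

```c
/* Toy speculative store queue with store-to-load forwarding. */
#include <stdbool.h>
#include <stdint.h>

#define SQ_SIZE 8

typedef struct {
    uint64_t addr;
    uint64_t data;
    bool     valid;   /* entry holds a not-yet-committed store */
} StoreQueueEntry;

static StoreQueueEntry sq[SQ_SIZE];
static int sq_tail = 0;

/* Speculative store: buffered in the queue, never written to memory.
 * (Toy: wraps and overwrites the oldest entry when full.) */
void spec_store(uint64_t addr, uint64_t data)
{
    sq[sq_tail % SQ_SIZE] = (StoreQueueEntry){ addr, data, true };
    sq_tail++;
}

/* Load: check the queue (youngest first) before going to memory, so a
 * load sees the value of an older in-flight store to the same address. */
bool sq_forward(uint64_t addr, uint64_t *data_out)
{
    for (int i = sq_tail - 1; i >= 0 && i > sq_tail - 1 - SQ_SIZE; i--) {
        if (sq[i % SQ_SIZE].valid && sq[i % SQ_SIZE].addr == addr) {
            *data_out = sq[i % SQ_SIZE].data;   /* forwarded hit */
            return true;
        }
    }
    return false;   /* miss: read from cache/memory instead */
}

/* On a mispredict, squash: the buffered stores simply never happened. */
void sq_squash_all(void)
{
    for (int i = 0; i < SQ_SIZE; i++)
        sq[i].valid = false;
    sq_tail = 0;
}
```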
It's not uncommon to have a dedicated store queue that holds speculative stores (used for short-circuiting subsequent loads). In the old days we used to write speculative stores into clean cache lines (and mark them dirty and speculative) as I described above.
The first isn't memory; it's a hodge-podge between special registers and memory. The latter doesn't really work either, as you also pointed out above. Once you can bring multiple contexts into execution, it breaks down (you're sharing something that is specific to one execution context). If it can't run Unix (or any other multiprocessing OS, let alone a multicore implementation), it really isn't a real product, IMHO. A nice hack perhaps, but not something you'd ship.
This may not happen before the speculation is resolved, so no need to pre-emptively stall processor B. And, of course, processor B might not even be caching the speculative address, in which case there's no reason at all to stall.
Reading this thread, I realized that it's been ages since there's been a CISC vs. RISC debate here. Maybe people do learn, after all.
It can run UNIX.
Is this still about ARM? If so, I don't know why that was even a question. Corel had a machine called the Netwinder, a DEC StrongARM running Linux. DEC's StrongARM evaluation board ran a flavor of BSD. This was 10-15 years ago.
Under the hood, not a heck of a lot of difference, and compilers love x86.
Definitely access speed. Hard disk capacity has actually increased at an exponential rate exceeding both CPU speed and RAM speed. Hard disk latency, on the other hand, has only improved by a small factor since the early 1990s.
I'm curious to see how SSDs affect things too. I've run benchmarks suggesting that disk-swapping performance on Mac OS X is limited by disk bandwidth, not latency, so an SSD's biggest advantage (latency) wouldn't help there, but the increased bandwidth would.
On the other hand, maybe you don't want to use an SSD as the backing store for paging, since all that swap writing could negatively affect the longevity of the drive? I really don't know anything about SSDs though.
That's a great point.
I know that older flash memory technology had limits on how many writes the chips could endure. I have also read articles claiming 5 to 30 years of continuous writes, depending on the technology and components used within a modern SSD.
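Those multi-decade claims are easy to reproduce with back-of-the-envelope math. Every number below is an assumption for illustration, not any drive's actual spec:

```c
/* Rough flash endurance arithmetic -- all figures assumed. */
#include <stdio.h>

int main(void)
{
    double capacity_gb   = 64;     /* drive size                       */
    double pe_cycles     = 10000;  /* assumed program/erase cycles     */
    double writes_gb_day = 50;     /* assumed daily write volume       */

    /* Total writable data, assuming perfect wear leveling. */
    double total_writes_gb = capacity_gb * pe_cycles;

    printf("~%.0f years\n", total_writes_gb / writes_gb_day / 365.0);
    /* 64 GB * 10000 / (50 GB/day) ~= 35 years -- which is roughly how
     * you arrive at those multi-decade figures. */
    return 0;
}
```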
Truth is, I'm like you: not 100% up to speed on this technology. Perhaps I should hit the books again too.
In the case of the iPad, does it matter what technology is in use? Is Apple using the same flash memory that the iPhone/iPod (nano, shuffle, and touch) use? That would be closer to what is used in SD cards than what is used in SSDs, or am I totally off base?