iPad Tech Specs: Cortex A8, 256MB RAM, PowerVR SGX 535

Or they could leave it as "device," since I would hope the person would generally know what they're holding in their hands.

After 30 long years on this blue planet, I can deduce that the device in my hands is an iPod touch, and if a similar message popped up on my iPod touch it wouldn't have me assuming that my Nexus One was low on memory. ;)

Whereas most people would be looking around and saying "A device? What device?", or they'd stop reading as soon as they hit "device" because it all goes over their heads. "Device" is programmer jargon. You don't write messages for programmers; you write them for ordinary users.

So how is a user to know that by "device" you mean the whole thing (in this case an iPad), and not the display device, or the storage device, or the audio device, or a camera device attached with an adapter, or the keyboard device attached with another adapter?

So if my wife read this alert, and asked me what it means, I would read it and say "it means the programmer is a lazy bum who can't be bothered to write messages that you can understand. It also means that the program doesn't work properly, and if you exit it and start it again, it will probably work for a while, but not for very long".
 
So if my wife read this alert, and asked me what it means, I would read it and say "it means the programmer is a lazy bum who can't be bothered to write messages that you can understand. It also means that the program doesn't work properly, and if you exit it and start it again, it will probably work for a while, but not for very long".

How is the programmer meant to read your wife's mind to know what message she would understand? Surely people aren't so backwards that they don't understand what a device is? Are vocabularies really that small outside the programming world? Are we talking about a world of Homer Simpsons looking for the "Any" key? Surely the idea of the iPad was that it represented one whole device?
 
The fact that with merely 256MB of RAM they have squeezed out such slick performance is beyond magical — somehow, after playing with the device, it no longer feels too cheesy to say that M word :D
 
Your point is clear, but I think you have to be realistic: I don't think even 0.01% of the planet knows what a kernel is ;)

We've all known about it since way back. We used to fight over it whenever a coconut was broken.

[Image: coconut-oil.jpg]
 
The fact that with merely 256MB of RAM they have squeezed out such slick performance is beyond magical — somehow, after playing with the device, it no longer feels too cheesy to say that M word :D

Define "performance". You probably mean to say that it loads pages faster than the iPhone, right? It's OK, because web browsing is what this device was designed to do.
 

gnasher729 said:
Or they could leave it as "device," since I would hope the person would generally know what they're holding in their hands.

After 30 long years on this blue planet, I can deduce that the device in my hands is an iPod touch, and if a similar message popped up on my iPod touch it wouldn't have me assuming that my Nexus One was low on memory. ;)

Whereas most people would be looking around and saying "A device? What device?", or they'd stop reading as soon as they hit "device" because it all goes over their heads. "Device" is programmer jargon. You don't write messages for programmers; you write them for ordinary users.

So how is a user to know that by "device" you mean the whole thing (in this case an iPad), and not the display device, or the storage device, or the audio device, or a camera device attached with an adapter, or the keyboard device attached with another adapter?

So if my wife read this alert, and asked me what it means, I would read it and say "it means the programmer is a lazy bum who can't be bothered to write messages that you can understand. It also means that the program doesn't work properly, and if you exit it and start it again, it will probably work for a while, but not for very long".

If I had the ****ing thing in my hand I'd know what they were on about.

Are there really people out there so thick that they wouldn't know that "device" being displayed on the screen of said "device" means the "device" they have in front of their eyes?

Are we honestly saying that there are simpletons out there who can purchase an iPad, sync it with their Mac/Windows PC, purchase applications on the App Store and execute those apps, but don't understand the meaning of a simple English word like "device"?
 
Hate to tell you, but everything in this world is outdated before it hits manufacturing :) Again, I wish people understood technology!

Well, not always; after all, the G5 was never outdated (well, sort of).

It's not worth investing in tech that doesn't have a plan to outdate itself.
 
The fact that with merely 256MB of RAM they have squeezed out such slick performance is beyond magical — somehow, after playing with the device, it no longer feels too cheesy to say that M word :D

Well, duh. There's no multitasking, so no app in the world can fill up all 256MB of RAM, and we have that awesome new A4 processor. Of course it's going to be fast.
 
How does that saying go?
"The early bird gets the worm, but the second mouse gets the cheese"

I will wait for the 2nd generation.
I am sure Jobs will make some changes to it.
 
Half correct, half wrong. Application code and operating system code are automatically swapped in and out, using the files containing the code as the backing store. I think non-modifiable data will work the same. Modifiable data, however, cannot be swapped out.

Sure, modifiable data can be swapped in OS X. As you noted, read-only data and code are swapped right back into the original executable files from which they originated.

Modifiable data is swapped into dynamically generated swap files. By default, OS X keeps these swap files at /var/vm/swapfileX, where X is a number starting at zero. After every reboot, all existing swap files are deleted; new swap files are then generated, used, freed, and reused as needs arise and go away.
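
For the curious, here's a minimal C sketch (my own illustration, not from anyone in this thread) that queries the real "vm.swapusage" sysctl on OS X, which reports the space backed by those /var/vm/swapfileX files:

#include <stdio.h>
#include <sys/sysctl.h>

int main(void) {
    struct xsw_usage xsu;   /* filled in by the kernel */
    size_t size = sizeof(xsu);

    /* "vm.swapusage" is a standard OS X sysctl key. */
    if (sysctlbyname("vm.swapusage", &xsu, &size, NULL, 0) != 0) {
        perror("sysctlbyname");
        return 1;
    }
    /* The totals rise and fall as swap files are created and freed. */
    printf("swap total: %llu MB  used: %llu MB  free: %llu MB\n",
           (unsigned long long)xsu.xsu_total >> 20,
           (unsigned long long)xsu.xsu_used  >> 20,
           (unsigned long long)xsu.xsu_avail >> 20);
    return 0;
}

Compile it with cc and run it while launching big apps; you can watch the "used" figure grow as modifiable pages get swapped out.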
 
As you noted, read-only data and code are swapped right back into the original executable files from which they originated.

I think that this is clearer if you say:

As you noted, read-only data and code pages are discarded when memory is low, and if needed again are read back from the original executable.

Read-only pages are never written back to the original file - there'd be no point in doing that.

I'd also assume read-only memory-mapped data files would behave like executables - pages would be discarded and re-read as needed. Memory-mapped COW files would be a mix - unmodified pages could be discarded and re-read, while modified pages would have to stay in RAM (on the phone OS; on OS X they could be written to backing store).
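
To make that distinction concrete, here is a rough POSIX sketch (my own illustration, not from anyone in the thread; it assumes a non-empty readable file named data.bin) of the two mapping flavors:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDONLY);   /* assumed to exist and be non-empty */
    if (fd < 0) { perror("open"); return 1; }
    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size == 0) return 1;

    /* Read-only mapping: clean pages can simply be dropped under memory
       pressure and re-read from data.bin later, just like executable code. */
    char *ro = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);

    /* Private (copy-on-write) mapping: pages stay file-backed until written;
       once dirtied they no longer match the file, so they must stay in RAM
       (on the phone OS) or be written to a swap file (on OS X). */
    char *cow = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE, fd, 0);
    if (ro == MAP_FAILED || cow == MAP_FAILED) { perror("mmap"); return 1; }

    cow[0] ^= 1;   /* this page is now dirty and no longer backed by the file */
    printf("ro[0]=%d cow[0]=%d\n", ro[0], cow[0]);

    munmap(ro, st.st_size);
    munmap(cow, st.st_size);
    close(fd);
    return 0;
}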
 
It's always easy to tell when a genius hasn't paid $15,000 to Apple in a LIFETIME.

What is it with you guys? Do you expect Apple to jump every time you get an itch?

This is crap. Blu-ray simply isn't of interest to enough people to make a difference. There has been a Flash implementation available on the Mac for ages if you are stupid enough to enable it. FireWire ports are history. Matte screens suck; get over that and the quality of your work will go up.

In any event, maybe Steve isn't listening because you are wrong. Very few people give a damn about Blu-ray, and your other complaints are garbage.

Bye, and take your self-important ass with you. I'm really tired of this crap; the world doesn't revolve around you personally. Further, the Mac line isn't that far out of date; it isn't like there is a gold rush of competing hardware.

Funny, but Apple is still selling Mac hardware like crazy. Some of the product line is very good value. As to chasing fads and opening themselves up to competition from the Chinese, you must be talking about the zero-margin, constantly changing PC industry.

First, very few of Apple's computers could rationally be called obsolete. Second, you will be really disappointed if Apple decides to take the PC industry in a different direction, especially if that vision doesn't jibe with your narrow views. Like it or not, Apple markets stable hardware platforms that many professionals appreciate.

Dave

That's OK, Davey. You go play with your new iToy now and let the rich grownups talk.

And when you're through with it, the trash can is over there, full of Jobs' obsolete iCrap.

:apple:
 
I think that this is clearer if you say:

As you noted, read-only data and code pages are discarded when memory is low, and if needed again are read back from the original executable.

Read-only pages are never written back to the original file - there'd be no point in doing that.
Good catch. Much better way of putting it.
 
It was never planned. It cost too much.
Again, you're not fully understanding your own argument. The only conclusion that comes from your argument is that, had 512MB been exactly the same price as 256MB, the iPad would have 512MB installed.

That is simply not the case. Even as it stands, the price difference would amount to 0.5-1% depending on model--an amount less than Apple's rounding to hit even price marks like $499. In other words, the price difference is near as makes no difference.
Power optimization and memory management aren't dependent on memory size. All good companies should optimize and manage memory regardless of the amount of memory available. As I said already, phones that existed before the iPad have more RAM, are smaller, and have less ability to dissipate heat than the iPad.
You're handwaving around the issue. For any particular project, there are precise engineering limits and targets; for this one, Apple had limits so precise that they had to lobotomize the ARM design, a drastic step beyond contemplation for most electronics manufacturers, tweaking power and die size on a level even finer than RAM silicon; 256 vs. 512MB is ham-fisted in comparison, something you can't seem to grasp. Powering 512MB rather than 256MB of silicon necessarily requires a significant difference in power, a significant difference in physical die size, differences in processor hardware to address all that memory (i.e., more silicon and more power again), and, accordingly, a significant difference in the heat output of the chip.

256MB met their performance targets and fit within their required tolerances and ceilings for the A4. 512MB may not. The existence of other devices with other hardware configurations says absolutely nothing about what limits were imposed by physics, by design, or by deliberate choice in this one.

By your admission that 512MB was never planned, you're clearly suggesting that the engineers didn't offer a design that had it, which then got cut for budget reasons, defeating your own argument. This is almost certainly because you've never dealt with an electronics company and don't understand what you're saying. Again, I'll repeat: more for the sake of more isn't part of good design or good engineering.
 
Again, you're not fully understanding your own argument. The only conclusion that comes from your argument is that, had 512MB been exactly the same price as 256MB, the iPad would have 512MB installed.

That is simply not the case. Even as it stands, the price difference would amount to 0.5-1% depending on model--an amount less than Apple's rounding to hit even price marks like $499. In other words, the price difference is near as makes no difference.

You're handwaving around the issue. For any particular project, there are precise engineering limits and targets; for this one, Apple had limits so precise that they had to lobotomize the ARM design, a drastic step beyond contemplation for most electronics manufacturers, tweaking power and die size on a level even finer than RAM silicon; 256 vs. 512MB is ham-fisted in comparison, something you can't seem to grasp. Powering 512MB rather than 256MB of silicon necessarily requires a significant difference in power, a significant difference in physical die size, differences in processor hardware to address all that memory (i.e., more silicon and more power again), and, accordingly, a significant difference in the heat output of the chip.

256MB met their performance targets and fit within their required tolerances and ceilings for the A4. 512MB may not. The existence of other devices with other hardware configurations says absolutely nothing about what limits were imposed by physics, by design, or by deliberate choice in this one.

By your admission that 512MB was never planned, you're clearly suggesting that the engineers didn't offer a design that had it, which then got cut for budget reasons, defeating your own argument. This is almost certainly because you've never dealt with an electronics company and don't understand what you're saying. Again, I'll repeat: more for the sake of more isn't part of good design or good engineering.

You are incorrect. The circuitry for supporting 512MB, and the 512MB itself, would have almost 0 effect on power consumption, and certainly would not take any more silicon real estate. It would also require no change in the CPU itself.
 
You are incorrect. The circuitry for supporting 512MB, and the 512MB itself, would have almost 0 effect on power consumption
That's simply not true. The impact on the overall system would be minor, but the difference in power required to flip 2 billion bits and 4 billion bits is significant unto itself. Remember that the A4 is drawing about half a watt, in total--less than a notebook RAM module.
and certainly would not take any more silicon real estate.
Also not true. DRAM density at this size cannot be perfectly doubled. 4Gb and 2Gb are different physical sizes.
It would also require no change in the CPU itself.
Not to the CPU, but to the A4 SoC. You are treating the design as a regular desktop construction, and it is not. This is essentially a complete motherboard in less space than a CPU socket. Differences that are infinitesimal even on notebook computers are not so in embedded devices.
 
That's simply not true. The impact on the overall system would be minor, but the difference in power required to flip 2 billion bits and 4 billion bits is significant unto itself.

Also not true. DRAM density at this size cannot be perfectly doubled. 4Gb and 2Gb are different physical sizes.

Not to the CPU, but to the A4 SoC.

First, you should know I designed CPUs for over a decade for AMD, Sun, and Exponential Technology, and I have a Ph.D. for which my dissertation involved the design of a memory controller and memory chip.

As to your first point: you don't flip 4 billion bits (or 2 billion bits) every cycle. Just because you have more RAM doesn't mean you suddenly gain the capability to modify it all every cycle - the number of bits that can actually flip each clock cycle is the same (assuming the same size data bus, which in this case is 64 bits). You may be thinking of refresh, but refresh doesn't flip anything, and the power impact is very small in the grand scheme of things. The power consumed each clock cycle is primarily a function of the number of bits that are flipped each cycle.
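
For anyone who wants the textbook version of that point, the usual first-order model for dynamic CMOS power (standard symbols, nothing Apple-specific) is:

P_{\mathrm{dyn}} \approx \alpha \, C \, V_{dd}^{2} \, f

where \alpha is the switching activity (the fraction of bits that actually toggle each cycle), C the switched capacitance, V_{dd} the supply voltage, and f the clock frequency. For a fixed 64-bit bus and a given workload, none of those terms doubles just because the array behind the bus is twice as deep; only refresh and leakage grow with capacity, and those are the small terms here.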

Your second point seems to be about the size of the RAM chips, not the size of the CPU chip. Yes, the RAM is likely to be physically larger if fabbed in the same process, but so what? The package is more than big enough to handle it, and even if the package needed to be expanded, there is plenty of empty space inside the case (and a bigger package would actually dissipate heat better, which is probably a good thing).

Your third point is wrong. The RAM chips are not on the SoC, of course. So nothing has to change on the SoC itself. Handling double the memory requires only one additional address bit, but the SoC doubtless already supports 32 address bits, which is more than enough - no way Apple or anyone else pares out the logic dealing with the high order address bits, else they'd need a new CPU each time they increased the memory capacity downstream. If you knew what you were doing you could replace the RAM chips right now with chips double the capacity and it would work.
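
To spell out the address-bit arithmetic (simple powers of two, nothing device-specific):

256\ \mathrm{MB} = 2^{28}\ \mathrm{bytes} \Rightarrow 28\ \text{address bits}, \qquad 512\ \mathrm{MB} = 2^{29}\ \mathrm{bytes} \Rightarrow 29\ \text{address bits}

while a 32-bit address space already spans 2^{32} bytes = 4GB, so the extra bit costs nothing that isn't already there.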
 
As to your first point: you don't flip 4 billion bits (or 2 billion bits) every cycle.
I'll defer to your judgment here, but in my experience litigating these issues, maintaining RAM state requires a constant, albeit small, application of power.
You may be thinking of refresh, but refresh doesn't flip anything, and the power impact is very small in the grand scheme of things. The power consumed each clock cycle is primarily a function of the number of bits that are flipped each cycle.
Indeed, and with more RAM comes more data, more transactions, and more bits flipping. "Very small in the grand scheme of things" is particularly apt here. Your experience isn't in these kinds of devices based on what you've said previously. The entire iPad consumes less power than just the N270 CPU in the JooJoo, for example. A milliwatt here and there is big.
Your second point seems to be about the size of the RAM chips, not the size of the CPU chip. Yes, the RAM is likely to be physically larger if fabbed in the same process, but so what? [...] Your third point is wrong. The RAM chips are not on the SoC, of course.
They are. The entire A4 is a single package, containing the ARM core, the PowerVR GPU, and the RAM.
If you knew what you were doing you could replace the RAM chips right now with chips double the capacity and it would work.
Unless there is some amount of talent that can split open packaged ICs, that's not true.
 
I'll defer to your judgment here, but in my experience litigating these issues, maintaining RAM state requires a constant, albeit small, application of power.

I'm now a litigator myself, and the one thing I know having worked so long in industry is that litigators don't know nearly as much about the technology they are litigating as they think they do.

The "constant" power you refer to is for refresh, and it is tiny compared to the read/write power, and probably less than 0.1% of the overall system power.

Indeed, and with more RAM comes more data, more transactions, and more bits flipping.

This argument makes no sense at all. If your working set is bigger than 256 MB and your program functions, it means you are manually paging to flash RAM (or reading static pages back from flash RAM), which burns MUCH more power than your DRAM power.

"Very small in the grand scheme of things" is particularly apt here. Your experience isn't in these kinds of devices based on what you've said previously.

My Ph.D. dissertation was exactly on this topic. It's entitled "High Speed Cache Memory Hierarchies for Yield-Limited Technologies" and the point was to get the maximum speed for the least amount of power and the smallest number of transistors. "Very small in the grand scheme of things" means that the increased power consumption is less than the variation in power consumption caused by varying CPU workload, and thus it has no implications on the overall system design.

The entire iPad consumes less power than just the N270 CPU in the JooJoo, for example. A milliwatt here and there is big.

No, a mW here and there is not big. Burning one more mW has no effect on cooling, and reduces battery life by about 30 seconds.
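
To put rough numbers on that (assuming, purely for illustration, an iPad-class 25 Wh battery and a 10-hour runtime, i.e. about 2.5 W average draw):

\Delta t = \frac{25\ \mathrm{Wh}}{2.500\ \mathrm{W}} - \frac{25\ \mathrm{Wh}}{2.501\ \mathrm{W}} \approx 0.004\ \mathrm{h} \approx 14\ \mathrm{s}

so one extra milliwatt costs on the order of seconds of battery life per charge, in line with the point above.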

They are. The entire A4 is a single package, containing the ARM core, the PowerVR GPU, and the RAM.

No, they are not. Being in the same package does NOT put them on the same SoC. A SoC is a system on a CHIP. There are THREE chips in the package: two RAM chips and a single CPU chip that comprises the CPU, GPU, and assorted logic blocks.
Unless there is some amount of talent that can split open packaged ICs, that's not true.

Of course you can split open packaged ICs. If you have the equipment, you can then desolder the RAMs and solder on new RAMs. My point was not that this is a practical exercise, however; it was that the CPU chip is already capable of driving 512MB of memory (indeed, much more than that), and no change to the CPU chip is required to accomplish that. (And, again, by "CPU chip" I mean the one chip of the three that is not off-the-shelf Samsung DRAM.)
 