You have made yourself look so terrible and cocky, so full of ego and vinegar, that no one but you would take a thing you say seriously. I am appalled by your actions on this forum and your blatant and wanton attack on someone who bested you in each and every response he posted - and he did it without resorting to your sophomoric language and posting style.

I do not care who is right or wrong on this subject, but this guy is an attorney and CPU designer - yet you provide zero details about your alleged credentials. Funny how that works, huh? You made yourself look the part of a puffed-up child who was just proven wrong. You acted in a way that was inappropriate and simply uncalled for. I read and re-read your posts and the responses to said posts, and not once did I read a retort that put words in your mouth. You put them there and cannot take them back - and it shows just how petty you are. You should be yanked from this forum for being a grade-A inconsiderate poster who has little to no respect for others. You should be ashamed of yourself.

D

I'll never be ashamed of defending myself against a basher who tells lies about me. You should be ashamed for attacking the victim and not caring about the facts of the matter.

I proved my case, others have confirmed it. The one attacking me was bashing me because he didn't read the report he was quoting!

FWIW, I'm not cocky, just very dismayed. I find it tragic that people are so easily swayed by self-appointed authority figures, even when said authority figures are obviously spreading falsehoods. This guy claims to have a PhD and automatically you presume he's right, despite the facts, which you reject out of hand with "I don't care who's right". It is not a difference of opinion when someone tells a lie about what someone else said in order to denigrate them.

This is why politicians lie, by the way. It is very effective.
 
So, I showed the X-ray picture to my wife (a non-techie/geek) and her response: "wow, that looks like Minesweeper"

lol..oh I love her.
 
Yeah, the memory bus claim is pretty funny. I can't say it is not twice as wide, but I can say it's absolutely hilarious to see people say that the CPU is "just a Cortex A8" and then claim the "memory bus is twice as wide". Unless people are in the habit of making SoCs with a memory controller whose bus is half the width of the CPU's, it would be pretty pointless to double the width of the memory bus without redesigning the CPU, in which case it is not a "Cortex A8" - which, while we're at it, is more like a confederation of designs than a single entity.

I've worked for and benchmarked systems at two different RISC system vendors, and doubling the width of the "memory bus" around a given CPU microarchitecture was not only done, but made a measurable difference in performance. Halving (and even quartering!) the width of the memory bus was also done in extremely cost sensitive systems. Doubling the memory bus width inside the package in conjunction with lowering the memory clock speed to save power might be one of the interesting trade-offs they could have made.
 
I've worked for and benchmarked systems at two different RISC system vendors, and doubling the width of the "memory bus" around a given CPU microarchitecture was not only done, but made a measurable difference in performance. Halving (and even quartering!) the width of the memory bus was also done in extremely cost sensitive systems. Doubling the memory bus width inside the package in conjunction with lowering the memory clock speed to save power might be one of the interesting trade-offs they could have made.

Surely this is code dependent though?

I mean tight-looped code that contains few cache misses won't benefit from this sort of memory interface change. Likewise, constantly long-branching code that contains many cache misses won't benefit much either...

not arguing, just trying to understand and learn, rare around here I know :)

Interesting, your idea about lowering the RAM clock speed but doubling the bus width (to save power)... surely, again, the benefits are going to be code specific, based on how many cache hits/misses...


As far as I can see the iFixit piccies only show the die lapped back to metal 8 (I presume power mesh and metal strapping, but don't know for sure). Not sure how somebody could assert that it's 100% a Cortex A8 from the layout/lapped shots I've seen...

In answer to somebody's question about whether you can buy bare memory die: yes you can... flash, SRAM, DRAM, obviously in quantity.

L
 
Next: Scientists at the Large Hadron Collider have analysed the iPad at a sub-atomic level, however, they have yet to find where exactly the "magic" is within the device. Some have speculated that the "magic" lies within the quarks that compose the Northbridge of the processor, while others believe that it is stored in another dimension to which the iPad has access. If the second proposition is true, then there has to be a protocol that the iPad uses to communicate with the parallel dimension.
 
How many years will it take for this company to rival Intel's chips? 5 years? 10 years?
(If ever - Intel has a huge head start that will keep growing. I don't know if Apple even has this type of plan anyway.)

These ARM designs are not trying to beat Intel on absolute performance (e.g. LINPACK benchmarks) and get into the Top 500 supercomputer list; it's all about highly effective low-power designs for mobile and embedded devices.

So, on that basis, when will they (Apple/ARM) rival Intel chips? I'm not sure when they "rivalled" them (i.e. were on equal terms) but they've certainly been comprehensively beating Intel since 1990 (http://en.wikipedia.org/wiki/ARM_Holdings).

Apple wasn't involved at the time but, as far as the ARM architecture is concerned, even the very first silicon back in 1985 made the Intel competition (and National Semiconductor and Motorola) look pretty sick.

- Julian
 
Unfortunately, all I can prove is there is a guy with a name similar to my handle on these forums who is both a lawyer and a CPU designer. I cannot prove I am that guy, though the fact that I've been on this board for a very long time, with the handle "cmaier," is somewhat persuasive (unless you believe I've been lying in wait for exactly such a moment). Feel free to email the person or call the person corresponding to those links and ask. That should prove it to you.

I hope you know what you do is bloody intense stuff!!!

Basic electronics confuse the !@#$ out of me, and I've worked closely with the commercial lawyer at Franklin Law, so IP isn't simple stuff. (If I were to become a lawyer I would probably follow in my father's footsteps and become a family lawyer, though.)

At least I found my strengths.
 
Some people like knowing things about the devices they use, others just like knowing things.

Thing is, why does this crap go to page 1?

EDIT:

Oh and

cmaier/econgeek/cmaier/econgeek/cmaier/econgeek/cmaier/econgeek/cmaier/econgeek/cmaier/econgeek/

yawn.jpg
 
Surely this is code dependent though?

I mean tight-looped code that contains few cache misses won't benefit from this sort of memory interface change.

What? Not sure if you are talking about code fetching/misses or data fetching/misses, but either way it is incorrect, for exactly the reason cmaier pointed out earlier. Perhaps another stab at it won't be pointless.

If you are going to need the code/data at address X, and will likely next request the code/data at X+1 (where 1 is the "word" size), then fetching and transferring the data at X and X+1 all at once will cut your requests in half.

You seem to be looking at what happens after the data is in the cache. There isn't a zero penalty for getting the code/data into the cache in the first place. Similarly, you also need to take into account what you are fetching. For a tight code loop you may get the code into the cache. However, the data is a quite different matter. If you have a large (relative to the L1 cache size) matrix, it probably will not fit in the L1 cache - even more so if the L1 cache has some set associativity. If it does fit, you are doing some relatively trivial matrix math (e.g., multiplying two 10x10 matrices), as opposed to dealing with large blocks of pixels on the screen or thousands of elements in a mesh.

The other factor folks seem to be skipping over is that the GPU is hooked to the same memory. Go look at the trend line of GPUs and see if "wider" isn't a long-term trend.

http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units
http://en.wikipedia.org/wiki/Comparison_of_ATI_Graphics_Processing_Units

And yet pixel representation has remained largely constant over time (8 bits per primary color). Likewise, the native register word size is independent of the bus width to memory.





Likewise constantly long branching code that contains many cache misses won't benefit much either......

Again, once you take a long branch, are you not going to need the next instruction word after the one you just branched to? The same "X is very likely followed by X+1" effect applies.

Even with data. Let's say you've finished multiplying row X from matrix A with column Y from matrix B. The next step in a tight loop is row X of A with column Y+1 of B. Sure, column Y+1 of B is a series of extra pulls (if matrix B is stored in row-major order), but striding through row X is in-order (again assuming row-major storage) until the end of the inner column loop jumps back to the beginning of the row. Similarly, finishing all the columns is followed by row X+1 (in row-major order, a sequential read).


Similarly, it doesn't have to be vectors and matrices. If you declare commonly co-occurring variables together:

int x, y, z;
double a, b, c;

z = x + y;

then, if reading x also pulls y into the same cache line, you are ahead of the game compared to:

int x;
double a;
int y;
double b;
int z;

z = x + y;

The first example is more typical of what you'd see in real code. The second breaks up the localized, sequential allocation.




In answer to somebody's question about whether you can buy bare memory die: yes you can... flash, SRAM, DRAM, obviously in quantity.

OK, thanks. However, that would seem to make you more dependent on the vendor and the specific memory implementation, not less, as iFixit's write-up suggested. Pinouts and packages can be standardized; what is inside the package is not likely to be, especially as connection tolerances get smaller and smaller. Similarly, the thermal validation for a bundle gets more complicated. You are also subject to supply problems if the memory vendor decides to tweak the design/implementation, or simply discontinues the part to chase a "hotter" segment of the memory market.

If Samsung is building millions of other 2-RAM-plus-ARM die packages with Samsung memory, and the different ARM die fits inside the same bounds as the other Samsung ones, they can just slide it into place, using exactly the same supply chain already in place, just with a slightly different ARM die. Reusable designs are generally much cheaper because the common design costs are spread out over more products. If the A4 ARM die is more power efficient than the common Samsung ARM dies, it will fit the thermal constraints; you just need to tweak the connections.

It is also likely that other ARM setups from the current iPhone's package era need that wider bus width too: bigger screens (more GPU bandwidth needed) and faster clock rates (a CPU pipeline to keep filled). I doubt the A4 is the only chip rolling out of the factories this year with that feature.
 
I'll never be ashamed of defending myself against a basher who tells lies about me. You should be ashamed for attacking the victim and not caring about the facts of the matter.

I proved my case, others have confirmed it. The one attacking me was bashing me because he didn't read the report he was quoting!

FWIW, I'm not cocky, just very dismayed. I find it tragic that people are so easily swayed by self-appointed authority figures, even when said authority figures are obviously spreading falsehoods. This guy claims to have a PhD and automatically you presume he's right, despite the facts, which you reject out of hand with "I don't care who's right". It is not a difference of opinion when someone tells a lie about what someone else said in order to denigrate them.

This is why politicians lie, by the way. It is very effective.

I rest my case - he provided facts and you bashed him relentlessly and exceptionally harshly, given the context. It is not WHAT you say but HOW you say it. You have refused to give your credentials... what are they?

D
 
how many years will it take for this company to rival Intel's chips?

They already do, if you are limited by power - as in MIPS per mW. What system with an Intel x86/IA-32 CPU will run off a coin-cell battery for any useful length of time?

And "Green" computing is getting to be a big thing these days.
 
I never believed the iPad would be troubled by the ordinary; I adore it anyway... :apple:
 