
wankey

Original poster, Aug 24, 2005
I can't believe the number of YouTube amateur reviewers going "64-bit = faster." That is a complete lie. 64-bit doesn't mean it's automatically faster; sometimes it even slows the program down. The only real benefit of 64-bit comes when you're dealing with heavy RAM usage and high-bandwidth software like gaming, video editing, photo editing, and the like.
 
I can't believe the number of YouTubers making tutorials on how to use the rm command, as if it were better than emptying the Trash. One user went on for ten minutes about how, when you empty the Trash, the file isn't really gone; it's just compressed and can be brought back by hackers.


He had HUNDREDS of comments thanking him for teaching such useful information.
 
One user went on for ten minutes about how when you empty the trash the file isn't really gone, it's just compressed and can be brought back by hackers.
To clarify this, Trash does not compress files. Nor does rm.

However, neither of them actually removes the file data by default, just the filesystem reference, so it can be recovered by someone with physical access to your hard drive. Secure Empty Trash and srm both overwrite the file's data (multiple times, I believe) before removing it, but the default Empty Trash and rm do not.

Oh yes, on 64-bit, wankey is perfectly correct. It's not an instant speed-up or anything (though Snow Leopard speeds things up in other ways); it only helps with high RAM usage or highly complex programs (when properly optimized for 64-bit).
 
To clarify this, Trash does not compress files. Nor does rm.

Right, my point was that there's no compression, and once those blocks are overwritten with new data there's no way to recover it. In fact the YouTube tutorial also advised against secure emptying; he said it was exactly the same as emptying the Trash.
 
In fact the youtube tutorial also advised against secure emptying, he said it was the exact same as emptying the trash.
YouTube is well known to be the source of all stupidity, right? ;)

As proof that it's not the same thing, dump a large file (or a lot of small files) into the Trash and secure empty it. Load up Activity Monitor while it does its thing. You'll note the "Locum" process (Latin for, approximately, "placeholder") chugging away; near as I can tell, it's the overwrite-with-random-data process. You should also notice your disk activity going nuts as a result.
 
I can't believe the amount of youtube amateur reviewers going "64-bit = faster" That is a complete lie. 64-bit doesn't mean it's automatically faster. Sometimes it even slows the program down. The only real benefit of 64-bit is when you're dealing with a lot of ram usage and high bandwidth software like gaming, video editing, photo editing, and other things.

The situation is more interesting on OS X. The combination of the new Objective-C runtime (64-bit only) and the 4:4 memory split for 32-bit apps on OS X means that the benefits of 64-bit apps are somewhat larger than they are on other platforms.

That said, yeah, you'll definitely see apps where increased memory/cache/bus pressure from 64 bit slows things down a bit.
 
It's faster for me. Booted Snow Leopard holding 6 and 4 (the 64-bit kernel) and now everything is super snappy.
 
Not necessarily.
Once the blocks originally used for a file are written with another file, it's very, very difficult to get anything useful out of them. Not impossible for a tiny file, functionally impossible for a large file (filling full blocks), and quite likely actually impossible if the blocks are intentionally overwritten with random data several times by Secure Empty Trash or srm.

It's faster for me. Booted Snow Leopard holding 6 and 4 and now everything is super snappy.
Placebo effect! (There are small speed differences, but they shouldn't be drastically noticeable.)
 
As one of the guys who invented AMD64 (I refuse to call it x86-64), I'd like to point out that it will generally make things faster. Even running 32-bit software under a 64-bit kernel will make things faster, because AMD64 provides more registers and some other improvements that speed up all code. At AMD I found this to be about a 20% improvement across the board.

The exception is that if you take a 32-bit program and simply recompile it without thinking about your data structures, you may end up simply doubling the amount of RAM your program uses. Many times it makes sense to keep some structures 32-bit and others 64-bit. (For example, I may have 64-bit-addressed data, each item of which is assigned a color. The color field need not be 64 bits, etc.) Depending on how much physical RAM you have, and on the particular memory subsystem implementation, you may end up worse off due either to paging or to lack of memory bandwidth.
 
Once the blocks used for a file originally are written for another file, it's very very difficult to get much useful out of it. Not impossible if it's a tiny file, functionally impossible if it's a large file (filling full block), and quite likely actually impossible if it's intentionally overwritten randomly several times by Secure Empty Trash or srm.

You'd be surprised.

There's a reason the DoD specifies 7 wipe passes as policy.
 
As one of the guys who invented AMD64 (I refuse to call it x86-64),
Everyone called it AMD64 until Intel realized they were going to have to adopt it. I always call it AMD64; if you invent it, you name it. Now there appears to be some kind of dirty campaign to claim it is x86-64, and AMD is trying to change it. I ask: if AMD created it, isn't it theirs to name? And I think AMD64 was the name well before the first actual release, since Linux and FreeBSD call the architecture amd64 in their source trees. Apple incorrectly uses x86_64 (which at least makes sense). Microsoft uses x64 (which makes no sense, presumably so as not to offend Intel).

Even running 32-bit software under a 64-bit kernel will make things faster, because AMD64 provides more registers and some other improvements that make all code faster.
Wouldn't this only be in kernel or driver code?

...you may end up simply doubling the amount of RAM your program uses. Many times it makes sense to keep some structures 32-bit and others 64-bit. (For example, I may have 64-bits addressed data, each of which is assigned a color. The color field need not be 64-bits. etc.)

This shouldn't be too big an issue, since no compiler I am aware of increases the size of int; only pointers grow.
 
Wouldn't this only be in kernel or driver code?



This shouldn't be too big an issue, since all compilers I am aware of do not increase the size of int; only pointers increase in size.

On Linux, I used to see a lot of problems where longs would change size. By now it might all be fixed; remember, I was doing all this in the early days of AMD64, even before it was publicly released.

The increased registers help all user-ring code too, so long as the code is compiled to be aware of AMD64 (even in 32-bit mode). If the code is not compiled to be aware of AMD64, it still helps somewhat because of register renaming: internally there are more registers than appear in the architecture, and they are assigned an architectural meaning depending on what is needed (so, for example, several of them may correspond to the DX register, each for a different process). Since AMD64 has more architectural registers, it tends to have more physical registers. In pure 32-bit mode, some of these are typically unavailable. But, definitely, the 20% improvement comes from recompiling with AMD64 turned on, in 32-bit. (My experiments were all done with gcc.)
 
The exception is that if you take a 32-bit program and simply recompile it without thinking about your datastructures, you may end up simply doubling the amount of RAM your program uses.

It's sometimes even worse than that, actually. If a particular structure goes from below the malloc threshold to above it, it's not that hard to accidentally quadruple its size and/or kill allocation performance. The threshold was raised to 127 kB in Snow Leopard, both for performance and to help avoid this scenario (unfortunately at the cost of increased heap fragmentation).
 
It's sometimes even worse than that, actually. If a particular structure goes from below the malloc threshold to above it, it's not that hard to accidentally quadruple its size and/or kill allocation performance. The threshold has been raised to 127kB in Snow Leopard both for performance and to help avoid this scenario (unfortunately with an associated cost in increased heap fragmentation).

Good point.
 
There are some noticeable improvements in places you might not expect.

I have some pretty bad PDFs (nothing big, but the content on some of the pages really slows down most PDF readers and crashes some!), and if I look at those PDFs in Preview's 32-bit mode, I can see that it struggles with them more than it does in 64-bit mode.
 
youd be surprised

theres a reason why the DoD has 7 swipes as a policy
There's always a margin for error with policies like that. It's not as if they have something that can recover data from 6-pass rewrites but not 7; if they did, the policy would likely be up around 15 passes or more. Once a block has been fully overwritten (barring caches and similar mechanisms duplicating the data elsewhere), it's nearly impossible to recover much, if any, prior data from it.

Indeed, it's still widely accepted that once data has been fully overwritten, even only once, it's functionally (not theoretically) impossible to recover anything significant. If you can provide sources that say otherwise (for modern hard drives), I'd be very interested in reading through them.

The main danger with letting the OS 'naturally' overwrite the data is that blocks are not consistently reused (meaning you might or might not 'get' the block you deleted when you create a new file), nor is a new block always fully filled (the ends of files, and files under 4 KiB). Similar problems exist (as I mentioned) with caches, metadata indices, and other silent duplication of data.
 
The increased registers help all user-ring code, too, so long as the code is compiled to be aware of AMD64 (even in 32 bit mode). If the code is not compiled to be aware of AMD64, then it still helps somewhat because of register renaming - internally there are more registers than appear to the architecture, and they are assigned an architectural meaning depending on what is needed. (So, for example, several of them may correspond to the DX register, each for a different process). Since AMD64 has more architectural registers, it tends to have more physical registers. In pure 32-bit mode, some of these are typically unavailable. But, definitely, the 20% improvement comes from re-compiling with amd64 turned on, in 32-bit. (My experiments were all done with gcc).

That is interesting. I would have thought that the results of something like that (amd64 awareness) would be identical to what you would get if the compiler is aware of register renaming.
 
That is interesting. I would have thought that the results of something like that (amd64 awareness) would be identical to what you would get if the compiler is aware of register renaming.

If the code is aware of amd64, then it can make explicit use of additional architectural registers, thus reducing the number of times it needs to go to the memory subsystem.
 
If the code is aware of amd64, then it can make explicit use of additional architectural registers, thus reducing the number of times it needs to go to the memory subsystem.

I mean 32-bit code... x86 code. Does it do that by assuming more rename registers, unrolling loops further, etc.?
 
I mean 32-bit code... x86 code. Does it do that by assuming more rename registers, unrolling loops further, etc.?

The 32-bit code can use the extra architectural registers (though not the 64-bit versions that are overlaid on the 32-bit ones), meaning it doesn't have to perform as many time-consuming load/store instructions. There are also some improved addressing tweaks and things that can be of value to 32-bit code.
 