I know this thread is about the 970FX... but I'm curious as to where IBM's 750GX is. It was supposed to go into production in December, as stated in IBM's roadmap, but of course that has come and gone. The G3, while absent from Apple's lineup from here on out, is still very applicable for upgrades to existing Macs.

Perhaps IBM decided to move the 750GX down to a 90nm core along with the 970FX? Considering the 750GX @ 130nm could reach 1.1GHz (8W), perhaps with a die shrink it could clock at speeds similar to Intel's Pentium M series? The G3, while lacking SIMD, is still a very highly refined CPU: it clocks fast without breaking its 5-stage pipeline tradition, and it is currently the only PowerPC with 1MB of 1:1 L2 cache.
 
Originally posted by Plutoniq
I know this thread is about the 970FX... but I'm curious as to where IBM's 750GX is. It was supposed to go into production in December, as stated in IBM's roadmap, but of course that has come and gone. The G3, while absent from Apple's lineup from here on out, is still very applicable for upgrades to existing Macs.

Perhaps IBM decided to move the 750GX down to a 90nm core along with the 970FX? Considering the 750GX @ 130nm could reach 1.1GHz (8W), perhaps with a die shrink it could clock at speeds similar to Intel's Pentium M series? The G3, while lacking SIMD, is still a very highly refined CPU: it clocks fast without breaking its 5-stage pipeline tradition, and it is currently the only PowerPC with 1MB of 1:1 L2 cache.

The question is why you would have a 750GX when you can have a 970FX for the same price or cheaper. IBM can print nearly twice as many 90nm chips on one of those large wafers, and Apple ordering even more 970s would drive the prices even lower. IMO, there is no reason to keep producing a 750GX. The 970FX will become the staple processor, IMO, with speed and other features such as SMP to differentiate the line. Anything else would only add to Apple's expense, and for no good reason.
 
Originally posted by Sun Baked
Not really. The G4 at 1.33GHz runs at a max of 30-45W depending on the chip (74x5/74x7), version number, and core voltage. Plus, the rest of the numbers seem to be for single CPUs.

So the single 130nm PPC970 may very well have been cranking out 97W @ 2.0GHz -- it just seems quite high when Motorola was always talking about their 10W G4s.

I_am_Andrew posted this MAXIMUM POWER chart at Ars... and if you follow the discussion, Apple may have swapped a couple of numbers.

These graphs show maximum thermal design power and so are misleading to use in a comparison; you can't reach this number even at max load.
 
And why is IBM making a comparison to the Itanium 2 in terms of power consumption? The Itanium 2 is a workstation chip and the G5 is not. Show the POWER5 in the chart as well.
 
Originally posted by army_guy
And why is IBM making a comparison to the Itanium 2 in terms of power consumption? The Itanium 2 is a workstation chip and the G5 is not. Show the POWER5 in the chart as well.

Hmmm... what makes the Itanium 2 a workstation chip and the G5 not? How do you define a workstation anyway? I guess the definition will be as clear as Apple's definition of a personal computer. :)
 
Originally posted by stingerman
In practice they matter a lot less than you would have us believe. Look-ahead algorithms minimize, and in many cases circumvent, latency factors altogether. And yes, a faster bus does minimize latency issues. Also, the G5 bus is switched with out-of-band management, which is much faster than the shared buses and contention slowdowns of the Xeon systems.

Another factor affecting performance is bandwidth, and the G5 architecture has greater bandwidth. So when you are dealing with larger data sets, latency is a much smaller component of the performance equation. Apple has designed the G5 to perform very well on multimedia applications, including photos, video and music. Transferring large textures for realtime 3-D graphics benefits greatly as well.

So to speak of latency alone as the driver behind performance is misleading. I believe the G5 has the best balance of latency, bandwidth and bus frequency for an overall faster system. Couple that with IBM's advanced look-ahead algorithms and it isn't an issue at all.

Yes, the G5 has good bandwidth, but latency is still important. The Opteron has a max bandwidth of 6.4GB/s x 2 assuming the OS is NUMA-aware, though in the real world it manages around 5.5-6.0GB/s. Couple 12.8GB/s with ultra-low latency and you have awesome memory performance.
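
To put rough numbers on the latency-versus-bandwidth point, here is a back-of-envelope sketch in Python; the latency and bandwidth constants are assumptions picked for illustration, not measured figures for any of these chips:

    # Time to fetch a block ~= fixed latency + size / bandwidth.
    # Both constants are illustrative assumptions, not measurements.
    LATENCY_NS = 120.0     # assumed round-trip memory latency, in ns
    BYTES_PER_NS = 6.4     # assumed sustained bandwidth (6.4GB/s = 6.4 bytes/ns)

    def transfer_time_ns(size_bytes):
        """Latency term plus streaming term, in nanoseconds."""
        return LATENCY_NS + size_bytes / BYTES_PER_NS

    for size in (64, 4096, 1 << 20):   # cache line, page, 1MB block
        t = transfer_time_ns(size)
        print(f"{size:>8} bytes: {t:10.1f} ns total, latency = {100 * LATENCY_NS / t:5.1f}%")

The shape is the point: the fixed latency is over 90% of the cost of a 64-byte fetch but a rounding error on a 1MB stream, which is why both sides of this argument are right for different workloads.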
 
Originally posted by jj2003
Hmmm... what makes the Itanium 2 a workstation chip and the G5 not? How do you define a workstation anyway? I guess the definition will be as clear as Apple's definition of a personal computer. :)

The G5 is not a workstation chip/computer; the features and advantages that define a workstation are not part of the G5 package. IBM would not be producing the G5 if it were classed as a workstation, as it already has the POWER series for that.
 
What a BASIC workstation is:


1) Expandability.
2) Internal RAID capability, at least 1 and/or 5.
3) Use of ECC/registered memory.

I am not criticising the G5, not at all; it's a good machine for the market it's aimed at, but it is a HIGH-END PC, not a workstation.

The G5 is, however, not suitable for everyone and has limitations, some serious, some not, and so do OS X and Apple's choice to allow OS X to run only on the G5/Apple machines.

These are my own personal gripes with the G5/OS X/Apple as to why Macs are not suitable for ME. These are not complaints; they are things I am pointing out to all those saying anyone can SWITCH.

1) Can't run my collection of PC games.
2) Can't run 3D Studio Max and Softimage.
3) No MS Visio. MS Office, yes, but when writing technical reports Visio is the tool I use for the diagrams/schematics.
4) Unable to use a Linux distribution to run EDA tools such as Mentor/Cadence. I am pushing it here!!! Who's going to want to run EDA tools on a Mac?

5) No room for more hard drives; where's my RAID 5?
6) What about my AGP Pro 110 card, a Wildcat 7210?
7) My ECC memory. Errors do happen; no one wants errors, period.
8) Unable to upgrade the system (CPUs etc.).

All of you people say x86 is dead, and here we have Mentor/Cadence/Synopsys releasing $15k-$500k+ tools for Linux x86, with Opteron (64-bit applications) now following.
 
Originally posted by ffakr
Where exactly is the trolling?
The guy comes in here and says 'I've looked at the Opteron and the 970 (in the G5), and the Opteron is better for my work... yet the G5 is better at multimedia.'

Does anyone here argue with the fact that the Opteron's on-die memory controller has less latency than systems that still use a northbridge? The guy's code lives and dies on memory access. The Opteron has an advantage. That's not a troll, it's life.

The 970 has a higher theoretical IPC and lots of bandwidth, but it isn't the fastest processor for every task under the sun. The Opteron/Athlon 64 is an excellent CPU. My next machine will be a G5 (not an upgrade to my Athlon), but that doesn't mean I wouldn't want an Athlon 64 also. :)

As for Big Mac... it's a cluster. It is a relatively loosely coupled machine (even though InfiniBand is pretty damn fast). It's going to run the code that people bench clusters with very fast: code that parallelizes well, calculations that can be broken into discrete 'chunks'. From what Tortoise is saying, the G5 would indeed work fabulously for calculations where you are more concerned about parallelism than about every CPU's ability to access memory as fast as possible. If you are that memory-bound, you will probably have issues achieving enough parallelism anyway.

Tortoise, I'm curious... are you benching the individual systems, like a dual Opteron vs. a dual G5, where you are dealing with a shared memory pool for the processors? Or do you see a significant advantage in clusters, where each node only has fast access to its local memory pool?

just curious.
Ffakr.

I agree with some of your points. The no. 3 supercomputer is no. 3 in terms of processing performance; somehow they have forgotten that memory bandwidth is the number 1 factor when building a supercomputer (ask Cray), and the G5 does not have that. A supercomputer (Cray T1, X1 etc.) and a cluster are two very different things. Memory is handled 1000x more effectively in a SC, whereas a cluster chokes under the load.

I've just been looking at some Cray X1 specs (the X2 is on the way); they're impressive, to say the least, and they have quite a few slides on SCs that make interesting reading. What's interesting is that the cost of a cluster ends up being MORE EXPENSIVE over a usage period of, say, 5 years: more expensive in terms of power consumption, node counts, efficiency at peak operation with higher node counts, replacement of faulty components, and downtime.

What scares me is Red Storm: 10,000 Opteron 2xx series CPUs with Cray interconnects. The Cray X1 interconnects shift 50GB/s sustained, and they're a few years old. The line between a SC and a cluster will be blurred when Cray releases this.
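
To illustrate that choking, here is a toy scaling model in Python; every input is hypothetical, chosen only to show why adding nodes stops helping once the interconnect rather than the compute is the bottleneck:

    # Toy cluster-scaling model: perfect work split across nodes, plus a
    # fixed data-exchange cost per step. All inputs are made-up figures.
    def speedup(nodes, compute_s, exchange_bytes, link_bytes_s):
        """Speedup over one node when each step ends with a data exchange."""
        comm_s = exchange_bytes / link_bytes_s
        return compute_s / (compute_s / nodes + comm_s)

    for n in (2, 16, 128, 1024):
        chunky = speedup(n, compute_s=10.0, exchange_bytes=1e6, link_bytes_s=1e9)
        chatty = speedup(n, compute_s=10.0, exchange_bytes=1e9, link_bytes_s=1e9)
        print(f"{n:>5} nodes: chunky code {chunky:7.1f}x, bandwidth-hungry code {chatty:4.1f}x")

The chunky job scales almost linearly, while the bandwidth-hungry one flatlines near 10x no matter how many nodes you add; that gap is exactly what Cray-class interconnects and true shared-memory SCs are built to close.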
 
Workstation...

I've been doing computers for 25+ years, and am really curious about that "not a workstation" processor thing. What would qualify it for you?

Certainly, the POWER5 is a higher-end workstation chip. But that doesn't mean the 970 isn't one. IBM usually has a spread of workstations that go from their lower-end single-chip solutions (604s and variants) up to their multi-chip solutions (POWER2, 3, 4, and 5). But that doesn't mean their single-chip solutions weren't workstations. Compare the 970, and it is faster than anything on the planet of just a few years ago.

There are many workstations (Sun's especially) that were not very expandable. Pizza boxes with one slot or none were popular workstations. So expandability is not the issue. And the G5s are highly expandable.

Internal RAID is certainly not a qualifier; most workstations have not had internal RAID (see above). But I can certainly do RAID 0 or RAID 1 on a G5, and you can actually shoehorn more drives in and do RAID 5 if you want. Personally, I think a SAN (as in a Fibre Channel XRAID) would make a much better RAID solution than trying to shoehorn one inside the machine anyway, and it can service many machines.

ECC is certainly NOT a requirement. I've known many workstations that did not support it, though it is common.

Besides, all those things would qualify an Xserve as a workstation, yet the G5 is not?!? But the discussion was about the 970 -- not the box it is put in. The fact is, the 970 can support all those things.

So what makes the 970 less of a workstation-class processor than one that is beneath it?

For me, a workstation is a function of the software and the usage you put it to -- not just how many slots it has, what kind of RAM it uses, or your storage subsystem choices.
 
I said a basic workstation. Those Sun pizza boxes usually run on a distributed network, so they're an exception. I am probably being too critical about these points, but I was referring to the Apple G5, not the 970 specifically; I should have been clearer. It takes a lot more than $2999 for me to build a machine I can call a workstation, and I've had no complaints from people who don't even blink at spending $5000-$7000 on a machine.
 
Bargains...

Maybe you don't like bargains.... :)

Seriously, by the time you buy a dual-processor version with two drives (so you can RAID) and put some serious RAM in it, you're at a lot more than a couple thousand. I think I was at $3500 for one I built up recently. Then you have to talk software.

But the point still is that for many things the G5 will outperform the Itanic or P4 or Athlons, and you can definitely get workstation-class performance out of one. I was getting "workstation class" performance out of PPC 601s or 68000s a few years ago. Prices and setups may vary. I do agree Apple is more limited on choice/variety, and you may want something that you can't get easily the way Apple has bundled things. But saying "it's not a workstation" is sort of derisive and narrow-minded.
 
More...

1) Can't run my collection of PC games.

I don't consider games the deciding factor in workstations. But the fact is that you can run many of the same games. If you need a gaming "workstation", sure, a PC has more choices.

2) Can't run 3D Studio Max and Softimage.

I don't run those. But there are plenty of 3D and rendering packages for the Mac. If you are only willing to run one package and not consider others, then sure, stay on your platform. That doesn't mean plenty of others aren't using it for workstation work, because the packages they chose worked fine for them.

3) No MS Visio. MS Office, yes, but when writing technical reports Visio is the tool I use for the diagrams/schematics.

I use OmniGraffle, which has better presentation and seems faster (and imports Visio files). Honestly, I haven't done a competitive evaluation -- but there are choices out there, if you want to look.

4) Unable to use a Linux distribution to run EDA tools such as Mentor/Cadence. I am pushing it here!!! Who's going to want to run EDA tools on a Mac?

First, it is not MAC; that's an address. It is Mac (not an acronym). This is the equivalent of me claiming to know PCs and calling them "those electronic thingies".

There are plenty of Linux distros for the Mac. But usually when I'm setting up Linux, I'd be setting up a server -- not a workstation. We were discussing workstations, remember?

There are certainly cases where you might turn a machine into a turn-key workstation -- that's your choice. But for general productivity and running more than one thing, you generally want a UI that's a lot better than anything I've seen on Linux.

5) No room for more hard drives; where's my RAID 5?

I use externals and SANs. They work better for the workgroup.

6) What about my AGP Pro 110 card, a Wildcat 7210?

Yawn. Are we going to play model wars now? Where's my one model of one card that will do one thing?

No one said the Mac (or G5) is better for everything, or that it will work for the one configuration you set up. But calling it "not a workstation" is lame. Saying it isn't the right platform for you because of X, Y and Z requirements is more reasonable.

7) My ECC memory. Errors do happen; no one wants errors, period.

Yawn. Detectable memory errors are like 1 in 50 years with ECC if you look at MTBFs. I don't know what the OS support is -- usually the machines just crash or freeze to prevent corrupt data. Big savings to me. (NOT!)
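
For what it's worth, the back-of-envelope arithmetic behind a claim like "1 in 50 years" looks something like this; the FIT rate here is back-derived to match that claim, not a datasheet value:

    # FIT = failures per 1e9 device-hours. The rate below is an assumed
    # residual (post-ECC) figure for illustration, not a vendor number.
    ASSUMED_FIT_PER_MBIT = 0.28   # hypothetical residual error rate
    MEM_GB = 1.0                  # assumed memory size

    megabits = MEM_GB * 8 * 1024
    errors_per_hour = megabits * ASSUMED_FIT_PER_MBIT / 1e9
    years_between = 1 / errors_per_hour / (24 * 365)
    print(f"~{years_between:.0f} years between residual errors")   # ~50

Raw, uncorrected DRAM soft-error rates are orders of magnitude higher than that residual figure, which is the whole argument for ECC in the first place; the dispute is really over how much the residual rate matters to you.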

But if you do need it, just buy an Xserve and XRAID; that has the ECC and RAID stuff in a better box (IMHO), and is even more in the price range you're targeting ($5-10K is no problem).

8) Unable to upgrade the system (CPUs etc.).

That's Apple being stupid. So what? I've played the same game with PCs. Try to put an Athlon in a P4 board, or try to upgrade. Facts are facts: most upgrades are VERY selective as to which processors they'll take, and 95% are never used. Buses change, and so on. So making a machine upgradable means limiting its potential (by putting the new chip in a machine that wasn't designed for the new optimum). That's why most people don't upgrade...


[[All of you people say x86 is dead, and here we have Mentor/Cadence/Synopsys releasing $15k-$500k+ tools for Linux x86, with Opteron (64-bit applications) now following.]]

I've never said that x86 is dead. I think it is a lame architecture that they've done a FANTASTIC job of dragging forward far longer than it deserved. Who thought a 35-year-old ISA would still be hanging around? But hey, the damn things still work, and they've bolted on enough cruft and new modes that it works decently -- and I don't think it is going anywhere. But that's a testament to man's sloth and laziness, not to engineering quality. C'est la vie (such is life)...
 
You have put forth some interesting comments here and in other posts. Some of these topics I've talked about myself in the past, but I do have a few counterpoints to pass on.

First, the chip in question is the 970; the G5 is Apple's implementation of that chip in a tower. The 970 is as capable as any other processor for workstation-level implementations. Maybe not the highest-performance workstation, but it nonetheless has all of the accepted features.

I tend to agree very strongly with your concerns about expandability, and frankly don't understand why so many are so foolish as to let the marketing program scramble their brains. For the markets the G5 was targeted at, it simply does not have enough space internally for storage devices. So in a sense I disagree with you: it really isn't even a good implementation of a high-end PC, since storage expansion is critical there.

As to the statements on the OS, well, that is really garbage. If you don't like OS X, there is always Linux, and Linux is certainly a workstation-class OS. As for some of the listed software, it makes me think that maybe all you really need is a three-year-old PC and not a workstation. If so, just wait for MS to come out with their PC emulator.

Video graphics support does suck on the Mac platform. Hopefully Apple is aware of that and is addressing the problem.

As to the issue with engineering-level software, this is and always has been an issue with Apple hardware. It is rather a big shame, too. But I would not use the lack of support here as an argument against calling the G5 a workstation. The workstation-type applications that do run on the G5 just aren't the mainstream engineering applications. With the advent of OS X, Apple's X11 subsystem and the 970, I would not be surprised to find that Apple is trying to facilitate ports. Now, this may not help you personally, but it does open more "workstation" markets.

I guess what I'm trying to say is that the G5 implementation does have a lot of issues as far as being a machine that professionals can respond to. Some of these Apple could easily address if it wanted to. The problem, though, is not the 970 itself, which would be a bit ridiculous to suggest, but the implementation. The G5 implementation does serve a subset of the workstation market. It is a different subset than the one you are familiar with, but a market nonetheless.

The question then becomes whether Apple has any intention of releasing a workstation-level machine. To be honest, I thought that a big brother to the Xserve, the rumored 3U server, would have made an excellent workstation given the right expandability. That machine has yet to see the light of day. I like the thought of a combined server/workstation implementation because, frankly, the dedicated "workstation" market is going to disappear; the writing is on the wall here. So one has to wonder where Apple stands with respect to the high-performance computing market.

Thanks
dave


Originally posted by army_guy
What a BASIC workstation is:


1) Expandability.
2) Internal RAID capability, at least 1 and/or 5.
3) Use of ECC/registered memory.

I am not criticising the G5, not at all; it's a good machine for the market it's aimed at, but it is a HIGH-END PC, not a workstation.

The G5 is, however, not suitable for everyone and has limitations, some serious, some not, and so do OS X and Apple's choice to allow OS X to run only on the G5/Apple machines.

These are my own personal gripes with the G5/OS X/Apple as to why Macs are not suitable for ME. These are not complaints; they are things I am pointing out to all those saying anyone can SWITCH.

1) Can't run my collection of PC games.
2) Can't run 3D Studio Max and Softimage.
3) No MS Visio. MS Office, yes, but when writing technical reports Visio is the tool I use for the diagrams/schematics.
4) Unable to use a Linux distribution to run EDA tools such as Mentor/Cadence. I am pushing it here!!! Who's going to want to run EDA tools on a Mac?

5) No room for more hard drives; where's my RAID 5?
6) What about my AGP Pro 110 card, a Wildcat 7210?
7) My ECC memory. Errors do happen; no one wants errors, period.
8) Unable to upgrade the system (CPUs etc.).

All of you people say x86 is dead, and here we have Mentor/Cadence/Synopsys releasing $15k-$500k+ tools for Linux x86, with Opteron (64-bit applications) now following.
 
Beat the Itanium? Ha ha, you must be out of your mind!!

I tried the Itanium just for a laugh a few months back, before I received my Blade 2500. A correctly set-up Itanium with the appropriate 64-bit drivers, OS and software crushes all 64-bit platforms, including all of the POWER series. I did HSPICE simulations ranging from 512MB to 6GB database sizes, and the Itanium was faster than the Sun and the POWER by roughly 15-20%.

Forgetting the cost for a moment, the Itanium is a good platform, but unfortunately it has come too late and delivered too little; companies using EDA software do not drop everything to upgrade to faster hardware regardless of the cost. The Itanium does not have user demand and does not have many 64-bit applications apart from HSPICE (Intel forced Synopsys to make a 64-bit Itanium version) and some of the Cadence tools.

A platform with no software support has no future regardless of cost or performance. Oh yeah, the cost: it's more than twice that of a fully configured Blade 2500.

The Itanium is a joke, and Intel should really scrap it and start afresh. It takes software companies too much money and time to port software to the Itanium, the performance gain is not there, and together with the high price tag that doesn't justify the platform or the performance you get. Stability is OK at best, but IMO it could be better.
 
Itanium

Army_guy: on top of everything you said, which I agree with, the Itanium is so power-hungry that you can't get anything close to the rack densities you get with competing processors. In terms of processing per cubic foot or processing per watt, the Itanium comes in dead last.
 
OK, I was talking about the Itanium as a workstation implementation.
I think the Itanium requires at least a 4U chassis, if I am not mistaken. Then again, as I said before, air cooling went obsolete long ago; server manufacturers are experimenting with liquid cooling for packing high-power CPUs into racks, but I've seen nothing more.

Manufacturers are being so f##king lazy about implementing it; I mean, you can move 200W+ of heat in a water block less than 1U thick.
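
A quick sanity check on that 200W figure, using Q = flow rate x specific heat x temperature rise; the flow and temperature numbers below are assumed round values, not the specs of any real water block:

    # Heat carried by a water loop: Q = m_dot * c_p * dT.
    # Flow rate and temperature rise are assumed for illustration.
    C_P_WATER = 4186.0       # specific heat of water, J/(kg*K)
    flow_kg_s = 1.0 / 60.0   # assume ~1 litre (~1 kg) of water per minute
    delta_t_k = 3.0          # assume a modest 3 K coolant temperature rise

    watts = flow_kg_s * C_P_WATER * delta_t_k
    print(f"~{watts:.0f} W carried away")   # ~209 W

So even a gentle flow with a small temperature rise moves 200W; the hard part, as the next post points out, is the radiator that has to dump that heat back into the air.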

Even the G5 in a 1U rack mount would get very hot, to say the least, and those nice fans would generate one heck of a racket, as they're very high-pitched. It's a server, so noise wouldn't be the problem; I'd worry about the temperatures.

To put things into perspective, Cray was using liquid cooling even in the early '90s, geesh.
 
Liquid...

Liquid cooling adds cost, complexity and weight, and hurts reliability, among other things. It doesn't dissipate heat, it just helps move it; you still need a fan and radiator to dissipate it into the air. And it is less efficient (you spend heat and power to move the heat somewhere else). Ultimately it can be necessary, but if you don't need it, don't use it.

(Also, Cray used supercooling systems, which are different from the plain liquid cooling most are thinking of.)
 
Yes, the heat has to be dissipated by a radiator; my mistake, corrected above. As for reliability, well, that depends on the pumps, and more pumps means redundancy. Yes, Cray uses supercooling; I think they cool the liquid below ambient and submerse the components in the fluid, but it's the same principle, only much cooler.
 
Hello all, this is the first time I've posted to this forum, though I've been reading it for a while. Actually, what prompted me to post was the torrent of misinformation from Mr. army_guy.

1. The Itaniums are server-class processors, not workstation-class processors. Note that the Xeon is Intel's workstation class, and the Pentium is the consumer class. The G5 is a more general workstation/server processor; it works well for both tasks.

2. The Itanium was Intel's failure, and came out far before both Apple's and AMD's 64-bit implementations. It's way overpriced and underpowered. The Itanium 2 is decent and competes with AMD's Opterons and the G5, although the Opterons usually have the advantage. The G5 and Opteron also support 32-bit execution, where the Itaniums do not. The Intel and AMD platforms, however, do scale up further in number of processors. Still, a dual-processor system (Intel, AMD, or Apple) makes a respectable server, cluster node, or workstation.

3. Unless you are doing development, games have nothing to do with a workstation. Even then, you have test systems which match the spec of your customers, i.e. consumer-class computers. For instance, one can spend 10k on a decent rendering workstation which would do very poorly with 3D games; the requirements are totally different, with games placing a high priority on a short (real-time) rendering pipeline.

It's fine to say these systems do not meet your needs as a consumer or a professional, but please refrain from distorting the discussion with semi-facts/definitions.

Also, Cray uses supercooling to bring the conductors to a superconductive state, thereby exponentially increasing efficiency. PCs stay well away from that, and it has nothing to do with water cooling, except that one can usually have a much larger radiator than heatsink.

Back on topic, though... the facts about the heat dissipation of the 970FX are sort of muddled. All of this is news to me, so from reading this thread and a couple of others, it just looks like there's a small chance the processor will make it into a PowerBook within a week.

Just wondering, though... surely there have to be a ton of people out there in the manufacturing plants, and droves of hardware testers, who would know of the imminent release of a PowerBook G5. I mean, it's not a small task. Wouldn't it have leaked that they are on the way?

And surely they wouldn't be shipping to stores already, right (referring to the price changes of the PowerBooks in stores)? Usually Apple announces that a new product is ready to ship, and then it actually ships at least a month later.

I was looking at a history of Apple releases, and it showed something like an 8-month period between the announcement of the G4 (or maybe it was the G3) PowerBook and the actual finished product.

But my question is, how do we usually get the rumor info? Is it usually from the hardware plant, or the shipping agency, or from a dev guy, or a tester, or what?

I'm just wondering, 'cause it seems to me like there should be a lot more noise if a PowerBook G5 release were this close.
 
Leaks...

Leaks vary greatly.

Most of them come from people in the marketing organization, thus things are pretty close to release. These people get the info sooner, and have to set price and marketing campaigns.

Some of them come from development. Those are sooner, but looser. Developers test many models that never make it to market, and often they are testing competing versions of the same model. And in manufacturing, they might trim out a feature or two.

Fewer of them still come from outside sources. Apple seeds some people with hardware (DVT seeds). The thing is that if you use those seeds, you usually have a strong reason for them, so you are going to be extra careful not to leak and lose your license. Getting seeds is a competitive advantage and worth a lot to your organization, so most have tighter security than Apple itself.

Apple outsources manufacturing of many lines. Sometimes they leak. They have less vested interest in keeping the secret, especially if Apple put out competitive bids and the other guy won. Again, they don't want to lose the business, but Asia is not really known for keeping industrial secrets. These leaks are usually fairly near release.

Some of them come from Apple itself, as misinformation or test marketing. I doubt there are many, but companies chase leaks in a variety of ways. A common one is to leak different "official" documents into different channels, with slightly different specs, and then see which specs make it into the rumors (to figure out what you need to plug). Or there are games with the competition, or politics, etc.
 
Originally posted by altaic
... Actually, what prompted me to post was the torrent of misinformation from Mr. army_guy....

I'm impressed that you can even read what he's written. I managed to wade through a couple of his posts before I got to the point of deciding that it was too much work to try to decode his poor grammer and typing.

... Surely there has to be a ton of people out there in the manufacturing plants, and droves of hardware testers who would know of the immediate release of a powerbook g5....

Would it be worth your job to you to leek info about the new PB if you worked at Apple? Not only loosing your job, you might also find yourself at the wrong end of a law suite. So, given the degree to which Apple works to keep a lid on things, I'm not at all surprized that info doesn't leek too much...
 
Glass houses

Originally posted by Snowy_River
I'm impressed that you can even read what he's written. I managed to wade through a couple of his posts before I got to the point of deciding that it was too much work to try to decode his poor grammer and typing.

Would it be worth your job to you to leek info about the new PB if you worked at Apple? Not only loosing your job, you might also find yourself at the wrong end of a law suite. So, given the degree to which Apple works to keep a lid on things, I'm not at all surprized that info doesn't leek too much...

I had a little trouble wading through your post, but I think I've translated it:

"grammer" = grammar
"leek" = leak
"loosing" = losing
"law suite" = lawsuit
"surprized" = surprised
"leek" = leak, again

Knowing that there are plenty of posters here whose first language is not English, I tend to cut everyone a lot of slack. I suggest you do the same, at least until your own skills improve. :)
 
Re: Glass houses

Originally posted by splashman
I had a little trouble wading through your post, but I think I've translated it:

"grammer" = grammar
"leek" = leak
"loosing" = losing
"law suite" = lawsuit
"surprized" = surprised
"leek" = leak, again

Knowing that there are plenty of posters here whose first language is not English, I tend to cut everyone a lot of slack. I suggest you do the same, at least until your own skills improve. :)

Point taken. Typos happen. ;)

FWIW, I try to cut people slack, too. I just draw the line when I have to read what they wrote several times just to figure out what they're trying to say. (And that line is simply my choice not to try to read their posts any more. It's not really meant to be a jab at them. However, I can see that, the way I phrased things, it might have come across that way...)
 
Re: Re: Glass houses

Originally posted by Snowy_River
Point taken. Typos happen. ;)

FWIW, I try to cut people slack, too. I just draw the line when I have to read what they wrote several times just to figure out what they're trying to say. (And that line is simply my choice not to try to read their posts any more. It's not really meant to be a jab at them. However, I can see that, the way I phrased things, it might have come across that way...)

Fair enough. Thanks for keeping things civil. I struggle with that a lot myself.
 