-hh:
I apologize for my rudeness and respect that you took the high road. My observations were limited to page 12 of this thread. Having looked back a page I see more illogical statements coming from Yamcha than you.
Not a problem...it's not uncommon for readers to 'jump into' an ongoing conversation and to have missed some of the prior dialog.
While I appreciate your algebraic reasoning, we both know there are too many factors to make it compelling.
Understood & understandable. The 'algebra' wasn't intended to be complete, but merely to lay out the core structure as an illustration.
EDIT: I just realized what you were missing from your equations. Yamcha's main argument (I believe) is that PCs give more value for the money. So the point is that (Windows)(Fast HW) < (OS X)(Slow HW) while the productivity of the two sides is equal.
Yes, it's absent. Since 'cost' is more complex than mere purchase price (e.g., alluding to lifecycle costs), it would have been a distraction at that time. Cost is another variable, and while I agree that it can't be ignored, it could be skipped so as to not lose sight of the more basic contradiction. In some businesses, factoring cost goes by the name of "CAIV" (Cost As an Independent Variable).
Can you explain what you define as "lifecycle cost"? It sounded to me like a property of the hardware and software, which would exclude the nebulous "user productivity factor." If that is the case I don't see how a Mac would come out on top.
The problem with defining lifecycle costs is that it can be done at many different levels of resolution, depending on how precise you want your numbers to be.
In principle, you can't ignore anything, which means that the nebulous productivity factor has to be addressed. However, we can initially set it aside and be pragmatic by following the Pareto Principle (aka the 80/20 rule): look for the major contributors and ignore the small ones, which will hopefully give us decent confidence that we're on the right track and that those small details won't fundamentally change the final conclusions. To be correct, though, we really need to go through everything, and to be thorough, perform a full-blown sensitivity analysis too. The problem with that is that we'll end up spending dollars to account for pennies.
Here's one basic 'lifecycle cost' calculation approach:
Lifecycle Cost/month = (Purchase Cost + Electricity Costs + Operating Costs + Downtime Costs + Repair/Upgrade Costs - Residual Resale Value) / (months in service)
For example:
System A:
= ($2700 + $500 + $100 + $700 - $1000)/60 months
= $3000/60 mo
= $50/month
System B:
= ($1500 + $400 + $400 + $500 - $400)/48 months
= $2400/48 mo
= $50/month
Granted, these values are merely representative of the basic conceptual process, and in this case I've purposely rigged them to result in exactly the same notional $50/mo lifecycle cost, to illustrate that two very different starting points can end up at the same result.
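The arithmetic above can be sketched in a few lines of Python. The figures are the same notional ones from the worked example (deliberately rigged, not real data), and the function name is just illustrative:

```python
def lifecycle_cost_per_month(costs, resale_value, months_in_service):
    """Sum of lifecycle costs, net of residual resale value,
    spread over the months the system stays in service."""
    return (sum(costs) - resale_value) / months_in_service

# System A: pricier upfront, longer useful life, higher residual value
print(lifecycle_cost_per_month([2700, 500, 100, 700], 1000, 60))  # 50.0

# System B: cheaper upfront, shorter useful life, lower residual value
print(lifecycle_cost_per_month([1500, 400, 400, 500], 400, 48))   # 50.0
```

Passing the cost line items as a list keeps the sketch agnostic about which entry maps to electricity, operating, downtime, or repair/upgrade costs.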
Using such a framework, we can apply and test various assumptions. For example, in the above, the first system costs more upfront, but it also has a higher residual value, a longer useful life and lower downtime costs - characteristics that are all generally attributed to Apple / OS X systems. FWIW, you'll perhaps also notice that I made the "A" system's repair/upgrade costs higher ($700 vs $500).
The next step after this would be to add in complexity for the factors that we ignored on the first cut, such as trying to better quantify the costs of downtime and those nebulous variations in productivity. In this case, I've simply pulled a dollar value out of my wazoo to populate the former, and I didn't even bother to try to account for the latter.
FWIW, my first cut at modeling downtime would ideally be to estimate how many hours per month one spends on a computer system doing "maintenance work" of one sort or another, and then multiply this by a reasonable hourly rate for the value of one's time.
However, I don't have that data, so another reasonable approach is to apply a "frequency of problems" ratio, which then gets multiplied by a 'how much did one problem probably cost?' swag. Here, I chose a ratio of 1:4, based roughly on the average number of forced reboots that I generally encounter per month, and $100, which is one fully-burdened man-hour, rounded off. What should be obvious but needs to be clearly stated is that if one considers their time worth $0/hour, this factor drops out.
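That frequency-times-severity swag can be sketched as below. Reading the 1:4 ratio as one costed incident per four months is my assumption (the original phrasing is ambiguous), and the $100 figure is the rounded fully-burdened man-hour from the text:

```python
def downtime_cost_per_month(problems_per_month, cost_per_problem):
    """Frequency-of-problems ratio times a swag for the cost per incident."""
    return problems_per_month * cost_per_problem

# Assumed: one forced reboot every 4 months (the 1:4 ratio),
# costed at one fully-burdened man-hour (~$100) per incident.
print(downtime_cost_per_month(1 / 4, 100))  # 25.0

# If one values their own time at $0/hour, the factor drops out:
print(downtime_cost_per_month(1 / 4, 0))    # 0.0
```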
Hope this helps you to understand the general approach I'm applying. There's a lot of stuff in the field of "Lifecycle Management" cost accounting that can be boring as all get-out when it gets down into the weeds.
In practice, one should always start with a list of which items are probably important and which are not, and make explicit what simplifying assumptions you're making, to help you find the Elephants first, then the Lions, Buffalo, etc. ... and hopefully, you'll never have to worry about the mice. But the paradigm's contradiction is that if there's a whole bunch of mice, even something that initially seemed small enough to ignore can quickly add up to something that can't really be ignored. As such, both frequency and severity must be monitored, which happens to be the same process used in a classical Risk Assessment.
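That elephants-first triage can be sketched as a simple frequency-times-severity ranking, as in a classical risk assessment. The item names and numbers here are made up purely for illustration:

```python
# Hypothetical cost items: (name, incidents per month, $ cost per incident)
items = [
    ("hardware failure", 0.01, 800),  # rare but severe
    ("forced reboot",    0.25, 100),  # the 1:4 ratio example
    ("minor glitch",     20,   2),    # the "mice": individually tiny
]

# Expected monthly cost = frequency x severity; rank the biggest first
ranked = sorted(items, key=lambda item: item[1] * item[2], reverse=True)
for name, freq, severity in ranked:
    print(f"{name}: ${freq * severity:.2f}/month")
```

Note that in this contrived example the "mice" (20 glitches/month at $2 each = $40/month) outrank both larger incidents, which is exactly the whole-bunch-of-mice caveat above.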
-hh