1) WTF is a "controlled drop"? Exactly how many variables are they actually controlling?
2) Why would I care about a "controlled drop" test result anyway when every time I drop my phone it is due to an unpredicted lack of control?
(PS - I'm not saying these tests are entirely useless... just wanting people to stop pretending to be so scientific about them)
The point isn't "your drop might be controlled, so let's try a controlled drop" (which seems to be what you're reacting against in particular); it's that they're doing repeatable drops. As in, "we drop every phone we test under the same repeatable circumstances (height, angle, surface, ambient temperature, etc.)". As an insurance company, they already have plenty of data in their files saying "we get claims on (X) phone at (Y) rate per number of policies issued".
They're testing this year's phones against last year's phones, and against other models, under the same repeatable circumstances. If they find that new model X is 20% more likely to break than older model Y in their tests, and they know how many claims they got on model Y and how many policies they wrote on it (and thus what percentage of policies resulted in a claim), then they can predict how likely they are to get claims on new model X, and adjust the rates they charge for insuring model X accordingly.
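To make that concrete, here's a minimal sketch of the arithmetic in Python. All of the numbers (policy counts, claim counts, repair cost, the 20% figure) are made-up illustrations, not real data from any insurer:

```python
# Last year's model Y: figures the insurer already has in its own records.
policies_y = 100_000                       # policies written on model Y
claims_y = 8_000                           # damage claims received on model Y
claim_rate_y = claims_y / policies_y       # 0.08, i.e. 8% of policies claimed

# Controlled drop tests suggest model X breaks 20% more often than model Y.
relative_breakage = 1.20

# Predicted claim rate for the new model X.
claim_rate_x = claim_rate_y * relative_breakage    # 0.096, i.e. ~9.6%

# With an average repair payout in hand, the premium can be set to cover
# expected payouts (plus overhead and margin).
avg_repair_cost = 250.00
expected_payout_per_policy_x = claim_rate_x * avg_repair_cost   # $24 per policy

print(f"Predicted claim rate for model X: {claim_rate_x:.1%}")
print(f"Expected payout per model-X policy: ${expected_payout_per_policy_x:.2f}")
```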
If they tested new phone X by just randomly flinging it over their shoulder, how would they correlate the results against the results they got for last year's model Y? What if they had randomly flung it a little higher, lower, harder, or softer than last year, or so it landed differently? Then they couldn't reasonably compare the results. So it's quite important to them (bottom line and everything) to test each phone in as close to the same way as possible.

They likely have a test rig that can drop a phone in various carefully controlled attitudes (face down, face up, various edges down, on a corner, at various angles, etc.), from various carefully controlled heights, onto various carefully controlled surfaces. They may also control for ambient conditions (temperature, humidity), since some materials are more likely to shatter at lower temperatures, for instance. And they record the settings they used for everything, along with precisely how much and what kinds of damage resulted (which determines what repairs would be necessary and how much they are likely to cost). So when they come back next year with the new model (or next week with a phone from a different company), they can test it in the same controlled, repeatable way.
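If you imagine the record-keeping side of that, it might look something like the sketch below: a structured record of the controlled settings plus the observed damage, so next year's phone can be dropped under identical settings and compared directly. The field names and values here are purely illustrative assumptions, not any real company's schema:

```python
from dataclasses import dataclass

@dataclass
class DropTest:
    phone_model: str        # e.g. "Model X (2024)"
    height_m: float         # drop height in metres
    orientation: str        # "face_down", "corner", "left_edge", ...
    surface: str            # "concrete", "hardwood", "carpet", ...
    temperature_c: float    # ambient temperature during the drop
    humidity_pct: float     # relative humidity during the drop
    damage: str             # observed result, e.g. "screen_cracked"
    est_repair_cost: float  # estimated cost to repair that damage

# Two drops under identical controlled settings can be compared directly.
drop_model_y = DropTest("Model Y (2023)", 1.2, "corner", "concrete", 21.0, 45.0,
                        "minor_scuff", 0.00)
drop_model_x = DropTest("Model X (2024)", 1.2, "corner", "concrete", 21.0, 45.0,
                        "screen_cracked", 250.00)
```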
The tests are highly useful to them in predicting how much insuring a given phone model is likely to cost. The results are interesting to us in a more general way (phone X smashes very easily, phone Y is surprisingly resistant to breakage), and they likely release the information to the public because it generates positive press for their company/brand, and because it might put the notion in a few people's minds that they could insure their phone with this company. They already did the tests for their own use; releasing a summary of the results is free advertising at that point.
Then again, there are also a bunch of idiots on YouTube who buy and drop brand-new phones simply because the math works out: "hey, if I do this, I'm likely to get advertising revenue that is substantially more than the cost of the phone, and it promotes my channel." They have no intention of using the data itself for any useful purpose; it's just page views. Those folks are generally not running controlled, repeatable tests.