In fact you're quite wrong. Regardless of touchscreen lag, nothing prevents you from entering input faster than 20 characters per second (if you were physically capable of it). Each input is simply displayed however many milliseconds later. It's no different than if your mouse input were delayed by some lag in the system: it doesn't prevent you from jiggling the mouse as quickly as you want on your desk. That input still goes through to your OS and whatever software you're running, and your cursor moves just as quickly as you moved the mouse, but it won't begin the sequence of movements until after the delay, and it won't stop moving until that same delay after you've stopped moving the mouse.
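To make that concrete, here's a toy sketch in Java (the numbers are made up and it's not tied to any real input stack) showing that a constant lag shifts every event later in time without reducing how many events get through per second:

```java
// Toy model: inputs arrive every 10 ms (100 events/sec); a constant
// LAG_MS pipeline delay shifts *when* each one is displayed, but the
// display rate is still 100 events/sec.
public class LagDemo {
    static final long LAG_MS = 100; // hypothetical system latency

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            long input = i * 10L;        // when the user acted
            long shown = input + LAG_MS; // when the screen reflects it
            System.out.printf("input @%3d ms -> shown @%3d ms%n", input, shown);
        }
    }
}
```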
Without instantaneous (perceived) reaction by your mouse to your hand movements, you would find it quite difficult to click on anything with precision; you're really just guessing, and can easily overshoot. The delays we're talking about here are relatively small, and you learn to compensate and build muscle memory, but a tenth of a second is nonetheless quite perceptible. So is a twentieth of a second, for the record, so Apple is hardly perfect here. You need to remember that our brains evolved to deal with a physical world where action and reaction are truly instantaneous. Any lag over 20 ms is going to be apparent for small movements like those made on a phone touch screen. For a tablet, something larger with quicker movements over longer distances, the lag will need to be reduced even further to be imperceptible. You don't have to take anyone's word for it: watch the Microsoft video and see for yourself. If you watch that and still try to tell us there's no meaningful difference between 100 ms and 50 ms, and again between 50 ms and 10 ms, then you're simply a liar. The actual engineers and scientists who have studied this problem will call you one too.
This is not some ideological fight between Android and iOS; these are facts. Apple has either made it a priority to reduce latency, or lucked into it through various other decisions they made in building their devices and programming their software. I'm sorry if you're an Android user and don't want to believe that Apple's touch screens are more responsive in a perceptible way, but they are. This new (third-party) test will be able to show us how much less sucky various screens are. I don't like to think of the iPhone 5 as being twice as fast; it's more that it's half as slow. It's still slow, by almost an order of magnitude.
I'd like to think you're not just a troll, but it's obvious you haven't watched the video or read anything from the various companies (like Oculus VR) that have studied input latency extensively to understand perception and improve the user experience.
You might be right, or I might be right; it's a rather complex subject. However, there is another aspect to this. From the information provided by Agawi, it's impossible to say how valid their test methodology is, or whether it makes sense at all. Here is why I think so.
* The physics behind the touch sensors is the same for Apple and other phone vendors.
* Apple does not manufacture (and I am not even sure they design) the sensing part of their panels.
* I would be very surprised if, at the hardware level, it took milliseconds to detect the touch. It's probably measured in microseconds.
* Agawi does not have access to hardware-level signals (nobody but the OS, or maybe even just some controller hardware, does). From their description of the test, it looks like they just developed a trivial app that flashes the screen when it receives a touch event from the API (roughly along the lines of the first sketch after this list).
* There is a legitimate question of how good they are as software developers, and whether they know how to use the APIs correctly.
* They did not specify whether they used Java code on Android or C/C++.
* I am not familiar with software development for mobile devices, but I am familiar with GUI design in general. They described their test methodology as follows: "We built simple, optimized apps to flash the full screen white as quickly as possible in response to a touch." The question here is what actually happens after the initial touch event. Does the API immediately pass this information to the application? In conventional GUI toolkits, when the low-level code processes, say, a mouse click, it does not necessarily pass the event to the application right away; it may wait a predefined amount of time to see if a second click follows, and only then signal either a single-click or a double-click event. That "latency" is by design. Similar things may be happening in their test. They say they measure the time from the moment the "finger" touches the screen to the moment the picture changes. What if the OS waits to determine whether it's a simple touch or a "scroll"? What if the OS wants to eliminate "noise" in cases where, during a "scroll", the finger touches the screen for, say, less than 50 ms, lifts, comes down again, and then moves? Another example: if the API can handle multi-finger gestures, it would need to wait a certain time after the initial touch (two fingers never land on the screen at exactly the same moment) to determine what type of gesture this is. Then it's a matter of compromise: do you want to be fast (use a smaller delay) or accurate (use a longer delay)? The second sketch after this list shows how this plays out in Android's gesture API.
* Unfortunately, they did not use an actual finger for the test. The touch panel/OS might behave differently depending on the capacitive signature of the touching object. Many devices now provide palm-rejection features; I do not know if those are available on the iPhone, or whether they can be bypassed at the API level. The sensor/OS may wait to determine what type of touch it is.
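To illustrate how little app code is involved, here is a sketch of what such a flash-on-touch test app might look like on Android. This is my guess at the approach, not Agawi's actual code, and the class name is made up. Note that even here the flash is not instant: the color change is only rendered on the next display frame, which is itself part of the measured latency.

```java
// Hypothetical minimal latency-test app (Android, Java). Assumption:
// Agawi's app did something broadly similar; this is NOT their code.
import android.app.Activity;
import android.graphics.Color;
import android.os.Bundle;
import android.view.MotionEvent;
import android.view.View;

public class FlashOnTouchActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        final View v = new View(this);
        v.setBackgroundColor(Color.BLACK);
        v.setOnTouchListener((view, event) -> {
            // ACTION_DOWN is the earliest touch event an app can see;
            // whatever the panel/OS did before dispatching it is invisible here.
            if (event.getAction() == MotionEvent.ACTION_DOWN) {
                view.setBackgroundColor(Color.WHITE);
            }
            return true;
        });
        setContentView(v);
    }
}
```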
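And the by-design delay I described above is easy to see in Android's real GestureDetector API: onDown fires as soon as the finger lands, while onSingleTapConfirmed deliberately waits out the double-tap window (ViewConfiguration.getDoubleTapTimeout(), typically around 300 ms) before it can promise the tap was a single tap. The listener bodies below are just illustrative:

```java
// Sketch of Android's built-in gesture disambiguation delay.
import android.content.Context;
import android.util.Log;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.ViewConfiguration;

public class TapTimingDemo {
    public static GestureDetector make(Context ctx) {
        // The framework-wide double-tap window, device-dependent (~300 ms).
        final int doubleTapTimeout = ViewConfiguration.getDoubleTapTimeout();

        return new GestureDetector(ctx, new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onDown(MotionEvent e) {
                // Fires immediately on touch; a latency test should react here.
                Log.d("TapTimingDemo", "down at " + e.getEventTime() + " ms");
                return true;
            }

            @Override
            public boolean onSingleTapConfirmed(MotionEvent e) {
                // Fires only after the double-tap window expires: up to
                // doubleTapTimeout ms of deliberate latency, by design.
                Log.d("TapTimingDemo", "single tap confirmed (window=" + doubleTapTimeout + " ms)");
                return true;
            }
        });
    }
}
```

That's exactly the fast-vs-accurate tradeoff: react on onDown and you risk misreading a double tap or the start of a scroll; wait for confirmation and you add a few hundred milliseconds.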
So, in short: while the iPhone's touch screen might indeed be snappier, I doubt that we can rely on this particular test (given the amount of information we have) to make a judgment.