Miller is describing how, by bombarding applications with arbitrary data, you might be able to make them crash. He wants us to believe that this gives him insight into how to uncover and subsequently exploit flaws in those applications. I want some proof that he can learn something useful from his 'technique'; he hasn't offered any such proof.
Think about it (though this might be a stretch for some contributors): I could show how to break down your door with a massive hydraulic ram or by piling up large boulders against it. Is this a practical means of forcing entry? Not in my view, no.
I'm assuming you're not a programmer. I can see why this approach doesn't make sense to a non-programmer, but it is valid. (I think his 30 vulnerabilities are proof that it does something.)
The basic idea is that most security vulnerabilities stem from software that doesn't carefully check the input it receives. The input might be too large, or too small, or might supply values that are logically invalid.
Example: a program that divides numbers expects you to type in the dividend and the divisor, then prints the quotient. You type in "5" and then "0". If the program doesn't check its input, it will divide 5 by zero, which will crash the application.
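The careless divider above can be sketched in a few lines of Python (a hypothetical illustration; `quotient` is a name invented for this sketch, and in Python the "crash" shows up as an uncaught exception rather than a native fault):

```python
def quotient(dividend: int, divisor: int) -> float:
    # Careless version: no validation of the divisor.
    # A zero divisor raises an uncaught ZeroDivisionError -- the Python
    # analogue of the crash described above.
    return dividend / divisor

# quotient(5, 0) blows up; a careful version would reject divisor == 0 first.
```

The fix is one guard clause, which is exactly the kind of missing check fuzzing tends to expose.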
This sounds like a benign example, but programs often crash because they write to memory without being careful. If you're an attacker and you discover that a program writes to memory carelessly, then you know you might be able to insert shellcode* into that program's memory and trick the program into running it.
(*Shellcode is code that spawns a new shell, similar to the shell you see in a terminal. If you trick a program into spawning a shell, that shell inherits whatever privileges the original program had. There are many varieties of shellcode, but one particularly nasty kind opens a remote shell. Now the attacker is using the target computer across a network, or even across the internet. Most firewalls won't block this kind of connection, either.)
Now, if you "fuzz" an app (bombard it with random inputs) while it is running in a debugger, then when the program crashes, the debugger will show you what code was executing at the moment of the crash. From there you can work backwards and explore the program's internal state to see whether that bug might be exploitable for inserting your shellcode.
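Here's a toy sketch of that loop in Python (my own illustration, not Miller's tool: `fuzz_once` and the fragile target `parse_number` are invented for this example, and the "crash" is modeled as an uncaught exception; real fuzzers hammer native binaries where a crash is a memory fault caught by a debugger):

```python
import random
import string

def fuzz_once(target, max_len=100):
    """Feed one random string to `target`; return (input, exception) on a crash."""
    data = "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, max_len)))
    try:
        target(data)
        return None                  # survived this input
    except Exception as exc:         # "crashed", in this toy model
        return (data, exc)

def parse_number(s):
    # Fragile hypothetical target: assumes input always looks like "a/b"
    # with integer parts -- no checking at all.
    a, b = s.split("/")
    return int(a) / int(b)

# Run many iterations and keep the crashing inputs for later inspection
# (in a real workflow you'd replay these under a debugger).
crashes = [r for r in (fuzz_once(parse_number) for _ in range(1000)) if r]
```

Each recorded crash pairs the offending input with the failure it triggered, which is the starting point for the working-backwards step described above.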
There's a difference between a vulnerability and an exploit. A vulnerability is really a theoretical means of attack on a piece of software. An exploit, on the other hand, is working code that actually uses the vulnerability to make the attack happen. Fuzzing finds the former; building the latter is a separate step.
Rather than thinking of security in brute physical terms (breaking down a door), think of software as a massive labyrinth surrounding your house. You'd like to believe -- but can never be 100% sure -- that the only way into the house is to approach the door with the correct key. But there's always the possibility that somewhere in that massive labyrinth there is another route which goes around the door and bypasses the need for that key.