An application developed with the SDK can access the address book, including the various fields stored therein (first name, last name, home phone, etc.), and it can cause the phone to dial a telephone number. It can also access the on-board microphone.
However...
There are significant limitations:
1. The user must explicitly launch your application. This means you cannot trigger dialing when the user "clicks" the button on a hands-free set or on the iPhone ear buds. This limitation basically forces the user to look at the screen and push a button-- at which point it's just about as easy to open the Phone application and select a favorite (especially if the favorites list is already showing).
2. There is no storage for a voice tag on entries in the address book. It might be possible to add one, since you can access the address book data-- including adding records-- but I wouldn't bet on it. In any case, there's no provision for capturing such a tag, so your application would need to do that itself. That might be an issue for the user, since they'd need two separate applications to maintain their contact/phone information.
3. You could possibly use a phonetic algorithm to "interpret" what the user is saying and match that against the data stored in the address book. This runs into two issues: (1) knowing which number to dial when the application recognizes a given name, assuming the contact has multiple phone numbers, and (2) the amount of horsepower available for the speech processing. Given that the device has trouble scrolling when table cells are set to transparent, even when it's able to make use of the on-board CPU and GPU, I'm not sure it bodes well for trying to "understand" what an arbitrary user is saying without explicit tags.
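To make the phonetic-matching idea concrete: Soundex is one classic example of such an algorithm. The actual app would be written in Objective-C against the SDK, but the matching logic is platform-neutral, so here's a minimal Python sketch (the contact names are made up for illustration):

```python
def soundex(name: str) -> str:
    """American Soundex: first letter plus three digits."""
    codes = {**dict.fromkeys("bfpv", "1"),
             **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"),
             "l": "4",
             **dict.fromkeys("mn", "5"),
             "r": "6"}
    name = name.lower()
    first = name[0].upper()
    digits = []
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if ch in "hw":          # h/w do not break a run of the same code
            continue
        if code and code != prev:
            digits.append(code)
        prev = code
    # pad with zeros / truncate to the standard 4-character code
    return (first + "".join(digits) + "000")[:4]

# Match a (hypothetically) recognized utterance against address book names
contacts = ["Robert Smith", "Rupert Jones", "Alice Wong"]
heard = "robert"
matches = [c for c in contacts if soundex(c.split()[0]) == soundex(heard)]
# → ['Robert Smith', 'Rupert Jones']
```

Note that "Robert" and "Rupert" collide-- phonetic codes are deliberately coarse, so even with unlimited horsepower you'd still need a disambiguation step (on top of the which-number-to-dial problem above).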
rob.