Gee, we had devices that did that when I went to college. We called them tape recorders.

And yes, professors did permit students to record their lectures.

Nowadays many professors and schools require you to get specific permission to record their lectures, and many don't allow recording at all. It's a shame, because when I was in college I learned much more when I could record my lectures and listen to them as I studied my notes. Back then I'd even sleep on a special pillow with speakers inside, with my lectures playing back!
 
It's easy to geek out over the potential of a device and completely forget that you are a social being. There are certain things that people around you will not tolerate. Talking to your device is one of them.

Now, if the technology were lending itself to the benefit of everyone in the vicinity, that's different. Folks will deem that acceptable, even necessary.

Well, here's a thought: the iPad 2 has a little camera hole on the front, at the end opposite the home button. So maybe it could be taught to listen to you the way a deaf person would, by reading your lips? That might be OK in a crowd.
 
I don't really get this notion that society would frown upon you talking to a device. What are you doing on a cellphone? I agree that people would be irritated if you were dictating an essay while you ordered a coffee (just like they are when you take a phone call), but the conversation going on at the table next to me is not "unacceptable" and it wouldn't suddenly be inappropriate if there were one less person speaking.

We talk to cellphones constantly; the fact that another person exists elsewhere doing the same thing doesn't change the circumstance.

Also, tape recorders and speech-to-text are not the same thing. Recording apps have existed for the iPhone and similar devices almost since they came out.

Societal norms are not fixed. Look at the invasiveness of cellphones and texting. Twenty years ago you could have said no one would stand for it, but you'd have been wrong.
 
I think the concern is much less about the sensitivities of other people than about relating to your personal muse, or whatever, on the private level that she demands. Sharing your thoughts with strangers is something you want to do in a controlled, non-random way. In your office it might be just fine (cf. the introduction to Volume 1 of Mark Twain's autobiography), but sometimes the mood will strike you to set something down in a coffee shop or on the train, where the ears of strangers can affect your own sense of the uniqueness of your thoughts.

Then, of course, Americans are rude enough as it is, yammering on the cell just absolutely anywhere; it's not like we need to provide another easy avenue for noise pollution.
 
I don't really get this notion that society would frown upon you talking to a device. ......... the conversation going on at the table next to me is not "unacceptable" and it wouldn't suddenly be inappropriate if there were one less person speaking.

Are you saying you don't think people get irritated hearing you converse on your cell phone sitting in a coffee shop? Or are you saying it's acceptable even though it irritates people?
 
Please, please, please don't make me listen to people talking to their stupid mobile devices. I already have to listen to their inane conversations on their phones; it's torture. :mad:
 
I'm saying that if you're at a coffee shop, there will be people having conversations. Whether the speech is directed at a device or at another person, I don't think it would matter much to me.

I guess some people enjoy finding ways to be irritated, though, so it would probably bother some.
 
Some people here are arguing for or against voice as if it were an all-or-nothing choice.

It's not a required input method. It's an option. Sometimes voice makes sense for input, sometimes not. (I do have to say that it's extremely handy in Android to have voice as an option whenever the keyboard shows up. It's something I miss a lot on my iOS devices.)

The same goes for touch, stylus, pen, mouse, keyboard, gamepad, eyeball movement, hand gestures, lipreading or whatever you can think of. There are many input options to fit various situations.

I say, the more input options, the better. Of course, with Apple, style comes into play, so let's modify that to: the more hidden input options, the better :)

Speech qualifies as a hidden input method, as do air gestures, eye movement, and lipreading. The physical input methods are harder to hide.

Thoughts?
 
I use voice input on my Android phone on a daily basis.
For instance, when I'm driving and need to search for a business.
 
Dragon lets you do speech to text.

Also, text messaging got so big because it's preferred over talking on a telephone. What makes you think that would change? Are you assuming people will go back to wanting to talk out loud on a phone when it's more than obvious that voice time has dropped SIGNIFICANTLY since text messaging took off?

It's fairly obvious to most anyone that voice is in no way the future; if anything, it's the past.
 
Dragon lets you do speech to text. ......... It's fairly obvious to most anyone that voice is in no way the future; if anything, it's the past.

I use voice extensively when I need to respond to a text while driving.
It's extremely handy.
 
Had to drag this one up to the top one last time...

iOS gets heavy speech input as a major part of its refresh? I believe this one is mine, gentlemen. :)
 
I don't really get this notion that society would frown upon you talking to a device. What are you doing on a cellphone? ......... We talk to cellphones constantly; the fact that another person exists elsewhere doing the same thing doesn't change the circumstance.

That makes all the difference: you are communicating, usually, with another person rather than talking to a piece of metal and glass. How nerdy is that? :D:D
 
Does that mean the iPhone 4S is an iPad killer? :p

You actually raise an interesting question: if the iPad 2 has the same processing power as the iPhone 4S, why wouldn't it also get the Siri app? We can think of reasons, but none of them are product limitations (meaning, they're financial motivations).

----------

That makes all the difference: you are communicating, usually, with another person rather than talking to a piece of metal and glass. How nerdy is that? :D:D

Apple forums are pretty nerdy, too, but here we are! :D
 
You actually raise an interesting question: if the iPad 2 has the same processing power as the iPhone 4S, why wouldn't it also get the Siri app? We can think of reasons, but none of them are product limitations (meaning, they're financial motivations).

The iPad 2 has the same processor as the iPhone 4S, but it doesn't have the same processing power. Other threads here have noted that the amount of RAM is crucial for Siri to run well.

iOS gets heavy speech input as a major part of its refresh? I believe this one is mine, gentlemen. :)

I looked through the thread. You made all sorts of predictions about voice recognition and dictation -- many of which are well beyond the capability of Siri. And, as far as we can tell, Siri will not be part of the iPad 2.

I see no explicit prediction of yours that you think has been fulfilled. But feel free to claim it if you like. :)
 
The iPad 2 has the same processor as the iPhone 4S, but it doesn't have the same processing power. ......... I see no explicit prediction of yours that you think has been fulfilled. But feel free to claim it if you like. :)

I wasn't predicting what Apple would do; I was saying this is where things will inevitably go, for everyone. Voice-to-text was my biggest point, and that has happened, in a big way. Voice recognition itself has always been a sort of blah feature to me. I want to see it being BETTER than physical input in some way before I bother with it. I think voice-to-text can immediately be that, but we'll see when it hits the iPad. I don't think this is the last we'll see of voice features, either way.

Also, no one yet knows what the RAM on the iPhone 4S will be, or how that will play into the performance of the Siri app.
 
I wasn't predicting what Apple would do; I was saying this is where things will inevitably go, for everyone. Voice-to-text was my biggest point, and that has happened, in a big way.

Apple acquired Siri in April of 2010. Google announced voice input for Froyo in May of 2010. It didn't take a rocket scientist to figure out that Apple would be working hard to create high-quality voice input for the next generation of iPhone.

Voice recognition itself has always been a sort of blah feature to me. I want to see it being BETTER than physical input in some way before I bother with it. I think voice-to-text can immediately be that, but we'll see when it hits the iPad. I don't think this is the last we'll see of voice features, either way.

Agreed. Of course this is not the end of development of voice input.

FWIW, the iPad apps that let you make an audio recording while annotating it with timestamped written notes are quite valuable. If I were a student, that's what I'd be using in the classroom these days: reviewing the sections of a lecture I thought were particularly useful. The same goes for a meeting with clients: what's useful to transcribe are the sections where there's an agreement about who does what by when. A total transcript would be nice, but those are the sections that are most valuable.
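For anyone curious how that note-to-audio linking works, here's a minimal sketch (the type names are made up, and I'm assuming AVAudioPlayer for playback rather than any particular app's implementation): each note just stores the offset into the recording at which it was written, so tapping it later seeks straight to that moment.

```swift
import AVFoundation

// Hypothetical model: a note stamped with the position in the recording
// at which it was taken.
struct TimestampedNote {
    let text: String
    let offset: TimeInterval   // seconds from the start of the recording
}

final class LectureReview {
    private let player: AVAudioPlayer
    private(set) var notes: [TimestampedNote] = []

    init(recordingURL: URL) throws {
        player = try AVAudioPlayer(contentsOf: recordingURL)
    }

    // Capture a note at the current playback position.
    func addNote(_ text: String) {
        notes.append(TimestampedNote(text: text, offset: player.currentTime))
    }

    // Jump straight to the part of the lecture a note refers to.
    func review(_ note: TimestampedNote) {
        player.currentTime = note.offset
        player.play()
    }
}
```

The same idea works live in the classroom: stamp each note against the recorder's current time while recording, then use those offsets to jump around during playback later.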

Also, no one yet knows what the RAM on the iPhone 4S will be, or how that will play into the performance of the Siri app.

Well, you are the one who revived this old iPad thread right after the iPhone announcement. It certainly appears that you assumed this enhancement would apply to the iPad 2.

You also presumed that the iPad 2 has exactly the same processing power as the iPhone 4S:

You actually raise an interesting question: if the iPad 2 has the same processing power as the iPhone 4S, why wouldn't it also get the Siri app? We can think of reasons, but none of them are product limitations (meaning, they're financial motivations).
 