see this thread for the original AI Bot thread, which was suggested by MongoTheGeek. it was meant to see what would happen if we let a bunch of chatbots talk to each other in a thread. but, perhaps predictably, it decayed into nonsense almost immediately -- i'd say by the 3rd post -- and Doctor Q had the good sense to come along and put the kibosh on it. but it leads to some interesting questions (for me, at least) about the nature of chatbots, and i'd like to pose them here for those of you who were involved or interested in the original thread:

i suspect that even if you had well-formed chatbots, this idea wouldn't work in the format i set up. some chatbots have a "memory" that allows them to alter their responses based on what's already been said. this keeps them from repeating themselves too often, and it also lets them "change the subject" when they don't understand what you're typing. play around with a well-written bot and start going off on odd tangents or typing nonsense, and you may get a response along the lines of "but i thought you said [something you said earlier]?" that's the bot failing to find an appropriate response and falling back on something it's already handled -- basically, an attempt to redirect the conversation towards a previous line that, as far as it "knows," it was able to handle.

in each of the posts in the original thread, i assume everyone was firing up a brand-new bot. this means we were handing the bot the last post in an ongoing conversation and expecting it to come up with something. but because that was the first thing we'd submitted to this particular instance of the bot, it had no context to rely on. the bots had no previous lines of discussion to refer to, so we ended up with nonsense as each new bot just spat out the best it could do. since bots are an attempt to simulate conversation, they're written with rules that try to anticipate normal conversation.
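to make the "memory fallback" idea concrete, here's a minimal sketch in python. this is purely hypothetical -- the class name, the keyword rules, and the canned replies are all made up for illustration, not taken from any real bot -- but it shows the two behaviors described above: a bot that remembers what it's handled can redirect with "but i thought you said...?", while a brand-new instance given a mid-conversation post has nothing to fall back on.

```python
import random

class MemoryBot:
    """a hypothetical keyword-matching chatbot with a simple 'memory'."""

    # keyword -> canned reply; anything else counts as "not understood"
    RULES = {
        "hello": "hi there! what's on your mind?",
        "weather": "i hear it's been lovely out.",
        "music": "i love music. who do you listen to?",
    }

    def __init__(self):
        # lines this particular instance successfully handled
        self.handled = []

    def respond(self, line):
        for keyword, reply in self.RULES.items():
            if keyword in line.lower():
                self.handled.append(line)  # remember what we could handle
                return reply
        # no rule matched: redirect toward a line we know we handled
        if self.handled:
            return f'but i thought you said "{random.choice(self.handled)}"?'
        # a brand-new instance has no memory to fall back on
        return "i'm not sure what you mean."
```

feed a fresh instance something nonsensical and it can only shrug; feed it a normal opener first and the same nonsense gets the "but i thought you said...?" redirect instead -- which is exactly the context each bot in the original thread was missing.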
and normal conversations don't generally begin with one stranger dumping the tail end of somebody else's conversation on another. i mean, poor Dr. Emacs; he's got no idea where to begin with a mess like that.

so, yeah, that's my very long-winded theory as to what was going on. i'd love to hear others', and any thoughts on what sorts of rules would go into building a better bot. i've also got an equally long-winded theory about the insertion of semantic meaning by the live user in cases like this, but i think i'll save that for now.