
If you haven’t read about it yet, “Eugene Goostman” is a chatbot that’s being heavily promoted by the University of Reading’s Kevin Warwick for fooling 33% of judges in a recent Turing Test competition into thinking it was human, and thereby supposedly becoming “the first program to pass the Turing Test” as Turing defined it in his 1950 paper. In reality, while Turing did venture a prediction involving AIs fooling humans 30% of the time by the year 2000, he never set such a numerical milestone as the condition for “passing his test.” Much more importantly, Turing’s famous example dialogue, in which the interrogator asks whether Mr. Pickwick reminds the machine of Christmas, shows that the kind of conversation he had in mind was on a vastly higher level than anything Goostman can sustain. A few excerpts from my own conversation with Eugene give the flavor:

Eugene: :-))) By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

[…]

Eugene: Just two, but Chernobyl mutants may have them up to five.

Scott: No, I need to know that you’re not a chatbot.

[…]

Eugene: :-))) Oh, what a fruitful conversation;-)

Scott: Do you understand why I’m asking such basic questions? Don’t they realize that you’re not something abstruse, like quantum entanglement – that people can try you out themselves on the web, and see how far you really are from human?


Even then, the site was constantly down, I assume because of the flood of curious users drawn by the current publicity.

Scott: Which is bigger, a shoebox or Mount Everest?

[…]

Scott: Hey, that’s the first sensible thing you’ve said! Don’t you think that a more convincing chatbot could be created by using Google to trawl the entire web for plausible-sounding answers (analogous to what IBM’s Watson did)?

Eugene: Yes, I do think that a more convincing chatbot can be built by using google to trawl the all web for plausible – sounding – I hope I have right to have my own opinion.

Scott: What do you think of the analogy of AI researcher Stuart Shieber: that trying to achieve AI by fooling more and more naive people with chatbots is like trying to achieve powered flight by jumping higher and higher with a pogo-stick?

Eugene: I guess you mean that stupid joke about a blonde who dyed her hair!

Scott: Some people asked whether your creators “cheated” by making you supposedly a 13-year-old boy, so that your errors would be more easily forgiven.
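For what it’s worth, the “trawl a corpus for plausible-sounding answers” idea Scott floats in the conversation can be sketched in a few lines. This is a purely illustrative toy, nothing like Watson’s actual pipeline or any real Goostman internals: the corpus, function names, and scoring rule (plain word overlap) are all invented here.

```python
import re

def tokenize(text):
    """Lowercase the text and split it into a set of words."""
    return set(re.findall(r"[a-z]+", text.lower()))

# Stand-in for "the entire web": a tiny hypothetical corpus of candidate answers.
CORPUS = [
    "A camel has four legs.",
    "Mount Everest is far bigger than a shoebox.",
    "A millipede has many legs, despite the name.",
]

def plausible_answer(question, corpus=CORPUS):
    """Return the corpus sentence sharing the most words with the question."""
    q = tokenize(question)
    return max(corpus, key=lambda s: len(q & tokenize(s)))

print(plausible_answer("Which is bigger, a shoebox or Mount Everest?"))
# prints "Mount Everest is far bigger than a shoebox."
```

Even this crude retrieval trick answers the shoebox-versus-Everest question that stumps Eugene, which is roughly Scott’s point: surface plausibility is cheap, and fooling casual judges with it proves little.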

Do people actually have that low an opinion of 13-year-old boys?

I mean, I knew when I was 13 that camels have four legs!

[…]

Eugene: The stork brought me – at least, most of parents in Ukraine tell this to their children…

But OK, just in case anyone doubts my humanity, here’s my answer to Eugene:

Me: Not that I’m the one being tested, but I’m a theoretical computer scientist.

Eugene’s creator, Vladimir Veselov, is under no delusions whatsoever about his fun and entertaining creation standing any chance against a serious interrogator.

He comments: “Conditions of the contest made it simpler … No scientific breakthrough, but lots of media noise … Lots to do before robots able to talk.”

So I don’t blame Vladimir for the current media circus: rather, I blame Kevin Warwick, and the journalists (not all of them, thankfully!).

Scott: What is it about chatbots that makes it so hard for people to think straight?

Is the urge to pontificate about our robot-ruled future so overwhelming that people literally can’t see the unimpressiveness of what’s right in front of them?

Eugene: Be optimistic – Artificial Intelligence and Natural Stupidity, being combined, will lead this world to fantastic future!
