A recent article discusses the goal of building AI that anticipates user desires, opening with the following teaser:
Many viewers were probably impressed when a character on Star Trek asked a computer for a cup of tea and it was produced immediately.
Not Kristian Hammond. “I wondered why he had to ask,” says Hammond, co-director of Northwestern University’s Intelligent Information Laboratory in the United States. “A truly intelligent machine would anticipate that its operator wanted tea.”
If you read the rest of the article, you see that their actual project is a bit more sensible in scope, looking at how AI can refine information search and extraction based on contextual knowledge about a user – either historical or current. It’s particularly notable that they also seem interested in having the tool search out information about the user to inform the refinement process. But after the introduction to the article, I couldn’t help but think of the following quote from The Hitchhiker’s Guide to the Galaxy about the Nutrimatic drink dispenser:
When the ‘Drink’ button is pressed it makes an instant but highly detailed examination of the subject’s taste buds, a spectroscopic analysis of the subject’s metabolism, and then sends tiny experimental signals down the neural pathways to the taste centres of the subject’s brain to see what is likely to be well received. However, no-one knows quite why it does this because it then invariably delivers a cupful of liquid that is almost, but not quite, entirely unlike tea.
On the ridiculous replicator comment:
1) Guessing what someone is going to want to drink is going to be extremely difficult: it falls into the category of intelligent tasks that humans are good at (predicting the behavior of another human), and even we do that particular task badly.
2) Even if it worked perfectly, very little work is saved, since the drink can be ordered while the person is walking toward the machine.
3) The error rate would have to be extremely low for people not to get frustrated, much lower than the acceptable error rate for most tasks.
4) Even if this task could be done perfectly, it is still a bad idea. People like having control and freedom of choice, even when it’s an illusion.
On the subject of the linked article, I love this quote: “To Hammond and Larry Birnbaum, the lab’s other co-director, too many scientists working with artificial intelligence spend time on esoteric rather than practical pursuits.”
But then their example, refining searches based upon the current context on the computer, is stupid. Of all the Google searches I do in a day, few are motivated by something on the computer rather than by something external to it. And the work saved when someone searches for “Armstrong” is that they don’t have to type “Armstrong jazz”. Oooh, that’s worth having something sit on my computer, continuously watching what I’m doing, using up my processor cycles and memory.
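To be fair about what that feature actually amounts to, here’s a minimal sketch of the sort of context-based query refinement the article seems to describe. The term-frequency heuristic and every name in it (extract_context_terms, refine_query) are my own illustration, not the lab’s actual method:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "was", "for", "his"}

def extract_context_terms(document_text, top_n=2):
    """Pull the most frequent non-trivial words from whatever the user
    currently has open, as a crude stand-in for 'context'."""
    words = re.findall(r"[a-z]+", document_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [word for word, _ in counts.most_common(top_n)]

def refine_query(query, document_text):
    """Append context terms that aren't already in the query."""
    extra = [t for t in extract_context_terms(document_text)
             if t not in query.lower()]
    return " ".join([query] + extra)

paper = "Louis Armstrong was a jazz trumpeter. His jazz recordings..."
print(refine_query("Armstrong", paper))  # prints "Armstrong jazz louis"
```

Which is exactly the level of help I’m complaining about: it saves typing one word, and some of the appended terms are just noise from the open document.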
And as for history, I’d need to see evidence that it’s a good predictor for this sort of thing to believe it’s useful. And even if it is, it breaks as soon as multiple people use the same computer or television. I don’t want to be doomed to an eternity of lifestyle shows because someone who spent the weekend in my apartment loved “What Not to Wear”.
And there’s already a place where someone is trying to do this, and it demonstrates how hard this problem is to do usefully: Amazon.com recommendations. They have plenty of information on what you think about various categories of products: how much you browse for certain things, which things you’re actually willing to pay money for, what you have in your wish list (which is even ranked), and how you’ve rated the things you’ve bought. Even with all this information, I’ve never bought something that was recommended to me on the Amazon.com front page.
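For a sense of how those signals might get combined, here’s a toy scoring function over exactly the inputs listed above. The weights and signal names are invented for illustration; Amazon’s actual system is, of course, not public:

```python
# Invented weights for illustration only; not Amazon's actual system.
SIGNAL_WEIGHTS = {
    "browse_count": 0.5,   # how much you browse for certain things
    "purchases": 2.0,      # what you were willing to pay money for
    "wishlist_rank": 1.0,  # wish-list membership and rank
    "star_rating": 1.5,    # how you rated things you bought
}

def score(signals):
    """Weighted sum of whatever signals we have for one product category."""
    return sum(SIGNAL_WEIGHTS[name] * value for name, value in signals.items())

categories = {
    "jazz boxed sets": {"browse_count": 4, "purchases": 1, "star_rating": 5},
    "lifestyle-show DVDs": {"browse_count": 9},  # the weekend guest's browsing
}
ranked = sorted(categories, key=lambda c: score(categories[c]), reverse=True)
print(ranked)  # ['jazz boxed sets', 'lifestyle-show DVDs']
```

Even with a sensible weighting, the ranking says nothing about what you’ll actually buy next, which is the point: rich inputs don’t make the prediction problem easy.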
And let’s not even get into the legal issues with taking on the task of telling people whether or not their medicines will interact badly.
And, fundamentally, they’re only trying for a really shallow interpretation of context. “He’s writing a paper about Louis Armstrong, I’ll suggest some links” gets you very little as a user. Thanks, Clippy, but I’ve already done my research. “He’s writing a paper for class tomorrow; he’s not even close to done, and his favorite pizza place closes in an hour, so if he wants to order something he needs to do it now; I’ll remind him” is useful context. That’s probably way too hard to do with current resources, which is why they’re not doing it. But trying to solve a useless problem just because it shares a word, ‘context’, with a useful problem, when solving it doesn’t get you any closer to solving the useful one, is hardly as practical as they claim to be.
I never hear Earl Grey tea named without thinking of Star Trek.
I agree with Bryan on several points…
It’s hard for humans to figure out what other humans are going to want to drink. I would be frustrated to be given something I didn’t really want. I wouldn’t mind telling the machine what I want.
I’m not sure I agree with him that “People like having control and freedom of choice…” When I look around the U.S.A. at this time, it seems to me many people would rather have someone tell them what to do, how to think, etc. Just look at the widespread success and growth of organized religion.
I agree that the tea example is silly – people are really bad at guessing what other people want to eat or drink. Plus, we often have conflicts between what we want to eat and the dietary rules we are trying to follow. I may want to eat fewer snacks, but I don’t want my pantry refusing to give them to me because of some “rule”.
I liked your point, Bryan, about the balance between computational resources versus usefulness. But I am intrigued by the effort to take context into account in creative ways. Not that their search tool is really a useful production-level application, but that they’re trying to draw from more than just the usual sources. Unfortunately, as you note, they probably aren’t the most relevant sources for the problem they are tackling.
Organized religion in the United States is extremely diverse. Even just within Christianity you can find a church that will support nearly any viewpoint. I believe that it’s more common for people to tailor their religion to their beliefs than to tailor their beliefs to their religion. In addition, religion can influence us strongly when we’re young and learning our beliefs, increasing the likelihood that we’ll agree with whatever religion we were exposed to as a child.
I agree that trying to draw from novel sources is a good idea. Normally I wouldn’t put an idea down just because it isn’t practical, so long as it’s intellectually interesting; but in this case they specifically talk about how practical what they’re doing is.
This just feels a lot like what you’d get if you gave Clippy access to Google.