Competition to design a robot delivering a TED talk


The X Prize Foundation has announced a competition to design a robot that delivers a TED talk. Everyone is welcome to propose rules for the competition, which will tentatively give a robot 30 minutes to prepare a 3-minute talk on one of 100 TED talk subjects.

There is skepticism about the competition. As New Scientist writes:

Computer scientist Ryan Adams at Harvard University says that such a set-up would reveal little about AI. “Intelligence involves adapting, learning about the structure of the world, making decisions under uncertainty and achieving objectives over time,” he says. “Giving a talk and then ‘answering questions’ doesn’t tell us anything about any of these issues.”

I am skeptical too. At first I naively wondered what data, besides the TED API, which provides access to more than 1,000 talks plus TED Quotes, tags, themes, ratings and more, could be pulled in for the task. But after reflecting on TED talks (and I watch them pretty regularly) and on how often an exciting talk delivers very little actual information (just try reading the transcripts of your favorite talks once in a while), I concluded that making a 3-minute TED presentation is not that much of a challenge, considering that neither you nor your robot competitors would have any credentials.

The recipe might be simple: tell an interesting, emotional story to connect with the audience, ask them to raise their hands or stand up, throw in a few bits of data to feed their brains, and finish by returning to your story, concluding with a lesson or two. The audience will be delighted. Do you need any AI for that? I doubt it. Just upload 100 stories, add a good speech synthesis voice and some facts from Wikipedia, and, with an Internet connection on the spot, grab something the audience hasn’t yet read on the day of the presentation. Appearance matters too.
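To make the point concrete, the recipe above can be sketched as a trivial template filler. This is a toy illustration, not anyone's actual system: the stories and facts are placeholder data, and `compose_talk` is a hypothetical name; a real entry would pull its corpus from prepared stories and the web.

```python
# Toy sketch of the "recipe": story -> audience interaction -> a few facts ->
# return to the story with a lesson. All data below is placeholder content.

STORIES = {
    "curiosity": "I once spent a whole summer counting ants in my backyard.",
    "resilience": "When I was ten, our house burned down and we started over.",
}

FACTS = {
    "curiosity": ["Ant colonies can contain millions of workers."],
    "resilience": ["Most startups fail within their first five years."],
}

def compose_talk(topic: str) -> str:
    """Assemble a short talk by filling the four-step template for a topic."""
    return "\n\n".join([
        STORIES[topic],                                   # emotional story
        "Raise your hand if this sounds familiar.",       # audience interaction
        " ".join(FACTS[topic]),                           # bits of data
        f"Which brings me back to my story: the lesson is {topic}.",
    ])

print(compose_talk("curiosity"))
```

No learning, no reasoning, no understanding of the audience — yet the output has the shape of a short talk, which is exactly the worry about what the competition would measure.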



Next in Big Data: robots learning from video (YouTube included)

'Hey! — they're rerunning the Commander Data show!'

The 100 hours of video uploaded to YouTube every minute are mostly lost and often not searchable because tags are missing or inaccurate. Yet video contains a wealth of information that can be used far beyond monitoring the activities of customers and citizens: it can feed the deep learning of computer programs, for example, self-driving cars and other robots. That makes video the next frontier in the Big Data buzz.

Researchers at the University of Texas are already using object recognition to create short summaries of long videos so people can know what they’re about without having to rely on titles alone.
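One simple way to turn object recognition into a video summary is to keep only the frames where something new appears. This is a minimal sketch in that spirit, not the Texas researchers' actual method; each "frame" below is already a set of recognized object labels, standing in for the output of a real detector.

```python
# Minimal keyframe summarization driven by object recognition (a sketch).
# Each frame is represented by the set of object labels a detector found in it.

def summarize(frames, max_keyframes=10):
    """Keep a frame only when it shows objects absent from the last kept
    frame, yielding a short list of keyframe indices for a long video."""
    keyframes = []
    last = set()
    for i, objects in enumerate(frames):
        if objects - last:        # at least one new object appeared
            keyframes.append(i)
            last = objects
        if len(keyframes) == max_keyframes:
            break
    return keyframes

video = [{"car"}, {"car"}, {"car", "dog"}, {"dog"}, {"person", "dog"}]
print(summarize(video))  # → [0, 2, 4]
```

Even this crude rule compresses a video into the handful of moments where its content changes — enough to tell what a clip is about without relying on its title.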