Kaltura Meetup on Nov 10th, 2009 :: 1:06:20 to 1:26:20
Total video length: 1 hour 47 minutes

0:51:00 to 1:11:17
Title: Group discussion

A give-and-take discussion about technology relating to open video.

1:05:06 to 1:06:20

George Chriss: I'll actually add to that: the largest challenge at this point is availability, in terms of actually having video to start with; the next is discoverability, to actually find these moments of interest; and after that you can talk about exciting work like pattern recognition. For example, I can guarantee you that future conversations about environmental sustainability at the local level, you know, will repeat themselves across local communities, and you can start to see patterns and recognize all sorts of cool trends. You can do all sorts of crazy stuff once you think it through. I just saw a newspaper article saying that you can cough into an iPhone and, depending on how you cough, it tells you what kind of disease you have.
Ben Moskowitz: That doesn't seem very reliable!
George Chriss: Well, maybe, maybe not! Based off of statistically-gathered coughs—
Audience: What was the developer thinking‽
George Chriss: I had to chuckle to myself because "if I had a nickel for every time somebody coughed on-video, I could probably do accurate disease prediction."
George Chriss: I think that I've taken up enough time at this point, but if there's any more questions?
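
Below is a minimal sketch, not from the meetup, of the kind of cross-meeting trend spotting George describes: counting topic keywords across a folder of dated plain-text transcripts. The directory name, file-naming scheme, and keyword list are all assumptions for illustration.

```python
# Toy illustration of "pattern recognition" over meeting transcripts:
# count how often chosen topic keywords appear, grouped by year.
import re
from collections import Counter
from pathlib import Path

KEYWORDS = {"sustainability", "recycling", "zoning"}  # hypothetical topics

def topic_trends(transcript_dir: str) -> dict[str, Counter]:
    """Map each year to keyword counts across that year's transcripts."""
    trends: dict[str, Counter] = {}
    for path in Path(transcript_dir).glob("*.txt"):
        year = path.name[:4]  # assumes files named like 2009-11-10-meetup.txt
        words = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
        trends.setdefault(year, Counter()).update(
            w for w in words if w in KEYWORDS)
    return trends

if __name__ == "__main__":
    for year, counts in sorted(topic_trends("transcripts").items()):
        print(year, dict(counts))
```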

1:06:20 to 1:07:22

Audience: First, I don't think that people so much need word-for-word transcriptions as they need summaries of each several-minute segment. <George Chriss: Yeah, that's true.> I assume you've seen FORA.tv? There, lectures and presentations are broken up into chapters and you have at least a title for each chapter, so you can see that in a 1 hr. talk there are like 7-to-13 parts to it, and then you can also grab—I haven't really tried to do this or seen other people doing it, but they have the capability that you can take Part 03 from Naomi Wolf's presentation, or Part 02 from Lawrence Lessig's presentation, and put together your own amalgam of parts. Is Kaltura doing tools like that?
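
A minimal sketch, under assumed names, of the chapter-and-remix model the questioner describes: talks split into titled parts, with a remix assembled as an ordered list of picks from different talks. The URLs and the uniform five-minute chapter lengths are hypothetical.

```python
# Each talk is split into titled chapters; a "remix" is just an ordered
# list of (talk, chapter index) references resolved into an edit list.
from dataclasses import dataclass

@dataclass
class Chapter:
    title: str
    start: float  # seconds into the source video
    end: float

@dataclass
class Talk:
    speaker: str
    url: str
    chapters: list[Chapter]

def remix(selections: list[tuple[Talk, int]]) -> list[tuple[str, str, float, float]]:
    """Assemble an edit list: (source url, chapter title, start, end) per pick."""
    return [(talk.url, talk.chapters[i].title,
             talk.chapters[i].start, talk.chapters[i].end)
            for talk, i in selections]

wolf = Talk("Naomi Wolf", "http://example.org/wolf.ogv",
            [Chapter(f"Part {n:02d}", n * 300.0, (n + 1) * 300.0) for n in range(8)])
lessig = Talk("Lawrence Lessig", "http://example.org/lessig.ogv",
              [Chapter(f"Part {n:02d}", n * 300.0, (n + 1) * 300.0) for n in range(7)])

# Take Part 03 from one talk and Part 02 from another, as in the question.
print(remix([(wolf, 3), (lessig, 2)]))
```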

1:07:22 to 1:08:46

Shay David: Absolutely. We have someone working on tools like that. A really good example of this stuff, which is actually quite successful, is RemixAmerica.org; you can check it out. That's a site that Kaltura has developed for over a year. People for the American Way is a left-wing organization with a right-wing name; it's actually very liberal. Norman [Manasa] owns a copy of the Constitution, that's the type of person he is, and rumor is he lost it, like one of the only copies left. He's very into this notion of democratizing media, the notion of "we're losing a whole generation here of people that don't have access to the moon landing or the JFK assassination and Martin Luther King movement" that some of us in this room are old enough to remember. In the age of the radio, at least that's what we hear from our (grand-)parents, they remember listening to these things, but things like the moon landing are very visual experiences. Remix America is a site that takes that, uses classic text and visual material from American encyclopedias, and allows people to remix it. It's interesting to see how people are co-opting it and using it. That summer in Chicago, for example, there was a media summer camp for people learning film-making, and they used the Remix America platform as a learning tool. So absolutely yes.

1:08:46 to 1:10:38

Shay David: We are very, very interested in exactly that next-generation level of technology, and trust me, I don't want to interrupt this conversation; it's a very large area. Automatic translation, transcription, subtitles, metadata extraction, phonetic indexing, visual recognition, and other technologies could be very interesting depending on what you do. Take visual recognition: you could look at a movie and say "here's a person, and here's the background, and guess what, is that person a man or a woman?" "Guess what, we can tell the difference between a picture of Michael Jackson in the background and the Statue of Liberty." If you start generating a stream of metadata, depending on the clarity and the types of objects that appear, it could be quite granular. You could recognize specific people if it knows that's your head and ask "what's the background?", so if you want to add another data layer, you can recognize what's foreground and what's background and stuff like that. A lot of it sounds like science fiction, but this is stuff that's already very close to actually being used.
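
As a rough illustration of "generating a stream of metadata" from video, the sketch below runs a stock OpenCV face detector over sampled frames and emits time-coded records. The input filename and sampling rate are assumptions; a real recognition pipeline of the kind Shay describes would be far more involved.

```python
# Emit a time-coded metadata stream from a video file: one record per
# sampled frame, listing detected face bounding boxes.
import cv2

def face_metadata_stream(video_path: str, every_n_frames: int = 30):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % every_n_frames == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            t = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
            # One metadata record per sampled frame: timestamp + face boxes.
            yield {"time": t, "faces": [tuple(map(int, f)) for f in faces]}
        frame_no += 1
    cap.release()

for record in face_metadata_stream("meetup.ogv"):  # hypothetical input file
    print(record)
```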

1:10:38 to 1:10:58

Audience: I know this is a little far-fetched, but given the way a lot of facial recognition works with pictures today, which is just starting, where you're able to tag around the picture, do you think there will be a time when we can tag around items in the video? Soon?

1:10:58 to 1:11:17

Shay David: Absolutely. We're announcing a partnership with a company called [], next Wednesday I think, and soon thereafter we will be able to continue on that. Will we be able to recognize a person? No, but we'll be able to recognize stuff like that. And once you get that, you can take it to the next level and say "we want to make that information actionable," whether it's in a commercial context, like "buy the object that you see" or hover over it and get metadata, or in an educational context, maybe link-through, tailor your own adventure, like leafing through a book jumping back-and-forth.
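
A small sketch, with hypothetical names, of what "tagging around items on the video" and making the tags actionable might look like as a data model: each tag is a time window plus a screen region plus a URL to act on when the viewer hovers or clicks.

```python
# An annotation model for actionable in-video tags: a labeled region that
# is active for a time window and carries a link-through or purchase URL.
from dataclasses import dataclass

@dataclass
class VideoTag:
    label: str
    start: float                    # seconds
    end: float
    box: tuple[int, int, int, int]  # x, y, width, height in pixels
    action_url: str                 # where a click/hover should take the viewer

def tags_at(tags: list[VideoTag], t: float, x: int, y: int) -> list[VideoTag]:
    """Return the tags whose time window and region contain the cursor."""
    hits = []
    for tag in tags:
        bx, by, bw, bh = tag.box
        if tag.start <= t <= tag.end and bx <= x <= bx + bw and by <= y <= by + bh:
            hits.append(tag)
    return hits

tags = [VideoTag("statue of liberty", 12.0, 30.0, (400, 80, 120, 200),
                 "http://example.org/statue-of-liberty")]
print(tags_at(tags, t=15.0, x=450, y=150))  # hovering over the tagged object
```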

All videos and text are published under the CC-BY 3.0 U.S. or CC-BY-SA 3.0 copyright licenses.