Kaltura Meetup on Nov 10th, 2009 :: 1:04:31 to 1:05:06
Total video length: 1 hour 47 minutes





0:51:00 to 1:11:17
Title: Group discussion

A give-and-take discussion about technology relating to open video.

1:02:45 to 1:04:31

Audience: Can I ask a question? The transcription part of it, do you actually have a full transcription of it or just the metadata and keywords figured out? And is that an automatic process, or is somebody transcribing it?
George Chriss: There's a bunch of 'nos' to answer that question. 'No,' the transcripts don't exist; 'no,' many don't at this point, because I'm the only one who transcribes them thus far; 'yes,' the video can be broken into categories so that you can say "this is a 20-minute Q&A"; within that there are clips, "OK, this person is speaking," so it shows a picture of the person; [finally,] "here's a full-text transcription of what they said, including keywords that you can categorize all of this stuff on."
Audience: So how do you get the full-text transcription?
George Chriss: You type very fast.
Audience: That's what I didn't want to hear!
Shay David: There are also tools. One of the purposes of the [Kaltura] app exchange is [the listing of] a company that I think you'll be very excited to meet <Leah Belsky: They can type faster than you!>, one that does phonetic indexing that allows you to index and keyword everything. They are basically able to do phonetic searches with a learning matrix, so you would be able to connect George's technology with another technology. When we've finished launching this, it's going to be something that you don't even need to program to be able to do; it's going to be "check the box, give me that, give me that [search capability]." "Press the button." But we're going to come to that.
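[Editor's note: The phonetic indexing described above can be sketched with the classic Soundex algorithm, which maps similar-sounding words to the same short code so that a misspelled query still matches. This is only an illustrative stand-in: the company, its learning-matrix system, and any Kaltura API are not named in the transcript, and the timestamps below are invented for the demo.]

```python
from collections import defaultdict

def soundex(word: str) -> str:
    """Classic Soundex: similar-sounding words map to the same 4-character
    code. Simplified sketch -- it omits the special H/W adjacency rule."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    digits = [codes.get(c, "") for c in word]
    # keep a digit only when it differs from the previous letter's digit,
    # so runs of same-sounding consonants collapse to one
    out = [d for i, d in enumerate(digits)
           if d and (i == 0 or d != digits[i - 1])]
    code = "".join(out)
    if word[0] in codes and code.startswith(codes[word[0]]):
        code = code[1:]  # the first letter is kept literally, not as a digit
    return (word[0].upper() + code + "000")[:4]

# A toy index mapping phonetic codes to video timestamps: a misspelled
# search query lands on the same code as the correctly spelled word.
index = defaultdict(list)
for timestamp, word in [("1:02:45", "transcription"), ("0:51:00", "discussion")]:
    index[soundex(word)].append(timestamp)

print(soundex("transcription"))         # T652
print(index[soundex("transkripshun")])  # ['1:02:45']
```

A real phonetic-search system would be far more sophisticated, but the core idea is the same: normalize the audio-derived text and the query into a shared phonetic space before matching.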

1:04:31 to 1:05:06

Audience: I think having those transcripts and increasing searchability is really compelling. I mean, I don't really watch videos because the information density is so low if you have to watch the whole thing to get what you're looking for. It's really nice to be able to go to Google, and type something in, and it magically searches through PowerPoints and PDFs and all these things that used to be not-all-that indexable. I have to admit that I'm a geek, but I forget that I can't actually Google my physical book and get really sad. I think that being able to search that video content will make videos a lot more informative for people. That's very cool.

1:05:06 to 1:06:20

George Chriss: I'll actually add to that: the largest challenge at this point is availability, in terms of actually having video to start with; the next is discoverability, to actually find these moments of interest; and after that you can talk about exciting work like pattern recognition. For example, I can guarantee you that future conversations about environmental sustainability at the local level – you know, they will repeat themselves for local communities, and you can start to see patterns and recognize all sorts of cool trends. You can do all sorts of crazy stuff when you think about it. I just saw a newspaper article saying that you can cough into an iPhone and, depending on how you cough, it tells you what kind of disease you have.
Ben Moskowitz: That doesn't seem very reliable!
George Chriss: Well, maybe, maybe not! Based off of statistically-gathered coughs—
Audience: What was the developer thinking‽
George Chriss: I had to chuckle to myself because "if I had a nickel for every time somebody coughed on-video, I could probably do accurate disease prediction."
George Chriss: I think that I've taken up enough time at this point, but if there's any more questions?

All videos and text are published under the CC BY 3.0 US or CC BY-SA 3.0 copyright licenses.