Kaltura Meetup on Nov 10th, 2009 :: 0:58:18 to 1:18:18
Total video length: 1 hour 47 minutes

0:51:00 to 1:11:17
Title: Group discussion

A give-and-take discussion about technology relating to open video.

0:56:01 to 0:58:18

Ben Moskowitz: So really quickly, I want to give some examples of what I mean by open video, to fill it in a little bit. So George, I think what you're doing is like a perfect example of what the future of video should look like. You've talked a little bit about what OpenMeetings.org does, but do you want to give the pitch again – like "how does it work?" and "what makes it unique?"

George Chriss: Sure, let me come up front.
Leah Belsky: Are you going to take yourself off-camera?
GC: I never do…

So, um, yeah. Two or three years ago I wouldn't have been able to tell you that my future has a lot to do with video; at the time, [I was] just working with open technologies, [focusing on] the community ethics that were touched on here. That said, video is poised for explosive growth, and it does matter how that works on technical, legal, and social levels. The Open Video Conference, I think, was very high-impact in terms of starting the conversations that are needed to make this actually happen. One week prior to that conference I actually started the OpenMeetings.org website, on the basis that, working as a volunteer, I had recorded a whole bunch of meetings, [with the goal of] extending the conversations in those meetings to other communities. (Actually the same community too, because Penn State is very unique in the sense that you have organizational turnover every 4-5 years – 25% of the brightest and most experienced people leave the organization – so it's very punishing for any sort of organizational memory.) That's how I got started, at least, and from there I try to do this as quickly as I can, helping facilitate or contribute in some way at the technical level, making it really easy to work with open formats and actually getting people to use them. You can shoot all the video footage in the world, but unless you actually make it compelling it doesn't really count. That's at least one of the challenges.

0:58:18 to 1:00:16

The other challenge is, "how do you get these communities – which haven't talked to each other yet, because everyone's so new – talking to each other?" In that sense it's been very exciting to just show up at conferences across a whole broad range of topics of societal interest and to start this, and to start it in earnest. The Open Video Conference was the first conference I showed up to (I had one camera there) – unfortunately, there's still a video backlog, so if you are interested in volunteering some of your time I'd be happy to give you lots of video to edit. Then there was the 2009 NYC Wiki Conference (Wikipedia/Wikimania – the people that lead those communities – here in New York City), then also other conferences. One was from a new non-profit organization called the Open Forum Foundation: they had a whole conference about Congress, geared towards essentially re-engineering the basic constituent-representative relationship and deploying web 2.0 technologies into the actual legislative process, to make government work better than it does now (across a whole range of different offices and organizations).

Ben Moskowitz: So OpenMeetings.org uses the MetaVidWiki code base, right?

George Chriss: Yes; one more conference, then I'll wrap that up. There was also PublicMediaCamp down in DC, which is geared towards discussing how non-profit, PBS-style radio and media stations will actually interact with this, because they will be among the first ones active in this space. They have their own challenges, financial or otherwise. There's a lot of interest there.

1:00:16 to 1:00:52

George Chriss: To answer the original question, to come full-loop, of what I do: I record these meetings, edit them professionally, and that goes to Ogg Theora, which is a new video format – the open standard for open video at this point in terms of openness.
Audience member: What is it?
George Chriss: Ogg Theora. T-H…
Audience: Ogg?
George Chriss: Theora.
Audience: Ogg Theora? I know Ogg, but I've never heard of Ogg Theora.
George Chriss: Ogg Vorbis is the audio equivalent; Theora is the video complement. That's new – that was one of the premises of the Open Video Conference.
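
[Editor's note: for the curious, the encoding step can be reproduced with stock tools. A minimal sketch, assuming an ffmpeg build with libtheora and libvorbis on your PATH – the filenames and quality settings here are illustrative, not the settings used for OpenMeetings.org:]

    import subprocess

    def encode_to_ogg_theora(src: str, dst: str) -> None:
        """Transcode a source video to Ogg Theora video + Ogg Vorbis audio."""
        subprocess.run(
            [
                "ffmpeg",
                "-i", src,            # input file (any format ffmpeg can read)
                "-c:v", "libtheora",  # Theora video codec
                "-q:v", "7",          # video quality, 0 (worst) to 10 (best)
                "-c:a", "libvorbis",  # Vorbis audio codec
                "-q:a", "5",          # audio quality
                dst,                  # output container, e.g. "meeting.ogv"
            ],
            check=True,  # raise if ffmpeg exits with an error
        )

    encode_to_ogg_theora("meeting_raw.mov", "meeting.ogv")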

1:00:52 to 1:02:04

Anyway, so from there it's brought into this wiki – it is a video wiki, in that all the meetings are available and they play back [in-browser]. The transcripts don't yet exist, and so the challenge is to make the transcripts exist; that's exactly what OpenMeetings.org is. It's just about growing that project and broadening the conversations.

Ben Moskowitz: I would say that OpenMeetings.org and MetaVid are two really good examples of websites that are lighthousing what you can do with video. In both cases you have just a wealth of content, hours and hours and hours of footage, and if there are accompanying transcripts, then instead of having to watch 20-40 hrs. of content until you find out what the most interesting things are, you can search the transcripts and find that "oh, this thing that I'm really interested in comes in at 13:15," and you can jump to that part, or reference that part, or just pull that part out if you're interested – in ways that you couldn't if the video was just being served in a black box, without all these cool, extensible web 2.0 things attached to it.
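
[Editor's note: a toy sketch of the search Ben describes – the data layout is invented for the example and is not the MetaVidWiki implementation. A timed transcript reduces to (start time, text) segments that can be scanned for a phrase:]

    from dataclasses import dataclass

    @dataclass
    class Segment:
        start: str  # "H:MM:SS" offset into the video
        text: str   # transcribed speech for this span

    # Invented sample data in the spirit of a meeting transcript.
    transcript = [
        Segment("0:58:18", "how do you get these communities talking"),
        Segment("1:00:16", "I record these meetings and encode to Ogg Theora"),
        Segment("1:13:15", "the part I'm really interested in, open licensing"),
    ]

    def find(phrase: str, segments: list[Segment]) -> list[str]:
        """Return the start time of every segment containing the phrase."""
        phrase = phrase.lower()
        return [s.start for s in segments if phrase in s.text.lower()]

    print(find("Ogg Theora", transcript))  # -> ['1:00:16']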

1:02:04 to 1:02:45

Audience: I was doing a lot of research working in a library for a couple of years. So you're saying that in the future, instead of "reading through magazines," we could actually search right into media/video content?
George Chriss: That's possible now, today.
Ben Moskowitz: If that's the thing that you're interested in, I would check out MetaVid.org [and Kaltura.org].

1:02:45 to 1:04:31

Audience: Can I ask a question? The transcription part of it – do you actually have a full transcription, or just the metadata and keywords figured out? And is that an automatic process, or is somebody transcribing it?
George Chriss: There's a bunch of 'nos' to answer that question. 'No,' the transcripts don't exist – many don't at this point, because I'm the only one who transcribes them thus far; 'yes,' the video can be broken into categories, so that you can say "this is a 20-minute Q&A"; within that there are clips – "OK, this person is speaking," so it shows a picture of the person; [finally,] "here's a full-text transcription of what they said, including keywords that you can categorize all of this stuff on."
Audience: So how do you get the full–text transcription?
George Chriss: You type very fast.
Audience: That's what I didn't want to hear!
Shay David: There are also tools – one of the purposes of the [Kaltura] app exchange is [the listing of] a company that I think you'll be very excited to meet <Leah Belsky: They can type faster than you!> that does phonetic indexing, which allows you to index and keyword everything. They are basically able to do phonetic searches with a learning matrix, so you would be able to connect George's technology with another technology, and when we've finished launching this it's going to be something that you don't even need to program to be able to do; it's going to be "check the box, give me that, give me that [search capability]." "Press the button." But we're going to come to that.
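
[Editor's note: Shay doesn't name the company or its method. As a toy stand-in for what "phonetic indexing" means, here is a sketch using a simplified Soundex code: words that sound alike hash to the same key, so even a misspelled search term can still find the right moment in a transcript.]

    from collections import defaultdict

    def soundex(word: str) -> str:
        """Simplified Soundex: a letter plus up to three digits."""
        codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
                 **dict.fromkeys("dt", "3"), "l": "4",
                 **dict.fromkeys("mn", "5"), "r": "6"}
        word = word.lower()
        digits = [codes.get(ch, "") for ch in word]
        out, prev = [], digits[0]
        for d in digits[1:]:       # collapse adjacent duplicate codes
            if d and d != prev:
                out.append(d)
            prev = d
        return (word[0].upper() + "".join(out) + "000")[:4]

    # (start time, transcribed text) pairs, invented for the example.
    segments = [
        ("1:00:16", "that goes to Ogg Theora a new video format"),
        ("1:02:45", "full text transcription with keywords"),
    ]

    index = defaultdict(set)       # phonetic key -> segment start times
    for start, text in segments:
        for word in text.split():
            index[soundex(word)].add(start)

    # A phonetically close misspelling still finds the right segment.
    print(index[soundex("Teora")])  # -> {'1:00:16'}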

1:04:31 to 1:05:06

Audience: I think having those transcripts and increasing searchability is really compelling. I mean, I don't really watch videos, because the information density is so low if you have to watch the whole thing to get what you're looking for. It's really nice to be able to go to Google and type something in, and it magically searches through PowerPoints and PDFs and all these things that used to be not all that indexable. I have to admit that I'm a geek, but I forget that I can't actually Google my physical book, and get really sad. I think that being able to search that video content will make videos a lot more informative for people. That's very cool.

1:05:06 to 1:06:20

George Chriss: I'll actually add to that: the largest challenge at this point is availability, in terms of actually having video to start with; the next is discoverability, to actually find these moments of interest; and after that you can talk about exciting work like pattern recognition. For example, I can guarantee you that future conversations about environmental sustainability at the local level – you know, they will repeat themselves across local communities, and you can start to see patterns and recognize all sorts of cool trends. You can do all sorts of crazy stuff when you think into it. I just saw a newspaper article saying that you can cough into an iPhone and, depending on how you cough, it tells you what kind of disease you have.
Ben Moskowitz: That doesn't seem very reliable!
George Chriss: Well, maybe, maybe not! Based off of statistically–gathered coughs—
Audience: What was the developer thinking‽
George Chriss: I had to chuckle to myself, because "if I had a nickel for every time somebody coughed on-video, I could probably do accurate disease prediction."
George Chriss: I think that I've taken up enough time at this point, but if there's any more questions?
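
[Editor's note: a tiny illustration of the trend-spotting George describes – dates, texts, and the topic are all invented for the example. Once transcripts exist, you can count how often a topic surfaces in each meeting and watch the pattern over time:]

    from collections import Counter

    # (meeting date, transcript text) pairs, invented for the example.
    meetings = [
        ("2009-06", "panel on open video and sustainability goals"),
        ("2009-09", "budget meeting, sustainability plan, recycling pilot"),
        ("2009-11", "sustainability report and open formats discussion"),
    ]

    def topic_trend(topic: str, meetings) -> Counter:
        """Count mentions of a topic per meeting date."""
        trend = Counter()
        for date, text in meetings:
            trend[date] += text.lower().count(topic.lower())
        return trend

    print(topic_trend("sustainability", meetings))
    # -> Counter({'2009-06': 1, '2009-09': 1, '2009-11': 1})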

1:06:20 to 1:07:22

Audience: First, I don't think that people so much need word-for-word transcriptions as they need summaries of each several-minute segment. <George Chriss: Yeah, that's true.> I assume you've seen FORA.tv? There, lectures and presentations are broken up into chapters and you have at least a title for each chapter, so you can see that a 1 hr. talk has like 7 to 13 parts to it. And then you can also grab – I haven't really tried to do this or seen other people doing it, but they have the capability – you can take Part 03 from Naomi Wolf's presentation, or Part 02 from Lawrence Lessig's presentation, and put together your own amalgam of parts. Is Kaltura doing tools like that?

1:07:22 to 1:08:46

Shay David: Absolutely. We have someone working on tools like that. A really good example of this stuff – one that's actually quite successful – is RemixAmerica.org; you can check it out. That's a site that Kaltura has developed for over a year. People for the American Way is a left-wing organization with a right-wing name; it's actually very liberal. Norman [Manasa] owns a copy of the Constitution – that's the type of person he is – and rumor is he lost it, like one of the only copies left. He's very into this notion of democratizing media, the notion of "we're losing a whole generation here of people that don't have access to the moon landing or the JFK assassination and the Martin Luther King movement" that some of us in this room are old enough to remember. In the age of the radio – at least that's what we hear from our (grand-)parents – they remember listening to this, and things like the moon landing are very visual experiences. Remix America is a site that takes classic text [and] visual material from American encyclopedias and allows people to remix it. It's interesting to see how people are co-opting it and using it. That summer in Chicago, for example, there was a media summer camp for people learning film-making, and they used the Remix America platform as a learning tool. So absolutely yes.
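
[Editor's note: to make the "amalgam of parts" idea concrete – a sketch only, not the Remix America or Kaltura data model, with all titles and timings invented – a remix can be represented as an ordered list of references into source videos, each with in/out points:]

    from dataclasses import dataclass

    @dataclass
    class Part:
        source: str   # URL or identifier of the source video
        label: str    # human-readable chapter name
        start: float  # in-point, in seconds
        end: float    # out-point, in seconds

    # A remix is just an ordered list of parts from different sources.
    remix = [
        Part("fora.tv/naomi_wolf_talk", "Part 03", 312.0, 498.5),
        Part("fora.tv/lessig_talk", "Part 02", 140.0, 265.0),
    ]

    def total_runtime(parts: list[Part]) -> float:
        """Runtime of the stitched-together remix, in seconds."""
        return sum(p.end - p.start for p in parts)

    print(f"{total_runtime(remix):.1f}s")  # -> 311.5s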

1:08:46 to 1:10:38

Shay David: We are very, very interested in this conversation – and trust me, I don't want to interrupt it – in exactly that next-generation level of technology, and it's a very large area: automatic translation, transcription, subtitles, metadata extraction, phonetic indexing, visual recognition, and other technologies that could be very interesting depending on what you do. With visual recognition, you could look at a movie and say "here's a person, and here's the background – and guess what, is that person a man or a woman?" "Guess what, we can tell the difference between a picture of Michael Jackson in the background and the Statue of Liberty." If you start generating a stream of metadata, depending on the clarity and the types of objects that appear, it could be quite granular. You could recognize specific people – if it knows that's your head, "what's the background?" – so if you want to add another data layer, you can recognize what's foreground and what's background and stuff like that. A lot of it sounds like science fiction, but already this is stuff that's very close to actually being used.
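
[Editor's note: an illustration of what such a metadata stream might look like – the fields, labels, and confidence values are invented for the example. Each recognition result becomes a timestamped record that downstream code can filter by layer:]

    from dataclasses import dataclass

    @dataclass
    class Detection:
        time: float        # seconds into the video
        label: str         # what the recognizer thinks it sees
        layer: str         # "foreground" or "background"
        confidence: float  # 0.0 to 1.0

    # Invented sample output of a frame-analysis pass.
    stream = [
        Detection(12.4, "person", "foreground", 0.93),
        Detection(12.4, "statue of liberty", "background", 0.71),
        Detection(47.0, "person", "foreground", 0.88),
    ]

    def background_labels(detections, min_conf=0.5):
        """Collect distinct background labels above a confidence floor."""
        return {d.label for d in detections
                if d.layer == "background" and d.confidence >= min_conf}

    print(background_labels(stream))  # -> {'statue of liberty'}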

1:10:38 to 1:10:58

Audience: I know this is a little far-fetched, but given the way facial recognition with pictures is just starting today, where you're able to tag around the picture, do you think there will be a time when we can tag around items in the video? Soon?

1:10:58 to 1:11:17

Shay David: Absolutely. We're announcing a partnership with a company called [], next Wednesday I think, and soon thereafter we will be able to continue on that. Will we be able to recognize a person? No, but we'll be able to recognize stuff like that. And once you get that, you can take it to the next level and say "we want to make that information actionable" – whether it's in a commercial context, like "buy the object that you see" or hover over it and get metadata, or in an educational context, maybe link-through, tailor your own adventure, like leafing through a book jumping back and forth.
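
[Editor's note: a minimal sketch of the "tag around items in video" idea – a hypothetical structure, not the unnamed partner's product. Annotations carry a time window and a bounding box, and a click at playback time t, position (x, y) is matched against them:]

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Annotation:
        label: str  # e.g. a product link or a metadata pop-up
        t0: float   # time window start, seconds
        t1: float   # time window end, seconds
        x0: float   # bounding box, normalized 0-1 coordinates
        y0: float
        x1: float
        y1: float

    annotations = [
        Annotation("blue jacket (buy)", 30.0, 45.0, 0.2, 0.1, 0.5, 0.8),
    ]

    def hit_test(t: float, x: float, y: float,
                 anns: list[Annotation]) -> Optional[Annotation]:
        """Return the annotation under a click at time t, position (x, y)."""
        for a in anns:
            if a.t0 <= t <= a.t1 and a.x0 <= x <= a.x1 and a.y0 <= y <= a.y1:
                return a
        return None

    print(hit_test(33.0, 0.3, 0.4, annotations).label)  # -> blue jacket (buy)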

All videos and text are published under the CC-BY 3.0 U.S. or CC-BY-SA 3.0 copyright licenses.