Levelling off of the increase is primarily due to fewer XO-listed courses being enabled
One of the largest collections of lecture recordings in the world (> 80,000 individual recordings)
Streaming between 700 GB and 1.4 TB of data a day (Videotron/Bell quotas are 50 GB/month)
Equivalent to between 8,000 and 14,000 hours of content consumed per day
In use/accessed by > 20,000 enrolments
ped·a·go·gy [ped-uh-goh-jee, -goj-ee] noun, plural -gies. 1. the function or work of a teacher; teaching. 2. the art or science of teaching; education; instructional methods.
Classroom attendance
Am I being recorded? End of recording?
Med / Dent / non-regular classrooms
Flashing light at back of lecture hall
Confidence monitor? No sound
Edit / change instructor names, metadata
Edit / trim start/end of recordings
Is decreased attendance such a bad thing? Trends toward smaller classroom sizes with more interaction between instructors and students. Students who are present are more focused/dedicated to being there, with fewer distractions. Delayed/selective publishing of recordings. Focus with tools like clickers for attendance.
Course configuration parameters: empowering instructors to manage their own course configuration parameters.
Default publishing state
Enabling of downloads, etc.
Notification settings on recording availability
Recordings editor: enable/disable individual recordings once published into the IMMS
Metadata-based editing of all fields related to an individual recording:
Enabled/disabled state
Recording name
Recording type (lecture, tutorial)
Change/edit instructor name
Description of recording (searchable parameters)
Provides the ability to set in and out points to trim a recording to. Requires server-side processing capabilities. Requires user/instructor intervention. Requirements on role management.
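The server-side processing step could be sketched with a tool like ffmpeg. A minimal example, assuming ffmpeg is available on the server; `build_trim_cmd` and `trim_recording` are hypothetical names, not part of the platform:

```python
import subprocess

def build_trim_cmd(src, dst, in_point, out_point):
    """Build an ffmpeg command that trims src between in_point and
    out_point (seconds) into dst, stream-copying to avoid a re-encode."""
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-ss", str(in_point),   # in point chosen by the instructor
        "-to", str(out_point),  # out point chosen by the instructor
        "-c", "copy",           # no re-encode: a cheap server-side job
        dst,
    ]

def trim_recording(src, dst, in_point, out_point):
    # The actual server-side processing step: run ffmpeg and fail loudly.
    subprocess.run(build_trim_cmd(src, dst, in_point, out_point), check=True)

# e.g. keep minutes 2..50 of a 1-hour lecture:
# trim_recording("lecture.mp4", "lecture_trimmed.mp4", 120, 3000)
```

Because stream copy avoids re-encoding, the trim is fast, at the cost of cut points snapping to the nearest keyframe.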
Allow recordings to be published into a disabled state: present in the course context, but NOT viewable/accessible to students
cap·tion [kap-shuhn] noun 1. a title or explanation for a picture or illustration, especially in a magazine. 2. a heading or title, as of a chapter, article, or page. 3. Movies, Television. the title of a scene, the text of a speech, etc., superimposed on the film and projected onto the screen. 4. Law. the heading of a legal document stating the time, place, etc., of execution or performance. verb (used with object) 5. to supply a caption or captions for; entitle: to caption a photograph.
Determine Captioning need and expectations Target audience Accuracy rating Level of involvement (hands on/off approach) Demonstrate Cool Platform Captioning Capabilities and associated Modules Discuss Implementation options and strategies
For TV/movie content: CRTC/FCC mandated; typically embedded in the TV signal; typically done in post-production, or near real-time. For the Web? No one standard, many different player options (Flash, QuickTime, Silverlight, HTML5), and many different caption formats: DFXP, SRT, SAMI, ...
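Of those formats, SRT is the simplest: numbered cues, each with a `HH:MM:SS,mmm --> HH:MM:SS,mmm` timing line and its text. A minimal generator sketch (`srt_timestamp` and `to_srt` are illustrative names, not platform APIs):

```python
def srt_timestamp(seconds):
    """Format a time in seconds as the SRT HH:MM:SS,mmm timestamp."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues):
    """cues: list of (start_s, end_s, text) tuples -> SRT document string."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"
```

For example, `to_srt([(0, 2.5, "Welcome to the lecture.")])` yields one cue timed `00:00:00,000 --> 00:00:02,500`.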
Native CC file integration Automated workflow to integrate ASR data in order to jump start crowd-sourced closed captioning. Web Based Caption Creator Web based CrowdCaptionCorrector (CCC)
Accessibility compliance (Section 508 standards): captioning for students with disabilities. Provides an additional/complementary learning modality for foreign-language students.
“Very good recording program! What I like about it is the captions. Sometimes, when I re-listen to the recording, I have to rewind to verify what the professor said. With the captions in this program, I can simply pause the recording and read them. This helps me to save more time. As for the searching option, I believe that it is very useful when I need to look up a specific topic from the class lectures. Thank you.” - Han Julie Do
Vocabulary augmentation via the OCR module. ASR feedback mechanism: submission of corrected captions to improve accuracy over time. Web-based instructor training tool to generate customized speaker profiles. Because captions are not static files but live in a database available to the Crowd-Captioning interface, they are constantly getting better.
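The "living captions" idea can be sketched as a per-segment table that keeps the raw ASR output alongside the latest crowd correction, so corrections can feed back into the recognizer. The schema and function names below are hypothetical, not the platform's actual database:

```python
import sqlite3

# Hypothetical schema: one row per caption segment, keeping both the raw
# ASR text (for the feedback mechanism) and the latest crowd correction.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE captions (
    segment_id   INTEGER PRIMARY KEY,
    recording_id TEXT,
    start_s REAL, end_s REAL,
    asr_text     TEXT,               -- original ASR output, never overwritten
    current_text TEXT,               -- what the player displays
    revision     INTEGER DEFAULT 0)""")

def submit_correction(segment_id, corrected_text):
    """Apply a crowd correction: update the displayed text and bump the
    revision, leaving asr_text intact for later ASR retraining."""
    db.execute(
        "UPDATE captions SET current_text = ?, revision = revision + 1 "
        "WHERE segment_id = ?",
        (corrected_text, segment_id))
    db.commit()
```

Keeping the original ASR text next to each correction is what makes the "submission of corrected captions to improve accuracy over time" loop possible.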
Integration of third-party tools/appliances: the DocSoft AV platform. A single 1RU appliance can generate speaker-independent ASR text and can process 22-24 hours of content per day (a 1:1 ratio). Testing integration options for MAVIS (Microsoft Research) as well as Dragon Dictate from Nuance.
Why use students? Students are in effect "subject matter experts": they understand the context better than any third-party translator, and they know the vocabulary and the speaker. Turnaround time: with the work distributed across a number of students, high accuracy can be reached fairly quickly.
For Pay
For Grade
For Recognition
For Benefit...
For "Play"
As a service offered by the OSD for students with learning difficulties, the University or a department could hire "reviewers" or assign dedicated editors to review and ascertain the accuracy of the caption data. PRO: you know it will get done, in a predetermined/predictable time frame. CONS: could be expensive and time-consuming, with lower accuracy from non-subject-matter experts.
Participation marks (akin to the use of clickers for a presence grade). As "assignments" in language/linguistics departments: captioning "segments" of recordings (*correcting translations, language departments).
By peers / instructors, by...
- being posted as participants
- community recognition, corrector score (level 1 for 500 corrections, levelling up)
Leveraging the student group NTC Writers, who are typically doing this already, empowering them as "Editors".
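The corrector-score idea could be computed from level thresholds. Only the level-1 threshold (500 corrections) comes from the deck; the doubling thresholds beyond it are an assumption for illustration:

```python
# Level 1 at 500 corrections (from the slide); subsequent thresholds are
# assumed to double each level, purely for illustration.
LEVEL_THRESHOLDS = [500, 1000, 2000, 4000]

def corrector_level(corrections):
    """Return the community-recognition level earned for a correction count."""
    level = 0
    for threshold in LEVEL_THRESHOLDS:
        if corrections >= threshold:
            level += 1
    return level
```

A student with 499 corrections is still level 0; crossing 500 levels them up, giving a visible score to recognize.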
"Gamification typically involves applying game design thinking to non-game applications to make them more fun and engaging." Based on the core crowd-sourced framework: ongoing, and always available as part of the enhanced lecture recording player. Dedicated Caption-It / Review-It interface. For example: presenting a 5-caption revision window before being able to get access to recordings.
Segments of the transcript, or caption chunks, are stored in the database with their timing information. The Caption-It web app finds segments which have not been reviewed and assigns them randomly to different users.
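That assignment step can be sketched in a few lines; `assign_segments` and the segment fields are hypothetical names standing in for the Caption-It database:

```python
import random

def assign_segments(segments, user_id, batch_size=5):
    """Pick batch_size unreviewed caption segments at random for one user,
    mirroring how Caption-It hands out work."""
    unreviewed = [s for s in segments if not s["reviewed"]]
    batch = random.sample(unreviewed, min(batch_size, len(unreviewed)))
    for seg in batch:
        seg["assigned_to"] = user_id  # record who is reviewing this chunk
    return batch
```

Random assignment spreads each recording's segments across many users, so no single student has to carry a whole lecture.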
Correcting 5 captions typically takes a little under 1-2 minutes. There are approximately 500 caption segments per hour of recording, so ~100 students each "playing" one 5-revision round of Caption-It would cover an entire 1-hour recording.
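The back-of-the-envelope arithmetic, using the upper bound of the slide's 1-2 minute estimate:

```python
segments_per_hour = 500   # caption segments in 1 hour of recording
batch_size = 5            # captions corrected per Caption-It round
minutes_per_batch = 2     # upper bound of the slide's 1-2 minute estimate

rounds_needed = segments_per_hour / batch_size       # 100 rounds
crowd_minutes = rounds_needed * minutes_per_batch    # ~200 crowd-minutes total
```

So one round each from 100 students, or about 200 crowd-minutes in total, fully reviews an hour of recording.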
OCR: Optical Character Recognition. Ability to search/recognize visual material presented/captured in a recording for keywords. Works across different mediums, from web pages to PDFs and PPTs, even in some cases hand-written acetates.
PROS:
Works very well, with high accuracy, on clearly legible fonts
User independent, no interaction required to get results
Will attempt to recognize everything: handwriting, figures, menus
CONS:
Tries to recognize anything, including text on the desktop, menus, etc.
Can't recognize everything
Can give redundant results
Basic search ability, on standard recordings metadata (date, type, descriptions) Return results for keywords on a per recording or course wide basis Pluggable expandability to return results across index-able data sources Caption data (what was said) Slide content data (what was shown)
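The pluggable expandability could look like a set of index sources answering one search interface. A minimal sketch; the `search` function and the source layout are illustrative assumptions, not the platform's API:

```python
def search(sources, keyword):
    """Query every pluggable index source and merge the hits.

    sources maps a source name ("captions", "slides", ...) to a list of
    (recording_id, time_s, text) entries; a real deployment would back
    these with the ASR caption index and the OCR slide index.
    """
    hits = []
    for name, entries in sources.items():
        for recording_id, time_s, text in entries:
            if keyword.lower() in text.lower():
                hits.append((name, recording_id, time_s, text))
    # Order hits by recording, then by time within the recording.
    return sorted(hits, key=lambda h: (h[1], h[2]))
```

For example, searching "entropy" across both a caption index and a slide index returns every timestamped occurrence, whether it was said or shown.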
Deploy CC and ASR/OCR on a number of courses. Deploy the CCC module to increase accuracy and evaluate student engagement. Look into: evaluating methods of incentivizing students and departments (linguistics/language); evaluating methods of "participation marks".
Ability to do machine based automated translation of captioned data FR / DE / ES Possibility to have that integration with language department
Ability to search Visual content presented (via OCR) Spoken content (ASR / time synched transcript data) User generated content, comments, attachments
Record Bookmarks into a recording Questions, Review Items, Answer to a Question Allow students to interact socially on the lecture recordings Ability to see clusters in the user activity on the timeline
Visualize recording usage Historical data integration
Required for user tracking Required for authentication Required for authorization