
1 Today’s Topics Image and video processing Image and video applications

2 Image Processing: Color Color histograms – how much of each color is in the image – Probability of a pixel in the image being a particular color Color correlograms – how close colors are to each other in the image – Probability of finding a pixel of a particular color at a specific distance from a pixel of a known color
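
To make these two representations concrete, here is a minimal sketch (not code from the lecture) that computes a normalized color histogram and a simple color correlogram over a coarsely quantized image; the quantization level and the L-infinity neighborhood distance are illustrative assumptions.

    import numpy as np

    def quantize(img, levels=4):
        """Map an HxWx3 uint8 image to one coarse color index per pixel."""
        bins = (img // (256 // levels)).astype(np.int32)
        return bins[..., 0] * levels * levels + bins[..., 1] * levels + bins[..., 2]

    def color_histogram(img, levels=4):
        """P(pixel has color c): normalized counts of quantized colors."""
        q = quantize(img, levels)
        hist = np.bincount(q.ravel(), minlength=levels ** 3).astype(float)
        return hist / hist.sum()

    def color_correlogram(img, distance=1, levels=4):
        """P(pixel at L-inf distance d has color c_j | center pixel has color c_i)."""
        q = quantize(img, levels)
        n = levels ** 3
        counts = np.zeros((n, n))
        h, w = q.shape
        # Offsets on the ring at the given L-infinity distance.
        offsets = [(dy, dx) for dy in range(-distance, distance + 1)
                            for dx in range(-distance, distance + 1)
                            if max(abs(dy), abs(dx)) == distance]
        for dy, dx in offsets:
            src = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
            dst = q[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            np.add.at(counts, (src.ravel(), dst.ravel()), 1)
        row_sums = counts.sum(axis=1, keepdims=True)
        return counts / np.maximum(row_sums, 1)   # each row sums to 1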

3 Image/Video Processing: Subdividing Region subdivision – Sometimes we subdivide images into regions – Spread observed features at edges for more continuous model Temporal subdivision – Video is subdivided into segments – Spread features into neighboring segments

4 Image Processing: Foreground Background Separation Background Modeling – Convert to greyscale – Dynamic model (to cope with changes in signer body position and lighting): BP_t = 0.96 * BP_(t-1) + 0.04 * P Foreground object detection – Pixels that differ from the background model by more than a threshold are foreground pixels – Spatial filter removes regions of foreground pixels smaller than a minimum threshold Face location to determine position of foreground relative to the face Videos without a single main face are not considered potential SL (sign language) videos
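
A minimal sketch of this pipeline, assuming greyscale frames: the blend factor follows the slide's BP_t = 0.96 * BP_(t-1) + 0.04 * P update, while the difference threshold and minimum region size below are placeholder values.

    import cv2
    import numpy as np

    def update_background(background, frame_gray, alpha=0.04):
        """Blend the new greyscale frame into the per-pixel background model."""
        return (1.0 - alpha) * background + alpha * frame_gray

    def foreground_mask(background, frame_gray, diff_thresh=25, min_area=50):
        """Pixels far from the background model are foreground; drop tiny regions."""
        diff = np.abs(frame_gray.astype(np.float32) - background)
        mask = (diff > diff_thresh).astype(np.uint8)
        # Spatial filter: keep only connected components above a minimum size.
        num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        keep = np.zeros_like(mask)
        for i in range(1, num):           # label 0 is the background component
            if stats[i, cv2.CC_STAT_AREA] >= min_area:
                keep[labels == i] = 1
        return keep

    # Usage sketch: feed greyscale frames in order.
    # bg = first_frame_gray.astype(np.float32)
    # for frame_gray in frames:
    #     mask = foreground_mask(bg, frame_gray)
    #     bg = update_background(bg, frame_gray)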

5 Image Processing: Other Features Edge detection – Sobel filter Object and Face detection – Skintone models Face recognition Open Source Computer Vision (OpenCV)
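
As a small illustration of the OpenCV route mentioned here, the following sketch runs a Sobel filter to get an edge-magnitude image; the input filename and threshold are assumptions.

    import cv2
    import numpy as np

    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # assumed input file
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)        # horizontal gradient
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)        # vertical gradient
    magnitude = cv2.magnitude(gx, gy)                     # edge strength per pixel
    edges = (magnitude > 100).astype(np.uint8) * 255      # assumed threshold
    cv2.imwrite("edges.png", edges)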

6 Example Applications MediaGlow – Clustering and selecting images from collection DOTS – Interpreting surveillance video HyperHitchcock – Authoring and viewing interactive video

7 MediaGLOW: The Concept Task and Opportunity – Many tasks involving photo sharing or publication involve grouping candidate photos into categories and selecting from among these groups. – There are many different reasons to group photos: based on event, location, subject matter, visual characteristics, etc. – There is the potential for systems to observe earlier user activity and expression to aid their later activity.

8 MediaGLOW: Interpreting User Action Evolving Notion of Similarity via User Expression – Photos presented in a graph-based workspace with “springs” between each pair of photos. – Lengths of springs are initially based on a default distance metric derived from their time, location, tags, or visual features. – Users can pin photos in place and create piles of photos. – Distance metrics to piles change as new members are added, resulting in the dynamic layout of unpinned photos in the workspace.
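
The sketch below is a hedged illustration of these ideas, not MediaGLOW's actual code: spring rest lengths come from a distance metric over photo feature vectors, a pile's distance is re-derived from its current members, and one relaxation step nudges photos toward their rest lengths.

    import numpy as np

    def photo_distance(f_a, f_b):
        """Default distance between two photos from their feature vectors
        (e.g., time, location, tag, and visual features stacked together)."""
        return float(np.linalg.norm(f_a - f_b))

    def pile_distance(photo_features, pile_member_features):
        """Distance from a photo to a pile: average distance to current members,
        so the metric shifts whenever a new member is added."""
        return float(np.mean([photo_distance(photo_features, m)
                              for m in pile_member_features]))

    def spring_step(positions, rest_lengths, stiffness=0.05):
        """One relaxation step: move photos along each spring so the realized
        distance approaches the rest length given by the metric."""
        forces = np.zeros_like(positions)
        n = len(positions)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                delta = positions[j] - positions[i]
                dist = np.linalg.norm(delta) + 1e-9
                forces[i] += stiffness * (dist - rest_lengths[i, j]) * delta / dist
        return positions + forces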

9 MediaGLOW: Multi-faceted Visualization of Photo Collections System Expression through Neighborhoods – Each pile has a neighborhood of photos that are similar to the pile based on the pile’s unique distance metric. – Photos in a neighborhood are only connected to other photos in the neighborhood, enabling piles to be moved independently of each other. – Lingering over a pile visualizes how similar other piles are to that pile, indicating system ambiguity in categories.

10 DOTS: Supporting Use of Surveillance Video The problem – Number and size of surveillance systems are increasing but human attention is the limiting factor Approach – Provide summaries of action – Build interfaces knowing limits of automation – https://www.youtube.com/watch?v=9wVAVm8-bQ8

11 DOTS: The Main Interface Components – Rotating camera bank with activity graphs – Mixed-initiative main viewer – Map with tracking data – Timeline with automatic events

12 DOTS: Tracking Layout Difficulty in tracking is that camera views are often similar Tracking layout places cameras around the main viewer to aid tracking Study showed significant improvement in tracking success over traditional viewer In either layout, map can be used to find activity near a location and time.

13 HyperHitchcock: Interactive Video Issue – Vision: Seamlessly interact with characters in the show – Reality: Difficult to author even simple interactive videos Today, video is included within pages of content but links between playing videos are not common.

14 Support for Hypervideo Authoring Links in video can lead to other video segments – Short main video with branches providing additional detail – Hyperlinks to branches just like in Web pages – Making of a scene in a movie, biography of an actor, different camera angle General hypervideo difficult to author – Simple hypervideo format with only a single active link Novel approach: use automatic video analysis, create an easy-to-use interface, and support simple hypervideo format

15 Uses of Hypervideo Hypervideo well-suited for training video – Overview of topic with links to more detail – Viewers can choose video content based on their prior knowledge and current task Home video is more enjoyable if viewers can select content – Only a small portion of the video is really “usable” – Difficult to watch long home videos – Customize viewing experience for different viewers – More detail on sports events for some family members, more scenes with children for others

16 Detail-on-demand Hypervideo General hypervideo – Links from objects in video to other video – Requires object tracking – Requires interface for indicating and selecting from multiple links Detail-on-demand video – Single link from any video segment – No anchor regions to simplify viewing and authoring – https://www.youtube.com/watch?v=LaXZuLA-68g

17 Hierarchical Video with Links Video sequences are represented as a containment hierarchy of video elements – Elements are video clips or composites grouping other video elements – Elements are played in sequence Each element can be a link anchor or link destination The anchor for the innermost element is available while that element is playing After the link destination video is played, playback continues at the link anchor
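
A hedged sketch of this containment hierarchy, with hypothetical class names rather than Hyper-Hitchcock's own: elements are either clips or composites of other elements, and flattening a composite yields the sequence in which its clips play.

    from dataclasses import dataclass, field
    from typing import List, Union

    @dataclass
    class Clip:
        source: str        # media file
        start: float       # seconds into the source
        end: float

    @dataclass
    class Composite:
        children: List[Union["Clip", "Composite"]] = field(default_factory=list)

    def play_order(element) -> List[Clip]:
        """Flatten an element into the sequence of clips it plays."""
        if isinstance(element, Clip):
            return [element]
        order = []
        for child in element.children:
            order.extend(play_order(child))
        return order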

18 Detail-on-demand Links Any video clip or composite can be link anchor or link destination Optional link offsets into destination Links have labels Link return behaviors control the purpose of the link – Play from where the viewer left the video – Play from the end of the source anchor sequence – Play from beginning of the source anchor sequence – Stop playback Different behaviors for destination completion or aborted playback
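
The following sketch models such links with an enum of the four return behaviors listed above; the class and field names are my own, not Hyper-Hitchcock's.

    from dataclasses import dataclass
    from enum import Enum, auto

    class ReturnBehavior(Enum):
        RESUME_AT_DEPARTURE = auto()   # play from where the viewer left the video
        RESUME_AFTER_ANCHOR = auto()   # play from the end of the source anchor
        REPLAY_ANCHOR = auto()         # play from the beginning of the source anchor
        STOP = auto()                  # stop playback

    @dataclass
    class Link:
        anchor: object                 # clip or composite the link departs from
        destination: object            # clip or composite the link jumps to
        label: str
        dest_offset: float = 0.0       # optional offset into the destination
        on_complete: ReturnBehavior = ReturnBehavior.RESUME_AT_DEPARTURE
        on_abort: ReturnBehavior = ReturnBehavior.RESUME_AT_DEPARTURE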

19 Hyper-Hitchcock Editor & Player Goals Hyper-Hitchcock editor designed to – Reduce cost of producing interactive video – Make resulting video useful to wider audience Simple video player to simulate DVD viewing experience

20 Hyper-Hitchcock Editor Hyper-Hitchcock evolved from Hitchcock video editor Video clips grouped in piles by similarity (e.g., recording time) Workspace to arrange clips – Resize keyframes to trim clips – Clips ordered as horizontal or vertical lists – Place links between clips – Group clips into composites Tree view to visualize containment hierarchy of composites

21 Determining Suitable Video Clips Unsuitability score – Single score for video features such as camera motion and brightness – Estimate camera pan by shifting frames against each other – Require minimum brightness Determine clip boundaries – Select clips that fall in “valleys” between unsuitability peaks – Look for areas completely above the unsuitability threshold; each such peak is a clip boundary candidate – Enforce the minimum length requirement Trimming clips – Select portion of clip with minimal area under the curve – Expand the area for longer requested portions
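
A hedged sketch of the boundary-finding and trimming steps, operating on an assumed per-frame unsuitability array; the threshold and minimum length values are illustrative.

    import numpy as np

    def clip_boundaries(unsuitability, threshold=0.6, min_len=30):
        """Clips are the 'valleys' between regions that sit completely above
        the unsuitability threshold; valleys shorter than min_len frames are dropped."""
        below = unsuitability < threshold
        clips, start = [], None
        for i, ok in enumerate(below):
            if ok and start is None:
                start = i
            elif not ok and start is not None:
                if i - start >= min_len:
                    clips.append((start, i))
                start = None
        if start is not None and len(below) - start >= min_len:
            clips.append((start, len(below)))
        return clips

    def trim_clip(unsuitability, start, end, length):
        """Within one clip, pick the window of the requested length with the
        minimal area under the unsuitability curve."""
        best_s, best_area = start, float("inf")
        for s in range(start, end - length + 1):
            area = float(unsuitability[s:s + length].sum())
            if area < best_area:
                best_s, best_area = s, area
        return best_s, best_s + length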

22 Selected Clip Portions

23 Trimming Clips in the Workspace Best five seconds of clip selected by default Resizing keyframe changes length of clip – Picks the best portion around initial five-second portion – Start and end can jump to sentence boundary silence Clip start and/or end can be locked in timeline Locked ends can be dragged Audio energy visualized in timeline to spot words and sentences

24 Visualizing Video Composites Video clips to be grouped into a composite (keyframe area proportional to clip length) Composite visualizations – cross (area error minimized) – sliding dividers (area proportional to clip length)

25 Attaching Links to Clips and Composites Link anchors and destinations can be clips, composites, or elements inside composites Color-coding and position indicates link attachment in workspace Links in and out of composite – Blue: attached to composite – Red: attached to element – Dashed: between composite and element

26 Hypervideo Player Video player with controls for following and returning from links Several improvements based on user feedback – First version indicated links in timeline and showed the label for the active link – Next version showed labels in timeline – Current version includes keyframes for active link and for link history User study suggests further improvements

27 Navigation Aids in Hypervideo Player Keyframe list for navigation history – Shows followed links with more remote navigation indicated by size of keyframe – Label of followed link as video caption Link indicator in timeline (in blue) – Link labels and keyframes in timeline – Keyframe grows, label completes, and link recolored when active

28 Impressions from Users Pilot editing study with two participants – Subjects’ own video: report on a trip to Japan, family outing to a mountain bike race – Few problems but link return behavior was confusing for one participant Study of hypervideo player (6 subjects) – Plumbing training video re-authored as hypervideo – All participants were able to find answers quickly in the hypervideo (9 - 20 minutes in a 60-minute video) – Navigation through video can be confusing. As the player was altered to better support navigation, playback became less like video.

29 Generating Hypervideo Summaries Locating content in video is time-consuming. – Much effort has gone into generating “good” video summaries. – But what is good for one task is not good for another. Generate hypervideo summaries that allow users to determine the level of detail viewed.

30 Process for Generating Summaries Determine number of summary levels – Based on length of source video Select clips to include in each summary level – Clips found by subdividing takes by camera motion – Select clips via clip distribution, take distribution, or best-first algorithms Add links between summary levels – Group clips by takes – Links between clips from the same take using simple take-to-take or take-to-take with offsets algorithms

31 Clip Distribution Selection Selecting n clips out of m candidate clips – Evenly distribute among candidate clips – Selects more clips from takes with many recognized clips. – Good when a take includes more than one topic/activity. – Bad when lots of clips are for a single topic/activity.

32 Take Distribution for Clip Selection Evenly distribute in time and takes – Divide video duration into n time segments – Select clip nearest center of segment in take not already represented. – Bad when take includes more than one topic/activity. – Good when lots of clips are for a single topic/activity.

33 Best-first Clip Selection Assumes human or automated ordering of value of clips – Simply selects n best clips – Good in cases of edited video – not currently applicable for unedited video – Best can be introductory material – Best can be highlights of material
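
The three selection strategies from the last three slides can be sketched as follows; the clip representation (a dict with time, take, and score fields) is an assumption made for illustration.

    def clip_distribution(clips, n):
        """Pick n clips spread evenly through the candidate list."""
        step = len(clips) / n
        return [clips[int(i * step)] for i in range(n)]

    def take_distribution(clips, n, duration):
        """Divide the video into n time segments; in each, pick the clip nearest
        the segment center whose take is not yet represented."""
        selected, used_takes = [], set()
        for i in range(n):
            center = (i + 0.5) * duration / n
            candidates = [c for c in clips if c["take"] not in used_takes] or clips
            best = min(candidates, key=lambda c: abs(c["time"] - center))
            selected.append(best)
            used_takes.add(best["take"])
        return selected

    def best_first(clips, n):
        """Simply take the n highest-valued clips (human or automatic ordering)."""
        return sorted(clips, key=lambda c: c["score"], reverse=True)[:n]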

34 Links in Video Summary All clips from a take are grouped into composite Single clip from take or take composite is used as link anchor and destination Simple take-to-take algorithm – Links take composite to take composite – Best when single activity divided into multiple clips. Take-to-take with offsets algorithm – Each clip from take links to take composite in next level with offset to temporally closest clip. – Best when clips portray multiple activities in take. Complete takes for entire video

35 Automatically Generated Summary A four-level hypervideo summary of a one-hour source video, with summary levels of 0:33, 3:35, 14:44, and 60:42 along the time axis. Lower levels provide more detail.

36 Hypervideo in Practice

37 Experience with Use Nine hypervideos, each authored by 1-2 students in a Computers and New Media class Recorded up to 1 hour of video using a DV camera, then authored a hypervideo in Hyper-Hitchcock Students had about 1 week for the authoring activity.

38 Riding Down University Drive Most directly maps geographic structure Links are choices to stop in at sites along road.

39 “In Danger” Buildings Visits to buildings identified for demolition on TAMU campus. Links are to seeing interior and details.

40 Perspectives on Bridges Shows bridges between College Station and Austin. Presents different perspectives (roadway, construction, wildlife) in order. Links are to seeing different perspective.

41 International Dance Festival Shows bits of performances. Brief “how-to use hypervideo player” at start. Links are to seeing more and to next performance.

42 Game Walkthrough Shows what happens while playing game. Links represent choice points in game.

43 Music Hypervideo public-domain video, remixed audio

44 Home Hypervideo idiosyncratic structure

45 Roles of Links Detail Links Prerequisite Links Related Information Links Alternate View Links Action Choice Links

46 Summary Hyper-Hitchcock used to author documentary, how-to, music, and home hypervideos. Links in hypervideos used for: details, prerequisites, alternate views, action choices, and related information. Structures in hypervideo were impacted by the inclusion of return behaviors.

47 Conclusions Detail-on-demand video well-suited for training and home video – Simple interaction style appropriate for DVD-player interfaces – Enables wide range of authors due to emphasis on ease of learning and use over richness of interaction Hypervideo summaries – Remove need for single context-free video summary – Multiple clip selection and link generation algorithms – Act as starting point for human-authored summary Hyper-Hitchcock used to author documentary, how-to, music, and home hypervideos Links in hypervideos used for: details, prerequisites, alternate views, action choices, and related information

48 Today’s Topics Image and video processing – Color-oriented representations – Region and temporal segmentation – Foreground-background separation – Edge and face detection Image and video applications – MediaGlow – image selection – DOTS – surveillance – HyperHitchcock – interactive video
