
1 Towards real-time camera-based logo detection. Mathieu Delalandre, Laboratory of Computer Science, RFAI group, Tours, France. Osaka Prefecture Partnership meeting, Tours, France, Friday 9th of September 2011.

2 Towards real-time camera-based logo detection. Outline:
1. Introduction
2. Device synchronization for 3D frame tagging
3. Frame partitioning and selection

3 Towards real-time camera-based logo detection, "Introduction" (1). Logo detection from video capture, with handheld interaction, to display context-based information (tourist check points, bus stops, meals, etc.). This is a hard computer vision application, due to the complexity of the recognition task and the real-time constraints. To support real time, two basic paths can be considered: 1. reduce the complexity of the algorithms; 2. reduce the amount of data. [Pipeline diagram: Camera -> Frames -> Selection -> Pattern Recognition.]

4 Towards real-time camera-based logo detection, "Introduction" (2). Static object: no motion, hence no appearance change. Dynamic object: in motion, hence with appearance changes. With static objects, one capture (in time and space) can be enough for recognition, provided the recognition is perspective, scale and rotation invariant and no occlusions occur. The capture instant can be detected if the embedded system can track its own positioning, and if the objects are static. A self-tracking embedded system can therefore be set up for a single capture of each static object. It supports real-time recognition by reducing the amount of data to process, without missed cases (i.e. at least one capture is obtained). [Timeline diagram: positions of the object and of the camera at t0, t1, t2.]
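As an illustration of the single-capture idea, here is a minimal sketch, assuming the self-tracked pose is reduced to (x, y, z, yaw) and that the thresholds are hypothetical values; the system triggers a new capture only when no previous capture was taken from a similar viewpoint.

```python
import math

def needs_capture(current_pose, captured_poses,
                  pos_thresh=0.5, ang_thresh=math.radians(15)):
    """Return True if no previous capture was taken from a similar viewpoint.

    current_pose: (x, y, z, yaw) of the embedded system, from self-tracking.
    captured_poses: list of poses at which a capture was already taken.
    pos_thresh / ang_thresh: hypothetical tolerances (metres / radians).
    """
    for (x, y, z, yaw) in captured_poses:
        dist = math.dist((x, y, z), current_pose[:3])
        dyaw = abs((current_pose[3] - yaw + math.pi) % (2 * math.pi) - math.pi)
        if dist < pos_thresh and dyaw < ang_thresh:
            return False  # this viewpoint was already captured once
    return True
```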

5 Towards real-time camera-based logo detection. Outline:
1. Introduction
2. Device synchronization for 3D frame tagging
3. Frame partitioning and selection

6 Towards real-time camera-based logo detection, "Device synchronization for 3D frame tagging" (1). Camera device: captures images. Accelerometer device: measures proper acceleration. Gyroscope device: measures or maintains orientation. The combination of these devices allows frames to be tagged in 3D space. [Diagram: frame coordinates (x, y, z), embedded-system orientation and positioning (from a root point), at distance d from the frame.]
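A minimal sketch of what a 3D frame tag could look like; the field names below (timestamp, position, orientation, distance) are assumptions, the slide only specifies that the three modalities are combined per frame.

```python
from dataclasses import dataclass

@dataclass
class FrameTag3D:
    """One camera frame tagged in 3D space from the three devices."""
    timestamp: float     # synchronized capture time
    position: tuple      # (x, y, z) of the embedded system, integrated from the accelerometer
    orientation: tuple   # (roll, pitch, yaw) from the gyroscope
    distance: float      # distance d from the system to the frame

@dataclass
class TaggedFrame:
    image: bytes         # raw camera frame
    tag: FrameTag3D      # its 3D tag
```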

7 Towards real-time camera-based logo detection, "Device synchronization for 3D frame tagging" (2). Most commercial wearable systems (e.g. smartphones) can support frame tagging, but the multimodality is designed separately per sensor, not as a combination of these modalities. Device synchronization is not done at the hardware level and must be achieved at the operating-system level. How to do it? [Diagrams: polling exchange with a device (accelerometer, gyroscope), with the device controller driven by the CPU over data/control lines to memory; DMA exchange with a device (camera), with the device controller writing to memory through DMA and signalling the CPU by interrupt.] The delay between the real-life event (t_E) and the memory write (t_w) depends on the device: acquisition delay of the device, data transfer time on the bus, execution time of the control instructions, interrupt execution time, etc. Its value can only be estimated, since it also depends on the mean bus access rate, OS scheduling and interrupt queuing, etc.
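A small sketch of how the slide's delay terms could be used to back-estimate the event time from the memory-write time; the parameter names are assumptions, and in practice each term is itself only an estimate.

```python
def estimate_event_time(t_w, acquisition_delay, bus_transfer_time,
                        control_exec_time, interrupt_exec_time=0.0,
                        scheduling_jitter=0.0):
    """Estimate the real-life event time t_E from the memory-write time t_w.

    All delay terms (in seconds) are device-dependent estimates, as listed on
    the slide; scheduling_jitter stands for OS scheduling / interrupt queuing.
    """
    dt = (acquisition_delay + bus_transfer_time
          + control_exec_time + interrupt_exec_time + scheduling_jitter)
    return t_w - dt
```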

8 Towards real-time camera-based logo detection, "Device synchronization for 3D frame tagging" (3). Notation: D0 is the "root", interrupt-based device; every other device synchronizes itself with it. D1 is the device to be synchronized with the root device. Ti0 is the "coarse" timer, in charge of the root device, at level 0; T0 is its period. Ti1 is the "finer" timer, in charge of the device to synchronize, at level 1; T1 is its period, with L1 the frame length for Ti1, N the wished synchronization precision and a bounded parameter. I0 is the first interrupt time. E.g. at I0, run Ti0; every T0, run Ti1. Synchronization is done with a two-timer framework: the "coarse" timer is scheduled on the root device; the "finer" timer runs within an "upstream" frame, opened before the next coarse-timer period, and catches the events of the device to be synchronized (a sketch follows this slide). [Timing diagram: events t_E0, t_E1; coarse ticks at I0, I0 + T0, I0 + 2T0; the Ti1 frame of length L1 attached to a coarse tick.]
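A minimal sketch of the coarse side of the two-timer framework, assuming ideal sleeps on a monotonic clock and, for simplicity, that the Ti1 frame is opened at each coarse tick s = I0 + i*T0 (the slide opens it ahead of the tick); open_fine_frame is a hypothetical callback running Ti1, sketched after the next slide.

```python
import time

def coarse_timer_Ti0(I0, T0, n_periods, open_fine_frame):
    """Sketch of the coarse timer Ti0: every T0, run Ti1.

    I0 is the first interrupt time of the root device, expressed on the same
    monotonic clock as time.monotonic(). open_fine_frame(s) receives the start
    time s of the Ti1 frame attached to each coarse tick.
    """
    for i in range(1, n_periods + 1):
        s = I0 + i * T0                          # coarse ticks: I0 + T0, I0 + 2*T0, ...
        time.sleep(max(0.0, s - time.monotonic()))
        open_fine_frame(s)                       # run Ti1 within its frame of length L1
```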

9 Towards real-time camera-based logo detection, "Device synchronization for 3D frame tagging" (4). General synchronization algorithm of the Ti1 timer (notation as on slide 8): at s = I0 + T0, set k = 0; every T1 period, set k = k + 1; when an interrupt Ii of the device occurs, the current count gives its timestamp on the fine grid, s + k*T1. [Timing diagram: Ti1 ticks at s, s + (k=1)T1, s + (k=2)T1, s + (k=3)T1, with the interrupt I1 falling between two ticks; events t_E0, t_E1.]
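A sketch of this Ti1 algorithm, assuming a frame of length L1 starting at s and a hypothetical poll_device_interrupt() helper that reports whether D1 raised an interrupt since the previous poll; events are timestamped with precision T1.

```python
import time

def fine_timer_Ti1(s, T1, L1, poll_device_interrupt, on_event):
    """Sketch of the Ti1 synchronization algorithm of slide 9.

    k = 0 at s; k = k + 1 every T1 period; when an interrupt of the device D1
    is observed, the event is timestamped on the fine grid as s + k*T1.
    """
    k = 0
    while k * T1 <= L1:                                  # frame of length L1
        time.sleep(max(0.0, s + k * T1 - time.monotonic()))
        if poll_device_interrupt():
            on_event(s + k * T1)                         # timestamp of the caught event
        k += 1
```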

10 Towards real-time camera-based logo detection. Outline:
1. Introduction
2. Device synchronization for 3D frame tagging
3. Frame partitioning and selection

11 Towards real-time camera-based logo detection, "Frame partitioning and selection" (1). Device synchronization can support 3D image tagging. The open problems are now: how to detect overlaps between frames, how to select a frame in case of overlap, and how to access the obtained partition. Notation: {F_1, ..., F_n} is the set of frames; the overlaps yield intersection polygons and a set of regions {P_1, ..., P_m}; a selection method picks, for every overlapped region, the frame it will be taken from. [Diagram: four frames F1 to F4 whose overlaps partition the plane into regions P1 to P7; each frame carries its coordinates (x, y, z), orientation and positioning, at distance d.]
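A possible representation of the frames and of the partition they induce; the field names below are assumptions, the slides only require that each region keeps track of its overlapping frames and of the selected one.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Point2D = Tuple[float, float]

@dataclass
class ProjectedFrame:
    """A frame F_i once projected onto the common plane (see next slide)."""
    frame_id: int
    corners: List[Point2D]          # projected corners, a closed polygon
    tag: object = None              # its 3D tag (position, orientation, d)

@dataclass
class Region:
    """A region P_j of the partition, produced by frame overlaps."""
    region_id: int
    polygon: List[Point2D]          # boundary of the region
    source_frames: List[int] = field(default_factory=list)  # ids of the overlapping frames
    selected_frame: Optional[int] = None                     # frame chosen by the selection method
```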

12 Towards real-time camera-based logo detection, "Frame partitioning and selection" (2). To detect the overlaps, the frames can be projected onto a plane, and the partition computed with line-intersection and closed-polygon-detection algorithms, at complexity k*O(n*log(n)). To do it, the position of the projection plane in 3D space must be fixed and an updating protocol defined. The plane can be obtained by averaging the positioning of the frames. The plane does not need updating at every frame capture, only when important differences start to appear between the current plane and the recent frame captures. [Diagram: at t1, plane D1 is computed from the current frames; at t2, the difference between D1 and D2 (the plane of the recent frame captures) is too large, so D1 is shifted to D3.]
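A sketch of the plane estimation and of the updating protocol, assuming each frame contributes a position and a viewing-direction normal, and that the angle/offset thresholds below are hypothetical.

```python
import numpy as np

def mean_plane(frame_positions, frame_normals):
    """Fit the projection plane by averaging the positioning of the frames:
    the plane passes through the mean position, with the mean viewing
    direction as its normal."""
    point = np.mean(np.asarray(frame_positions, dtype=float), axis=0)
    normal = np.mean(np.asarray(frame_normals, dtype=float), axis=0)
    normal /= np.linalg.norm(normal)
    return point, normal

def plane_needs_update(current_plane, recent_plane,
                       ang_thresh_deg=20.0, dist_thresh=1.0):
    """Updating protocol sketch: re-fit only when the plane of the recent
    captures differs too much from the current one."""
    (p1, n1), (p2, n2) = current_plane, recent_plane
    angle = np.degrees(np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0)))
    offset = abs(np.dot(p2 - p1, n1))    # distance of the new plane point to the current plane
    return angle > ang_thresh_deg or offset > dist_thresh
```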

13 Towards real-time camera-based logo detection, "Frame partitioning and selection" (3). Once the overlaps are detected, for every overlap a region (coming from one of the overlapping frames) must be selected using a selection method, e.g. a spatial criterion based on the distances d1, d2 from the region to c1, c2, the projected gravity centers of the frames. [Diagram: frames F1, F2, regions P1, P2, P3, centers c1, c2, distances d1, d2.] Video frame processing is a producer/consumer synchronization problem, where the producer (frame capture) blocks on the memory constraint and the consumer (image processing) blocks when the frame stack is empty. Here, we work "up" to the frame level with a partition object. Intelligent access must be driven with a RAG (Region Adjacency Graph) structure and graph-coloring techniques. [Example: regions R1, R2, R3 from F1 and R4, R5 from F2; regions of the same frame sharing an adjacent side are processed together.]
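A sketch of one reading of this spatial criterion, assuming the region is taken from the frame whose projected gravity center is closest to the region's centroid; the frame representation is hypothetical.

```python
import math

def select_frame_for_region(region_centroid, frames):
    """Spatial selection criterion: keep the frame whose projected gravity
    center c_i is the closest to the region (distances d_1, d_2 on the slide).

    frames: list of (frame_id, (cx, cy)) with the projected gravity centers.
    """
    best_id, best_d = None, math.inf
    for frame_id, center in frames:
        d = math.dist(region_centroid, center)
        if d < best_d:
            best_id, best_d = frame_id, d
    return best_id
```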

14 Towards real-time camera-based logo detection. Outline:
1. Introduction
2. Device synchronization for 3D frame tagging
3. Frame partitioning and selection

