TETRA-project: SMART DATA CLOUDS (2014 – 2016)


1 TETRA-project: SMART DATA CLOUDS (2014 – 2016)
TETRA-project: SMART DATA CLOUDS (2014 – 2016). Flemish Agency for Innovation by Science and Technology. We have recently set up a new Technology Transfer project funded by the Flemish Agency for Innovation by Science and Technology. We have built up a strong group of companies to investigate new industrial applications in which data fusion from different camera types leads to synergetic opportunities. We would like to thank these companies and organizations here for their faith in our research activities. If other companies would like to join us, we will consider this within the group; as long as there are no serious conflicting economic interests, I think extended cooperation will pose no problems. Contact persons: Website: https://www.uantwerpen.be/op3mech/

2 TETRA: Smart Data Clouds (2014-2016): industrial cases
a. Security, track and trace [Port of Antwerp].
b. Traffic control and classification [Macq, LMS, Melexis].
c. (Smart) Navigation [wheelchairs, PIAM, mo-Vis, Automotive LMS].
d. Dynamic 3D body scan [RSscan, ICRealisations, ODOS].

What we have in mind can be summarized as follows. For security control we focus on observation at longer distances (e.g. 80 m and more). Bad weather conditions in particular must be tackled by means of specific cameras, ranging from infrared and near-infrared to RGB and hyperspectral. The recognition and classification of persons, bicycles, cars and foreign objects at longer distances is an important focus. The same problems, but at medium distance, arise in traffic control and classification, and we want to find out whether we can make progress towards better classification methods by letting different cameras work together on the same job. The point clouds of our observations will be joined together in such a way that smart data clouds become available at the level of computer-aided design and computer-aided engineering. The previous applications use a fixed camera or a camera that moves in a controlled way, but in a second group of applications we want to deal with freely moving cameras or with motion coming from objects in the world. AGV control and smart navigation are typical situations. The detailed measurement of moving bodies (e.g. under fitness-centre conditions) is also within scope: based on special camera types we hope to obtain position information of the body parts in the millisecond range, and the multi-body approach will also be handled at point-cloud level in appropriate software. In the end we want to reach full six-degrees-of-freedom operation by mounting and controlling cameras that are part of a quadrocopter setup. We realize that a lot of work lies ahead, but step by step we hope to make progress in these innovative fields.

Data fusion: industrial vision linked with CAE. Data from RGB, IR, ToF and HS cameras.

3 ‘Data Fusion’ usable for future healthcare

TETRA: Smart Data Clouds (2014-2016): healthcare applications.
a. Wheelchair control and navigation [mo-Vis, PIAM, Absolid].
b. Gesture & body analysis [RSScan, ICrealisation, Phaer, ODOS].

In parallel with the previous considerations we will work out applications that can be used in future healthcare. AGV navigation carries over directly to other navigation tasks, such as automated or semi-automated wheelchair control. Since our society will be confronted with a huge group of elderly people, it is also wise to set up systems that can help in following them up at home. If we succeed in keeping people at home for a longer time (e.g. half a year or a year), then I think we have done a good job. For such applications the interpretation of gestures in their context is very important. The gaming industry has shown how much is possible here, and I think the contribution of SoftKinetic will give us a lot of detail on this topic.

4 Time of Flight: camera types
Camera overview (resolutions as far as they are recoverable from the slide):
- IFM Electronics O3D2xx cameras (64x50 pix)
- PMD[Vision] CamCube (… x 288 pix)
- Fotonic RGB_C-Series (… x 120 pix); previous models: P and E series
- Recent: pmd PhotonICs® 19k-S
- Melexis EVK (… x 60 pix); near future: MLX75023 automotive QVGA ToF sensor
- MESA Swiss Ranger SR4500 (176 x 144 pix) and SR4000 (176 x 144 pix)
- DepthSense 325 (320 x 240 pix)
- BV-ToF (128 x 120 pix)
- ODOS Real.iZ-1K vision system (1024 x 1248 pix)
- Optrima OptriCam (160 x 120 pix)

Now, why do we think we can handle such applications? For three years now we have been familiar with the data delivered by time-of-flight cameras, and we have many systems in our lab. We always look out for the newest types, since every new generation proves to be better than the previous one in resolution, frame rate and accuracy. Since different cameras have different working principles, it is worth explaining the major measurement principles we looked at. Some types calculate a phase shift by means of the properties of sine waves; other camera types use a correlation method between the sent and received signals; a third group uses the time of flight of light pulses as a direct measuring technique. Let's bring these methods in line with each other.
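For the phase-shift principle, distance follows from the measured phase delay between the emitted and received modulated light. A minimal sketch of the standard four-bucket phase calculation; the modulation frequency and sample values are illustrative and not taken from any camera listed above:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def phase_shift_distance(a0, a1, a2, a3, f_mod):
    """Distance from the standard four-bucket CW-ToF measurement.
    a0..a3 are correlation samples at 0, 90, 180 and 270 degrees of the
    modulation period (illustrative values, not from a specific camera)."""
    phi = math.atan2(a3 - a1, a0 - a2)       # recovered phase shift
    if phi < 0:
        phi += 2 * math.pi                   # map into [0, 2*pi)
    return C * phi / (4 * math.pi * f_mod)   # d = c*phi / (4*pi*f_mod)

def unambiguous_range(f_mod):
    """Beyond this distance the phase wraps and distances alias."""
    return C / (2 * f_mod)

# Example: a 20 MHz modulation gives about 7.5 m of unambiguous range.
print(unambiguous_range(20e6))
print(phase_shift_distance(1.0, 0.2, 0.0, 0.8, f_mod=20e6))
```

The pulsed principle measures the travel time directly and so avoids this phase-wrapping ambiguity, at the cost of very fast timing electronics.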

5 x, y, z, Nx, Ny, Nz, kr, kc, R, G, B, NR, NG, NB, t°, t ...

ToF vision: world-to-image and image-to-world conversion.

P = [ tg(φ)  tg(ψ)  1  d/D ] ,  P' = [ tg(φ)  tg(ψ)  1 ]   (f can be chosen to be the unit.)

u = j − j0 ;  uk = ku·u
v = i − i0 ;  vk = kv·v
tg(φ) = uk/f ;  tg(ψ) = vk/f
r = √(uk² + f²) ;  d = √(uk² + vk² + f²)
D/d = x/uk = y/vk = z/f

[Figure: sensor geometry relating the image indices (i, j), principal point (i0, j0) and image size I x J to the focal point (0, 0, f) and the distances r, d and D.]

Every world point is unique with respect to a lot of important coordinates: x, y, z, Nx, Ny, Nz, kr, kc, R, G, B, NR, NG, NB, t°, t ... This is the basis of our TETRA project ‘Smart Data Clouds’.
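These relations translate directly into an image-to-world routine: for a pixel (i, j) with measured radial distance D, the ratio D/d scales the sensor vector (uk, vk, f) to the world point. A minimal sketch, assuming the calibration values (i0, j0, ku, kv, f) are known:

```python
import numpy as np

def tof_pixel_to_world(i, j, D, i0, j0, ku, kv, f=1.0):
    """Image-to-world conversion following the slide's relations
    D/d = x/uk = y/vk = z/f with d = sqrt(uk^2 + vk^2 + f^2)."""
    uk = ku * (j - j0)                 # u = j - j0 ; uk = ku*u
    vk = kv * (i - i0)                 # v = i - i0 ; vk = kv*v
    d = np.sqrt(uk**2 + vk**2 + f**2)  # pixel -> optical centre distance
    s = D / d                          # the central D/d ratio
    return np.array([uk * s, vk * s, f * s])   # (x, y, z)

# A pixel at the principal point maps straight ahead: x = y = 0, z = D.
print(tof_pixel_to_world(i=120, j=160, D=2.5, i0=120, j0=160, ku=0.01, kv=0.01))
```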

6 D/d-related calculations. (1)
For navigation purposes, the free floor area can easily be found from

d_i/D_i = e/E = [ f·sin(a0) + v_i·cos(a0) ] / E = [ tg(a0) + tg(ψ_i) ]·f·cos(a0)/E .

Since (d/D)_i = f/z_i, this is equivalent to

z_i·[ tg(a0) + tg(ψ_i) ] = E/cos(a0) .

[Figure: camera mounted at height E above the floor with inclination a0; the sensor quantities e, d_i, v_i, ψ_i correspond to the floor quantities D_i, z_i.]
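A sketch of how this test yields the free floor area per pixel; the tolerance is an assumed value, everything else follows the formula above:

```python
import math

def is_floor_pixel(z_i, v_i, a0, E, f=1.0, tol=0.02):
    """Free-floor test: a pixel lies on the floor when
    z_i * (tg(a0) + tg(psi_i)) equals E / cos(a0), within a tolerance.
    z_i: measured depth, v_i: vertical image coordinate (v = kv*(i - i0)),
    a0: camera inclination, E: camera height (tol is assumed, in metres)."""
    tan_psi = v_i / f                       # tg(psi_i) = v_i / f
    lhs = z_i * (math.tan(a0) + tan_psi)
    return abs(lhs - E / math.cos(a0)) < tol

# Pixels passing the test together form the drivable free-floor area.
```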

7 D/d-related calculations. (2) Fast calculations !!
The world normal vector n at a random image position (v, u), with d² = u² + v² + f²:

nx ~ f·(D4/d4 − D3/d3)/(D4/d4 + D3/d3) = f·(z4 − z3)/(z4 + z3)
ny ~ f·(D2/d2 − D1/d1)/(D2/d2 + D1/d1) = f·(z2 − z1)/(z2 + z1)
nz ~ −(u·nx + v·ny + 1)

[Figure: the pixel with its four depth neighbours numbered 1-4; the pair 3, 4 is the horizontal pair used for nx and the pair 1, 2 the vertical pair used for ny.]
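A sketch of the per-pixel computation; the assignment of neighbours 3, 4 to the horizontal pair and 1, 2 to the vertical pair is read off from the formulas above, the exact layout is assumed:

```python
import numpy as np

def normal_from_depth(z, i, j, u, v, f=1.0):
    """World normal at pixel (i, j) from four depth neighbours, using only
    ratios of depths z = f*D/d; (u, v) are the pixel's image coordinates."""
    z3, z4 = z[i, j - 1], z[i, j + 1]   # horizontal pair (slide's 3, 4)
    z1, z2 = z[i + 1, j], z[i - 1, j]   # vertical pair (slide's 1, 2)
    nx = f * (z4 - z3) / (z4 + z3)
    ny = f * (z2 - z1) / (z2 + z1)
    nz = -(u * nx + v * ny + 1.0)       # the slide's closing relation
    n = np.array([nx, ny, nz])
    return n / np.linalg.norm(n)        # unit normal
```

Only additions and one division per component are needed, which is what makes these calculations fast.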

8 Coordinate transformations
Camera x // world x // robot x ;  world (yw, zw) = robot (yr, zr) + ty.

y_Pw = −(z0 − z_Pc)·sin(a) + y_Pc·cos(a) = y_Pr + ty
z_Pw = (z0 − z_Pc)·cos(a) + y_Pc·sin(a) = z_Pr

A camera re-calibration for ToF cameras is easy and straightforward!

[Figure: camera tilted by angle a above the work plane, which serves as the reference; P is a point on the work plane.]
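A minimal sketch of the transformation, with variable names following the slide's notation:

```python
import math

def camera_to_world(y_pc, z_pc, z0, a, ty=0.0):
    """Map a point from camera coordinates (y_pc, z_pc) to world and robot
    coordinates; z0 is the camera height, a the tilt angle, ty the
    world-to-robot y offset (the shared x axis is left out)."""
    y_pw = -(z0 - z_pc) * math.sin(a) + y_pc * math.cos(a)
    z_pw = (z0 - z_pc) * math.cos(a) + y_pc * math.sin(a)
    y_pr, z_pr = y_pw - ty, z_pw        # robot frame differs only by ty
    return (y_pw, z_pw), (y_pr, z_pr)
```

Since only the angle a and the height z0 enter, re-calibrating a displaced ToF camera indeed reduces to re-measuring those two values against the work plane.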

9 Pepper handling. First image = empty world plane
Next images = random pepper collections. Connected peppers can be distinguished by means of local gradients, and these gradients can easily be derived from D/d ratios; the result is a thickness map in millimetres. The calculations are ‘distance’ driven: x, y and z are not needed. Fast calculations!
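A sketch of such a distance-driven segmentation, assuming an empty-plane reference image and illustrative thresholds; blob labelling uses scipy:

```python
import numpy as np
from scipy import ndimage

def segment_peppers(D_empty, D_scene, height_tol=0.003, grad_tol=0.01):
    """Distance-driven segmentation: the empty-plane image is the
    reference, touching peppers are split on strong local gradients.
    D_empty, D_scene: 2-D distance images; tolerances assumed (metres)."""
    thickness = D_empty - D_scene            # material above the plane
    mask = thickness > height_tol            # everything that is not plane
    gy, gx = np.gradient(D_scene)            # local gradients, no x/y/z needed
    edges = np.hypot(gx, gy) > grad_tol      # boundaries between peppers
    blobs, n = ndimage.label(mask & ~edges)  # connected components = peppers
    return thickness, blobs, n
```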

10 Bin picking & 3D-OCR YouTube: KdGiVL
Analyse the ‘blobs’ one by one: find the centre of gravity (X, Y, Z), the normal direction components Nx, Ny, Nz, the so-called ‘tool centre point’ and the WPS coordinates.
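A sketch of the per-blob analysis; the centre of gravity is the point mean and the normal comes from a least-squares plane fit via SVD, which is one standard way to obtain it. The actual tool-centre-point and WPS conventions are robot-specific and therefore left out:

```python
import numpy as np

def analyse_blob(points):
    """points: (N, 3) array with the blob's world coordinates (x, y, z).
    Returns the centre of gravity and the unit normal (Nx, Ny, Nz)."""
    cog = points.mean(axis=0)               # centre of gravity (X, Y, Z)
    _, _, vt = np.linalg.svd(points - cog)  # principal directions
    normal = vt[-1]                         # least-variance direction
    if normal[2] > 0:                       # orient towards the camera,
        normal = -normal                    # assuming the camera looks +z
    return cog, normal
```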

11 Beer barrel inspection.
Cameras: IDS uEye UI-1240SE-C (RGB), IFM O3D2xx and MESA SR4000 (ToF).

ToF-RGB correspondence:
v_{c,P}/F − k_v·v_t/f = t_x·√(z_P² + y_P²)
u_c/F = k_u·u_t/F

12 RGBd: Packed… with a plastic wrap
Camera: DepthSense 311.
1. Find the z-discontinuity.
2. Look for vertical and forward-oriented regions.
3. Check the collinearity.
4. Use geometrical laws in order to find x, y, z and b.

13 ToF: Packed… with a foil
Cameras: IFM O3D2xx, CamBoard Nano.
1. Remove weakly defined pixels.
2. Find the z-discontinuity.
3. Look for vertical and forward-oriented regions.
4. Check the collinearity.
5. Use geometrical laws in order to find x, y, z and b.
(Steps 1 and 2 are sketched below.)
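Steps 1 and 2 could look as follows; the amplitude and jump thresholds are assumed values, real ones depend on the camera:

```python
import numpy as np

def preprocess_tof(z, amplitude, amp_min=100.0, jump_tol=0.02):
    """1. drop weakly defined pixels (low modulation amplitude),
    2. flag z-discontinuities; both thresholds are illustrative."""
    z = z.astype(float).copy()
    z[amplitude < amp_min] = np.nan        # 1. remove weak pixels
    dz_v = np.abs(np.diff(z, axis=0))      # vertical depth jumps
    dz_h = np.abs(np.diff(z, axis=1))      # horizontal depth jumps
    jumps = np.zeros(z.shape, dtype=bool)
    jumps[:-1, :] |= dz_v > jump_tol       # 2. mark z-discontinuities
    jumps[:, :-1] |= dz_h > jump_tol       # (NaN comparisons stay False)
    return z, jumps
```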

14 Basic tasks of ToF cameras in order to support Healthcare Applications:
- Guide an autonomous wheelchair along the wall of a corridor.
- Avoid collisions between an AGV and unexpected objects.
- Give warnings about obstacles (‘mind the step’, kids, stairs, ...).
- Take position in front of a table, a desk or a TV screen.
- Drive over a ramp at a front door or the door leading to the garden.
- Drive backwards based on a ToF camera.
- Park a wheelchair automatically (e.g. for battery charging).
- Command a wheelchair to reach a given position.
- Guide a patient in a semi-automatic bathroom.
- Support the supervision of elderly people.
- Detect falls of (elderly) people.

15 Imagine a wheelchair user who wants to reach 8 typical locations at home:
1. Sitting at the table.
2. Watching TV.
3. Looking through the window.
4. Working on a PC.
5. Reaching the corridor.
6. Commanding the wheelchair to park itself.
7. Finding some books on a shelf.
8. Reaching the kitchen and the garden.

16 ToF-guided navigation for AGVs. Instantaneous ego-motion: translation.
D2 = DP ? Works with random points P! (In contrast, stereo vision must first find edges, so texture is pre-assumed.)
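Because every matched pixel of a ToF camera yields a full 3-D point, the pure-translation case is almost trivial; a minimal sketch, assuming the point correspondences are already established:

```python
import numpy as np

def estimate_translation(P1, P2):
    """P1, P2: (N, 3) arrays of corresponding points seen from the
    previous and next sensor pose; the least-squares sensor translation
    is simply the mean point displacement."""
    return np.mean(P1 - P2, axis=0)
```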

17 ToF-guided navigation of AGVs. Instantaneous ego-motion: planar rotation.
D2 = DP ?
Image data: tg(β1) = u1/f ; tg(β2) = u2/f.
Task: find the correspondence β1 → β2.
Procedure: search over the candidate motions with 0 < |α| < α0 and |x| < x0, applying the projection rules for a random point P, with D2² = x_P² + z_P². Sign convention here: x < 0, R > 0.
[Figure: previous and next sensor positions related by a rotation over α about a centre at radius R, with distances D1, DP, D2 and viewing angles β1, β2 to the point P.]
Parallel processing is possible!

18 ToF-guided navigation of AGVs. Instantaneous ego-motion: planar rotation.
e.g. make use of 100 random points and test D2,i = DP,i for each of them. Result = radius & angle.
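A sketch of such a search, assuming the rotation centre lies on the sensor's x axis at (x, 0) and scanning the slide's bounds |x| < x0, 0 < |α| < α0 exhaustively; bounds and step counts are illustrative, and each candidate is independent, which matches the parallel-processing remark:

```python
import numpy as np

def estimate_rotation(P1, D_meas, x0=2.0, a0=0.5, steps=200):
    """P1: (N, 2) points (x_P, z_P) from the previous pose; D_meas: (N,)
    measured distances D_P from the next pose (correspondence solved).
    Returns the radius R = |x| and angle alpha that best explain D_meas."""
    best = (np.inf, 0.0, 0.0)
    for x in np.linspace(-x0, x0, steps):           # candidate centres (x, 0)
        C = np.array([x, 0.0])
        for a in np.linspace(-a0, a0, steps):       # candidate angles
            c, s = np.cos(-a), np.sin(-a)
            rot = np.array([[c, -s], [s, c]])
            P2 = (P1 - C) @ rot.T + C               # points in the next pose
            D2 = np.linalg.norm(P2, axis=1)         # predicted distances
            err = np.sum((D2 - D_meas) ** 2)        # test D2_i = D_P,i
            if err < best[0]:
                best = (err, abs(x), a)
    return best[1], best[2]                         # radius & angle
```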

19 ToF-driven Quadrocopters

Research FTI Master Electromechanics. Info: ;
Research: ToF-driven quadrocopters; combinations with IR/RGB; security flights over industrial areas. Platform: Conrad DJI Phantom RTF quadrocopter.

Dear colleagues, before we look at the content of the EM research I would first like to thank the researchers themselves. Research is not a ‘nine-to-five job’, after all, so it is good to see how passionately each of us pursues his work. Researchers are best compared to mountaineers: their goal is to ‘plant flags’ on the peaks that attract them. Young climbers are given some guidance at first, but it soon becomes clear that they blaze their own trails through the terrain they open up. The right view at the right moment leads to innovation; the rest is painstaking work to put everything neatly in order. With that thought in mind, let us review what EM undertakes during its daily climbs in the high mountains of Electromechanics. TETRA project ‘Smart Data Clouds’.

20 Quadrocopter navigation based on ToF cameras
Camera: BV-ToF (128 x 120 pix). The target is to ‘hover’ above the end point of a black line. If ‘yaw’ is present it should be compensated by an overall torque moment; meanwhile the translation t can be evaluated. The global effect of the roll and pitch angles represents itself by means of the points P and Q, and the actual copter speed v is in the direction PQ. In the end P and Q need to come together without oscillations, while |t| becomes oriented in the y direction. An efficient path can be followed by means of fuzzy logic principles.
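As an illustration of the fuzzy-logic idea, a toy rule base that drives the P-Q separation to zero without overshoot; the membership functions and gains are invented for this sketch and are not the project's actual controller:

```python
def fuzzy_correction(e, de):
    """e: current P-Q separation (normalized), de: its rate of change.
    Three triangular memberships and three rules produce a damped
    corrective command; all numbers are illustrative."""
    def tri(x, a, b, c):                 # triangular membership function
        return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))
    neg = tri(e, -2.0, -1.0, 0.0)        # 'separation negative'
    zero = tri(e, -1.0, 0.0, 1.0)        # 'separation about zero'
    pos = tri(e, 0.0, 1.0, 2.0)          # 'separation positive'
    # Rules: push against the separation, damp with its rate of change.
    return (neg * 1.0 + zero * 0.0 + pos * -1.0) - 0.5 * de
```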

