Presentation on theme: "TETRA-project: SMART DATA CLOUDS (2014 – 2016)" — Presentation transcript
1 TETRA-project: SMART DATA CLOUDS (2014 – 2016)
Funded by the Flemish Agency for Innovation by Science and Technology. We have worked out a new Technology Transfer project funded by this agency, and we have built up a strong group of companies to investigate new industrial applications in which data fusion from different camera types leads to synergetic opportunities. We thank these companies and organizations for their faith in our research activities. If other companies would like to join us, we will consider this within the group; provided there are no conflicting economic interests, extended cooperation should pose no problems.
Contact persons / Website: https://www.uantwerpen.be/op3mech/
2 TETRA: Smart Data Clouds (2014-2016): industrial cases
a. Security, track and trace [Haven van Antwerpen]
b. Traffic control and classification [Macq, LMS, Melexis]
c. (Smart) navigation [Wheelchairs, PIAM, mo-Vis, Automotive LMS]
d. Dynamic 3D body scan [RSscan, ICRealisations, ODOS]
What we have in mind can be summarized as follows. For security control we focus on observation at longer ranges (e.g. 80 m and more). Bad weather conditions in particular must be tackled by means of specific cameras, ranging from infrared and near-infrared to RGB and hyperspectral. The recognition and classification of persons, bicycles, cars and foreign objects at long distances is an important focus. The same problems, at medium distances, apply to traffic control and classification; we want to find out whether different cameras working together on the same job can yield better classification methods. The point clouds of our observations will be joined together so that smart data clouds become available at the level of computer-aided design and computer-aided engineering.
The previous applications use a fixed camera or a camera that moves in a controlled way. In a second group of applications we deal with freely moving cameras, or with motion that comes from world objects: AGV control and smart navigation are typical situations. The detailed measurement of moving bodies (e.g. under fitness-centre conditions) is also in scope; based on special camera types we hope to obtain position information for body parts in the millisecond range. The multi-body approach will likewise be handled at point-cloud level in appropriate software. In the end we aim to reach all six degrees of freedom by mounting and controlling cameras that form part of a quadrocopter set-up. A lot of work remains to be done, but step by step we hope to make progress in these innovative fields.
Data fusion: industrial vision linked with CAE data from RGB, IR, ToF and HS cameras.
3 ‘Data Fusion’ usable for future healthcare
TETRA: Smart Data Clouds (2014-2016): healthcare applications
a. Wheelchair control - navigation [mo-Vis, PIAM, Absolid]
b. Gesture & body analysis [RSScan, ICrealisation, Phaer, ODOS]
In parallel with the previous considerations we will work out applications that can be used in future healthcare. Navigation of AGVs can be reused directly in other navigation circumstances, such as automated or semi-automated wheelchair control. Since our society will be confronted with a huge group of elderly people, it is also wise to set up systems that can help follow them up at home. If we succeed in keeping people at home for a longer time (e.g. half a year or a year), then we have done a good job. For such applications, the interpretation of gestures in context is very important. The gaming industry has proven how many possibilities have opened up; we expect the contribution of SoftKinetic to give us a lot of detail on this topic.
4 Time of Flight: camera types
IFM Electronics O3D2xx cameras (64 x 50 pix.); PMD[Vision] CamCube; Fotonic RGB_C-Series; recent: pmd PhotonICs® 19k-S (previous models: P and E series); Melexis EVK (near future: MLX75023 automotive QVGA ToF sensor); Swiss Ranger SR4000 and SR4500 (MESA, 176 x 144 pix.); DepthSense 325 (320 x 240 pix.); BV-ToF (128 x 120 pix.); ODOS Real.iZ-1K vision system (1024 x 1248 pix.); Optrima OptriCam (160 x 120 pix.).
Now, why do we think we can handle such applications? For three years now we have been familiar with the data delivered by Time-of-Flight cameras, and we have many systems in our lab. We always look out for the newest types, since every next generation proves better than the previous one in resolution, frame rate and accuracy. Since different cameras have different working principles, it is worth explaining the major measurement principles we looked at. Some types calculate a phase shift by means of properties of sine waves; other camera types use correlation methods between the sent and received signal; a third group uses the time of flight of light pulses as a direct measuring technique. Let's bring these methods in line with each other.
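To make the phase-shift principle concrete, here is a minimal sketch of how a continuous-wave ToF pixel turns correlation samples into a distance. The four-sample ("four-bucket") scheme and the 20 MHz modulation frequency are illustrative assumptions, not specifics from the slide:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_shift_distance(a0, a1, a2, a3, f_mod):
    """Distance from four correlation samples taken at 0, 90, 180 and 270
    degrees of the modulation period (a common PMD-style scheme).

    a0..a3: correlation amplitudes; f_mod: modulation frequency in Hz.
    """
    phi = math.atan2(a3 - a1, a0 - a2)      # phase offset of the returned wave
    if phi < 0:
        phi += 2 * math.pi                  # map into [0, 2*pi)
    return C * phi / (4 * math.pi * f_mod)  # distance within one ambiguity interval

# Note the ambiguity: at 20 MHz the unambiguous range is C / (2 * f_mod), about 7.5 m,
# which is why longer-range work on the project needs pulse-based (direct) ToF cameras.
```

The higher the modulation frequency, the better the depth resolution but the shorter the unambiguous range, which is one reason the different camera families on this slide coexist.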
5 ToF vision: world-to-image and image-to-world conversion
P = [ tan(φ) tan(ψ) 1 d/D ], P' = [ tan(φ) tan(ψ) 1 ]. (f can be chosen to be the unit.)
With (i0, j0) the principal point and (ku, kv) the pixel scales:
u = j - j0 ; uk = ku·u
v = i - i0 ; vk = kv·v
tan φ = uk/f ; tan ψ = vk/f
r = √(uk² + f²) ; d = √(uk² + vk² + f²)
D/d = x/uk = y/vk = z/f
Every world point is unique with respect to a lot of important coordinates: x, y, z, Nx, Ny, Nz, kr, kc, R, G, B, NR, NG, NB, t°, t ...
This is the basis of our TETRA project ‘Smart Data Clouds’.
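The D/d ratio on this slide turns the image-to-world conversion into three multiplications per pixel. A minimal sketch (the intrinsic values f, i0, j0, ku, kv below are hypothetical placeholders, not calibration data from the project):

```python
import math

def image_to_world(i, j, D, f=1.0, i0=60.0, j0=80.0, ku=1.0, kv=1.0):
    """Convert a ToF pixel (i, j) with measured radial distance D to world
    coordinates (x, y, z), using the slide's relations
    u = j - j0, v = i - i0 and D/d = x/uk = y/vk = z/f.
    """
    uk = ku * (j - j0)                      # horizontal sensor coordinate
    vk = kv * (i - i0)                      # vertical sensor coordinate
    d = math.sqrt(uk*uk + vk*vk + f*f)      # distance from projection centre to pixel
    scale = D / d                           # the common ratio D/d
    return uk * scale, vk * scale, f * scale
```

At the principal point (i0, j0) the pixel looks straight ahead, so the measured distance D becomes the depth z directly.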
6 D/d-related calculations (1)
For navigation purposes, the free floor area can easily be found from:
di/Di = e/E = [ f·sin(a0) + vi·cos(a0) ] / E = [ tan(a0) + tan(ψi) ]·f·cos(a0)/E.
Since (d/D)i = f/zi, this is equivalent to: zi·[ tan(a0) + tan(ψi) ] = E/cos(a0).
(Figure: camera sensor with focal length f, mounted at height E above the floor with inclination a0; a floor point at depth zi is seen at vertical sensor coordinate vi under angle ψi.)
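The floor identity above gives a per-pixel test that needs no full 3D reconstruction: a pixel belongs to the free floor exactly when z·[tan(a0) + tan(ψ)] equals E/cos(a0). A minimal sketch (the tolerance value is an assumption):

```python
import math

def is_floor_pixel(z, v, f, a0, E, tol=0.02):
    """Check whether a pixel with depth z and vertical sensor coordinate v
    lies on the free floor plane, per z * (tan a0 + tan psi) = E / cos a0.

    a0: camera inclination (rad); E: camera height above the floor;
    tan(psi) = v/f. tol is an illustrative threshold in metres.
    """
    lhs = z * (math.tan(a0) + v / f)
    rhs = E / math.cos(a0)
    return abs(lhs - rhs) < tol
```

Pixels failing the test are obstacle candidates, which is how the wheelchair and AGV applications later in this deck separate drivable floor from everything else.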
7 D/d-related calculations (2): fast calculations!
The world normal vector n at a random image position (v, u), with d² = u² + v² + f² and four neighbouring points 1-4 around the pixel:
nx ~ f·(D4/d4 - D3/d3)/(D4/d4 + D3/d3) = f·(z4 - z3)/(z4 + z3)
ny ~ f·(D2/d2 - D1/d1)/(D2/d2 + D1/d1) = f·(z2 - z1)/(z2 + z1)
nz ~ -(u·nx + v·ny + 1)
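These ratios can be evaluated per pixel without any trigonometry, which is what makes the normal computation fast. A minimal sketch under the slide's convention that f is the unit (the neighbour ordering 1/2 vertical, 3/4 horizontal is an assumption):

```python
import math

def local_normal(u, v, z1, z2, z3, z4, f=1.0):
    """Surface normal at image position (v, u) from four neighbouring depths,
    following the slide's shortcut:
      nx ~ f*(z4 - z3)/(z4 + z3),  ny ~ f*(z2 - z1)/(z2 + z1),
      nz ~ -(u*nx + v*ny + 1).
    Returns the normalized vector.
    """
    nx = f * (z4 - z3) / (z4 + z3)
    ny = f * (z2 - z1) / (z2 + z1)
    nz = -(u * nx + v * ny + 1.0)
    norm = math.sqrt(nx*nx + ny*ny + nz*nz)
    return nx / norm, ny / norm, nz / norm

# A fronto-parallel patch (all four neighbour depths equal) yields a normal
# pointing straight back at the camera.
```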
8 Coordinate transformations
With camera x // world x // robot x, and the work plane as reference:
World (yw, zw) = Robot (yr, zr) + ty
yPw = -(z0 - zPc)·sin(a) + yPc·cos(a) = yPr + ty
zPw = (z0 - zPc)·cos(a) + yPc·sin(a) = zPr
A camera re-calibration for ToF cameras is easy and straightforward!
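The two equations on this slide are a planar rotation plus the work-plane offset, so the camera-to-world step is a few lines of code. A minimal sketch (the argument names are mine, not the project's):

```python
import math

def camera_to_world(y_c, z_c, a, z0):
    """Camera-frame coordinates (y_c, z_c) to world-frame coordinates for a
    ToF camera tilted by angle a (rad) and calibrated against a work plane
    at distance z0, per the slide's two equations. The x coordinate is
    shared between the camera, world and robot frames.
    """
    y_w = -(z0 - z_c) * math.sin(a) + y_c * math.cos(a)
    z_w = (z0 - z_c) * math.cos(a) + y_c * math.sin(a)
    return y_w, z_w
```

With zero tilt this reduces to y_w = y_c and z_w = z0 - z_c, which is a quick sanity check for the re-calibration mentioned on the slide.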
9 Pepper handling
First image = empty world plane; next images = random pepper collections. Connected peppers can be distinguished by means of local gradients, and gradients can easily be derived from D/d ratios. Thickness is expressed in millimetres. The calculations are ‘distance driven’: x, y and z are not necessary. Fast calculations!
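The "distance driven" idea can be sketched directly: thickness is the empty-plane reference minus the current frame, and the segmentation gradients are simple per-pixel differences, with no x, y, z reconstruction anywhere. A minimal sketch on plain 2D lists (the function names are illustrative):

```python
def thickness_map(reference, frame):
    """Per-pixel thickness (mm) of the pepper pile: empty-plane reference
    distances minus current distances. Purely distance driven.
    Inputs are equal-sized 2D lists of distances in mm.
    """
    return [[ref - cur for ref, cur in zip(r_row, c_row)]
            for r_row, c_row in zip(reference, frame)]

def local_gradient(frame, i, j):
    """Central-difference distance gradients at interior pixel (i, j),
    the quantity used to split touching peppers."""
    gi = frame[i + 1][j] - frame[i - 1][j]   # vertical difference
    gj = frame[i][j + 1] - frame[i][j - 1]   # horizontal difference
    return gi, gj
```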
10 Bin picking & 3D-OCR
(YouTube: KdGiVL) Analyse the ‘blobs’ one by one. Find the centre of gravity (X, Y, Z), the normal direction components Nx, Ny, Nz, the so-called Tool Centre Point, and the WPS coordinates.
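The per-blob centre of gravity mentioned here is just the mean of the blob's world points. A minimal sketch of that one step (placing the Tool Centre Point and computing the approach normal are not shown):

```python
def blob_centre_of_gravity(points):
    """Centre of gravity of one segmented blob: the mean of its (x, y, z)
    points. The picking robot's Tool Centre Point would be placed here and
    approached along the blob's mean normal (not computed in this sketch).
    """
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    return sx / n, sy / n, sz / n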
12 RGBd packed… with a plastic wrap (DepthSense 311)
1. Find the z-discontinuity.
2. Look for vertical and forward-oriented regions.
3. Check the collinearity.
4. Use geometrical laws in order to find x, y, z and b.
13 ToF packed… with a foil (IFM O3D2xx; CamBoard Nano)
1. Remove weakly defined pixels.
2. Find the z-discontinuity.
3. Look for vertical and forward-oriented regions.
4. Check the collinearity.
5. Use geometrical laws in order to find x, y, z and b.
14 Basic tasks of ToF cameras in support of healthcare applications:
- Guide an autonomous wheelchair along the wall of a corridor.
- Avoid collisions between an AGV and unexpected objects.
- Give warnings about obstacles (‘mind the step’, kids, stairs…).
- Take position in front of a table, a desk or a TV screen.
- Drive over a ramp at a front door or the door leading to the garden.
- Drive backwards based on a ToF camera.
- Park a wheelchair automatically (e.g. for battery charging).
- Command a wheelchair to reach a given position.
- Guide a patient in a semi-automatic bathroom.
- Support the supervision of elderly people.
- Fall detection of (elderly) people.
15 Imagine a wheelchair user wants to reach 8 typical locations at home:
1. Sitting at the table.
2. Watching TV.
3. Looking through the window.
4. Working on a PC.
5. Reaching the corridor.
6. Commanding the wheelchair to park.
7. Finding some books on a shelf.
8. Reaching the kitchen and the garden.
16 ToF-guided navigation for AGVs
Instant ego-motion: translation. Test: D2 = DP? Works with random points P! (In contrast, stereo vision must find edges, so texture is pre-assumed.)
17 ToF-guided navigation of AGVs
Instant ego-motion: planar rotation. Test: D2 = DP?
Image data: tan(β1) = u1/f ; tan(β2) = u2/f.
Task: find the correspondence β1 → β2.
Procedure: search with 0 < |α| < α0 and |x| < x0, applying the projection rules for a random point P between the previous and next sensor positions (rotation radius R, rotation angle α; here x < 0 and R > 0), with D2² = xP2² + zP2².
Parallel processing is possible!
18 ToF-guided navigation of AGVs
Instant ego-motion: planar rotation, e.g. making use of 100 random points. Test: D2,i = DP,i? Result = radius & angle.
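The "D2,i = DP,i?" test over many random points can be sketched as a brute-force search: hypothesize a rotation centre and angle, predict each first-frame point's new distance DP, and keep the hypothesis that best matches the measured second-frame distances D2. This is a simplified planar model with names of my own choosing, not the project's exact projection geometry:

```python
import math

def rotate_about(p, c, alpha):
    """Rotate planar point p = (x, z) about centre c by angle alpha (rad)."""
    s, co = math.sin(alpha), math.cos(alpha)
    dx, dz = p[0] - c[0], p[1] - c[1]
    return (c[0] + co * dx - s * dz, c[1] + s * dx + co * dz)

def estimate_motion(points, measured, radii, angles):
    """Grid search over candidate (R, alpha): for each hypothesis, predict
    the new distance D_P of every first-frame point and compare with the
    measured second-frame distances D_2; return the best (R, alpha).
    """
    best = None
    for R in radii:
        for a in angles:
            err = 0.0
            for p, d2 in zip(points, measured):
                # sensor turns by +a about (R, 0); the scene appears rotated by -a
                q = rotate_about(p, (R, 0.0), -a)
                dp = math.hypot(q[0], q[1])
                err += (dp - d2) ** 2
            if best is None or err < best[0]:
                best = (err, R, a)
            # each (R, a) cell is independent, so this loop parallelizes well
    return best[1], best[2]
```

With 100 points the error surface is strongly over-determined, which is why random points suffice and no edge or texture detection is needed.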
19 ToF-driven quadrocopters
Research, FTI Master Electromechanics. Research topics: ToF-driven quadrocopters; combinations with IR/RGB; security flights over industrial areas. (Conrad: DJI Phantom RTF quadrocopter.)
Dear colleagues, before we look at the content of the EM research I would first like to thank the researchers involved. Research is not a nine-to-five job, and it is therefore a pleasure to see how each of us pursues his work with such drive. Researchers are best compared with mountaineers: their goal is to ‘plant flags’ on the summits they feel drawn to. Young climbers are guided at first, but it soon becomes clear that they blaze their own trails through the terrain they explore. The right view at the right moment leads to innovation; the rest is painstaking work to put everything neatly in order. With that thought in mind, let us review what EM undertakes during its daily climbs in the high mountains of electromechanics.
TETRA project ‘Smart Data Clouds’
20 Quadrocopter navigation based on ToF cameras (BV-ToF, 128 x 120 pix.)
The target is to ‘hover’ above the end point of a black line. If yaw is present it should be compensated by an overall torque moment; meanwhile the translation t can be evaluated. The global effect of the roll and pitch angles is represented by the points P and Q, and the actual copter speed v is in the direction PQ. In the end, P and Q need to come together without oscillations, while |t| becomes oriented in the y-direction. An efficient path can be followed by means of fuzzy logic principles.
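To illustrate the fuzzy logic principle on one axis, here is a minimal three-rule controller: the P-Q separation error is fuzzified into Negative/Zero/Positive memberships and defuzzified into a correction by a weighted average. The membership shapes and ranges are my own illustrative choices, not the project's controller:

```python
def fuzzy_correction(error):
    """One-axis fuzzy step driving the P-Q separation to zero.
    Rules: error Negative -> correct +1, Zero -> 0, Positive -> correct -1,
    defuzzified as a membership-weighted average.
    """
    e = max(-1.0, min(1.0, error))   # saturate into the universe of discourse
    mu_neg = max(0.0, -e)            # ramps 0 -> 1 as e goes 0 -> -1
    mu_zero = 1.0 - abs(e)           # peaks at e = 0
    mu_pos = max(0.0, e)             # ramps 0 -> 1 as e goes 0 -> +1
    num = mu_neg * 1.0 + mu_zero * 0.0 + mu_pos * (-1.0)
    den = mu_neg + mu_zero + mu_pos  # always 1.0 with these memberships
    return num / den
```

The smooth roll-off near zero error is what damps the oscillations mentioned above as P and Q come together.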