3D acoustic image processing for underwater visual inspection and navigation. Umberto Castellani, Andrea Fusiello, Vittorio Murino. Dipartimento di Informatica, Università di Verona.


1 3D acoustic image processing for underwater visual inspection and navigation. Umberto Castellani, Andrea Fusiello, Vittorio Murino. Dipartimento di Informatica, Università di Verona. Vision, Image Processing, and Sound (VIPS) lab. Integrated Systems for Marine Environment (ISME).

2 Outline: Introduction; Fast registration and tracking; Model acquisition (online and offline); Model-based tracking; Gas/oil leakage detection; Accurate gas leakage localization; Conclusions.

3 3D acoustic imaging. Data are acquired by the EchoScope, which provides a stream of 3D point clouds. The data are noisy: speckle noise typically affects the acoustic signal. Resolution is low and depends on the frequency of the acoustic signal (about 5 cm at 500 kHz): the higher the frequency, the higher the resolution, but the narrower the field of view. We are therefore forced to operate with a limited field of view.

4 Fast registration and tracking. The final aim is to progressively reconstruct the scene while the sensor is moving. A data processing pipeline has been introduced that starts from data acquisition and produces and visualizes a geometric model of the observed scene. Registration is the main task: each new frame is brought into the same coordinate system as the 3D mosaic built from all previous frames.

5 Pairwise registration. Given two sets of points V and W that represent the same object from different points of view, the aim is to align (register) the two sets by estimating the (unknown) sensor motion. Classic algorithm: ICP. 1. For each point in V, compute the closest point in W. 2. With the correspondences from step 1, compute the incremental transformation and bring V to W. 3. Repeat until convergence. This version of ICP is not suitable for real-time applications.
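The three steps above can be sketched as a minimal point-to-point ICP in Python (a toy brute-force version to illustrate the loop, not the paper's optimized implementation; the SVD-based incremental transform is the standard Kabsch/Horn solution):

```python
import numpy as np

def icp(V, W, iters=20):
    """Basic point-to-point ICP: align point set V onto W.
    Returns the accumulated rotation R and translation t."""
    R, t = np.eye(3), np.zeros(3)
    V = V.copy()
    for _ in range(iters):
        # 1. For each point in V, find the closest point in W (brute force).
        d = np.linalg.norm(V[:, None, :] - W[None, :, :], axis=2)
        C = W[np.argmin(d, axis=1)]
        # 2. Estimate the incremental rigid transform from the correspondences.
        mv, mc = V.mean(0), C.mean(0)
        U, _, Vt = np.linalg.svd((V - mv).T @ (C - mc))
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:   # avoid reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mc - Ri @ mv
        # 3. Apply, accumulate, and repeat until convergence.
        V = V @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti
    return R, t
```

The O(n^2) closest-point search in step 1 is exactly what makes this version too slow for real time; the reverse calibration slide replaces it with a local search in the range image.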

6 Fast registration. A variation of ICP based on reverse calibration, aiming at reducing the time for the closest-point computation. Motion tracking based on a Kalman filter, to smooth the trajectory and to carry out a pre-alignment of the data.

7 Reverse calibration. Closest-point computation: given two sets of points V and W, for each v_i ∈ V find the closest point w_i ∈ W. The acoustic sensor provides: the set of points V = {v_1 … v_n}; a range image r_V(i,j), where i corresponds to elevation and j corresponds to azimuth (Cartesian coordinates ↔ spherical coordinates).

8 Reverse calibration. Therefore v ∈ V has range coordinates r_V(i,j) with respect to view 1 (the data image). What are the range coordinates of v with respect to view 2 (the model image)? They can be computed by using the sensor information, yielding r_W(k,l). The closest point to v is then computed only among the points in the 3D neighborhood of r_W(k,l). The range image preserves connectivity information.
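The idea can be sketched as follows: project v into the elevation/azimuth grid of view 2 and search only the bins around (k,l). The grid layout, parameter names, and the simple index grid are our assumptions for illustration, not the sensor's actual interface:

```python
import numpy as np

def to_spherical(p):
    """Cartesian -> (elevation, azimuth, range)."""
    r = np.linalg.norm(p)
    elev = np.arcsin(p[2] / r)
    azim = np.arctan2(p[1], p[0])
    return elev, azim, r

def closest_by_reverse_calibration(v, points_W, grid_W, n_elev, n_azim,
                                   elev_range, azim_range, win=1):
    """grid_W[k, l] holds the index into points_W of the point imaged at
    elevation bin k / azimuth bin l of view 2 (-1 where there is no return).
    Instead of scanning all of W, search only the (2*win+1)^2 bins around
    the projection of v."""
    elev, azim, _ = to_spherical(v)
    i = int(round((elev - elev_range[0]) / (elev_range[1] - elev_range[0]) * (n_elev - 1)))
    j = int(round((azim - azim_range[0]) / (azim_range[1] - azim_range[0]) * (n_azim - 1)))
    best, best_d = None, np.inf
    for di in range(-win, win + 1):
        for dj in range(-win, win + 1):
            k, l = i + di, j + dj
            if 0 <= k < n_elev and 0 <= l < n_azim and grid_W[k, l] >= 0:
                d = np.linalg.norm(points_W[grid_W[k, l]] - v)
                if d < best_d:
                    best, best_d = grid_W[k, l], d
    return best
```

The per-query cost drops from O(|W|) to a constant-size window, which is what makes the fast registration feasible.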

9 Motion tracking. Kalman filter: it provides an estimate of the sensor position; it removes noise; it can integrate different position estimates (estimates from the motion sensors and the estimate from the ICP registration). Pipeline: the 3D points and the sensor attitude at time t feed the ICP; the ICP output and the motion-sensor data feed the Kalman filter, which produces the output attitude and the predicted sensor attitude at time t+1.
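The fusion of the two estimates can be illustrated with a deliberately simplified one-dimensional Kalman filter that applies two measurement updates per frame, one for ICP and one for the motion sensors, each weighted by its own variance. The 1D constant-position model and all names here are our simplification, not the filter actually used in the system:

```python
import numpy as np

class PoseKalman1D:
    """Toy constant-position Kalman filter for one pose coordinate."""
    def __init__(self, x0=0.0, p0=1.0, q=0.01):
        self.x, self.p, self.q = x0, p0, q  # state, variance, process noise

    def predict(self):
        self.p += self.q                    # constant-position motion model
        return self.x

    def update(self, z, r):
        """Fold in a measurement z with variance r."""
        k = self.p / (self.p + r)           # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1 - k)
        return self.x

def fuse(kf, z_icp, r_icp, z_sensor, r_sensor):
    """One frame: predict, then update with both position estimates."""
    kf.predict()
    kf.update(z_icp, r_icp)        # ICP registration estimate
    kf.update(z_sensor, r_sensor)  # motion-sensor estimate
    return kf.x
```

The less noisy source automatically dominates through its smaller measurement variance, which is how the filter "integrates different position estimates".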

10 Registration results. Harbor wall: 3 frames before registration; the same 3 frames after registration.

11 Registration results. Harbor wall: registration of 30 frames.

12 Experiments. Harbor wall in Florida (USA): 212 frames; the scene is composed of the harbor wall and several pillars. Pier in La Spezia (Italy): 160 frames; the scene is composed of some big vertical pillars with several small columns on their sides. Bay in Bergen (Norway): 4500 frames; the scene is composed of the quayside and the seabed.

13 Harbor wall. Selected frames of the harbor wall while the mosaic is growing.

14 Harbor wall. Final reconstruction.

15 La Spezia. The structure, approximately 200 m long, is used to unload the coal ships that supply fossil fuel to the Enel Produzione power plant.

16 La Spezia. The pier consists of a closed root of 20 m followed by 5 modules like the one represented here. [Diagram with dimensions 15 m, 36 m, and 20 m.]

17 La Spezia. Movie of the pilot interface while the mosaic is growing.

18 Bergen. Bergen bay.

19 Bergen. Motion sensor devices.

20 Bergen. 3D reconstruction of the whole bay in Bergen.

21 Bergen. Detail of the Bergen quayside.

22 Bergen. Tracking performance movie (legend: sensor motion; ICP motion; ICP & sensor motion).

23 Performance

Dataset      Speed (sec.)*   Accuracy (cm)**
Florida      0.103432        11.432055
La Spezia    0.117299        10.820742
Bergen       0.130076        16.830383

* Timings have been computed by software profiling, on a P4 2.8 GHz with 2 GB RAM.
** The accuracy is given by computing the mean distance between the corresponding points of the last ICP iteration.

24 Model acquisition. A synthetic model of the object(s) in the scene is obtained by fitting geometric primitives to the triangulated 3D data. The primitives are generic deformable models, i.e., superquadrics.
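Superquadric fitting is usually driven by the inside-outside function, which scores each 3D point against a candidate shape; fitting then minimizes a least-squares error of this function over the data. A minimal sketch of the function itself (the actual fitting pipeline in the system is not shown here):

```python
import numpy as np

def superquadric_F(p, a, e):
    """Superquadric inside-outside function: F < 1 inside, F = 1 on the
    surface, F > 1 outside.  a = (a1, a2, a3) are the axis sizes and
    e = (e1, e2) the shape exponents (e1 = e2 = 1 gives an ellipsoid)."""
    x, y, z = np.abs(p[..., 0]), np.abs(p[..., 1]), np.abs(p[..., 2])
    a1, a2, a3 = a
    e1, e2 = e
    return ((x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)) ** (e2 / e1) \
           + (z / a3) ** (2 / e1)
```

Because e1 and e2 deform the shape continuously from ellipsoids toward boxes and cylinders, a single primitive family covers the man-made structures (pillars, quaysides) seen in the experiments.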

25 Off-line model acquisition.

26 Off-line model acquisition. Model fitting gives both the shape and the position of the objects in the scene. The extracted models are visualized together with the mosaic, obtaining an augmented representation.

27 Bergen. Superquadrics fitting.

28 La Spezia. CAD model of the reconstructed scenario.

29 La Spezia. Final reconstruction.

30 On-line model acquisition. A RANSAC approach for on-line model fitting. The geometric primitives (i.e., the model) are known a priori: planes and cylinders. The Random Sample Consensus (RANSAC) algorithm is a fitting technique that is robust to outliers. The plane (i.e., the bottom) is fit first, and then the cylinders. At each step, all the points associated with the extracted model are removed from the point cloud, and the process is repeated until no points remain.
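The first step of this pipeline, fitting the bottom plane with RANSAC, can be sketched as follows (a generic RANSAC plane fit under our own parameter choices, not the system's exact implementation; the points flagged as inliers would then be removed before fitting the cylinders):

```python
import numpy as np

def ransac_plane(P, iters=200, thresh=0.05, rng=None):
    """RANSAC plane fit: repeatedly fit a plane through 3 random points
    and keep the hypothesis with the largest consensus set (points within
    `thresh` of the plane).  Returns the plane (n, d) with n.p + d = 0
    and the inlier mask."""
    if rng is None:
        rng = np.random.default_rng()
    best_n, best_d, best_mask = None, None, np.zeros(len(P), bool)
    for _ in range(iters):
        a, b, c = P[rng.choice(len(P), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = -n @ a
        mask = np.abs(P @ n + d) < thresh
        if mask.sum() > best_mask.sum():  # keep the best consensus set
            best_n, best_d, best_mask = n, d, mask
    return best_n, best_d, best_mask
```

Because each hypothesis comes from a minimal 3-point sample, points belonging to pillars or noise cannot bias the plane as they would in a single least-squares fit; this is the robustness to outliers the slide refers to.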


33 Model-based tracking. The overall aim is to support navigation in known regions composed of known objects. Proposed approach: the pose of the ROV is estimated from a single frame; the scene is filtered and segmented; the region of interest is automatically detected; the 'CAD-like' model of a known object is fit to the current frame.

34 Example 1. CAD models of the observed scenario; raw data from the EchoScope.

35 Example 1. CAD models of the observed scenario; ROI detection.

36 Example 2. CAD models of the observed scenario; raw data from the EchoScope.

37 Example 2. CAD models of the observed scenario; ROI detection and CAD fitting.

38 Gas/oil leakage detection. A method to detect gas/oil leakage, based on acoustic data, has been developed. We have focused on some typical situations. Static ROV position: a method for robust background modeling has been developed; gas/oil is detected when foreground is detected (i.e., only the gas/oil can move). Motion parallel to the bottom surface: a method for the detection of blobs going upward has been developed; we train the method to detect blobs with the size and frequency observed in the given data. High frequency of blobs: a method based on shape analysis of the blobs has been developed.
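For the static-ROV case, background modeling can be sketched as a per-cell robust statistic over a stack of range images: anything that deviates strongly from the static background is flagged as foreground. The median/MAD model below is our illustrative choice; the paper's robust background model may differ:

```python
import numpy as np

def fit_background(frames):
    """Per-cell background model from range-image frames taken with a
    static ROV: the median range and a robust spread (MAD) per cell."""
    stack = np.stack(frames)                        # shape (T, H, W)
    med = np.median(stack, axis=0)
    mad = np.median(np.abs(stack - med), axis=0)
    return med, mad

def foreground(frame, med, mad, k=4.0, floor=0.01):
    """Cells whose range deviates from the background by more than k
    robust sigmas are flagged as moving (candidate gas/oil)."""
    sigma = 1.4826 * np.maximum(mad, floor)         # MAD -> sigma estimate
    return np.abs(frame - med) > k * sigma
```

Using the median rather than the mean keeps occasional speckle spikes in the training frames from corrupting the background, which matches the "robust" requirement on the slide.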

39 Gas leakage results. Case 2: rotation, gas, big size.

40 Gas leakage results. Case 3: static position, light oil, small size.

41 Gas leakage results. Case 5: static position, heavy oil.

42 Accurate gas leakage localization. The bottom is modeled by a plane. The gas flow is modeled by either a cone or a 3D straight line. The leakage is localized as the intersection between the plane and the axis of the cone (or the straight line). Everything runs on-line. The accuracy is about 10 cm.
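The geometric core of this localization is a line-plane intersection: with the bottom plane n.x + d = 0 and the flow axis x = p + t*u, the leak sits at the parameter t where the line meets the plane. A minimal sketch (function and argument names are ours):

```python
import numpy as np

def leak_position(plane_n, plane_d, line_p, line_dir):
    """Locate the leak as the intersection of the seabed plane
    (plane_n . x + plane_d = 0) with the gas-flow axis, the 3D line
    x = line_p + t * line_dir."""
    denom = plane_n @ line_dir
    if abs(denom) < 1e-9:
        raise ValueError("flow axis is parallel to the bottom plane")
    t = -(plane_n @ line_p + plane_d) / denom
    return line_p + t * line_dir
```

For example, a vertical flow axis through (1, 2, 5) intersects the plane z = 0 at (1, 2, 0); the same formula applies when the axis comes from a fitted cone rather than a fitted line.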

43 Accurate gas leakage localization.

44 Conclusions. Real-time registration has been achieved by: fast registration via the reverse calibration approach; motion tracking combining the ICP outcome and the motion-sensor measurements (using a Kalman filter). Model fitting improves both scene perception and navigation. Gas/oil leakage can be easily detected and localized by our system. Results are good in relation to the quality of the input data. Future work will address bio-marine applications.

