
1 Stereoscopic PIV

2 Stereoscopic PIV
Theory of stereoscopic PIV
Dantec Dynamics’ stereoscopic PIV software
Application example: Stereoscopic PIV in an automotive wind tunnel (used as example throughout the slide show)

3 Fundamentals of Stereo Vision
[Figure: left and right cameras viewing the centre of the light sheet; the true displacement in the focal plane appears as two different displacements when seen from the left and from the right.]

Stereoscopic PIV is based on the same fundamental principle as human eyesight: stereo vision. Our two eyes see slightly different images of the world around us, and by comparing these images the brain is able to make a 3-dimensional interpretation. With only one eye you will be perfectly able to recognize motion up, down or sideways, but you may have difficulty judging distances and motion towards or away from yourself.
As with 2D measurements, stereoscopic PIV measures displacements rather than actual velocities, and here cameras play the role of “eyes”. The most accurate determination of the out-of-plane displacement (i.e. velocity) is accomplished when there is 90° between the two cameras. In case of restricted optical access, smaller angles can be used at the cost of somewhat reduced accuracy.
For each vector, we extract 3 true displacements (ΔX, ΔY, ΔZ) from a pair of 2-dimensional displacements (Δx, Δy) as seen from the left and right camera respectively; essentially this boils down to solving 4 equations with 3 unknowns in a least-squares manner. Depending on the numerical model used, these equations may or may not be linear.
True 3D displacement (ΔX, ΔY, ΔZ) is estimated from a pair of 2D displacements (Δx, Δy) as seen from left and right camera respectively.
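The least-squares reconstruction can be sketched as follows. This is a minimal illustration assuming a linearized camera model, where each camera's 2D displacement is related to the true 3D displacement through a 2×3 sensitivity matrix; the matrices below are invented (cameras at roughly ±45° in the XZ-plane), not taken from any real calibration:

```python
import numpy as np

# Hypothetical linearized model: each camera's 2D displacement is a linear
# function of the true 3D displacement, d2d = J @ d3d, where J (2x3) is the
# camera's sensitivity (Jacobian) at this interrogation point.
J_left = np.array([[0.71, 0.0, -0.71],   # assumed geometry, left camera
                   [0.0,  1.0,  0.0]])
J_right = np.array([[0.71, 0.0,  0.71],  # assumed geometry, right camera
                    [0.0,  1.0,  0.0]])

def reconstruct_3d(d_left, d_right):
    """Solve 4 equations with 3 unknowns (dX, dY, dZ) in a least-squares sense."""
    A = np.vstack([J_left, J_right])        # 4x3 system matrix
    b = np.concatenate([d_left, d_right])   # 4 measured 2D components
    d3d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d3d

# A purely out-of-plane displacement appears with opposite sign in the two
# cameras' x-components:
d3d = reconstruct_3d(np.array([-0.71, 0.0]), np.array([0.71, 0.0]))
```

Note how the out-of-plane component is recovered entirely from the *difference* between the two cameras' in-plane observations, which is why the camera angle matters for its accuracy.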

4 Stereo Recording Geometry
Focusing an off-axis camera requires tilting of the camera sensor (Scheimpflug condition)
Stereoscopic evaluation requires a numerical model describing how objects in space are mapped onto the sensor of each camera
Parameters for the numerical model are determined through camera calibration

When viewing the light sheet at an angle, the camera backplane (i.e. the camera sensor) must be tilted in order to properly focus the camera’s entire field of view. It can be shown that the image plane, lens plane and object plane must cross each other along a common line in space for the camera images to be properly focused across the entire field of view. This is referred to as the Scheimpflug condition, and it is used in most stereoscopic PIV setups.
Performing the stereoscopic evaluation requires a numerical model describing how objects in 3-dimensional space are mapped onto the 2-dimensional image recorded by each of the cameras. The pinhole camera model is based on geometrical optics and leads to the so-called direct linear transformation (DLT):

x = (A11·X + A12·Y + A13·Z + A14) / (A31·X + A32·Y + A33·Z + A34)
y = (A21·X + A22·Y + A23·Z + A24) / (A31·X + A32·Y + A33·Z + A34)

where uppercase symbols X, Y & Z are used for world coordinates, while lowercase symbols x & y represent image coordinates. The DLT is an idealised model: it does not account for lens distortion, and it cannot properly handle refraction when, for example, measuring from air into water through a window. A multitude of alternative camera models exist, but for measuring airflows the DLT is in most cases sufficient.
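The DLT mapping above can be sketched in a few lines; the A-matrix below is an invented example for illustration, not a real calibration result:

```python
import numpy as np

# Direct linear transformation (pinhole model): a homogeneous 3x4 matrix A
# maps world coordinates (X, Y, Z) onto image coordinates (x, y).
# This matrix is purely illustrative, not from a real calibration.
A = np.array([[800.0,   0.0, -400.0, 1000.0],
              [  0.0, 800.0, -300.0,  900.0],
              [  0.0,   0.0,   -1.0,    5.0]])

def dlt_project(A, XYZ):
    """Map a world point onto the image plane using the DLT."""
    Xh = np.append(XYZ, 1.0)   # homogeneous world coordinates
    xh = A @ Xh                # homogeneous image coordinates
    return xh[:2] / xh[2]      # perspective division -> (x, y)

xy = dlt_project(A, np.array([0.0, 0.0, 0.0]))
```

The perspective division by the third homogeneous coordinate is what makes the model nonlinear in (X, Y, Z), even though the matrix itself is linear.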

5 Camera Calibration
Images of a calibration target are recorded.
The target contains calibration markers in known positions.
Comparing known marker positions with corresponding marker positions on each camera image, model parameters are adjusted to give the best possible fit.

With the DLT model, coefficients of the A-matrix can in principle be calculated from known angles, distances and so on for each camera. In practice, however, this approach is not very accurate: as any experimentalist will know, once you are in the laboratory you cannot set up the experiment exactly as planned, and it is very difficult, if not impossible, to measure the relevant angles and distances with sufficient accuracy. Instead, an experimental camera calibration is performed: Images of a calibration target are recorded. The calibration target contains calibration markers (for example dots or crosses) whose true (X,Y,Z)-positions are known. Comparing the known marker positions with the positions of their respective images on each camera image, the model parameters can be estimated.
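For the DLT model, the fit described above can be sketched as a linear least-squares problem: each marker correspondence contributes two equations in the 12 coefficients of the A-matrix, and the best fit (in the algebraic sense) is the singular vector belonging to the smallest singular value. The camera matrix and marker positions below are synthetic, for illustration only:

```python
import numpy as np

def fit_dlt(world_pts, image_pts):
    """Estimate the DLT matrix A (up to scale) from marker correspondences.

    world_pts: (N, 3) known marker positions (X, Y, Z), N >= 6, not coplanar
    image_pts: (N, 2) corresponding marker positions (x, y) on the sensor
    """
    rows = []
    for (X, Y, Z), (x, y) in zip(world_pts, image_pts):
        # Each marker yields two linear equations in the 12 DLT coefficients:
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z, -x])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z, -y])
    # The coefficient vector spans the null space of the system matrix; the
    # right singular vector of the smallest singular value is the LS solution.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(A, P):
    """Map a world point through the DLT (with perspective division)."""
    h = A @ np.append(P, 1.0)
    return h[:2] / h[2]

# Synthetic check: invented camera matrix, target markers on a unit cube
# (mimicking a target traversed through several Z-positions):
A_true = np.array([[800.0,   0.0, -400.0, 1000.0],
                   [  0.0, 800.0, -300.0,  900.0],
                   [  0.0,   0.0,   -1.0,    5.0]])
world = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
image = np.array([project(A_true, p) for p in world])
A_fit = fit_dlt(world, image)
```

Note that the fitted matrix is only determined up to an overall scale (and sign); the perspective division cancels this, so reprojected points agree with the true model.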

6 Overlapping Fields of View
Stereoscopic evaluation is possible only within the area covered by both cameras.
Due to perspective distortion each camera covers a trapezoidal region of the light sheet.
Careful alignment is required to maximize the overlap area.
The interrogation grid is chosen to match the spatial resolution.

Obviously, stereoscopic reconstruction is possible only where information is available from both cameras. Due to perspective distortion each camera covers a trapezoidal region of the light sheet, and even with careful alignment of the two cameras their respective fields of view will only partly overlap. Within the region of overlap, interrogation points are chosen in a rectangular grid. In principle, stereoscopic calculations can be performed in an arbitrarily dense grid, but the 2D results from each camera have limited spatial resolution, and using a very dense grid for the 3D evaluation will not improve the fundamental spatial resolution of the technique.
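A minimal sketch of restricting the interrogation grid to the overlap area, with invented trapezoidal fields of view; a simple ray-casting point-in-polygon test stands in for whatever geometry routine the real software uses:

```python
import numpy as np

def point_in_poly(p, poly):
    """Ray-casting test: is point p strictly inside the polygon?"""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal through p
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

# Hypothetical trapezoidal fields of view of the left and right camera in the
# light sheet plane (perspective distortion makes them non-rectangular):
left_fov  = [(0, 0), (10, 1), (10, 9), (0, 10)]
right_fov = [(2, 1), (12, 0), (12, 10), (2, 9)]

# Keep only interrogation grid points visible from BOTH cameras:
grid = [(x, y) for x in range(13) for y in range(11)
        if point_in_poly((x, y), left_fov) and point_in_poly((x, y), right_fov)]
```

In this toy setup a point near the left edge is seen by the left camera only and is therefore dropped from the interrogation grid.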

7 Left / Right 2D Vector Maps
Left & right camera images are recorded simultaneously.
Conventional PIV processing produces 2D vector maps representing the flow field as seen from left and right.
The vector maps are re-sampled at points corresponding to the interrogation grid.
Combining left / right results, all three velocity components are calculated.

The actual stereoscopic measurement starts with conventional 2D PIV processing of simultaneous recordings from the left and right camera. This produces two 2-dimensional vector maps representing the instantaneous flow field as seen from the left and right camera respectively. Using the camera model, including the parameters from the calibration, the points of the chosen interrogation grid are mapped from the light sheet plane onto the left and right image plane (camera sensor) respectively. The 2D vector maps are re-sampled at these new interrogation points, using for example bilinear interpolation to estimate 2D vectors at these points based on the nearest neighbours. With a 2D displacement seen from both the left and right camera estimated at the same point in physical space, the true 3D particle displacement can now be estimated by solving 4 equations with 3 unknowns in a least-squares manner.
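The re-sampling step can be sketched as plain bilinear interpolation on a regular grid of 2D vectors (one of the interpolation schemes mentioned above); the demo field is synthetic:

```python
import numpy as np

def bilinear(vmap, x, y):
    """Bilinear interpolation of a regularly gridded 2D vector map.

    vmap: (ny, nx, 2) array of (u, v) vectors on an integer grid
    (x, y): fractional position where a re-sampled vector is needed
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    tx, ty = x - x0, y - y0
    # Weighted average of the four nearest neighbours:
    return ((1 - tx) * (1 - ty) * vmap[y0,     x0] +
            tx       * (1 - ty) * vmap[y0,     x0 + 1] +
            (1 - tx) * ty       * vmap[y0 + 1, x0] +
            tx       * ty       * vmap[y0 + 1, x0 + 1])

# On a synthetic linearly varying vector field (u = x, v = y), bilinear
# interpolation reproduces the field exactly:
field = np.dstack(np.meshgrid(np.arange(4.0), np.arange(4.0)))
sample = bilinear(field, 1.5, 0.5)
```

Bilinear interpolation is exact for linear fields; for strongly curved vector fields a higher-order scheme would reduce the re-sampling error.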

8 Stereoscopic Reconstruction
[Figure: left and right 2D vector maps, the overlap area with the interrogation grid, and the resulting 3D vector map]

With Dantec Dynamics’ DynamicStudio a large number of matching 2D vector map pairs can quickly be recorded, yielding a corresponding number of 3D vector maps after post-processing. A large number of vector maps is required to calculate reliable statistics such as 3D mean velocities, RMS values and cross-correlation coefficients.

9 Dantec Dynamics Stereoscopic PIV System Components
Seeding
PIV laser (double-cavity Nd:YAG)
Light guiding arm & light sheet optics
2 cameras on Scheimpflug mounts
Calibration target
DynamicStudio PIV software
DynamicStudio stereoscopic PIV add-on

Issues common to 2D and stereoscopic PIV:
Seeding
Energy budget (laser pulse energy vs. measuring area and light sheet thickness)
Light sheet thickness (significant flow through the light sheet may require a thicker light sheet, which in turn may require a more powerful laser)
Issues specific to stereoscopic PIV:
Calibration target for camera calibration
2 (off-axis) cameras on Scheimpflug mounts to allow focusing
Software: Dantec Dynamics’ DynamicStudio system has always included support for multiple cameras. The routines for camera calibration and stereoscopic evaluation are added to the DynamicStudio software package.

10 Recipe for a Stereoscopic PIV Experiment
Carefully align the light sheet with the calibration target
Record calibration images in the desired measuring position using both cameras (the target defines the co-ordinate system!)
Perform camera calibration based on the calibration images
Record particle images with the laser turned on
Perform a Calibration Refinement to correct for the residual misalignment between calibration target and laser light sheet
Record particle images from your flow using both cameras
Calculate 2D PIV vector maps
Calculate 3D vectors based on the two 2D PIV vector maps and the (refined) camera calibration

For camera calibration, images of a calibration target are required. The plane target must be parallel with the light sheet, and the target is traversed perpendicular to its own surface to acquire calibration images covering the full thickness of the light sheet (images are recorded in at least 3, but typically 5 or 7, positions). Calibration markers on the target identify the X- & Y-axes of the coordinate system, and the traverse moving the target identifies the Z-axis. Since the calibration target and traverse define the coordinate system, care should be taken to align the target and the traverse with the experiment. The subsequent stereoscopic evaluation implicitly assumes that the coordinate system is Cartesian, so it is also important that the traverse is normal to the calibration target surface. The stereoscopic evaluation also assumes that the central calibration image corresponds to the centre of the light sheet, so proper alignment of the laser and the calibration target is essential for reliable results. Imperfections in the alignment of the calibration target with the light sheet can be overcome by performing a Calibration Refinement using actual particle images.

11 Camera Calibration
Each image pair must be related to a specific Z-co-ordinate, and the software will look in the log-field of the set-up for this. If no Z-co-ordinate is found there, it will look for co-ordinates in the log-fields of each individual image, and if a Z-co-ordinate still cannot be found, the user is prompted to specify one.
If the calibration images appear OK, the user can proceed to the actual calibration. This is a two-step procedure handled automatically by the software: First, image processing is used to derive the positions of the calibration markers on the camera images with subpixel accuracy. This produces a list of nominal marker positions and corresponding image co-ordinates on the sensor of the left and right camera respectively. This list is then used to estimate the parameters of the chosen numerical imaging model, giving the best possible fit between the nominal calibration marker positions and their corresponding pixel positions on the camera images.
A single project may include several sets of calibration images, and for each set several calibrations can be performed, for example using different imaging models.

12 Calibration Refinement
Calibration Refinement improves the accuracy of an existing stereo calibration by using particle images acquired simultaneously from both cameras.
Each of the original Imaging Model Fits (IMFs) refers to a coordinate system defined by the calibration target used. When using the imaging models for later analyses, it is generally assumed that the X/Y-plane where Z=0 corresponds to the centre of the light sheet, but in practice this assumption may not hold, since it can be very difficult to properly align the calibration target with the light sheet.
Provided the calibration target was reasonably aligned with the light sheet, it is however possible to adjust the imaging model fits by analyzing a series of particle images acquired simultaneously by the two cameras. This adjustment is referred to as Calibration Refinement and changes the coordinate system used, so that Z=0 does indeed correspond to the centre of the light sheet, as assumed by subsequent analyses using the camera calibrations (IMFs).
The particle images can be from actual measurements; they need not be acquired specifically for the purpose of calibration refinement. You may benefit from preprocessing the particle images, e.g. to remove the background, in which case the processed images can be chosen as input for the calibration refinement. Likewise, preprocessing of the calibration images may be applied to get the best possible initial camera calibrations.
The red and blue polygons illustrate the part of the light sheet visible from each of the cameras according to the present calibrations. Analysis is only possible in the overlap area visible from both cameras, so in the example above the cameras’ fields of view could have been aligned better.

13 Calibration Refinement
The resulting vector map represents the misalignment between the light sheet and the calibration target and is referred to as a 'Disparity Map'. Each vector must be distinct in the sense that its correlation peak must rise significantly above the noise floor. Vectors that do not fulfill this SNR criterion will be shown in red and excluded from further calculations.

Interpreting correlation maps:
If your disparity vectors do not look as nice as in the example above, inspection of the correlation map can be very helpful in understanding what the problem is. It is quite normal for the correlation peak to be elongated; this is a consequence of the light sheet having a finite thickness and the cameras looking at it from different angles.
If the peak appears fragmented (multiple peaks along a common ridge), it is probably because too few particle images went into the calculation. This can be simply because there were not enough images, or because you are at or near the edge of the light sheet, where light intensity is low and only a few of the images contain particles big/bright enough to be detected by both cameras.
If the correlation map contains no peaks at all, it is possible that the misalignment between light sheet and calibration target was simply too big for the calibration refinement to recover the true position of the light sheet. In that case the peak we are looking for will be outside the interrogation area, and you may try using a bigger one. Alternatively, check whether you are by mistake working on the basis of particle images that were not acquired simultaneously, in which case they do of course not correlate at all. Finally, you may consider the possibility that the cameras and/or the light sheet moved during acquisition; if, for example, mirrors are involved in bringing the light sheet to the intended measuring area, small mechanical vibrations can cause the light sheet to move several mm. Similarly, mechanical vibrations may cause the cameras to move, which will of course also have a serious impact on the calibration and the subsequent analysis of images.

Interpreting disparity vectors:
Each disparity vector can be interpreted as a measure of the distance from the nominal Z=0 plane to where the light sheet really is. The vectors will in general point in one direction if the light sheet is closer to the cameras than assumed, and in the opposite direction if it is further away. Assuming the light sheet is plane, we can make a least-squares fit of a plane through all of the disparity vectors and use the fitted plane as an estimate of where the light sheet really is. We can then define a transformation (rotation and translation) that moves points back and forth between the original (target) coordinate system and a modified (light sheet) coordinate system where Z=0 corresponds to the centre of the sheet. Applying this transformation to the calibration markers from the original calibration images, we obtain a new list of corresponding image and object coordinates, which can then be fed into the normal camera calibration routines to generate modified calibrations.
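The least-squares plane fit through the disparity data can be sketched as an ordinary linear least-squares problem. The disparity values below are synthetic, generated from an assumed light-sheet tilt and offset plus noise:

```python
import numpy as np

# Hypothetical disparity data: at each grid point (x, y), the disparity vector
# yields an estimated out-of-plane offset dz of the real light sheet relative
# to the nominal Z=0 plane of the calibration target.
rng = np.random.default_rng(0)
xy = rng.uniform(-50, 50, size=(200, 2))      # grid positions (mm)
a_true, b_true, c_true = 0.002, -0.001, 0.8   # assumed tilt and offset
dz = a_true * xy[:, 0] + b_true * xy[:, 1] + c_true
dz += rng.normal(0.0, 0.01, size=200)         # measurement noise

def fit_plane(xy, dz):
    """Least-squares fit of a plane z = a*x + b*y + c through the disparities."""
    M = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    coeffs, *_ = np.linalg.lstsq(M, dz, rcond=None)
    return coeffs  # (a, b): tilts of the sheet, c: its offset from Z=0

a, b, c = fit_plane(xy, dz)
```

The fitted plane then defines the rotation and translation used to move the coordinate system so that Z=0 coincides with the centre of the light sheet.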

14 Calculating 2D Vector Maps

15 Stereoscopic Evaluation & Statistics
Once calibration has been performed and 2D vector maps have been calculated, stereoscopic evaluation can be performed on pairs of 2D vector maps. Using different calibrations, several 3D vector maps can be derived from the same set of 2D vectors. Please note that the user is free to combine any calibration with any set of 2D vectors; there is no way to check whether, for example, the calibration images were recorded in the same position as the 2D vector maps.
For a set of 3D vector maps, the user can finally calculate statistics corresponding to the statistics calculation in DynamicStudio, but expanded from 2D to 3D. The statistics calculation thus produces:
Mean velocities
Standard deviations
Cross-correlation coefficients
Number of valid vectors included in each position (invalid vectors in the 2D vector maps are inherited by the 3D vectors affected by them, and these invalid 3D vectors are excluded from the statistics calculation)
Both individual 3D vector maps and 3D statistics can be exported as Tecplot or plain ASCII files, and all graphics can be exported or copied via the clipboard.
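The statistics listed above can be sketched with NumPy's NaN-aware routines, using NaN to mark the invalid 3D vectors that are excluded from the calculation. The ensemble below is synthetic, for illustration only:

```python
import numpy as np

# Hypothetical ensemble: 500 3D vector maps on a 16x20 interrogation grid,
# with components (U, V, W). NaN marks 3D vectors invalidated by rejected
# 2D vectors in either camera's vector map.
rng = np.random.default_rng(1)
maps = rng.normal([5.0, 0.5, 0.1], [0.8, 0.3, 0.2], size=(500, 16, 20, 3))
maps[rng.random(maps.shape[:3]) < 0.05] = np.nan   # ~5% invalid vectors

mean = np.nanmean(maps, axis=0)                    # mean U, V, W per point
std = np.nanstd(maps, axis=0, ddof=1)              # RMS fluctuations
n_valid = np.sum(~np.isnan(maps[..., 0]), axis=0)  # valid samples per point

# Cross-correlation coefficient between u' and w' fluctuations, per point:
u = maps[..., 0] - mean[..., 0]
w = maps[..., 2] - mean[..., 2]
r_uw = np.nanmean(u * w, axis=0) / (np.nanstd(maps[..., 0], axis=0) *
                                    np.nanstd(maps[..., 2], axis=0))
```

The NaN-aware reductions drop the invalid vectors point by point, so each grid position is averaged over its own number of valid samples, mirroring the per-position valid-vector count reported by the statistics calculation.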

