Sensing for Hydrographic Autonomous Surface Vehicles

1 Sensing for Hydrographic Autonomous Surface Vehicles
Coral Moreno, 2019 US Hydro Conference. Hello everyone, my name is Coral Moreno. ASVs are becoming a popular tool in hydrography, but the reality is that they are not truly autonomous. Today my talk is going to be a prospective analysis of what will be required to make ASVs autonomous enough to operate safely in coastal environments while conducting a mapping mission. Center for Coastal and Ocean Mapping / Joint Hydrographic Center, University of New Hampshire

2 Coastal Mapping with ASVs
Complexity of Coastal Environments; Readiness Level of an ASV. We at CCOM are interested in coastal mapping with ASVs, so let's see what this interest involves: 1) what is the complexity level of coastal environments, and 2) what is the readiness level of an ASV to operate autonomously in such environments on its own or with minimal supervision and intervention. _______________________ ASV for hydrography – the availability of smaller, more manageable, and more sophisticated ASVs has made their usage more realistic for coastal surveys.

3 Coastal environments are challenging for ASV operations
It is hard to navigate in coastal areas because there are rocks, lobster pots, piers, heavy boat traffic, and people in the water: all things that the ASV is less likely to encounter in the deep ocean. Close to shore, the ASV has to operate reliably and safely in the presence of humans and objects.

4 Autonomous Situational Awareness
ASV Autonomy levels: OPERATOR ONLY (Level 1), HUMAN DELEGATED (Level 2), HUMAN SUPERVISED (Level 3), FULLY AUTONOMOUS (Level 4). Guidance & Control; Safe Navigation; Mission. Existing operational modes: remote control, auto heading, auto speed, WPT navigation, path following. Autonomous Situational Awareness.
Where does ASV autonomy stand now? Autonomy can be thought of in terms of three aspects: make the boat drive on its own; make the boat aware of its environment and able to make navigational decisions, similar to what a captain of a ship or a boater does; and carry out the mission autonomously (in hydrography that would mean making the hydrographer's job during data acquisition autonomous, e.g. supervising data acquisition, adjusting sonar parameters, etc.).
There are different divisions of autonomy levels in the literature, but to simplify I adopted this rough scheme of four levels that spans from fully under human control (level 1) to fully autonomous (level 4). The 1st level is the remote-control mode. The 2nd level is "human delegated": all the low-level automatic controls, like auto-heading and auto-speed (similar to the cruise control we have in our cars). The 3rd level is "human supervised": the ASV can perform a wide variety of more sophisticated behaviors when given top-level permission, such as executing a survey plan; human and system can both initiate behaviors, but the ASV can do so only within the scope of its task. The 4th level is full autonomy: the ASV receives mission goals, translates them into tasks to perform, and makes all decisions; a human can enter the loop in case of an emergency.
To see where we are with ASV autonomy, let's look at the current operational modes: remote control, auto heading and speed, and WPT navigation. This means that we are now at levels 2-3 of autonomy. There is a huge gap between levels 3 and 4, and a critical component to bridge that gap will be to increase the autonomous situational awareness of the ASV. This is exactly what is happening in the self-driving car domain, and my thesis will in part examine how well those approaches can be applied to the marine environment. For the remainder of the talk I will discuss how autonomous situational awareness can be achieved. _______________________________________________________________________________________________________ Source: "Unmanned Systems Integrated Roadmap FY " by the DoD
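To make the four-level scheme concrete, here is a minimal Python sketch of how the levels could be encoded in software; the enum and helper names are illustrative assumptions, not part of any existing ASV stack.

```python
# Minimal sketch of the four-level autonomy scheme described above.
# The names follow the slide; the helper is an illustrative assumption.
from enum import IntEnum

class AutonomyLevel(IntEnum):
    OPERATOR_ONLY = 1     # level 1: remote control only
    HUMAN_DELEGATED = 2   # level 2: low-level loops such as auto-heading and auto-speed
    HUMAN_SUPERVISED = 3  # level 3: executes behaviors (e.g. a survey plan) with top-level permission
    FULLY_AUTONOMOUS = 4  # level 4: receives mission goals and makes all decisions

def needs_situational_awareness(level: AutonomyLevel) -> bool:
    # Bridging levels 3-4 requires the ASV to perceive and reason about its surroundings.
    return level >= AutonomyLevel.HUMAN_SUPERVISED

print(needs_situational_awareness(AutonomyLevel.HUMAN_DELEGATED))  # False
```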

5 Obstacles Static Dynamic
Providing the ASV with environmental awareness means that the ASV identifies the objects in its surroundings, understands whether they are static or dynamic, knows their current location, speed, and heading, and predicts their future behavior. All of this information may come in the form of a map that is updated in close to real time, centered on the ASV, and aligned with its reference frame. To review very quickly what those obstacles are: some of them are static (landscape, man-made structures, different kinds of floats, and submerged things like seaweed) and some are dynamic (vessels of different kinds and sizes, humans, and drifting things like floating kelp). __________________________________________________ i. Static • Landscape: rocks, shoreline (icebergs, cliffs – Val said not to mention these for the conference) • Man-made structures: oil/gas platforms, piers/ports • Navigation buoys, lobster pots, floats • Submerged/underwater: seaweed, shoal zones ii. Dynamic (size and maneuverability) • Large: commercial ships, research vessels • Medium: superyachts, tug and barge, etc. • Small: fishing boats, motor boats, sailboats (manned/unmanned) • Humans: surfers, paddlers, kayakers, swimmers • Drifting: floating ice, floating kelp
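As a rough illustration of the static/dynamic obstacle taxonomy above, here is a small Python sketch of how such obstacles might be represented in an ASV's world model; the class and field names are assumptions made for this example.

```python
# Illustrative-only data model for the static/dynamic obstacle taxonomy.
from dataclasses import dataclass
from enum import Enum, auto

class Mobility(Enum):
    STATIC = auto()   # rocks, piers, buoys, lobster pots, shoals
    DYNAMIC = auto()  # vessels, swimmers, drifting kelp or ice

@dataclass
class Obstacle:
    label: str             # e.g. "lobster pot", "sailboat", "swimmer"
    mobility: Mobility
    position: tuple        # (lat, lon) in the ASV-centered map
    speed_mps: float = 0.0
    heading_deg: float = 0.0

kelp = Obstacle("floating kelp", Mobility.DYNAMIC, (43.07, -70.71), speed_mps=0.2)
```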

6 Charts Static Dynamic Source: Sam Reed
How can we gather information about those obstacles? One way would be to use charts. Charts contain information about: 1) most of the static obstacles, 2) the depth, and 3) where it is safe to navigate. We had a project on chart-aware navigation of an ASV, but: 1) not all obstacles appear on the chart (e.g. dynamic obstacles, anchored fishing gear); 2) charts may be outdated – for example, the coastline may have changed or a new pier may have been built; 3) charts are not always available – sometimes we want to use an ASV to map uncharted waters. So in addition to charts we can use the ASV sensors for situational awareness. Source: Sam Reed
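As a minimal sketch of using charted information as a static prior, the snippet below checks whether a planned waypoint falls inside a charted hazard area. It assumes the shapely geometry library is available, and the polygon coordinates are made up for illustration.

```python
# Hypothetical example: test a waypoint against a charted hazard polygon.
from shapely.geometry import Point, Polygon

# Made-up lon/lat polygon standing in for a charted rock area.
charted_rock_area = Polygon([(-70.712, 43.071), (-70.710, 43.071),
                             (-70.710, 43.073), (-70.712, 43.073)])

def waypoint_is_charted_hazard(lon: float, lat: float) -> bool:
    return charted_rock_area.contains(Point(lon, lat))

print(waypoint_is_charted_hazard(-70.711, 43.072))  # True: inside the charted area
```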

7 ASV Sensors for Perception: Automatic Identification System (AIS)
Lowrance radar, Velodyne LIDAR, FLIR camera, RGB camera, Automatic Identification System (AIS). For my analysis I will focus on the sensors available on our development system, which is a CW4. Above water: AIS – sends and receives the following information for a vessel within radio range: name, size, speed, heading, CPA, TCPA, and LAT/LONG location. Radar and lidar – measure range and bearing to an obstacle. LiDAR, color camera, FLIR camera. Underwater: sonar systems for underwater detection – MBES (we have an EM2040), forward-looking sonar, and side-scan sonar. I am going to focus more on the above-water domain. Multi-Beam Echo-Sounder: Kongsberg EM2040P
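One simple way to summarize the sensor suite described above is as a configuration table; the Python dictionary below is only an illustrative sketch of that summary, not the vehicle's actual configuration format.

```python
# Illustrative summary of the perception sensor suite; names and fields are assumptions.
SENSOR_SUITE = {
    "AIS":         {"domain": "above water", "measures": ["id", "size", "speed", "heading", "CPA", "TCPA", "position"]},
    "radar":       {"domain": "above water", "measures": ["range", "bearing"]},
    "lidar":       {"domain": "above water", "measures": ["3D point cloud"]},
    "rgb_camera":  {"domain": "above water", "measures": ["imagery"]},
    "flir_camera": {"domain": "above water", "measures": ["thermal imagery"]},
    "mbes":        {"domain": "underwater",  "measures": ["bathymetry"], "model": "Kongsberg EM2040P"},
}
```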

8 Static Dynamic Radar Let's see what kind of obstacles each sensor can detect: radar can detect most of the above-water obstacles except for things like floating kelp.

9 Static Dynamic AIS AIS is good only for vessels with a working AIS system onboard.

10 Static Dynamic LIDAR Lidar can be good for detection of above-water objects. (It might work for kelp, since the laser may not be fully absorbed by the water.)

11 Static Dynamic Camera Camera is the sensor with the richest data. It can be used to identify all objects.

12 Static Dynamic FLIR Camera
A FLIR camera can reliably detect things with a large temperature difference from the water, so it may be good for detecting a person in the water, but not so much for floating kelp. It won't work underwater but will be good for most of the above-water objects. ______ Time should show 5:30 or 6:00 min.
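Slides 8-12 can be summarized as a sensor-versus-obstacle lookup table. The sketch below paraphrases those slides; the obstacle class names and the exact capability sets are simplifications made for illustration.

```python
# Which obstacle classes each sensor can plausibly detect (paraphrased from slides 8-12).
DETECTS = {
    "radar":       {"vessels", "buoys", "piers", "rocks"},           # weak on floating kelp
    "AIS":         {"vessels"},                                       # only AIS-equipped vessels
    "lidar":       {"vessels", "buoys", "piers", "rocks", "kelp"},    # kelp detection uncertain
    "rgb_camera":  {"vessels", "buoys", "piers", "rocks", "kelp", "people"},
    "flir_camera": {"vessels", "buoys", "piers", "people"},           # needs strong thermal contrast
}

def sensors_for(obstacle: str):
    """Return the sensors that can plausibly detect a given obstacle class."""
    return [s for s, classes in DETECTS.items() if obstacle in classes]

print(sensors_for("people"))  # ['rgb_camera', 'flir_camera']
```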

13 Sensor detection ranges (radar 50 m-40 km, AIS 10/20 km, LIDAR 100 m, camera 100 m-1 km, FLIR 200 m) and fields of view (camera (30°-50°)×(50°-90°), FLIR 20°×25°)
Now let's examine the range and FOV of the various sensors. The range of each sensor: AIS – about 10/20 km. Radar – 50 m to 24 NM (~44 km) [spec]; this depends on the range settings and the location of the antenna. Lidar – 100 m. It is hard to specify a maximum range for camera systems because it depends on many different factors (the object's contrast, its projected size on the camera sensor, the camera focal length, and the algorithm used). Color camera – some papers claim … m, but the range can reach 500 m or maybe even 1 km if the weather is nice and the algorithm can identify objects about 30 pixels in size. FLIR – 50-200 m (high-performance systems can detect even a small 30' outboard vessel beyond 5 NM). A zoom system can improve camera performance and extend the detection range. The FOV of each sensor: AIS, radar, and lidar have a FOV of 360°. Camera – (30°-50°)×(50°-90°) [V×H]. FLIR – 20°×25° [V×H]. AIS is great as long as the other vessel has such a system working onboard. For close range (<100 m), lidar and cameras provide the best solution. Resolution: camera > lidar > radar (radar has the lowest resolution). _____________________ Resolution [V, H]: Radar 25°±5°, 5.2°±0.5°; LiDAR 1.33°, 0.1°-0.4°; RGB camera 448 px, 800 px; FLIR 480 px, 640 px.
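A small geometric sketch of how the ranges and fields of view above could be used in software: given a contact's range and relative bearing, decide which sensors can plausibly cover it. The numeric values are rounded from the slide and the function is illustrative only.

```python
# Nominal (max_range_m, horizontal_fov_deg) per sensor; 360 means all-around coverage.
SENSOR_COVERAGE = {
    "AIS":         (20_000, 360.0),
    "radar":       (44_000, 360.0),
    "lidar":       (100,    360.0),
    "rgb_camera":  (1_000,  70.0),   # rough midpoint of the 50-90 deg horizontal spread
    "flir_camera": (200,    25.0),
}

def in_coverage(sensor: str, range_m: float, bearing_deg: float) -> bool:
    """True if a contact at (range, bearing relative to the bow) is inside the sensor's nominal coverage."""
    max_range, fov = SENSOR_COVERAGE[sensor]
    if range_m > max_range:
        return False
    rel = (bearing_deg + 180.0) % 360.0 - 180.0  # normalize bearing to [-180, 180]
    return fov >= 360.0 or abs(rel) <= fov / 2.0

print(in_coverage("flir_camera", 150, 10))  # True
print(in_coverage("lidar", 150, 10))        # False: beyond ~100 m
```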

14 Sensor applicability: bad weather, high waves and water reflectivity, poor light conditions (Radar, LIDAR, Color Camera, FLIR Camera, AIS)
The sensors' applicability in different conditions: 4. In bad weather and high sea state: radar and AIS work better than lidar, a color camera, and a FLIR camera, but this means that close-range detection and identification are limited. 5. In poor light conditions: object detection can still be done using radar, lidar, a FLIR camera, and AIS. This means visual identification is limited, and effort must be invested in object-identification algorithms for lidar and FLIR camera data. ______________________________________________________________________ NOT SAYING!! Radar – pros and cons: • Less effective at picking up distant storm cells. While a conventional 4 kW radar will help track approaching storms from 30 miles or more, the BR24 will see strong cells at … miles. Simrad says the latest "3G" version can see cells at 17 nautical miles or more. • Less effective at detecting difficult shorelines. The BR24 will not detect sloping beaches and shorelines as well as some conventional radars. FLIR allows seeing clearly in total darkness and solar glare, and through light fog and smoke; bad weather and sea conditions degrade the detection. LIDAR is sensitive to sun glare. Liu, Zhixiang, et al. "Unmanned surface vehicles: An overview of developments and challenges." Annual Reviews in Control 41 (2016).
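The applicability table can likewise be encoded as a simple filter that returns which sensors remain usable under given conditions; the sets below are coarse paraphrases of the slide, not measured performance data.

```python
# Coarse, illustrative applicability sets paraphrasing the slide.
WORKS_IN = {
    "bad_weather_high_sea_state": {"radar", "AIS"},
    "poor_light":                 {"radar", "AIS", "lidar", "flir_camera"},
}
ALL_SENSORS = {"radar", "AIS", "lidar", "rgb_camera", "flir_camera"}

def usable_sensors(conditions: list) -> set:
    """Intersect the sensors that remain effective across all active conditions."""
    usable = set(ALL_SENSORS)
    for c in conditions:
        usable &= WORKS_IN.get(c, ALL_SENSORS)
    return usable

print(usable_sensors(["poor_light"]))                                 # no color camera
print(usable_sensors(["poor_light", "bad_weather_high_sea_state"]))   # radar + AIS only
```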

15 Object Identification
Sensor functionality: Object Detection, Object Identification, Occupancy Grid Mapping (Radar, LIDAR, Color Camera, FLIR Camera, AIS). The functionality of the sensors: 6. Object detection can be done with all sensors, but it is easier with an AIS, radar, and lidar, since they measure range directly, and harder with FLIR and color cameras, because they must employ appropriate algorithms. 7. Object identification can be done with an AIS, and with lidar, color, and FLIR cameras using appropriate algorithms; these are good for relatively close ranges. 8. An occupancy grid map shows probabilistic locations of obstacles in a geospatial context as measured by the sensors that provide range: an AIS, radar, and lidar. Historically (naturally), a camera does not provide range information, but new algorithms may allow us to recover range from a single camera. ______________________________________________________________________________________________________ NOT SAYING!! Other challenges for camera systems: image stabilization, water splash. Object detection and identification require algorithms. Modern algorithms rely on machine learning (ML) and deep learning (DL): they are not accurate without a lot of data; they are not explainable – if something goes wrong it is hard to pinpoint where the problem was; and they require strong processing units. Liu, Zhixiang, et al. "Unmanned surface vehicles: An overview of developments and challenges." Annual Reviews in Control 41 (2016).

16 Complementary nature of sensors
Objectives: complementary nature of sensors; sensor fusion. So we can see that the sensors have a complementary nature: the weak points of one sensor can be compensated for by other sensors. Currently, there are no algorithms that fuse all of the ASV sensors together, and there are some (partial) algorithms that were developed for non-marine environments, such as self-driving cars and drones. In my PhD I am going to adapt those algorithms to the marine environment and integrate them in order to offer a solution to the problem.

17 Examples of Algorithms
2D Object Identification (YOLOv3, 2018); 3D Object Identification (PointNet, 2016); Monocular Range Image (MonoDepth, 2017). Here are some examples of algorithms that can be used: 1) 2D object detection – this algorithm takes photos from the camera, draws boxes around detected objects, and provides a classification of those objects. 2) 3D object detection – this algorithm accepts a point cloud from the lidar and classifies points that belong to the same object (this is called semantic segmentation, since it gives semantic meaning to segments). These can contribute to object identification from the cameras and the lidar. The third algorithm shows that there may be a way to extract range from an image. _______________________________________________________________ 3) The third algorithm accepts an image from a camera as input and outputs a depth image, that is, an image that shows the distance to every object in the scene. Here we can see that it has some trouble dealing with splashes on the camera and reflections from the water surface. It can reinforce radar measurements at mid range (beyond the lidar range). _____________________________________________ NOT SAYING!! What is DL? A sub-field of ML; it is about learning representations from data, done by learning successive layers of representations that are more and more meaningful. What are NNs? Those successive layers stacked on top of each other are called neural networks. 3D object detection: PointNet is a classification network for 3D point clouds taken from LIDAR, depth cameras, or CAD models. They trained an MLP on each point separately (with shared weights across points); each point was 'projected' to a 1024-dimensional space. Then they solved the ordering problem using a symmetric function (max-pool) over the points, which yielded a 1×1024 global feature for every point cloud that they fed into a nonlinear classifier. They also solved the rotation problem using a 'mini-network' they called T-Net, which learns a transformation matrix over the points (3×3) and over mid-level features (64×64). A multilayer perceptron (MLP) is a class of feedforward artificial neural network; it is basically a fully connected network. Challenges include compensation for motion and tracking. Monocular depth estimation: MonoDepth – Unsupervised Monocular Depth Estimation with Left-Right Consistency; sensitive to splashes and reflections from the water surface.
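For flavor, here is a hedged sketch of running a pretrained 2D detector on a camera frame. The talk shows YOLOv3; this sketch substitutes torchvision's Faster R-CNN only because it ships with PyTorch (torchvision 0.13+ is assumed for the weights argument), and it is not the detector used in the talk.

```python
# Sketch: 2D object detection on a single RGB frame with a pretrained torchvision model.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(frame_rgb: torch.Tensor, score_threshold: float = 0.5):
    """frame_rgb: float tensor of shape (3, H, W) with values in [0, 1]."""
    with torch.no_grad():
        out = model([frame_rgb])[0]  # dict with 'boxes', 'labels', 'scores'
    keep = out["scores"] >= score_threshold
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]

# Random frame just to exercise the API; a real pipeline would feed camera imagery.
boxes, labels, scores = detect(torch.rand(3, 480, 640))
```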

18 1) Sensor Fusion for 3D Object Identification:
Input: 3D point cloud (n×3) and RGB image (RoI-cropped) → PointNet and ResNet → dense fusion → 3D box corner offsets (n×8×3) and score (n×1); global fusion → 3D box corner locations (1×8×3) = Output. Once the objects are detected and ranges are known, it is time to fuse the information. I will review three different types of sensor fusion algorithms. 1 – Sensor fusion for 3D object identification: this algorithm accepts a 3D point cloud from a lidar and an image from the camera, and outputs 3D bounding boxes around the identified objects, with classification hypotheses, in the point cloud and image data streams. ___________________________________________________________ NOT SAYING!! Xu, Danfei, Dragomir Anguelov, and Ashesh Jain. "PointFusion: Deep sensor fusion for 3D bounding box estimation." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Kocić, Jelena, Nenad Jovičić, and Vujo Drndarević. "Sensors and Sensor Fusion in Autonomous Vehicles." Telecommunications Forum (TELFOR), IEEE, 2018.
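To make the diagram above more concrete, here is a minimal PyTorch sketch of a PointFusion-style dense-fusion head: per-point lidar features are concatenated with a global image feature, and an MLP predicts per-point 3D box corner offsets and a score. The feature dimensions are assumptions, and the PointNet/ResNet feature extractors are stubbed out by random tensors.

```python
# Illustrative dense-fusion head (not the published PointFusion implementation).
import torch
import torch.nn as nn

class DenseFusionHead(nn.Module):
    def __init__(self, point_dim=64, image_dim=2048, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(point_dim + image_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.corner_offsets = nn.Linear(hidden, 8 * 3)  # per-point offsets to 8 box corners
        self.score = nn.Linear(hidden, 1)               # per-point confidence

    def forward(self, point_feats, image_feat):
        # point_feats: (n, point_dim) per-point lidar features; image_feat: (image_dim,) global image feature
        n = point_feats.shape[0]
        fused = torch.cat([point_feats, image_feat.expand(n, -1)], dim=1)
        h = self.mlp(fused)
        return self.corner_offsets(h).view(n, 8, 3), torch.sigmoid(self.score(h))

# Stand-ins for PointNet / ResNet outputs, just to show the tensor shapes flowing through.
offsets, scores = DenseFusionHead()(torch.randn(100, 64), torch.randn(2048))
```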

19 2) Sensor Fusion for Occupancy Grid Mapping
Grid Mapping from Sensor Measurements; 2D Occupancy Grid. Xu, Danfei, Dragomir Anguelov, and Ashesh Jain. "PointFusion: Deep sensor fusion for 3D bounding box estimation." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. An occupancy grid is used for navigation of autonomous vehicles in dynamic environments. Sensors such as radar and lidar can be used to generate an occupancy grid. The addition of camera data and object classification of the 3D point cloud can add semantic information to the occupancy grid. At the top we can see a car driving on a road and the grid map generated from sensor measurements. Bottom left corner – a 3D map of the environment that Rolls-Royce produced with a lidar, which they use for navigation of an autonomous ferry. Bottom right corner – from the RobotX competition of ASVs; here we can see the simulated marine environment, the 2D occupancy grid that shows the lidar readings, and its interpretation. Thompson, David John, "Maritime Object Detection, Tracking, and Classification Using Lidar and Vision-Based Sensor Fusion" (2017). Dissertations and Theses. Kocić, Jelena, Nenad Jovičić, and Vujo Drndarević. "Sensors and Sensor Fusion in Autonomous Vehicles." Telecommunications Forum (TELFOR), IEEE, 2018.
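As an illustration of the occupancy-grid idea, the sketch below applies a standard log-odds inverse sensor model to a single range return, marking cells along the beam as free and the cell at the return as occupied; the grid geometry and log-odds constants are arbitrary example values. Working in log-odds keeps repeated updates numerically simple: evidence just adds up, and the probability is recovered at the end.

```python
# Log-odds occupancy grid update from one range/bearing return (radar or lidar).
import numpy as np

GRID_SIZE, CELL_M = 200, 1.0           # 200 x 200 m grid, 1 m cells, ASV at the center
log_odds = np.zeros((GRID_SIZE, GRID_SIZE))
L_OCC, L_FREE = 0.85, -0.4             # example log-odds increments

def update_from_return(range_m: float, bearing_rad: float):
    cx = cy = GRID_SIZE // 2
    # Walk the beam in 1-cell steps, marking free space up to the return.
    for r in np.arange(0.0, range_m, CELL_M):
        i = int(cy + (r * np.cos(bearing_rad)) / CELL_M)
        j = int(cx + (r * np.sin(bearing_rad)) / CELL_M)
        if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
            log_odds[i, j] += L_FREE
    # Mark the cell at the return as occupied.
    i = int(cy + (range_m * np.cos(bearing_rad)) / CELL_M)
    j = int(cx + (range_m * np.sin(bearing_rad)) / CELL_M)
    if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
        log_odds[i, j] += L_OCC

update_from_return(35.0, np.deg2rad(20.0))
occupancy_prob = 1.0 - 1.0 / (1.0 + np.exp(log_odds))  # convert log-odds to probability
```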

20 3) Sensor Fusion for Moving Object Detection and Tracking
Radar, Camera, LiDAR, AIS, Vehicle State, IR Camera (IR Image) → Object Detection, Image Fusion → Moving Object Tracking → List of Moving Objects. The last kind is sensor fusion for moving-object detection and tracking. This is one of the most challenging aspects of the autonomous vehicle domain. There are different ways to do this, but in general, object detection is done on each sensor separately; then all the information is fused together, followed by tracking of the moving objects. The output is a list of moving objects that is passed to some kind of obstacle avoidance (COLREGs-compliant) algorithm. _______________________________________________________________________ NOT SAYING!! Xu, Danfei, Dragomir Anguelov, and Ashesh Jain. "PointFusion: Deep sensor fusion for 3D bounding box estimation." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. The most common is sensor fusion of camera, lidar, and radar. Kocić, Jelena, Nenad Jovičić, and Vujo Drndarević. "Sensors and Sensor Fusion in Autonomous Vehicles." Telecommunications Forum (TELFOR), IEEE, 2018.
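A greatly simplified sketch of the tracking stage: constant-velocity prediction plus greedy nearest-neighbor association of fused detections to existing tracks. A real system would use Kalman filtering, gating, and track management; all names here are placeholders.

```python
# Toy moving-object tracker: constant-velocity prediction + nearest-neighbor association.
import numpy as np

class Track:
    def __init__(self, track_id, position, velocity=(0.0, 0.0)):
        self.id = track_id
        self.position = np.asarray(position, dtype=float)  # (x, y) in metres, ASV frame
        self.velocity = np.asarray(velocity, dtype=float)  # (vx, vy) in m/s

    def predict(self, dt: float):
        # Constant-velocity motion model.
        self.position = self.position + self.velocity * dt

def associate(tracks, detections, max_dist=10.0):
    """Greedy nearest-neighbor association of fused detections to existing tracks."""
    assignments = {}
    for det_idx, det in enumerate(detections):
        if not tracks:
            break
        dists = [np.linalg.norm(t.position - det) for t in tracks]
        best = int(np.argmin(dists))
        if dists[best] <= max_dist and tracks[best].id not in assignments.values():
            assignments[det_idx] = tracks[best].id
    return assignments  # detection index -> track id; unmatched detections would spawn new tracks

tracks = [Track(1, (10.0, 0.0), velocity=(1.0, 0.0))]
for t in tracks:
    t.predict(dt=1.0)
print(associate(tracks, [np.array([11.2, 0.3])]))  # {0: 1}
```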

21 ASA for ASVs - Research Workflow
Semantic Occupancy Grid; Chart; AIS; Radar; Cameras; LIDAR; Fusion for Identification and Tracking; Obstacle Avoidance; SENSORS. The goal is to build a semantic occupancy grid map around the ASV that updates in near real time and contains information on obstacle locations and states, with semantic information about what those objects are. So first we can use the chart as a starting point. Next we can utilize the sensors to extract range to objects and to identify them, according to what we discussed earlier; the sensors here are arranged according to range hierarchy. Next comes fusion for identification and tracking of the obstacles. Lastly, this information passes to an obstacle avoidance algorithm that takes the mapping mission into account. In my research I will try to build such an occupancy grid. Since the data from a chart, AIS, and radar can be used directly, my work will focus on object detection/identification from a camera and lidar, and then on fusing it all together to obtain the semantic occupancy grid.

22 Conclusions Coastal areas provide unique challenges for ASVs.
Sensor fusion is the key to improving ASV autonomy, because no single sensor provides a complete solution for all obstacles. The way to achieve this is by improving algorithms for object detection and identification, and for occupancy grid mapping. This is what my Ph.D. is going to be about. This topic has been under development in academia, the navy, and industry in recent years, but it is still at an early stage.

23 This work is supported by NOAA Grant NA15NOS4000200
Coral Moreno, PhD Student, Ocean Engineering, CCOM/JHC, University of New Hampshire, Chase Ocean Engineering Lab, 24 Colovos Road, Durham, NH.

