© Crown copyright 2006 06/0181. Met Office and the Met Office logo are registered trademarks.
Met Office, FitzRoy Road, Exeter, EX1 3PB, United Kingdom
Tel: +44 (0)1392 885635  Fax: +44 (0)1392 885681  Email: email@example.com

CS13

Introduction

The Met Office in the United Kingdom uses Vaisala FD12P "Present Weather" sensors to report the weather at some of its stations. Since the introduction of this equipment in 2001, it has been used with software known as the "Arbiter", whose function is to modify the FD12P output when there is sufficient evidence from other sensors. The present weather (PWx) output can therefore be purely from the FD12P, or modified by the Arbiter using other sensors: an Eigenbrodt RS85 precipitation detector, which allows very light precipitation to be included in the output, and a Belfort Visiometer.

Since 2002 we have been running the system at several manned stations, where a human observer can override the automatic observation if necessary. The automatic observation from the FD12P and the FD12P/Arbiter system is stored on the station's PC, and the manual observation in the central database. It is therefore possible to compare the human and automatic present weather at these sites.

Simple measures for scoring automatic against human observations include Probability of Detection (POD), False Alarm Rate (FAR) and the Heidke Skill Score (HSS), which work well for yes/no contingencies; PWx results, however, can have many contingencies. Work was therefore carried out with our forecasting staff to develop a skill score covering the full set of contingencies, the Met Office Skill Score (MOSS). It operates by allocating a mark derived from how closely the automatically derived PWx code matches the human observer's observation, and from how much impact the PWx type has on the forecaster's decisions.
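As an illustration (not the Met Office implementation), the yes/no measures named above can be computed from a 2x2 contingency table of hits, false alarms, misses and correct negatives; the counts in this sketch are invented:

```python
def verification_scores(hits, false_alarms, misses, correct_negatives):
    """POD, FAR and HSS from a 2x2 (yes/no) contingency table.

    FAR here is the false-alarm ratio, false_alarms / (hits + false_alarms);
    some texts use 'rate' for false_alarms / (false_alarms + correct_negatives).
    """
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    pod = a / (a + c)
    far = b / (a + b)
    hss = 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, far, hss

# Invented counts for one station and one weather type:
pod, far, hss = verification_scores(hits=45, false_alarms=20, misses=25,
                                    correct_negatives=910)
print(f"POD={pod:.2f} FAR={far:.2f} HSS={hss:.2f}")  # POD=0.64 FAR=0.31 HSS=0.64
```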
This mark is then compared to a perfect score, to judge how the system performs as a whole, with the most important situations given the greatest weighting.

Scoring the results

The overall score for the Arbiter is shown below:

MOSS, all stations, 2006 analysis: 66.3%
MOSS, all stations, 2002 analysis: 68.0%

This shows that our system, after several years of operational use, is still performing at a level comparable with the first, limited analysis made in 2002. The charts below show the distribution of PWx code types during the analysis period for the manual observations, the Vaisala FD12P and the Arbiter respectively.

The Arbiter did well in determining showers. Some of the errors present could be due to the observer missing short-duration light shower events, because of time differences between the observers and the PWx sensor's 'observation window' for showers.

Snow reporting

There was considerable variation in the skill score results for snow between stations. Aberporth had low scores, whereas Church Fenton scored higher.
                        HSS    POD    FAR    MOSS
Aberporth Arbiter       0.15   0.09   0.4    41.0%
Aberporth FD12P         0.11   0.06   0.0    22.2%
Church Fenton Arbiter   0.51   0.34   0.0    74.0%
Church Fenton FD12P     0.29   0.17   0.0    53.6%

The Arbiter reported snow events three times more often than the PWx sensor alone, but only reported 36% of human-observed snow events at Church Fenton, and 24% at Aberporth. The reasons are:

- PWx sensor data were unavailable when the observer reported snow, so the Arbiter could not report snow codes, except when PWx sensor data became available within 60 minutes of the observation time (when "snow in past hour" may have been reported). Often rain or unidentified precipitation was reported instead.
- Sensitivity issues in the PWx and precipitation sensors resulted in snow events being incorrectly reported (particularly at Aberporth).
- Missing PWx sensor data meant that the Arbiter PWx code was derived solely from the other sensors in the suite. The precipitation sensor also appeared insensitive during light snow events. With neither PWx nor precipitation sensor data, the Arbiter logic used wind speed and visibility to allocate an Arbiter PWx code of 1-4, 10 or 18.

Results of using Present Weather instruments in the United Kingdom
Darren Lyth, Met Office, Exeter, UK

Drizzle reporting

The Arbiter did add value to the automatic codes by reassigning erroneous mist codes, and by using the Eigenbrodt precipitation detector to reassign many codes to either 'drizzle' or 'drizzle in past hour'. The overall skill scores for all stations for performance in drizzle are shown below.

                        HSS    POD    FAR    MOSS
Arbiter                 0.27   0.25   0.66   57.5%
Vaisala FD12P only      0.18   0.19   0.77   30.1%

The PWx sensor performed poorly in identifying light drizzle events, with an 18% hit rate. Sensitivity issues with the PWx sensor's own precipitation detector are evident: on 83% of the occasions that a human observer reported light drizzle, the PWx sensor reported no precipitation at all, reporting primarily mist and clear conditions (codes 10 and 0 respectively, in the chart below).
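The Arbiter's last-resort allocation mentioned in the snow discussion above (no PWx sensor and no precipitation detector data, so wind speed and visibility decide between codes 1-4, 10 and 18) might be sketched as follows. The thresholds are placeholders of our own; the poster does not state the operational values:

```python
def fallback_pwx_code(wind_speed_ms, visibility_m):
    """Last-resort PWx code from wind speed and visibility alone.

    Placeholder thresholds: the real Arbiter criteria are not given here.
    """
    if wind_speed_ms >= 16.0:      # strong, gusty wind -> squalls (code 18)
        return 18
    if visibility_m < 10000.0:     # reduced visibility -> mist (code 10)
        return 10
    return 2                       # otherwise a 'no significant weather' code (1-4)

print(fallback_pwx_code(20.0, 25000.0))  # 18
print(fallback_pwx_code(5.0, 4000.0))    # 10
```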
This poster contains results from a comparison of these datasets, showing the quality of the automatic data under real, operational conditions over 3 years and at 12 stations. The map to the right shows the network of Vaisala FD12Ps in use across the UK. The stations highlighted in yellow were used in the analysis, because hourly manual observations were available for them.

The code type distribution for the Arbiter is similar to that for the human observer. However, "precipitation during the preceding hour" codes occur nearly 40% more often in the Arbiter reports than in the manual observations. The percentage of "unidentified precipitation type" codes is five times higher for the Arbiter report than for the FD12P alone. This occurs where the human observer is reporting light, intermittent events. Where, for whatever reason, automatic PWx sensor data are unavailable at the same time (for a period exceeding 60 minutes), the logic of the Arbiter decides that PWx codes of drizzle, rain or snow are not available. Instead, the logic falls back to an 'unknown precipitation type' code, which is set according to temperature and precipitation sensor data. The FD12P appears to under-report precipitation events, especially rain, even allowing for the fact that the FD12P does not report precipitation in the past hour.

Sleet reporting

The PWx sensor scored poorly in sleet compared with the observer, although human sleet reporting is more subjective than for other weather types. The sensor did not report any sleet events when a human observer did, and the sleet observations it did report did not correspond to any human-observed sleet event. This severely limited the effectiveness of the Arbiter in trying to improve the sleet score. The Arbiter did, however, remove many spurious PWx sensor sleet reports and reported a PWx code that was closer to the "truth" than that from the PWx sensor alone (a higher MOSS score), but it could not effectively improve the hit rate.
This was partly due to a tendency for the Arbiter to assign an unknown precipitation type (auto codes 40-48) to events in which it was unsure what kind of precipitation to report (6 occasions of auto code 41 in the chart below). These situations occurred when the PWx sensor was reporting snow, sleet and rain, varying on a minute-by-minute basis, as often happens during sleet observations. Improving the performance of automatic PWx sensors in sleet is important, because sleet matters to the users.

                        HSS    POD    FAR    MOSS
Arbiter                 0.02   0.01   0.67   69.8%
Vaisala FD12P only      ---    ---    ---    59.2%

Fog reporting

There was some variation in the skill score results for fog between stations. The best and worst performing stations are shown below.

                        HSS    POD    FAR    MOSS
Coltishall Arbiter      0.79   0.87   0.27   90.0%
Coltishall FD12P        0.66   0.62   0.28   64.7%
Aviemore Arbiter        0.29   0.18   0.12   75.5%

The PWx sensor performed quite well during human-observed fog events, especially where the human observer reported that fog had recently formed or had thickened in the past hour. The PWx sensor MOSS score was nevertheless low compared with the Arbiter, because of the large amount of missing PWx sensor visibility data, for which the Arbiter substituted data from the Belfort visibility sensor. There also appeared to be a tendency for the PWx sensor visibility to read higher than the Belfort, leading to nearly three times as many reports of mist from the PWx sensor as from the Arbiter when the human observer reported fog (below). The PWx sensor was also found to be over-sensitive to droplets with low optical density during certain fog conditions, leading to erroneous reports of snow grains. The majority of these reports occurred at a single station and could therefore be due to either a fault or incorrectly set-up parameters on that sensor. In either case, this highlights the case for both improved monitoring and an increased number of routine service visits to the sensor network.
However, the Eigenbrodt precipitation detector was itself often found to be in error, either through insensitivity or through failing to reset after precipitation had ceased, owing to contamination.

Rain reporting

                        HSS    POD    FAR    MOSS
Arbiter                 0.65   0.64   0.27   79.5%
Vaisala FD12P only      0.56   0.45   0.12   60.4%

The results for rain (both PWx sensor and Arbiter) are better than for all other weather types investigated, except fog. The Arbiter rainfall event hit rate was 65%, compared with 56% for the PWx sensor alone. The PWx sensor's performance improved for moderate/heavy rain compared with light rain events. The Arbiter did add value to the PWx sensor scores through use of the Eigenbrodt precipitation detector; however, sensitivity and contamination issues with the Eigenbrodt limited its success in improving the skill score.

Recommendations

- Regular, routine inspection will maintain PWx sensor operability. These visits must be well planned and documented. Housekeeping and cleaning procedures should also be implemented for the precipitation detector.
- If PWx sensor data are missing for more than 60 minutes, the Arbiter should return an unknown PWx code. This applies especially when the precipitation detector returns a zero code during light precipitation events, leading to inaccurate Arbiter reporting.
- A future PWx system should report its message quality status. At present, the Arbiter will report a PWx code based on missing data, which users perceive as valid. More information is needed for users to have confidence in the system, and to extract extra meteorological data to support and enhance the observation.
- Role of the Arbiter: should the algorithm's role be merely to verify the code reported by a PWx sensor and assign an uncertainty to its observation, or is its purpose to override the output from the PWx sensor?
- Any PWx system needs to report sleet and drizzle well.
- The system must monitor data collection rates as part of housekeeping, in order to determine the amount of missing data.
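The last recommendation, monitoring data collection rates, could be as simple as computing the fraction of expected reports actually received in a window. This sketch assumes minute-resolution PWx messages, which the poster does not specify:

```python
from datetime import datetime, timedelta

def availability(report_times, start, end, interval=timedelta(minutes=1)):
    """Fraction of expected reports received between start (inclusive)
    and end (exclusive); a low value flags a station for a service visit."""
    expected = int((end - start) / interval)
    received = sum(1 for t in report_times if start <= t < end)
    return received / expected

# Invented example: 45 of 60 one-minute reports arrived in an hour.
start = datetime(2006, 1, 1, 12, 0)
reports = [start + timedelta(minutes=i) for i in range(45)]
print(availability(reports, start, start + timedelta(hours=1)))  # 0.75
```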