The possible flaw in Harrabin’s BBC ‘Weather Test’

Regular readers will be familiar with the Weather Test project that BBC environment analyst Roger Harrabin is trying to construct.  This blog has previously speculated on the Weather Test and asked how likely it is to be impartial when the key players behind it have commercial and academic partnerships with each other.

But there is another question to ask about the Weather Test, and that is how likely it is to provide any value.  After discussions with some meteorologists, a scenario has emerged that could render the whole project worthless.

In the UK there are typically around four or five major weather events per year.  The problem with a project like Weather Test (if it ever sees the light of day) is how to weight the forecasts appropriately.  If a competing forecaster achieved a respectable accuracy rate on, say, 75% of the days in the test period – the days with no major weather events – yet completely missed the major events themselves, how would the results be weighted to demonstrate that, when it comes to the forecasts that really matter, their accuracy was found wanting?

Any such weighting, decided before the test commenced, would by definition be arbitrary – a bit like the adjustments and smoothing applied to temperature readings that always seem to increase the recorded temperature.  So what is the real value of such a project?
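To illustrate, consider a minimal sketch (Python; the two forecasters and their hit rates below are entirely invented for the example) of how the choice of event weighting alone decides who ‘wins’:

    # Hypothetical year: 365 days, of which 5 are "major event" days.
    # All hit rates are invented purely to illustrate the weighting problem.

    def weighted_accuracy(ordinary_hits, ordinary_days,
                          event_hits, event_days, event_weight):
        """Accuracy where each major-event day counts event_weight times."""
        score = ordinary_hits + event_weight * event_hits
        total = ordinary_days + event_weight * event_days
        return score / total

    ORDINARY_DAYS, EVENT_DAYS = 360, 5

    # Forecaster A: 75% on quiet days, misses every major event.
    # Forecaster B: 65% on quiet days, but catches 4 of the 5 major events.
    for w in (1, 10, 50):
        a = weighted_accuracy(270, ORDINARY_DAYS, 0, EVENT_DAYS, w)
        b = weighted_accuracy(234, ORDINARY_DAYS, 4, EVENT_DAYS, w)
        print(f"weight={w:>2}: A={a:.1%}  B={b:.1%}  winner={'A' if a > b else 'B'}")

With a weight of 1 the forecaster who missed every major event comes out ahead; by a weight of 10 the ranking has flipped.  Whoever picks the weighting effectively picks the winner.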

Perhaps a more effective way to compare the accuracy of forecasters would be to turn our eyes to the commercial sector and see who retains business because of their accuracy, and who loses business through an inability to pinpoint in good time what really matters – namely those major events that have the most bearing on commercial customers.  Is there really any value to Harrabin’s little endeavour?

5 Responses to “The possible flaw in Harrabin’s BBC ‘Weather Test’”


  1. Dominic Allkins 16/02/2011 at 9:47 am

    It’s a very good point, AM.

    By definition, for the reasons you mention, the test is set up to fail if it looks at how forecasters predict future events over a given period and then verifies those forecasts.

    Certainly looking at who retains their business would be a good measure – I’m a great believer that ‘the market’ will generally give a better understanding. This, however, introduces the possibility of political/ethical pressure (whether direct or indirect). For example, did the MO retain the BBC contract because of their accuracy, because of political pressure, or because the MO forecasts better fitted the narrative the BBC wanted to put across? Likewise, does the MOD retain the services of the MO because it’s part of their structure or because they are the most accurate?

    Perhaps the better test would be to look backwards and see how well prior forecasts performed. To construct the test I’d suggest the following fairly simple structure:

    1. Collate three years of prior forecasts from the forecasters included in the test with an end date of, say, 31st Dec 2010

    2. Match these forecasts against the known weather (temp, rainfall, trends and events) for the test period, looking at four sets of forecasts – seasonal (i.e. 3 months ahead), monthly, weekly and two days out.

    3. Assign an accuracy score for each of the measures. It’s probably worth noting that the importance of the accuracy score will vary depending on who the forecast is for, i.e. national and local governments will need more accurate seasonal forecasts (to plan for major events such as heavy snowfall, let’s say) than would an individual (who needs to know whether to take an umbrella out in the morning). A rough sketch of one way to mechanise this scoring follows below.

    4. The above is then re-run for a different three year period.

    To guard against confirmation bias, my suggestion would be that two teams are set the task of measuring the accuracy, with each team consisting of members from both sides of the AGW/CC/CD debate (I’ll get the popcorn). Each team would then verify and validate the other’s scores.
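    To make step 3 concrete, here’s a rough sketch of how the scoring might be mechanised (the horizons, tolerance and audience weights below are placeholders to argue over, not a finished method):

        from datetime import date

        # Placeholder lead times in days for the four sets of forecasts in step 2.
        HORIZONS = {"seasonal": 90, "monthly": 30, "weekly": 7, "short": 2}

        def hit(forecast, observed, tolerance=2.0):
            """Count a temperature forecast as correct if within +/- tolerance degC."""
            return abs(forecast - observed) <= tolerance

        def accuracy(forecasts, observations, lead_days):
            """Fraction of forecasts issued lead_days ahead that verified.
            forecasts: {(issue_date, target_date): temp}; observations: {target_date: temp}."""
            pairs = [(f, observations[t]) for (i, t), f in forecasts.items()
                     if (t - i).days == lead_days and t in observations]
            return sum(hit(f, o) for f, o in pairs) / len(pairs) if pairs else None

        # Step 3's audience-dependent weighting: a government cares more about
        # seasonal skill, an individual about the next day or two. Numbers illustrative.
        WEIGHTS = {
            "government": {"seasonal": 0.5, "monthly": 0.3, "weekly": 0.1, "short": 0.1},
            "individual": {"seasonal": 0.0, "monthly": 0.1, "weekly": 0.3, "short": 0.6},
        }

        def overall_score(forecasts, observations, audience):
            """Weighted combination of per-horizon accuracies for a given audience."""
            total = 0.0
            for name, lead in HORIZONS.items():
                acc = accuracy(forecasts, observations, lead)
                if acc is not None:
                    total += WEIGHTS[audience][name] * acc
            return total

        # e.g. one observation, and one forecast issued two days out:
        obs = {date(2010, 12, 25): -4.0}
        fc = {(date(2010, 12, 23), date(2010, 12, 25)): -3.0}
        print(accuracy(fc, obs, lead_days=2))   # 1.0: within tolerance

    Rainfall, trends and events would each need their own definition of a ‘hit’, but the shape of the scoring is the same, and the same machinery is simply re-run over the second three-year period for step 4.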

    I’m sure there are some flaws in this, so if you (or other commenters) can add to or edit it, perhaps we could suggest this as an alternative approach to Harrabin’s.

    Cheers

    Dominic

  2. Anoneumouse 16/02/2011 at 9:47 am

    Instead, why don’t we have an accuracy in ‘Reporting’ test?

    Discuss :-)

  3. CalvinBall 16/02/2011 at 9:58 am

    Another great post, AM. It is kind of ironic, isn’t it, that a key narrative of the BBC is the Bullingdon Boys’ club ensconced with its cabal of City friends and vested interests, and its analysis of the hypocrisy of it all, while it sees no issue at all with this closed-shop arrangement.

  4. Jim 16/02/2011 at 7:01 pm

    I can tell you how accurate the Met Office 5 day forecast is for my location – 0%. If you were to write down each day what they predict for the 5th day out from the time of prediction, and compare it to what actually happened, I’d be surprised if they were ever right, other than by random chance. Their forecasts change regularly – what they predict for day five has changed by the time it’s day four, and so on and so forth, right up to 24 hours out, when they are reasonably accurate.

    I am a farmer so am often glued to the 5 day forecast, especially around harvest time. It’s got so bad that I now take it that whatever they predict for days 4 and 5 definitely WON’T happen!

  5. artwest 16/02/2011 at 8:25 pm

    I’m afraid I have rather less faith in the wisdom of the market place. Homeopaths and “psychics” would have been consigned to the dustbin of history long ago if objective results were the only reason for success.
    How often do we see major companies with objectively mediocre or worse records thrive while smaller companies who provide a demonstrably better service struggle?
    How often is a poorer service or product bought because it is cheaper in the short run even though it might lead to a higher cost eventually? How often does a company choose a big-name supplier over a minnow because the managers making the decision know that they are less likely to be criticized if it all goes wrong?
    I really wouldn’t necessarily equate business success with doing the best job.

