BT's Undertakings to Ofcom, accepted in September 2005, require BT to publish relevant key performance indicators (KPIs).
These play an important role in assuring all the stakeholders concerned with BT's performance that the Undertakings are being delivered.
Further KPI reports are available on the BT Wholesale and the Office of the Telecommunications Adjudicator sites.
We are interested in your views, particularly those of other communications providers. Please e-mail: firstname.lastname@example.org, heading your message "KPI Reporting".
We have been publishing these KPIs for several years and have built a useful base of information. In general, we have found little evidence of difference between the product performance BT receives and that provided to its communications provider (CP) customers. Where minor differences have existed, they have been to BT's relative disadvantage, have converged over time, or have resulted from known factors that we have addressed. KPI performance is an indicator of equivalence, but it must be considered alongside other measures when judging whether equivalence is being delivered. It should also be noted that BT does not yet serve all its end-users on an equivalent basis: migration of the installed base is underway and is scheduled to continue until 2010. The results so far are encouraging and offer evidence that BT does not systematically favour its own businesses over those of its CP customers.
The report uses a statistical tool called a "z-test". A z-test allows us to understand whether any difference in performance between the service BT provides to itself and to others is likely to result from random variation. This is clearer than looking at raw performance data, where differences are not tested for significance. On the charts you will see a zone denoted by two blue lines. If the results fall within this zone, any differences are likely to be random variation. If they fall outside the zone consistently, we investigate the difference.
Technically, the z-test uses the normal probability distribution to define the zone within which differences are most likely to be due to random factors. Actual results are then plotted against this. Z-tests are only reliable where there is enough data for the normal distribution to be reliably calculated; for that reason, we do not perform the test for every product. There is a further description of z-tests at: http://en.wikipedia.org/wiki/Z-test/ (note: external link)
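The report does not set out the exact formula BT applies, but where a KPI is a pass/fail proportion (for example, orders delivered on time), the comparison described above can be sketched as a standard two-proportion z-test. The figures below are purely hypothetical, as is the 95% threshold used to represent the "blue lines" zone.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-statistic: how many standard errors apart the
    two observed success rates are, under the null hypothesis that
    both samples share the same underlying rate."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion, assuming no real difference between the groups.
    p = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical figures: 9,600 of 10,000 BT orders delivered on time,
# versus 4,750 of 5,000 CP orders.
z = two_proportion_z(9600, 10000, 4750, 5000)

# At a 95% level, |z| <= 1.96 falls inside the zone between the blue
# lines (difference consistent with random variation); larger values
# would prompt investigation.
inside_zone = abs(z) <= 1.96
```

In this hypothetical case the z-statistic is roughly 2.8, outside the 95% zone, so the difference would merit investigation rather than being dismissed as random variation.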
The z-test is, therefore, a useful tool for identifying areas where there may be equivalence issues. It does not, in itself, prove whether BT is, or is not, providing service on an equivalent basis. That can only be established by looking at the detail of how BT is providing its services.
Using the z-test is more objective than looking at raw performance data, where it is not possible to tell whether any observed differences are statistically significant. BT therefore no longer publishes raw performance data as part of these KPIs.
We value your comments, so if you have any views on this or any other aspect of these KPIs, please e-mail email@example.com.
Individual CP results
We have received comments indicating that some individual CPs are interested in understanding how their own results compare with BT's. This is not something that BT publishes as part of its equivalence activity, which generally looks at performance in aggregate: at an individual level, factors such as geographical distribution and product mix can skew the results. If, however, you have particular service concerns and need to understand better the performance you are receiving, please contact your Customer Business Manager.
Just what is "BBRD1" anyway?
In some of the product KPIs you will see specific abbreviations of the measurement we are using, for example BBRD1. We include them so that people with an interest can identify the statistics accurately when cross-referring to other KPI publications, such as those supplied to Ofcom or the Office of the Telecommunications Adjudicator.