Tuesday, 13 December 2011

An appraisal of indicators used to monitor the treated population in antiretroviral programmes in low-income countries


AIDS: 13 November 2010 - Volume 24 - Issue 17 - p 2603–2607
doi: 10.1097/QAD.0b013e32833dd0d3
Opinion


Hoskins, Susan (a); Weller, Ian (b); Jahn, Andreas (c); Kaleebu, Pontiano (d); Malyuta, Ruslan (e); Kirungi, Wilford (f); Fakoya, Ade (g); Porter, Kholoud (a)

Author Information

(a) Medical Research Council Clinical Trials Unit, UK
(b) University College London Medical School and Camden Primary Care Trust, London, UK
(c) Ministry of Health, Lilongwe, Malawi, and International Training and Education Center on HIV, Malawi/Seattle, USA
(d) Medical Research Council Uganda Virus Research Institute, Entebbe, Uganda
(e) Perinatal Prevention of AIDS Initiative, Odessa, Ukraine
(f) Ministry of Health, Kampala, Uganda
(g) International HIV/AIDS Alliance, Brighton, UK.
Received 19 February 2010
Revised 18 June 2010
Accepted 30 June 2010
Correspondence: Susan Hoskins, MRC Clinical Trials Unit, 222 Euston Road, London NW1 2DA, UK. Tel: +44 207 670 4608; fax: +44 207 670 4815; e-mail: sjh@ctu.mrc.ac.uk

Abstract

Monitoring the progress of HIV programmes is vital as services are scaled up to include increasing numbers in need of care. Globally, the presence of multiple donors at all levels of HIV care has produced vast monitoring systems. Within HIV-treatment programmes in low- and middle-income countries, directly assessing long-term outcomes such as survival is problematic, so indicators are used to monitor the progress of the treated population. However, the internal, external and construct validity, and the predictive value, of current indicators have never been evaluated. Although the burden on facility staff compiling routine monitoring reports is vast, there is uncertainty as to which indicators best monitor patient progress. This burden will grow as increasing numbers of life-cohorts are created for monitoring purposes, leading to data inaccuracies and compromising the internal validity of reported indicators. Furthermore, a number of fundamental indicators, including survival and retention, may not capture the construct they intend to measure, compromising the ability of programme managers to obtain reliable estimates regarding the welfare of their population in care. It is not known which indicators can predict the longer-term outcome of the patient population and so enable managers to respond to predictors of failure early. An evaluation of current indicators is urgently needed to ensure that reported facility-level data accurately reflect the welfare of the treated population and that comparisons of programme performance are meaningful.

Introduction

Programme managers and donors have a responsibility to monitor the progress of HIV programmes to inform best practice for ART roll-out in low- and middle-income countries. Routine quantitative monitoring reports, devised by funders, collate data into programmatic ‘indicators’ to evaluate the effectiveness of countrywide programmes. Within programmes, they can be used to detect immediate problems, inform resource allocation, assess compliance, and guide further funding decisions.
However, the presence of multiple donors supporting various aspects of HIV programmes has resulted in an anarchic system with numerous indicators. To provide a comprehensive source of HIV monitoring indicators, UNAIDS launched an Indicator Registry collating more than 200 programme indicators [1].
We discuss the challenges of monitoring the progress of the treated population in these settings by describing the lack of consensus on indicators, and the burden associated with compiling them. We question the validity of indicators within routine programmes and their predictive value for ART care.

Monitoring in the context of multiple donors

Upon entry into an ART programme, each patient is assigned one line in an ART register, across which data from the initiation visit and follow-up visits are entered and later extracted to create routine indicator reports. ART registers are usually paper-based and indicators are calculated manually, as electronic systems are rare.
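As a rough sketch only (the layout, field names and outcome categories below are simplified assumptions, not the actual register format used by any programme), the register can be thought of as one record per patient from which a facility tallies its routine report:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical, simplified representation of one line in a paper ART register.
@dataclass
class RegisterRow:
    patient_id: str
    art_start: date               # date of ART initiation
    outcome: str                  # e.g. "alive on ART", "died", "lost to follow-up", "transferred out"
    outcome_date: Optional[date]  # date the outcome was recorded, if any

# A facility compiling its routine report, in effect, tallies outcomes across rows.
def routine_report(rows: list[RegisterRow]) -> dict[str, int]:
    report: dict[str, int] = {}
    for row in rows:
        report[row.outcome] = report.get(row.outcome, 0) + 1
    return report

rows = [
    RegisterRow("001", date(2009, 1, 12), "alive on ART", None),
    RegisterRow("002", date(2009, 2, 3), "lost to follow-up", date(2009, 8, 1)),
    RegisterRow("003", date(2009, 3, 20), "died", date(2009, 6, 5)),
]
print(routine_report(rows))  # {'alive on ART': 1, 'lost to follow-up': 1, 'died': 1}
```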
However, there is little consensus on priority indicators and, while the UNAIDS Registry collates more than 100 indicators relating to treatment, including 55 specific to ART care, only one specific to the outcome of patients is commonly recommended by UNGASS, WHO, GFATM and PEPFAR: the proportion alive and on treatment 12 months after ART initiation [2]. Other indicators relating to on-time drug pick-up, concurrent prophylaxis and treatment for opportunistic infections are prioritized differently by international organizations [3–6]. Of note, the data items required to compute indicators might far exceed the number of indicators themselves.

Life-long cohort reporting

To assess the progress of the treated population, a unique monitoring system has evolved through the creation of life-long reporting cohorts. A cohort is derived by grouping patients who initiate ART within a given period, for example one quarter, and calculating indicators for that group; patients initiating ART in the following quarter form the next cohort, and so on. Comparing indicators across these cohorts enables an assessment of improvements in care. Clearly, life-cohort monitoring requires indicator reporting throughout the time for which a patient remains in care, and the growing number of cohorts, as increasing numbers initiate ART, leads to ever-increasing levels of data collection and reporting.
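A minimal sketch of this cohort logic, using hypothetical patient records and the commonly recommended 12-month indicator described above (all names and values are illustrative):

```python
from collections import defaultdict
from datetime import date

# Hypothetical patient records: (ART start date, alive and on ART 12 months later?)
patients = [
    (date(2009, 1, 15), True),
    (date(2009, 2, 28), False),
    (date(2009, 4, 10), True),
    (date(2009, 5, 2), True),
]

def quarter(d: date) -> str:
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

# Group patients into life-long reporting cohorts by quarter of ART initiation,
# then compute, for each cohort, the proportion alive and on treatment
# 12 months after ART initiation.
cohorts: dict[str, list[bool]] = defaultdict(list)
for start, on_art_12m in patients:
    cohorts[quarter(start)].append(on_art_12m)

for label in sorted(cohorts):
    outcomes = cohorts[label]
    print(f"{label}: {sum(outcomes)}/{len(outcomes)} alive and on ART at 12 months")
```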
Table 1 gives an overview of commonly collected indicators, the constructs they intend to measure, and reasons why their validity or predictive value may be questionable. We expand on these issues below.

Internal validity

Data inaccuracies compromise the ability of programme managers to make correct inferences about the health of the treated population to inform patient care. An audit of routinely reported data in Malawi indicated that 28% of sites inaccurately reported the number on ART or the number on a first-line regimen at the end of the quarter, resulting in a 5% and 12% undercount, respectively [7]. Furthermore, an examination of ART databases found that nearly 11% were missing data for key variables, which would contribute to inaccurate indicators [8].
Moreover, it is questionable whether the denominator used for most indicators, the number initiating ART at the beginning of the reporting period, correctly represents those in care. The cohort grouping rarely, if ever, distinguishes between patients newly initiating therapy and drug-experienced patients transferring in from other clinics, leading to overestimation of the number initiating ART. In Malawi, 12% of patients who registered in the national ART programme during the second quarter of 2009 had transferred from another clinic [9]. This is likely to inflate the denominator considerably, fundamentally biasing reported indicators. For donors, the resulting over-count in the number initiating ART may have limited consequences whereas, for programme managers, it may lead to underestimating survival.
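A worked example with made-up numbers may help show the direction of this bias. The 12% transfer-in proportion echoes the Malawi figure above; the survival values, and the assumption that transferred-in patients add to the denominator without a correctly linked contribution to the numerator, are purely illustrative:

```python
# Hypothetical cohort: 100 patients genuinely initiating ART, of whom 80 are
# alive and on ART at 12 months (true 12-month proportion of 80%).
new_initiators = 100
alive_at_12m_among_new = 80

# If roughly 12% of registrations are in fact drug-experienced transfers-in,
# the reported "number initiating ART" denominator is inflated.
transfers_in = 12
reported_denominator = new_initiators + transfers_in

# Illustrative assumption: transferred-in patients are not correctly followed
# to 12 months at the receiving clinic, so the numerator does not grow.
reported_indicator = alive_at_12m_among_new / reported_denominator
true_indicator = alive_at_12m_among_new / new_initiators

print(f"true proportion alive and on ART at 12 months: {true_indicator:.2f}")      # 0.80
print(f"reported proportion with inflated denominator: {reported_indicator:.2f}")  # 0.71
```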

Construct validity

Indicators need to measure the theoretical construct they intend to measure to obtain reliable estimates of the size and health of the population in care. However, a number of indicators may not do so.
To estimate survival, patients contribute different durations of follow-up but, given that many clinics lack the infrastructure to capture patient data electronically, calculating ‘survival’ in this way is not feasible. Facilities instead report the number on ART at the beginning and end of a discrete time period, in effect capturing the proportion retained in care, and the important detail of how long people live after starting therapy is lost. Often this leads to over-estimating survival: in a review of 17 studies, 40% of patients lost to follow-up (LTFU) whose outcome could be ascertained through active follow-up were found to have died [10]. Furthermore, facilities reporting a high number of ‘transfers out’ may underestimate the true retention rate in a country-aggregated monitoring system; in Malawi, for example, 90% of transfers-out were still on ART elsewhere [11]. Basing programme performance on ‘retention’ as a stand-alone indicator gives equal weight to patients who have died and to those transferring out or stopping ART [12]. It may also overstate performance as, for example, an evaluation based on median CD4 cell count increase fails to take account of those no longer in care, who may have lower CD4 cell counts than those retained [13]. Decisions based on data relating to retention, therefore, need to be made with caution.
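The following rough calculation, with hypothetical facility numbers and the 40% figure from the review cited above, illustrates how a survival estimate based only on recorded deaths can overstate survival, and how retention lumps very different outcomes together:

```python
# Hypothetical facility cohort of 100 patients initiating ART.
initiated = 100
recorded_deaths = 5
lost_to_follow_up = 20
transferred_out = 5
retained_on_art = initiated - recorded_deaths - lost_to_follow_up - transferred_out  # 70

# A facility-level "survival" figure counting only recorded deaths treats
# everyone else, including those lost to follow-up, as alive.
naive_survival = (initiated - recorded_deaths) / initiated                            # 0.95

# If around 40% of those lost to follow-up have in fact died (the figure from
# the review cited above), the adjusted estimate is noticeably lower.
deaths_among_ltfu = 0.40 * lost_to_follow_up                                          # 8 additional deaths
adjusted_survival = (initiated - recorded_deaths - deaths_among_ltfu) / initiated     # 0.87

# Retention, in contrast, mixes deaths, losses, stops and transfers together.
retention = retained_on_art / initiated                                               # 0.70

print(f"naive survival (recorded deaths only):   {naive_survival:.2f}")
print(f"survival adjusted for deaths among LTFU: {adjusted_survival:.2f}")
print(f"retention (alive and on ART at site):    {retention:.2f}")
```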

Content validity

It is vital that the data content of an indicator represents what the indicator aims to measure. For example, the indicator ‘proportion of a cohort whose functional status is working’ aims to measure increased productivity, and thus successful ART [14]. Therefore, the data classifying a patient's functional status as working, ambulatory or bedridden must truly represent productivity and health status. However, a patient's functional status may change without an associated health improvement, and interpretation of functional status classifications may differ between sites: one site may record only patients who are actively employed, whereas another records all patients able to work, whether or not they are employed.
Even where an indicator has been validated in individual patient care, producing it routinely from paper-based systems may mean that poor-quality data are used to calculate an otherwise sound indicator.

External validity

As indicators are used to learn from more ‘successful’ programmes to improve patient care, and to compare programmes' performance to inform resource allocation, they must support correct inferences about different populations.
A programme's mortality and retention indicators, cautiously interpreted, can provide vital information to managers to ensure high quality of care. However, using these indicators to assess performance across programmes is problematic, particularly if different inclusion criteria have been used. Enrolling patients based on demonstrated good adherence, for example, may result in a programme ‘out-performing’ another that has no selection criteria [15]. Likewise, if sites invest vast resources in tracing patients LTFU to ascertain vital status, they may, as a consequence, report a high number of deaths. Comparing these estimates to those of a programme without the ability to trace patients is clearly inappropriate.
Stratifying indicators by age group is important within a programme to evaluate the effectiveness of ART for infants, children, adolescents and adults. However, differences in age stratifications between programmes make cross-programme comparisons challenging. Moreover, reporting the same indicators for adults, children and adolescents may be inappropriate. For example, reporting the proportion of patients demonstrating more than 90% adherence is challenging in infants, given that their medication is a liquid formulation [16]. Adolescents, among whom a substantial epidemic is emerging in sub-Saharan Africa, experience particular issues as a result of long-term infection, and specific indicators to monitor their progress are required [17,18].

Predictive value over time

As directly assessing long-term outcomes on ART is problematic in these settings, it is imperative that routine indicators are able to do so indirectly. For example, both tuberculosis (TB) incidence and mortality among HIV-infected patients attending ART programmes are higher during the initial months on ART [19,20]. An indicator measuring the proportion with active TB at, for example, 3 months after ART initiation may closely predict the survival of the treated population 12 months after ART initiation. Knowledge of a threshold value for such an indicator, for example more than 20% of a cohort with active TB at 3 months, should enable managers to respond to predictors of failure early. However, the ability of early indicators to predict longer-term outcomes, such as survival, has never been evaluated.
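Purely as an illustration of how such an early-warning rule might be applied (the cohort values are invented and, as noted, the 20% threshold and its relationship to 12-month survival have not been evaluated):

```python
# Hypothetical quarterly cohorts with the proportion of patients who have
# active TB 3 months after ART initiation.
cohorts = {
    "2009-Q1": {"active_tb_3m": 0.12},
    "2009-Q2": {"active_tb_3m": 0.24},
    "2009-Q3": {"active_tb_3m": 0.18},
}

# Illustrative threshold from the text; its predictive value is unevaluated.
TB_ALERT_THRESHOLD = 0.20

for label, indicators in sorted(cohorts.items()):
    proportion = indicators["active_tb_3m"]
    if proportion > TB_ALERT_THRESHOLD:
        print(f"{label}: {proportion:.0%} with active TB at 3 months "
              f"- flag for review before the 12-month report is due")
    else:
        print(f"{label}: {proportion:.0%} with active TB at 3 months - within threshold")
```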
An understanding of the risk of death and morbidity over time will help inform decisions on the optimal time-points for indicator reporting and reduce the reporting burden. For example, given that children under 5 years appear to have more rapid disease progression than young adults [21], more frequent monitoring may be necessary in paediatric programmes.

Conclusion

We recommend, as a priority, a scientific evaluation of the validity and predictive value of indicators currently collected at the programme level against the survival of patients within the treated population. This will provide the evidence base through which indicators can be refined and will guide decisions on which indicators should be selected to monitor HIV programmes. A balance must be struck between the validity of indicators and the feasibility of collecting them within clinics.
Resources within health facilities are being stretched to provide the data for monitoring indicators. There is, therefore, an obligation to ensure that the effort is worthwhile and that the reported data adequately reflect the welfare of the treated population.

Acknowledgements

S.H. wrote the first draft of the paper and handled the revisions. All authors were involved in improving the intellectual content of the manuscript.
This manuscript is an output from a project funded by the Evidence for Action on HIV treatment and care systems (EfA) research consortium. EfA is funded by the UK Department for International Development (DFID), for the benefit of developing countries. The views expressed are not necessarily those of DFID.
The authors wish to acknowledge and thank Marie-Louise Newell and the Africa Centre (Kwa-Zulu Natal, South Africa), Norah Namuwenge (Ministry of Health, Kampala, Uganda), and Igor Semenenko (Perinatal Prevention of AIDS Initiative, Odessa, Ukraine) for their contributions to this work.
There are no conflicts of interest. This work was presented in part at the XVII International AIDS Conference, Mexico City, 3–8 August 2008 [MOAB0206].
