Tool for monitoring guidelines
- 1. Introduction
- 2. Objective of this tool
- 3. Step-by-step plan for drafting a monitoring plan
- 3.1. Determine the objective of the monitoring
- 3.2. Set the level at which monitoring will take place
- 3.3. Determine what needs to be evaluated or measured
- 3.4. Determine the measurement method(s)
- 3.5. Verify the necessary and available resources
- 3.6. Determine how the data are aggregated and analysed
- 3.7. Describe how the monitoring is implemented
- 3.8. Determine how the data are reported and by whom
- 4. Literature
1. Introduction
Every year, dozens of guidelines are developed or revised in the Netherlands. As part of the professional standard, health care professionals are expected to be aware of the guidelines in their field and to work in accordance with them. It is important to keep an eye on this through continuous monitoring. There are various methods to monitor compliance with guidelines, and which one is chosen depends on the objective of the monitoring¹. No single measurement method is appropriate for all objectives. This means that whoever decides to monitor a guideline must make a number of choices regarding the objective, desired scope, method, analysis and reporting of the monitoring. These choices can be described in a monitoring plan.
¹ Monitoring includes measurement and/or evaluation (quantitative and/or qualitative), continuing over time.
2. Objective of this tool
This tool is a checklist that helps to develop a monitoring plan that suits its intended purpose.
3. Step-by-step plan for drafting a monitoring plan
3.1. Determine the objective of the monitoring
The following possible objectives can be distinguished:
- Evaluation of the substantive knowledge and acceptance of the guideline by the target group/end user. Sometimes this has already taken place in the commentary round, when the draft guideline is submitted to the target group.
- Evaluation of the guideline’s feasibility and practicability in the form of a trial implementation or test in daily practice.
- Evaluation of the guideline’s effect on structures and care processes (professionals’ actions). This can also include the cost of implementing the guideline.
- Measuring guideline compliance for quality improvement purposes as part of a quality improvement cycle. The reasons for deviating or not deviating from the guideline can also be explored.
- Measuring the functioning of care for the purposes of supervision, financing, contracting, accreditation/certification or benchmarking (national or international).
- Measuring the guideline’s effect on the client/patient population. For example, does using the guideline lead to better disease diagnosis, a better quality of life or less damage to health?
- Exploration of problems related to guideline use in practice, as part of the bottleneck analysis when the guideline is revised.
- Evaluation of the guideline’s integration in regional transmural agreements and collaboration processes.
The objective is described partly on the basis of the context and partly on the basis of the basic principles.
3.2. Set the level at which monitoring will take place
Here are some possibilities:
- National. This may involve measuring the national quality policies of the scientific associations, the procurement policies of health insurers, monitoring by the Health Care Inspectorate (Inspectie voor de Gezondheidszorg), or the provision of information that helps patients/clients choose a care provider.
- Regional. This may be linked to a regional infrastructure such as a network of care pathways, a comprehensive cancer centre or a medical coordinating centre.
- Local. Internal quality improvement at the local level usually takes place in a health care facility or practice.
3.3. Determine what needs to be evaluated or measured
The guideline can be evaluated as a whole through qualitative methods that measure awareness and knowledge of, and attitude towards, the guideline. The perceptions of the guideline’s users about its application and applicability in practice can also be measured. An optimistic bias should be taken into account: guideline users may think they often apply the guideline while in reality they do not.
The knowledge and acceptance of, as well as compliance with, specific guideline recommendations can be measured by using indicators (distinguishing between structure, process and outcome indicators). See the Guide for Indicator Development (Handleiding indicatorontwikkeling) from the Dutch Institute for Healthcare Improvement (CBO). In short, the following steps should be followed:
- Define, select and prioritise the core recommendations (usually no more than 10 per guideline). Sometimes the guideline’s authors have already done this.
- Translate the core recommendations into indicators. Measurability and feasibility must be taken into account, preferably by extracting the data from available electronic registration systems. Also take into account which indicators are already available and whether the indicators are feasible in the short or long term.
- (Optional) Draft a minimum standard and/or target standard, from which improvement points can follow. This is related to an estimate of the extent to which deviation from the guideline is permitted.
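As an illustration, the steps above can be sketched as a simple calculation: the share of records that comply with one core recommendation, compared against a minimum and a target standard. This is a minimal sketch; the example indicator, data fields and standard values are assumptions for illustration, not part of any actual registration system.

```python
# Minimal sketch of a process indicator: the percentage of records that
# comply with one core recommendation, compared with assumed standards.

def indicator_score(records, complies):
    """Percentage of records for which the recommendation was followed."""
    if not records:
        return 0.0
    return 100.0 * sum(1 for r in records if complies(r)) / len(records)

# Hypothetical example data: was the recommended annual test performed?
records = [
    {"patient": 1, "annual_test_done": True},
    {"patient": 2, "annual_test_done": False},
    {"patient": 3, "annual_test_done": True},
    {"patient": 4, "annual_test_done": True},
]

MINIMUM_STANDARD = 60.0  # assumed minimum standard (%)
TARGET_STANDARD = 90.0   # assumed target standard (%)

score = indicator_score(records, lambda r: r["annual_test_done"])
print(f"Indicator score: {score:.0f}%")
print("Meets minimum standard:", score >= MINIMUM_STANDARD)
print("Meets target standard:", score >= TARGET_STANDARD)
```

A score between the minimum and target standard would mark the recommendation as an improvement point rather than a failure, in line with the estimate of how much deviation from the guideline is permitted.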
3.4. Determine the measurement method(s)
The use of a guideline can be qualitatively or quantitatively measured.
- Qualitative methods include focus groups, semi-structured interviews, questionnaire-based research, observations and video recordings. These can measure aspects such as attitudes, implementation obstacles, and actions based on advice concerning information, education, communication, consultation or collaboration.
- Quantitative methods include data collection from electronic client/patient records, self-registration lists, specific quality registries such as the Neonatal Intensive Care Evaluation (Neonatale Intensive Care Evaluatie, NICE) or the Netherlands Perinatal Registry (Perinatale Registratie Nederland), and specific registration systems such as the National Medical Registry (Landelijke Medische Registratie), the DBC Information System (DBC Informatie Systeem) or the Digital Dossier on Youth Health (Digitaal Dossier JGZ).
Involve people with sufficient registration experience.
3.5. Verify the necessary and available resources
Consider the following aspects:
- People with sufficient knowledge and expertise on collecting and registering data (health care professionals and support staff)
- Required and available time for registering, aggregating and sending the data
- Use of existing registrations
- Available space and resources: computers, adequate software, registration forms, communication resources
- Funding. This concerns the physical resources and the personnel costs in the form of attendance fees for participants.
If the required and available resources are inadequate, rethink the measurement method(s).
3.6. Determine how the data are aggregated and analysed
The health care professionals enter the client/patient-related data. If these data are used for monitoring, then, ideally, the reliability and validity of a random sample should be checked. When monitoring is done for external purposes (e.g. public reporting), the requirements for data validity are more stringent than when it is done for internal purposes (e.g. audit and feedback). The support of other parties is often needed for aggregation and analysis. At the national level, these supporters can be scientific associations, health insurers or knowledge institutes. At the local level, they can be quality or registration staff of health care providers, supporting organisations such as medical coordination centres, or the laboratories used by GPs in primary care. A data analysis plan must be made that is strongly based on the objective (see 3.1). There are a number of ways to analyse monitoring outcomes:
- a comparison with a pre-determined absolute standard, for example with respect to safety or volume
- a comparison of an organisation and/or health care professional with others (‘benchmark’)
- a comparison of an organisation and/or health care professional with itself, by examining whether working according to the guideline changes over time.
3.7. Describe how the monitoring is implemented
Attempts should be made to tie the monitoring in with existing systems and procedures as much as possible, such as quality visitations by medical specialists and feedback modules in GP information systems. If these are not available or fall short, a small-scale practical test can be chosen, in which the feasibility of data collection is the main consideration. At the same time, factors that positively or negatively affect the guideline’s implementation can be examined, as in the trial implementation processes for the JGZ guidelines. More information can be found in the Tool for Implementation of Guidelines, which includes tips on achieving sufficient support and cooperation for the monitoring. This may be followed by revisions to the guideline and the monitoring plan.
3.8. Determine how the data are reported and by whom
When benchmarking, a choice may be made to compare with colleagues or with other institutions. Both an average and a desirable standard can be set. In the case of multiple measurements over time, trends in the performance of a group or of an individual care provider can be tracked. The significance of the feedback reports depends on the previously defined targets. Who receives the feedback depends partly on who will have to act on the indicator outcomes.
4. Literature
- Beersen N, Kallewaard M, Van Croonenborg JJ, Van Everdingen JJE, Van Barneveld TA. Handleiding indicatorontwikkeling. Utrecht: Kwaliteitsinstituut voor de Gezondheidszorg CBO, 2007.
- Fleuren MAH. Essentiële activiteiten en infrastructuur voor de landelijke invoering en monitoring van het gebruik van de JGZ-richtlijnen. Leiden: TNO Kwaliteit van Leven, 2010.