Author: Claudia Mir Cervantes | Country: Mexico
A few years ago, I coordinated a team that carried out an evaluation of the supply of medicines for the care of chronic-degenerative diseases in health centers in Mexico. We decided to undertake a mixed-methods study. The study began with a structured observation process, in which an inventory of the medicines on hand was made at the time of the visit to each health center. This was complemented by a survey of the centers' staff, carried out by the same team that generated the inventories. In addition, another team conducted qualitative data collection with key actors from a sub-sample of the surveyed health centers and with sector authorities in central offices.
Due to time constraints, the agency that commissioned the evaluation did not authorize piloting of the instruments. The evaluation team made a point of explaining the importance of this step, but ultimately agreed to conduct the survey under these conditions, with the client assuming responsibility.
When processing the information, the team observed that the findings of the qualitative study differed significantly from the results of the quantitative study. The former showed high rates of drug shortages, while the latter showed adequate levels of supply. On closer examination, it was concluded that an error had been made in the collection of quantitative information: medicines were counted one by one only in those health centers that reported having an electronic inventory system. This was not the objective! The idea was to count the medicines in all the health centers in the sample, regardless of whether an inventory system existed.
Upon further exploration, it was determined that the error lay not with the interviewers or the survey process, but in the design of the questionnaire: the skip pattern that determined the transition from one question to the next was wrong. In other words, the counting of medicines was erroneously made conditional on the existence of an inventory system. This type of error is usually easy to detect when piloting the instruments, but since no pilot was conducted, it was overlooked at a huge cost. In addition, those responsible for supervising the quantitative data collection did not identify this as an error, as they had not been involved in the design of the instrument and did not know the precise objectives of the study.
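To make the skip-pattern error concrete, here is a minimal sketch of the routing logic described above. The actual questionnaire is not shown in the text, so the question wording, function names, and structure are all invented for illustration; only the conditional branch mirrors the bug as described.

```python
# Hypothetical reconstruction of the questionnaire routing; names and
# question text are invented, not taken from the actual instrument.

def route_buggy(has_inventory_system: bool) -> list[str]:
    """Faulty skip pattern: the medicine count is (wrongly) gated on
    the center reporting an electronic inventory system."""
    questions = ["Q1: Does the center have an electronic inventory system?"]
    if has_inventory_system:  # the bug: this condition should not exist
        questions.append("Q2: Count each medicine on hand, one by one.")
    return questions

def route_intended(has_inventory_system: bool) -> list[str]:
    """Intended routing: every sampled center counts its medicines,
    regardless of whether it has an inventory system."""
    questions = ["Q1: Does the center have an electronic inventory system?"]
    questions.append("Q2: Count each medicine on hand, one by one.")
    return questions

# Centers without an inventory system are silently skipped by the buggy route,
# which is exactly the gap that piloting would have exposed:
count_q = "Q2: Count each medicine on hand, one by one."
assert count_q not in route_buggy(False)
assert count_q in route_intended(False)
```

A pilot with even a handful of centers lacking an inventory system would have surfaced the missing count immediately, which is why skip logic is a standard item on pretesting checklists.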
Fortunately, the design included data sources that allowed for triangulation. Thus, although there were problems with the direct observation survey (the inventory), the survey of health center directors addressed the issue of drug supply, as did the interviews in the qualitative study. This made it possible to answer the evaluation questions in general terms, but it was impossible to adequately quantify the average supply across all the health centers studied.
What I intend to illustrate with this example is the following:
Claudia Mir Cervantes is an economist with 16 years of experience in the evaluation of public policies. She is also a designer and trainer of monitoring and evaluation capacity-building courses for a wide range of audiences.