Pathfinder: A Guide to Monitoring and Evaluating Adolescent Reproductive Health Programs, Part 1

Publication date: 2000

A Guide to Monitoring and Evaluating Adolescent Reproductive Health Programs
FOCUS on Young Adults Tool Series 5, June 2000

Susan Adamchak, Katherine Bond, Laurel MacLaren, Robert Magnani, Kristin Nelson, Judith Seltzer

© FOCUS on Young Adults, 2000

Any part of this publication may be copied, reproduced, distributed, or adapted without permission from the author or publisher, provided the recipient of the materials does not copy, reproduce, distribute, or adapt material for commercial gain and provided that the author and FOCUS on Young Adults are credited as the source on all copies, reproductions, distributions, and adaptations of the material.

The FOCUS on Young Adults program promotes the well-being and reproductive health of young people. FOCUS is a program of Pathfinder International in partnership with The Futures Group International and Tulane University School of Public Health and Tropical Medicine. FOCUS is funded by USAID, Cooperative Agreement # CCP-A-00-96-90002-00. The opinions expressed herein are those of the authors and do not necessarily reflect the views of the U.S. Agency for International Development.

Please send suggestions or comments to:
FOCUS on Young Adults
Attn: Communications Advisor
1201 Connecticut Avenue, NW, Suite 501
Washington, DC 20036, USA
Tel: 202-835-0818
Fax: 202-835-0282
Email: focus@pathfind.org

Acknowledgements

The authors are indebted to the many people who contributed to the development and review of this Guide. We wish to acknowledge the dedicated efforts made by several graduate research assistants working with FOCUS on Young Adults at the Tulane University School of Public Health and Tropical Medicine, Department of International Health and Development. Stephanie Mullen began the detailed project of compiling program indicators.
Gwendolyn Morgan prepared the appendices listing recommended references and Internet Web sites, and provided formulae for the Indicator Tables. Emily Zielinski assisted with the Indicator Tables and appendices.

Our FOCUS colleagues, Sharon Epstein, Lindsay Stewart, Barbara Seligman and Lisa Weiss, read early versions of this Guide and offered helpful suggestions. Their comments reminded us to keep in the forefront of our efforts the many program staff we hope will find this volume useful. The authors would like to express their appreciation to FOCUS staff member Christine Stevens for her critical review and recommendations for reorganizing several chapters of the Guide. We would also like to recognize Laura Sedlock, whose accomplished editing did much to clarify concepts and blend the voices of the authors.

Ideas and concepts that shaped the development of this Guide were discussed at a FOCUS Research and Evaluation working group meeting in April 1998. Those who participated in the discussion included Lisanne Brown (Tulane University), Nicola Bull (UNICEF), James Chui (UNFPA), Richard Colombia (Pathfinder International), Bruce Dick (UNICEF), Jane Ferguson (World Health Organization), Alix Grubel (John Snow International), Paula Hollerbach (Academy for Educational Development), Marge Horn (USAID), Merita Irby (International Youth Foundation), Lily Kak (CEDPA), Rebecka Lundgren (Georgetown Institute for Reproductive Health), Matilde Maddaleno (Pan American Health Organization), Leo Morris (Centers for Disease Control), Lisa Mueller (John Snow International), Ian Tweedie (Johns Hopkins University Center for Communications Programs), Stephanie Mullen (Tulane University), Phyllis Scattergood (Education Development Center, Inc.), Annetta Seecharan (International Youth Foundation), Linda Sussman (USAID), Katherine Weaver (Pan American Health Organization), Ellen Weiss (Population Council/Horizons) and Anne Wilson (PATH).
Those who provided critical comments and feedback during the field review of this Guide include Jane Bertrand (Tulane University), Carlos Brambila (Population Council, Mexico), Eunyong Chung (USAID), Charlotte Colvin (The Futures Group International), Shanti Conly (USAID), Barbara deZalduondo (USAID), Joyce Djaelani (PATH Indonesia), Maricela Dura (Fundación Mexicana para la Planeación Familiar), Natalia Espinoza (CEMOPLAF Ecuador), Julie Forder (CARE Cambodia), Phyllis Gestrin (USAID), Evam Kofi Glover (Planned Parenthood Association of Ghana), Y.P. Gupta (CARE India), Lisa Howard-Grabman (Save the Children), Douglas Kirby (ETR Associates), Rekha Masilamani (Pathfinder International, India), Ruth Maria Medina (Population Council, Honduras), Dominique Meekers (Population Services International), Irene Moyo (JSI/SEATS), Nancy Murray (FOCUS on Young Adults), Mary Myaya (CARE Lesotho), Sonia Odria (Pathfinder International, Peru), Oladimeji Oladepo (Department of Preventive and Social Medicine, Nigeria), Anne Palmer (PATH Philippines), Susan Pick de Weiss (Instituto Mexicano de Investigación de Familia y Población), Gabriela Rivera (Pathfinder International, Mexico), William Sambisa (PACT Zimbabwe), Jessie Schutt-Aine (International Planned Parenthood Federation), Alfonso Sucrez (Fundación Mexicana para la Planeación Familiar), Oswaldo Tanako (Pan American Health Organization), John Townsend (Population Council/Frontiers), Laelani L.M. Utama (Pathfinder International, Indonesia), Pilar Vigal (CEBRE, Chile), Amy Weissman (Save the Children), Anne Wilson (PATH) and Kate Winskell (Global Dialogues).

Presentations and participant discussion at the YARH Measurement Meeting sponsored by the Centers for Disease Control (CDC) Division of Reproductive Health and FOCUS on Young Adults in September 1999 helped shape the discussion of data collection.
In particular, presentations by Gary Lewis (Johns Hopkins University Center for Communications Programs), Paul Stupp (CDC Division of Reproductive Health) and Cynthia Waszak (Family Health International) were helpful in finalizing this Guide.

Health and Human Development Programs staff of the Education Development Center, Inc. (EDC), managed the review process under the able direction of Phyllis Scattergood and Carmen Aldinger. EDC's Editing and Design Services, led by Jennifer Roscoe, was responsible for the production of this Guide, including design and coordination by Cathy Lee and revisions and copyediting by the editorial staff. Their creative input is very much appreciated.

Acronyms and Abbreviations

ARH    adolescent reproductive health
BCC    behavior change communication
CEA    census enumeration area
DHS    Demographic and Health Survey
FLE    family life education
IEC    information, education and communication
M&E    monitoring and evaluation
MIS    management information system
MOS    measure of size
NGO    nongovernmental organization
PPS    probability-proportional-to-size
RH     reproductive health
RTI    reproductive tract infection
STD    sexually transmitted disease
STI    sexually transmitted infection
USAID  United States Agency for International Development

About the Authors

Susan E. Adamchak is president of Planning & Evaluation Resources, Inc. Her areas of expertise include population and health policy development, program assessment and evaluation of reproductive health and public health programs. She holds a PhD in Sociology from Brown University.

Katherine Bond is Research Assistant Professor at the Tulane University School of Public Health and Tropical Medicine, and Research and Evaluation Advisor at FOCUS on Young Adults.
She has managed HIV/AIDS programs for youth in the United States and Thailand, and has trained governmental and nongovernmental organizations in Asia and Africa on the use of social research methods for program design and evaluation. She has a doctorate in international health from Johns Hopkins University.

Laurel MacLaren was the Communications Coordinator at FOCUS on Young Adults. She founded and managed an adolescent sexual health program with the Indonesia Planned Parenthood Association in Yogyakarta and has provided technical assistance on adolescent reproductive health program design, monitoring and evaluation in South and Southeast Asia. She has a master's degree in public policy from Harvard University.

Robert J. Magnani, PhD, is currently an Associate Professor in the Department of International Health and Development of the Tulane School of Public Health and Tropical Medicine. He has worked in the international population and health fields in the areas of data collection systems and methodology, program/project monitoring and evaluation, and information systems support to program management and policy decisionmaking. He has worked in 27 developing countries in all regions of the world, with specialization in East/Southeast Asia and Latin America.

Kristin Nelson is a DrPH candidate at Tulane University and has a master's degree in medical anthropology from Case Western University. She has conducted extensive review of qualitative and quantitative instruments and youth programs for FOCUS on Young Adults. She lived and worked in Tanzania for two years and has experience working in AIDS education for youth in Ethiopia.

Judith R. Seltzer is an independent consultant and population specialist with an emphasis on population policy, family planning and reproductive health, and design and evaluation of international population assistance programs. She has a PhD from Johns Hopkins University.
Table of Contents

INTRODUCTION
  Why Monitor and Evaluate Youth Programs?
  Who Should Use This Guide?
  Origins of This Guide
  What Are Monitoring and Evaluation?

PART I: THE HOW-TO'S OF MONITORING AND EVALUATION

1 CONCERNS ABOUT MONITORING AND EVALUATING ARH PROGRAMS
  Fifteen Challenges in Monitoring and Evaluating Youth Programs
  Thirteen Tips for Addressing the Challenges of Monitoring and Evaluating Youth Programs

2 A FRAMEWORK FOR ARH PROGRAM MONITORING AND EVALUATION
  Understanding Adolescence and Youth Decision Making
  Three Strategies that Promote Youth Reproductive Health
  Identifying Appropriate Program Activities
  Learning from the International Experience with Youth Reproductive Health Programming

3 DEVELOPING AN ARH MONITORING AND EVALUATION PLAN
  Establishing Goals, Outcomes and Objectives for Youth Reproductive Health Programs
  Measuring Objectives
  Defining the Scope of an M&E Effort
  Determining the Type of M&E Effort You Undertake
  What Is Involved in Carrying Out Each Type of Evaluation? (How to Use the Rest of This Guide)

4 INDICATORS
  What Is an Indicator?
  Types of Indicators
  How Should Indicators Be Stated?

5 EVALUATION DESIGNS TO ASSESS PROGRAM IMPACT
  Why Should I Conduct an Impact Evaluation?
  Types of Study Designs for Impact Evaluations
  Randomized Experiments
  Quasi-Experiments
  Non-Experimental Designs
  Panel Studies
  Minimizing Threats to Evaluation Validity
  Choosing a Study Design for Ongoing Programs

6 SAMPLING
  What Is Sampling, and What Role Does It Play in Program Evaluation?
  Types of Sampling Methods
  What Sampling Method Is Best?
  Cluster Sampling
  Key Issues in Cluster Sampling
  Determining Sample Size
  Commonly Asked Questions About Sampling

7 DATA COLLECTION AND THE M&E WORKPLAN
  Preparing for Data Collection
  Types of Data Collection Methods
  Selecting Appropriate Data Collection Methods
  Collecting Data
  Developing a Workplan for Monitoring and Evaluation

8 ANALYZING M&E DATA
  Processing M&E Data
  Analyzing M&E Data

9 USING AND DISSEMINATING M&E RESULTS
  Why Use and Disseminate M&E Results?
  Using M&E Results to Improve and Strengthen Your Program
  Disseminating M&E Results to Others
  Tailoring Dissemination of Results to Different Audiences
  Common Dissemination Formats

10 TABLES OF ARH INDICATORS
  Where Are the Indicators in the Tables From, and How Can I Use Them for My Program?
  What Kinds of Indicators Will I Find in Each of the Four Tables?
  What Other Information Will I Find in the Indicator Tables?
  Indicator Table I: Program Design Indicators
  Indicator Table II: Program Systems Development and Functioning Indicators
  Indicator Table III: Program Implementation Indicators
  Indicator Table IV: Program Intervention Outcome Indicators

GLOSSARY

BIBLIOGRAPHY

APPENDICES
1 SAMPLING SCHEMES FOR CORE DATA COLLECTION STRATEGIES
  How to Choose a Systematic Sample of Clusters
  Cluster Sampling for Household Surveys
  Alternative Methods for Choosing Sample Households, Youth and Parents
  Cluster Sampling for School-based Surveys
  How to Allocate a Proportional Sample of Students to Schools
  Cluster Sampling for Health Facility Surveys
  Alternative Methods for Sampling Service Transactions and Clients for Exit Interviews
  Sampling for Peer Education Program Evaluations
  Sampling for Client Follow-up Surveys
  Sampling for Focus Groups and Other "Small Group" Data Collection Efforts
  Sampling for In-Depth Interviews
2 HOW TO CALCULATE SAMPLE SIZE REQUIREMENTS
3 REFERENCE SHELF
4 EVALUATION WEB SITES

PART II: INSTRUMENTS

INSTRUMENTS AND QUESTIONNAIRES
  Adapting Instruments to Meet Your M&E Needs
  Developing Surveys
  Developing and Leading Focus Group Discussions
  Using Mystery Clients

1 CHECKLISTS
  1A Program Design Checklist
  1B Checklist of Stakeholder Involvement
  1C Training Course Checklist for ARH Program Staff
  1D Checklist for "Youth-Friendly" Service Characteristics
  1E Checklist of Selection Criteria for Peer Educators

2 TALLY SHEETS
  2A Monthly Tally Sheet for Counseling
  2B Tally Sheet for Communication Products
  2C Tally Sheet for Stakeholder Involvement
  2D Tally Sheet on Number and Characteristics of Youth Counseled
  2E Institutional Infrastructure Tally Sheet

3 REPORTING FORMS
  3A Reporting Form for Counseling
  3B Peer Educators' Reporting Form

4 ARH COALITION QUESTIONNAIRE

5 COMPOSITE INDICES
  5A Index on Quality of Counseling (for Individual Counseling Sessions)
  5B Policy Environment Score: Adolescents

6 INVENTORY OF FACILITIES AND SERVICES
  Background Characteristics
  Section 1: Equipment and Commodities Inventory
  Section 2: Conditions of Facility
  Section 3: IEC Materials and Activities
  Section 4: Supervision
  Section 5: Protocols and Guidelines
  Section 6: Use of Information in Facility Management
  Section 7: Service Statistics
  Section 8: Staffing
  Section 9: Fees for Services

7 OBSERVATION GUIDE FOR COUNSELING AND CLINICAL PROCEDURES
  Counseling Observation
  Contraceptive Methods
  Discussion of STIs and Other Health Issues
  Medical Procedures
  Interviewer Impressions of Consultations

8 INTERVIEW GUIDE FOR STAFF PROVIDING RH SERVICES
  Background Characteristics
  Experience and Training in Reproductive Health Services
  Contraceptives
  Other Reproductive Health Practices
  Socio-Demographic Characteristics

9 GUIDE FOR CLIENT EXIT INTERVIEW
  Background Characteristics
  Section 1: Basic Features
  Section 2: Information About Services

10 QUESTIONNAIRE FOR DEBRIEFING MYSTERY CLIENTS
  Background Characteristics
  Questions for Mystery Clients

11 COMMUNITY QUESTIONNAIRE
  Section 1: Community Information
  Section 2: Reproductive Health Services in the Community
  Section 3: Identification of the Facility

12 COMPREHENSIVE YOUTH SURVEY
  Table of Contents
  Introduction
  Module 1: Background and Related Information
  Module 2: Reproductive Health Knowledge
  Module 3: STI/HIV/AIDS
  Module 4: Attitudes, Beliefs and Values
  Module 5: Social Influences
  Module 6: Sexual Activity, Contraception, and Pregnancy
  Module 7: Skills and Self-Efficacy
  Module 8: Leisure Activities and Concerns
  Module 9: Media Influence
  Module 10: Drugs and Alcohol
  Module 11: Health-Seeking Behaviors
  References

13 FOCUS GROUP DISCUSSION GUIDE FOR IN-SCHOOL ADOLESCENTS

14 ASSESSING COALITION EFFECTIVENESS WORKSHEET
  I. Collaborative Structure and Community Context
  II. Collaboration Staffing and Functioning

15 PARENTS OF YOUTH QUESTIONNAIRE

Part I: The How-To's of Monitoring and Evaluation

Introduction

Around the world, young people are growing up in an environment of dynamic change. For some, this complexity provides opportunity and choice; for others, it means a struggle for survival. Many young people have stamina and energy, curiosity, a sense of adventure and invulnerability. They are resourceful and resilient even under the most difficult conditions. The period of adolescence is, however, a life phase in which young people are particularly vulnerable to health risks, especially those related to sexuality and reproduction: HIV/AIDS, unwanted pregnancy, unsafe abortion, too-early marriage and childbearing, sexually transmitted infections and poor nutrition.

How adolescence is experienced and affects reproductive health has largely to do with the timing and sequence of sexual initiation, marriage and childbirth; the degree to which the timing and sequence of these events are socially sanctioned or forbidden; and the number and availability of options regarding education, job training and employment. There is a great deal of variation worldwide, and even within countries, in the social and cultural values that shape these events. Close relationships between youth and their parents and extended family are particularly important in influencing youth development. Access to preventive and curative services, including contraception and treatment for sexually transmitted infections, is also important in ensuring the reproductive health of youth.

Youth development programs designed to help young people reduce their reproductive health risks reflect that variation. Many of these programs regard young people as a critical resource for the future, and use creative strategies to tackle their complex problems. But many programs face limited funding, community resistance, nonsupportive laws and policies or lack of experience. By knowing more about what works in youth programs and services, we can build strong programs that accomplish what they intend.

Reproductive health refers to the health and well-being of women and men in terms of sexuality, pregnancy, birth and their related conditions, diseases and illnesses. Many programs reaching youth are trying to achieve reproductive health goals that relate to critical sexual and reproductive health outcomes, such as:

• fertility: the number of pregnancies a woman has in her lifetime
• abortion: as it relates to fertility and to health complications for women who have unsafe or clandestine abortions
• illness: caused by sexually transmitted infections, reproductive tract infections, HIV and/or nutritional status
• mortality: primarily related to pregnancy and childbearing, including infant and maternal mortality, and also including AIDS-related deaths
• nutritional status: which impacts both women's health and that of their infants

Why Monitor and Evaluate Youth Programs?

Monitoring and evaluation shows if and how youth programs are working.
Monitoring and evaluation (M&E) can tell us if and how program activities are working. Program managers and donors want to be able to demonstrate results, understand how their programs are working and assess how the programs interact with other events and forces in their communities.

M&E can be used to strengthen programs.
Program managers and staff can assess the quality of activities and/or services and the extent to which the program is reaching its intended audience. With adequate data, you can compare sites, set priorities for strategic planning, assess training and supervisory needs and obtain feedback from the target audience or program participants. You can prioritize resource allocation, improve information for fund-raising, provide information to educate and motivate staff, provide information for advocacy and argue for the effectiveness of your program approach.

M&E results can help institutionalize programs.
M&E results can help stakeholders and the community understand what the program is doing, how well it is meeting its objectives and whether there are critical needs inhibiting your progress. M&E results can be used to educate your board of directors, current and prospective funding agencies, local government officials and key community members, such as local leaders, youth and parents, who can help ensure social, financial and political support for youth programs.
Sharing results can help your program establish or strengthen the network of individuals and organizations with similar goals of working with young people. It can also give public recognition and thanks to stakeholders and volunteers who have worked to make the program a success, and may attract new volunteers.

M&E shapes the decisions of funding agencies and policymakers.
Funding agencies and policymakers are interested in monitoring and evaluation results for a variety of reasons. They need to make strategic choices about how to spend resources and to prove that the expenditure produces quality results. M&E results also help with decisions about identifying and supporting the replication or expansion of particular program strategies. M&E findings often reveal unmet needs or barriers to program success and can be used to lobby for policy or legislative changes. M&E results can raise awareness of youth programs among the general public and help build positive perceptions about young people and youth programs.

Note: What do we mean by "youth"?
Programs reaching young people use different terminology to refer to youth. "Adolescents" is often used to refer to young people ages 10–19, "young adults" generally refers to those ages 15–24 and "youth" may refer to all young people ages 10–24. This Guide encompasses each term and uses the phrase "adolescent reproductive health" (ARH) to cover each type of program.

M&E results contribute to the global understanding of "what works."
The dissemination of M&E results, both those that show how your program is working and those that find that some strategies are not having the intended impact, contributes to our global understanding of what works and what doesn't in improving young people's reproductive health. This advances the field by building a body of lessons learned and best practices that can strengthen ARH programs around the world.

M&E mobilizes communities to support young people.
Monitoring and evaluation results enable communities and youth to inform local leaders about youth needs and to advocate for funding. Results point to ways in which we can develop new and better systems of support for young people and identify additional community resources. They can increase the community's understanding of the potential and actual benefits of the program and its accomplishments, develop a sense of ownership through participation, improve coordination and mobilize support for youth and the array of programs that foster their health and development.

Who Should Use This Guide?

This Guide is designed for program managers who monitor and evaluate adolescent reproductive health programs. Some examples of the people who might find this guide useful include the following:

• Community-level program managers: A manager of a community youth center's peer education program can use this Guide to set up a system to monitor implementation of program activities.
• District-level program directors: A director of a school-based family life education (FLE) program can use this Guide to track progress in the program's implementation.
• Municipal-level health managers: A manager of a clinic's pregnancy and sexually transmitted infection (STI) reduction program can use this Guide to set up an evaluation that will track changes in the incidence of pregnancy and STIs among youth in the entire municipality.
• State- or provincial-level health officials and managers of nongovernmental organizations (NGOs): An official at the state level in a health system can use this Guide to compile data across districts, municipalities or other geographic areas or population groups to develop a picture of the current status of youth health, as well as changes over time.
• Managers or technical staff of private voluntary or donor agencies: A manager of a private voluntary agency can use this Guide to advise other organizations on how to improve their programs and how to set up a monitoring and evaluation system for youth programs.

Note: Seeking outside help
Monitoring and evaluation is an essential aspect of youth reproductive health program development. However, many programs do not have the expertise to carry out some aspects of program evaluation, especially when evaluating large, complex programs. After reading this Guide, you may choose to seek technical assistance from local universities and research institutes that have the expertise to help you design and conduct an effective and efficient evaluation.

Origins of This Guide

This Guide draws on the expertise and experience of professionals in a variety of disciplines.
The family planning field has laid an important foundation for considering how to develop service delivery systems for adults and how to measure inputs, quality, access and program results. This Guide draws heavily on the contributions of USAID's The EVALUATION Project, which approaches evaluation with a focus on a program's systems and delivery and an extensive menu of reproductive health outcome indicators.

This Guide also draws lessons from the field of HIV/AIDS prevention, with its open view of sexuality and sexual behavior and its understanding of the value of social and behavioral change theory in designing effective programs for young people.

The youth development field, which has identified a range of developmental needs and assets, urges us to measure social influences beyond individual knowledge, attitudes and practices, such as building healthy relationships and supportive communities and fostering skills development.
The FOCUS on Young Adults program's own contributions in reviewing youth program experiences in developing country settings are incorporated in this Guide. Those reviews have contributed to our presentation of "key elements" of program design and possible criteria for establishing measures of program quality and access.

What Are Monitoring and Evaluation?

Monitoring and process evaluation measure how a program is working.
Monitoring is the routine tracking of a program's activities by measuring on a regular, ongoing basis whether planned activities are being carried out. Results reveal whether program activities are being implemented according to plan, and assess the extent to which a program's services are being used.

Process evaluation should be done along with monitoring. Process evaluations collect information that measures how well program activities are performed. This information is usually collected on a routine basis, such as through staff reports, but it may also be collected periodically in a larger-scale process evaluation effort that may include use of focus groups or other qualitative methods. Process evaluation is used to measure the quality of program implementation and to assess coverage; it may also measure the extent to which a program's services are being used by the intended target population.

Outcome and impact evaluation measure a program's results and effects.
Outcome and impact evaluation measure the extent to which program outcomes are achieved, and assess the impact of the program in the target population by measuring changes in knowledge, attitudes, behaviors, skills, community norms, utilization of health services and/or health status.
Outcome evaluation determines whether outcomes that the program is trying to influence are changing in the target population. Impact evaluation determines how much of the observed change in outcomes is due to the program's efforts.1

What Can You Determine Using Monitoring and Evaluation?

Monitoring & Process Evaluation:
• Whether the program is being implemented according to plan
• Quality of the program
• Coverage of the program

Outcome & Impact Evaluation:
• Changes in outcomes, such as:
  – changes in behavior
  – changes in knowledge and attitudes
  – changes in interactions with parents
  – changes in community norms
• Whether outcomes are due to program efforts or other factors

This Guide has two parts, which are described below.

PART I: THE HOW-TO'S OF MONITORING AND EVALUATION

Chapter 1: Concerns About Monitoring and Evaluating ARH Programs
• Reviews challenges to and offers tips on measuring the effectiveness of youth programs
• Discusses how to be sure that your results are attributable to the program effort
• Previews ways this Guide can provide information and offer support

Chapter 2: A Framework for ARH Program Monitoring and Evaluation
• Considers the multiple factors that shape adolescence
• Introduces three major strategies used to improve youth reproductive health
• Discusses the Logic Model, an approach to designing an effective strategy

Chapter 3: Developing an ARH Monitoring and Evaluation Plan
• Defines program goals, outcomes and objectives
• Helps you define the scope of your monitoring and evaluation effort
• Offers guidance on how to plan and conduct a monitoring and evaluation effort, using the rest of this Guide

Chapter 4: Indicators
• Defines and explains indicators
• Provides examples of how to select and modify indicators to match your program objectives and activities

1 Outcome evaluations often measure short-term changes, such as changes in knowledge, attitudes and behaviors.
Impact evaluations are often conducted over a longer period and are able to identify changes in sexual and reproductive health outcomes in the target population, such as rates of STIs.

Chapter 5: Evaluation Designs to Assess Program Impact
• Offers guidance on and considerations around the need for impact evaluation
• Reviews study designs you can use to carry out an impact evaluation
• Outlines the technical requirements and resources needed for each type of evaluation
• Presents options for initiating evaluations after a program is underway

Chapter 6: Sampling
• Describes types of sampling methods and ways to determine which one is appropriate for your program
• Focuses on one commonly used sampling method: cluster sampling
• Reviews how to determine and calculate the sample size you need for your program

Chapter 7: The M&E Workplan and Data Collection
• Reviews data collection steps
• Addresses ethical concerns
• Presents options for data collection methods
• Discusses tasks involved in developing an M&E workplan

Chapter 8: Analyzing M&E Data
• Details how to process both quantitative and qualitative data
• Reviews mechanics of data analysis
• Discusses how to analyze and interpret data to draw conclusions about program design, functioning, outcomes and impact

Chapter 9: Using and Disseminating M&E Results
• Reviews reasons to use and disseminate M&E results
• Describes how to use M&E results to improve your program's interventions
• Offers tips on how to disseminate results to priority target audiences
• Presents different formats for dissemination of results

Chapter 10: Tables of ARH Indicators
• Presents four tables of ARH indicators
• Features indicators for each phase of a program (program design, program systems development and functioning, program implementation and program intervention outcomes)
• Describes how to use the Indicator Tables

Glossary
Bibliography
Appendices
• Sampling schemes for core data collection strategies
• Calculating sample size requirements
• Reference shelf of useful books
• Relevant Internet sites

PART II: INSTRUMENTS AND QUESTIONNAIRES
• Offers guidance on adapting instruments for your M&E effort
• Provides sample data instruments
• Gives tips for collecting data through a variety of methods

The information you collect through monitoring and process evaluation will also help you build the case that the changes were a result of your program, even if an impact evaluation is not feasible.

Part I: The How-To's of Monitoring and Evaluation

Chapter 1: Concerns About Monitoring and Evaluating ARH Programs

Fifteen Challenges in Monitoring and Evaluating Youth Programs

1. Some MIS are not set up to track the special characteristics of youth programs.
Some MIS are part of a larger program or service delivery intervention. For example, a family planning program that has a youth component may be set up to track the distribution of contraceptives; it may not be set up to track services that are more likely to be utilized by youth, such as counseling or distribution of information, education and communication (IEC) materials. Adapting your MIS to monitor an ARH program may require only minor modification, such as adding the specification of age in program utilization reporting. However, for larger-scale programs that reach groups other than youth, adding even one new component to the system may be difficult to institutionalize.

2. Tracking services does not guarantee that you will know how many youth you are reaching.
All programs need to determine how they will count the youth they are reaching and how knowing the number of youth reached will improve performance.
Many programs count services, such as the number of meetings held or the number of condoms distributed. However, if all you know is that you distributed 1,000 condoms, you will not know whether 100 youth received 10 condoms each or 500 youth received 2 condoms each. Your information tracking system should try to collect key characteristics of program participants to help assess whether the program is reaching the number and type of youth it was designed to reach. Collecting information about target population characteristics will also help you understand how your program participants change over time. For example, in the beginning, your program may target older youth, but as word spreads about the services available, your program may find itself working with younger adolescents and need to adjust its approach accordingly.

3. You may be unsure whether general standards or implementation strategies are applicable in the country you work in.
Quality refers to the appropriateness of a specific set of professional activities in relation to the objectives they are intended to serve.1 Standards of quality for the design of health education programs have been drawn from a variety of youth programs demonstrated to be effective in changing specific behaviors2 and include factors such as:
• a minimum of 14 hours of instruction,
• small groups and an interactive environment, and
• models of and practice in communication, negotiation and other skills.
However, we do not know the extent to which these standards apply in a more diverse set of developing country settings.
15 Challenges in Monitoring and Evaluating Youth Programs
1. Some MIS are not set up to track the special characteristics of youth programs.
2. Tracking services does not guarantee that you will know how many youth you are reaching.
3. You may be unsure whether general standards or implementation strategies are applicable in the country you work in.
4. Little is known about whether standards for adult programs are appropriate for youth.
5. The elements of successful youth programs have not been well-documented or disseminated.
6. Programs may have trouble developing systems that understand and respond to the needs of youth.
7. Measuring the quality of a program requires understanding complex meanings and addressing sensitive issues.
8. Measuring a program's access and coverage can be complex.
9. Assessing individual reactions to a program can be difficult.
10. Measuring influences on behaviors that didn't occur is difficult.
11. Measuring behaviors at a variety of developmental levels can be problematic.
12. Showing the link between health outcomes and youth development can be complex.
13. Some changes may not be measurable for a long time, and others may be hard to measure at all.
14. Attributing changes in outcomes to a particular program's strategy and activities is difficult.
15. Some types of evaluation are costly and may require funds beyond a youth program's resources.

1 Green and Lewis, 1986.
2 Kirby et al., 1997.

The recommendations in this Guide, such as the Logic Model described in Chapter 2, are designed to help you implement your program strategy, based on assumptions about the social and behavioral factors that influence the health outcomes you hope to produce. The theories these recommendations draw on are well-developed and have been through a rigorous process to test how well their measurements capture the processes of change they propose.
Yet most of these theories have not been tested in developing country settings and need to be adapted to the particular needs of youth in each locale. Since program activities drive the design of any evaluation effort, our lack of understanding about how these theories apply in different contexts can also affect our ability to undertake solid outcome and impact evaluations.

4. Little is known about whether standards for adult programs are appropriate for youth.
After years of developing contraceptive service delivery systems for adults, there are now more or less accepted standards of quality. For example, there is wide consensus that the delivery of quality clinical contraceptive services entails:
• technical competence of service providers,
• respectful treatment of clients,
• effective communication with clients,
• choice of methods,
• mechanisms to encourage continuity, and
• cultural appropriateness and acceptability of services.3
However, we still do not know how comprehensive these standards are for younger age groups. Some of these quality standards are listed in the Indicator Tables as examples of criteria to include in indicators of quality, especially at the design stage.

5. The elements of successful youth programs have not been well-documented or disseminated.
Youth program staff in developing countries often must rely on intuition and experience to design their programs when they don't have access to documented research. However, much is known about the standards that produce effective programs. For example, the FOCUS on Young Adults program has identified the following "key elements":4
• baseline assessment conducted to identify issues, needs and target audiences;
• existence of a clearly defined mission statement that contributes to the achievement of program goals; and
• local stakeholders involved in program planning.

6. Programs may have trouble developing systems that understand and respond to the needs of youth.
Program systems and their functioning will influence factors such as staff performance, service delivery and program utilization. Program systems must be set up to respond to the special needs of young people. For example, the staff recruitment and training system must ensure that staff hold the characteristics and skills to which youth respond well. A program system will help identify whether program materials are being updated often enough to respond to the changing language and trends of youth culture. A training system must ensure that the necessary components of youth programming are included in the curricula.

3 Bruce, 1990.
4 Birdthistle and Vince-Whitman, 1997; Israel and Nagano, 1997; Senderowitz, 1997a; and Senderowitz, 1997b. Note that these key elements reflect the experiences of programs that are concerned more with reproductive health outcomes than with youth development outcomes.

7. Measuring the quality of a program requires understanding complex meanings and addressing sensitive issues.
To determine program quality, you will probably have to elicit subjective interpretations, perspectives and meanings from young people and others in the community. These are each complex because they are based on:
• cultural beliefs and values,
• personal interactions within a community,
• interactions between the young people and the program's staff, and
• opinions and views of people carrying out the program.
Programs that are concerned with youth empowerment, community mobilization, changing social norms and influencing youth culture will need to explore the meanings of such issues as feelings of self-worth, the value of community connectedness and the interpretation of culture. These reflections may be difficult to elicit and harder still to quantify.
For example, you may be able to count the number of community members at a meeting, but have more difficulty assessing their substantive contribution to the meeting, increased concern as a result of the meeting or proposed strategy for social change. Substantive changes in meanings and perceptions are extremely important for youth programs and should not be minimized. They play an important role in the quality of a youth program. To capture these nuances, we need to first employ qualitative approaches to data collection. Once we understand the relevant meanings, values and beliefs, we can then collect data about changes in the number of participants who share those meanings, values and beliefs, i.e., a quantitative approach.

There are numerous obstacles to measuring the outcomes of youth development and reproductive health programs, which helps explain why we have such a limited body of evidence as to "what works." First, many of the intended outcomes are regarded as personal and private. In some societies, talking about sexual behavior and personal relationships may be socially prohibited. Second, evaluators may face parental and community resistance to asking young people questions. Community leaders or other key stakeholders may believe that the young people in their communities do not engage in risky behaviors, and therefore there is no need to ask questions. They may also find it socially or politically dangerous to uncover the truth about young people's sexual behavior, and make an attempt to block data collection. However, there are many examples of programs that asked sensitive questions and found young people who were eager to discuss issues of sexuality and reproductive health, viewing the discussions as an opportunity for
learning and for sharing their own concerns and needs.

8. Measuring a program's access and coverage can be complex.
Access to reproductive health programs concerns the extent to which youth can obtain appropriate reproductive health services at a level of effort and cost that is both acceptable to and within the means of a large majority of youth in a given population.5 We can define access in a variety of ways:
• Geographic/Physical: Convenient hours and location, wide range of necessary services
• Economic: Affordable fees
• Psycho-social: Perception of privacy; perception that both males and females, married and unmarried youth, are welcome; feeling of safety and confidentiality; perception that providers are interested in, informed about and responsive to youth needs
• Administrative: Specially trained staff with respect for young people, adequate time for interactions, youth involvement in design and continuing feedback, short waiting times

Coverage refers to the extent to which your program's services, such as educational or clinical services, are being used by your intended target population. Coverage can be measured by:
• determining the proportion of the target population you are reaching, or
• determining the characteristics of the population you are reaching.

Some aspects of accessibility and coverage can be measured by the absence or presence of something and may be relatively straightforward. For example, finding out whether your program has convenient hours and affordable fees may be easily determined with a short survey of your target population. However, measuring more subjective issues that involve judgments, such as whether staff have respect for young people, can be more difficult because many youth may be reluctant to give their true opinions about program staff for fear of negative consequences, such as having services withheld.
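The two coverage measures above lend themselves to a simple calculation once a program records who it serves. The short Python sketch below illustrates the idea; the register fields, the figures and the estimate of target population size are all invented for the example rather than drawn from this Guide.

```python
# A minimal sketch of the two coverage measures described above. The
# participant register and the target population estimate are hypothetical;
# a real program would draw these from its MIS and from census or survey data.

def unique_participants(register):
    """Deduplicate repeat visits so each youth is counted once."""
    return list({p["id"]: p for p in register}.values())

def coverage(register, target_population_size):
    """Proportion of the estimated target population reached."""
    return len(unique_participants(register)) / target_population_size

def breakdown(register, key):
    """Characteristics of the youth reached, e.g., by sex or age."""
    counts = {}
    for p in unique_participants(register):
        counts[p[key]] = counts.get(p[key], 0) + 1
    return counts

# Hypothetical register: youth 1 appears twice (a repeat visit).
register = [
    {"id": 1, "sex": "F", "age": 16},
    {"id": 2, "sex": "M", "age": 19},
    {"id": 1, "sex": "F", "age": 16},
    {"id": 3, "sex": "F", "age": 14},
]
```

With an estimated target population of 60 youth, coverage(register, 60) reports the share reached (3 of 60, counting each youth once rather than each visit), and breakdown(register, "sex") shows who is being reached, which is exactly the service-count-versus-youth-count distinction raised in Challenge 2.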
Similarly, determining some characteristics of youth may be simple, such as asking participants about their age, sex and place of residence. However, if your program is reaching specific groups of youth, especially those who are marginalized, it may be more difficult to collect these data. For example, if your program is attempting to reach youth who have been sexually abused, the subject may be too sensitive for participants to respond easily to questions. You may have to ask questions repeatedly and to reassure participants that it is safe to talk.

5 Bertrand et al., 1994.

9. Assessing individual reactions to a program can be difficult.
One measure of quality is how your program is received by stakeholders, staff and youth participants. Assessing how the program is received by these groups will contribute to your understanding of how to overcome social resistance to youth programs. It will also help you determine if your program is headed in the right direction and identify problems in time to correct them. However, eliciting and analyzing individual reactions to programs is difficult to do. For example, you may want to engage youth and community members to think critically about their needs and to consider how the program could best reach them. Yet, some individuals may have trouble articulating their needs, or their opinions may defy what we know about the factors that influence health outcomes. Some community members think it is dangerous to give reproductive health information to youth, and they may want to censor the media in order to produce positive health outcomes among youth. Others may automatically express views that are in line with social norms and values, even if these views do not reflect the true needs of the community. Youth, in particular, may be reluctant to express negative feedback about the program to evaluators, who are often older and carry more authority.
Similar tendencies may be found in the reactions of program staff and volunteers. Process evaluations encourage staff to reflect on their work, to see its strengths and weaknesses and to consider alternative strategies. Yet, while most people working with youth are deeply concerned and committed, some have a more ideological approach. They may assume that their strategies are working, even if there is little evidence to suggest that this is true. For example, some staff may insist that increasing access to contraceptive services is the best way to produce results, ignoring the fact that for youth who are abstinent, a more important service may be support in reflecting on and sustaining a decision not to have sex. Others may think that their commitment and hard work should pay off in results, and find it demoralizing to discuss how their efforts may be misguided. Staff will need a trusting environment and a supportive process to allow for the kind of reflection in which they can admit that program strategies might need modification.

10. Measuring influences on behaviors that didn't occur is difficult.
Many ARH programs are concerned with preventing unhealthy behaviors and influencing developmental pathways. They are often concerned with measuring events that did not occur because of the program intervention. For example, some programs may aim to delay the onset of sexual activity or prevent unwanted sex. Others may try to prevent early marriage, thus attempting to delay young women's first sexual experience and increase the age at first birth to a time when delivery will be safer. Obviously, measuring the absence of certain behaviors is complex. It requires estimating what level of behavior would have existed had there not been an intervention, then explaining why the intervention caused behaviors not to occur.

11. Measuring behaviors at a variety of developmental levels can be problematic.
Although youth programs are concerned with reaching young people throughout a developmental transition, we are not always sure what outcomes should be expected at specific ages. For example, we may be unsure what the average age at first sex in our target population is. Moreover, measuring outcomes on sexual behavior can be problematic. Some young people may not have heard about certain sexual behaviors and therefore have problems answering questions about them. This could bias results (e.g., when a girl who has held hands with a boy reports that she has engaged in "sexual activity"). Community members, and sometimes program staff themselves, may believe it is not appropriate to introduce youth to new topics, such as sexual behavior or illegal behaviors, through a data collection effort.

12. Showing the link between health outcomes and youth development can be complex.
Many programs are increasingly concerned with linking health outcomes to youth development. For example, a program may want to demonstrate that increasing girls' education helps to delay first sex and thus has a positive health outcome. However, which aspects of youth development influence health outcomes may be difficult to predict. We cannot assume that developmental factors would have the same influence on health in different settings, as outcomes are embedded in specific and local contexts, each with their own social and cultural values. Measuring the social and cultural context of youth development is difficult and may require time and resources that many programs do not have.

13. Some changes may not be measurable for a long time, and others may be hard to measure at all.
It may be several years before you can observe changes in the health status of young people, as opposed to the relatively short amount of time it takes to observe such outcomes as changes in levels of knowledge. Moreover, some changes in outcomes may occur long after the program is over; for example, a program that promotes delay of first sex among youth ages 10 to 12 may not be able to observe its results for several years after participants take part in the program. It is therefore important to track trends in such behaviors. For many of the outcomes we are concerned with, we do not know how long it will take to bring about changes. Yet, many youth programs are expected to demonstrate changes in longer-term outcomes in a very short period of time. Some programs define their objectives unrealistically and then falsely conclude that the program did not succeed, when, in fact, more time was required to demonstrate the changes. Similarly, some program strategies, particularly those that deal with social change, are difficult to measure in numerical or quantifiable terms. For example, measuring complex social processes, such as community mobilization and empowerment, can be difficult because conceptually we are not exactly sure how to define these processes, nor how to articulate how they are occurring.

14. Attributing changes in outcomes to a particular program's strategy and activities is difficult.
How can you conclude that the changes you observe in your target population occurred as a result of your program activities? Measuring changes in outcomes alone is not enough to conclude that the changes occurred as a result of your program.
Other events, like shifting economic or social conditions, could have affected the outcomes you are measuring. There may also have been other program activities directed at your target audience, such as a mass media campaign, going on at the same time. Finally, your program could have attracted participants who were predisposed to the positive outcomes you were trying to encourage. The primary way to determine that an observed change in outcome indicators is attributable to your program is to use a strong study design (see Chapter 5). However, planning and implementing a strong study design requires a high level of resources and skills and may not be feasible for some programs.

15. Some types of evaluation may require funds beyond a youth program's resources.
Outcome and impact evaluations can be costly, especially when measuring numerous outcomes or those that are more difficult to assess. If programs cannot rely on existing data sources, they may need to collect quite a bit of new information about the youth populations they reach. Developing survey instruments, conducting correct sampling procedures and collecting data from individuals can all be expensive. Programs that do not have in-house evaluation expertise may also have the added cost of technical assistance or hiring external evaluators.

13 Tips for Addressing the Challenges of Monitoring and Evaluating Youth Programs
1. Monitor what your system is set up to deliver: programs for youth.
2. Base your program activities, and thus your evaluation effort, on theory.
3. Review what is known about the factors that influence health outcomes.
4. Test and document the elements that contribute to your program's effectiveness.
5. Engage in a genuinely participatory process.
6. Ensure that your data collection effort addresses ethical concerns.
7. Be creative in asking sensitive questions.
8. Define your objectives realistically and provide enough time to measure changes.
9. Use a combined qualitative-quantitative approach.
10. Use monitoring and process evaluation data to support the outcome and impact evaluation.
11. Learn by trial and error.
12. Limit evaluation costs when possible.
13. Build on the advantages of evaluating youth programs.

Thirteen Tips for Addressing the Challenges of Monitoring and Evaluating Youth Programs

Program staff and evaluators around the world are honing their skills and developing creative solutions to the tough challenges of monitoring and evaluation. Below are tips from practitioners in the field and suggestions on how to use this Guide to address measurement challenges.

1. Monitor what your system is set up to deliver: programs for youth.
Monitor the elements of your program's system that respond specifically to the needs of youth. In the Indicator Tables in Chapter 10, we provide some notes on how you can develop a new system or adapt an existing system to capture the needs of youth programs.

2. Base your program activities, and thus your evaluation effort, on theory.
Basing program strategies on theory helps articulate how programs are working and, if they are successful, aids in their replication and adaptation. The Logic Model introduced in Chapter 2 is an example of how a program can plan its activities based on theories of health behavior and social change. Increased understanding of how these and other theories apply in different contexts will strengthen our ability to undertake scientifically sound outcome and impact evaluations.

3. Review what is known about the factors that influence health outcomes.
To help you demonstrate the link between health outcomes and development needs, you should first review what is known about the influences that you assume will affect outcomes. As Chapter 2 suggests, the best way to do this is to review the existing research and literature about your target population.
However, if you are unable to access the published literature, or if it is not well developed in your setting, you can review your staff's experience or talk to colleagues from other organizations. You may also find ways to assess these influences through creative data collection, such as asking questions about a particular behavior in a number of different ways, or modifying language and terminology on your survey instruments to reflect the most important issues in your setting.

4. Test and document the elements that contribute to your program's effectiveness. A number of elements contribute to a youth program's effectiveness. For example, the design elements proposed in the Indicator Tables reflect the current state of knowledge about the design features that are key to program success. We suggest that these elements be tested (i.e., used and evaluated to find out whether they are appropriate and effective) or modified, according to your specific setting and program priorities. The systems you set up in order to implement a youth program are also key to program success and may be more complex than the systems used to implement other reproductive health service programs. Moreover, the criteria for assessing the quality of youth program system functioning have not been systematically tested. Possible criteria for assessing the quality of your system and its operation are suggested in the Indicator Tables, such as:

• recruitment of staff with appropriate skills,
• components of the training program,
• training program participants who have mastered skills,
• content of reproductive health curricula, and/or
• staff performance.
Incorporating the elements identified by the international experience of youth programs, lessons from the field of family planning and your own intuition and experience is the best way to establish quality programs and services for young people.

5. Engage in a genuinely participatory process. Evaluation that engages and involves stakeholders and staff is more likely to produce reactions that are critical and honest than evaluation conducted exclusively by external experts. A participatory process also encourages the community and staff to use the information from process evaluations and to feel a sense of ownership of evaluation results. Giving youth and adults the opportunity to discuss and analyze their concerns, and to suggest and enact solutions, may also increase your program's effectiveness in reaching its objectives.6 Tips for engaging youth, community members and other stakeholders are provided throughout this Guide.

6. Ensure that your data collection effort addresses ethical concerns. Professional standards of conduct as well as moral principles and values should be exercised in conducting research and evaluation studies. Ethical reviews are designed to consider and mediate the potential risks and negative consequences to participants as a result of their participation in a study or evaluation. Responding to ethical concerns will improve your relationship with the community and enhance your ability to collect quality data. The more ethical your data collection effort, the more honest and reliable the information you collect, which helps ensure that your M&E results are valid. Strategies for ethical data collection among young people are discussed in Chapter 7.

6 The literature on participatory process evaluation is well developed. One resource specific to the context of young adult reproductive health programs is Shah et al., 1999. Listening to Young Voices: Facilitating Participatory Appraisals on Reproductive Health with Adolescents. Washington, DC: CARE International in Zambia and FOCUS on Young Adults.

7. Be creative in asking sensitive questions. Asking questions of a sensitive nature, while difficult, can be done successfully in many different settings. First, you may need to get support from a broad range of community organizations, to whom you will need to make clear why these questions must be asked. Second, you will need to obtain parental consent, particularly for youth who are legally minors. Third, you can employ "skip patterns" to avoid sexually explicit questions about contraceptive use or other sexual practices if youth have not had sex. Additional tips are provided in the discussion on data collection in Chapter 7.

8. Define your objectives realistically and provide enough time to measure changes. Programs that define objectives unrealistically may reach false conclusions. You should budget plenty of time before attempting to measure changes in outcomes, and ensure that your objectives clearly state the outcomes that you expect to produce.

9. Use a combined qualitative-quantitative approach. Qualitative methods can be used to define social and cultural contexts and develop vocabularies for health education programs, each of which contributes to the formulation of instruments to be used during quantitative surveys. Quantitative methods ensure standardized data collection over time and enable definitive measurement of changes in outcomes that can be generalized to the larger population. They can also be used to show that changes are due to your program activities.
Qualitative data can then be used to interpret the findings of quantitative surveys and may reveal program results not discovered through quantitative methods. Qualitative methods can also be used to assess program goals that are difficult to measure quantitatively, such as empowerment and social change. For example, one qualitative approach asks staff and participants to describe the evolution of the program. Employing this method can help us understand what changes were brought about and why. The results of this approach can then be used to develop a quantitative approach to measure whether those changes are producing the intended outcome in the larger community. In Chapter 5, we suggest using a combination of qualitative and quantitative approaches to develop indicators and collect data, which will help you address some of these concerns in more detail.

10. Use monitoring and process evaluation data to support the outcome and impact evaluation. Conducting outcome and impact evaluations requires resources and time, and even those that are well designed may not show conclusive results. Using monitoring and process evaluation data can strengthen the results of your outcome and impact evaluations. For example, your MIS may collect information about exposure to the program's services, such as contact with peer educators. If you are trying to demonstrate such outcomes as increased use of condoms, you may want to measure whether youth received condoms or referrals from a peer educator. While this will not give you conclusive evidence about your program's effect on the entire target population, it may help you demonstrate associations between positive outcomes and exposure to your program's activities.
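To make this concrete, the sketch below cross-tabulates survey responses on peer-educator contact against reported condom use and compares the rates in the two groups. The records and variable names are invented for illustration; this shows an association only, not proof of program impact.

```python
# Hypothetical survey records: did the respondent have contact with a
# peer educator, and did they report condom use at last sex?
# All values are invented for illustration.
records = [
    {"peer_contact": True,  "condom_use": True},
    {"peer_contact": True,  "condom_use": True},
    {"peer_contact": True,  "condom_use": False},
    {"peer_contact": False, "condom_use": True},
    {"peer_contact": False, "condom_use": False},
    {"peer_contact": False, "condom_use": False},
]

def condom_use_rate(rows, contact):
    """Share of respondents reporting condom use within one exposure group."""
    group = [r for r in rows if r["peer_contact"] == contact]
    return sum(r["condom_use"] for r in group) / len(group)

exposed = condom_use_rate(records, contact=True)
unexposed = condom_use_rate(records, contact=False)
print(f"Condom use, exposed: {exposed:.0%}; unexposed: {unexposed:.0%}")
```

In practice the exposure variable would come from your MIS and the outcome from a survey; a higher rate among exposed youth supports, but does not by itself establish, a program effect.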
The information you collect through monitoring and process evaluation will also help you build the case that the changes were a result of your program, even if an impact evaluation is not feasible. For example, your MIS may show that certain activities were carried out more frequently than others. Your process evaluation may determine that young people liked certain messages better than others and became more involved in the program as a result. It may also document that community leaders' support for specific activities resulted in increased participation or the addition of new activities. Documenting factors such as service utilization, program participation and reaction to program strategies will strengthen the case that your program produced the desired outcomes.

11. Learn by trial and error. For other measurement concerns, we need to learn by trial and error. For example, we are learning that we can ask questions about sexual behavior even in settings with very traditional values. Who asks the questions, how we ask them, or the place in which we ask them may need to be modified in each setting. We also need to be creative about generating valid self-reports of risk behaviors, as we often get the "socially desirable" response rather than an accurate account. At this stage, many of the suggested measurements in this Guide have not yet been tested. We will build on what we know as we collect more evidence and as programs like yours undertake more systematic approaches to monitoring and evaluation.

12. Limit evaluation costs when possible. While outcome and impact evaluation can be costly, there are ways for programs to limit costs and still produce valid results. For example, an evaluation can examine only those outcomes most important to your program. Measuring outcomes that require less costly data collection methods or utilize already-existing data can also reduce costs.
Training and utilizing staff to conduct some parts of the evaluation may be feasible for some programs. A sound sampling strategy can help you limit the amount of data collected while not compromising the validity of your evaluation results.

13. Build on the advantages of evaluating youth programs. While the challenges described in this chapter are many, the task of evaluating youth reproductive health programs can be very gratifying. There are some fairly simple ways to avoid challenges in measuring outcomes, including randomly assigning youth in school settings, either individually or by classrooms, and following them over time. You may find communities where the demand for the program outpaces the supply. You may also choose to evaluate the impact of selected programs, rather than all programs, or to use a delayed treatment design, which is discussed in Chapter 5. If a few youth are in desperate need of the program, allow them to participate in the program but not the study. There are many advantages to working with youth. They are interested in learning, and changes in this population can occur relatively quickly. Youth are in a period of great vulnerability, and improving outcomes for them is an investment in our future.

Part I: The How-Tos of Monitoring and Evaluation

Chapter 2: A Framework for ARH Program Monitoring and Evaluation

Understanding Adolescence and Youth Decision Making

Adolescence is not the same everywhere.
The definition of adolescence, and even its existence, has long been a subject of debate. Some argue that adolescence is a period in which children attain physical maturity but are not burdened with adult roles and responsibilities; adolescence, they say, is a phenomenon of modern, industrial societies.1 Others theorize that adolescence exists in all cultures at all times, and define adolescence as a life phase that involves the management of sexuality among unmarried individuals, social organization and peer group influence among adolescents, and training in occupational and life skills.2 A recent modification of the latter definition notes that adolescence is a time of heightened vulnerability for girls and critical capability building for youth (ages 10-19) of both sexes, regardless of their marital and/or childbearing status.3 Adolescence is experienced differently in every society, and even within societies there may be vast differences in how some youth experience adolescence as compared to others. To develop program outcomes, objectives and interventions that will have the intended impact, you must first understand the specific context of the youth target population with whom you plan to work. Sociocultural factors influence how young people experience adolescence, and adolescent sexual behavior reflects a variety of norms and expectations. Particularly where there has been considerable social change in recent decades, young people struggle to balance mixed messages and try to sort out what is best for them. A broad range of social factors influence young people's reproductive health.
Chapter at a Glance
• Considers the multiple factors that shape adolescence
• Introduces three major strategies used to improve youth reproductive health
• Discusses the Logic Model, an approach to designing an effective strategy

The social factors that influence how young people experience adolescence fall broadly into five categories:

• The individual characteristics of young people, including their knowledge, attitudes, beliefs, values, motivations and experiences
• Sexual partners and peers
• Families and adults in the community
• Institutions that support youth and provide opportunities, such as schools, workplaces and religious organizations
• Communities, through which social expectations about gender norms, sexual behavior, marriage and childbearing are transmitted

These factors influence how much schooling a young person should receive, what the pattern of courtship and marriage is and when a young person is supposed to take on adult responsibilities, such as work and support for the family. Yet these factors are also often in conflict with one another. For example, peer norms about the appropriateness of boy-girl relationships may be quite different from those of the family and community. Moreover, each of these factors is constantly changing as the world changes. Understanding and responding to these factors is an important part of developing effective ARH programs. Research reveals much about how these factors shape adolescent reproductive decision making. Researchers are increasingly turning their attention to antecedents, factors that precede and influence how adolescents make decisions about sexual and health behaviors. Antecedents can be positive (a protective factor) or negative (a risk factor). While research can show the relationship of antecedents to sexual decision making, it is more difficult to identify which antecedents most influence reproductive health outcomes.

1 Caldwell, 1998. 2 Schlegel, 1995. 3 Mensch et al., 1998.
Following is a discussion of research findings in each of the five realms of influence.

Note: Research findings
The synthesis of research findings presented here represents more than 350 studies, about 250 of which were undertaken in the United States and about 100 in Asia, Africa, and Latin America and the Caribbean. Each study, completed after 1975, had a sample size of more than 100 youth, used scientific criteria and reviewed the antecedents of age at first sex, frequency of sexual activity, number of sexual partners, and condom and contraceptive use. Research identified both protective factors and risk factors. The studies from the United States were synthesized by Doug Kirby of ETR Associates (Kirby, 1999b), and most of those from developing countries were reviewed by Ilene Speizer and Stephanie Mullen of Tulane University (Speizer and Mullen, 1997). Additional results are from papers forthcoming from FOCUS, including "Social Influences on Sexual Behaviors of Youth in Lusaka, Zambia," "Protective Factors Against Risky Sexual Behaviors Among Urban Secondary Students in Peru" and "The Influences of Family and Peer Contexts on the Sexual and Contraceptive Behaviors of Unmarried Youth in Ghana."

Individual characteristics

In some cases, young people may calculate or negotiate risks before taking them. They may decide to take risks because they feel invincible, are unaware of consequences and/or want to experiment, or because engaging in risks brings them social status or monetary benefits. Research has found that the level of knowledge about reproductive health and sex, as well as community and family norms and values about reproductive health and sex, influences adolescents' reproductive health decisions. For example, young women in Ghana place a high value on early fertility, which is a risk factor for early pregnancy. Self-efficacy, academic performance and motivation to do well in school appear to protect youth from taking sexual risks. Youth who are actively engaged in learning, who place a high value on helping people and who accept and take responsibility are also less likely to take sexual risks. Behavioral intentions often shape adolescent risk; for example, young people who intend to avoid STI infection are less likely to take sexual risks. Other related risks have been associated with sexual behaviors among youth. The use of alcohol and drugs, smoking, depression and stress, loneliness and running away from home all increase sexual risk behaviors among youth. Young people who have been the victims of sexual or physical abuse during childhood or adolescence are often more likely to be at risk. Biological factors also seem to contribute to adolescent risk behaviors. Early physical development and high testosterone levels increase risk-taking. Age and gender also influence sexual risk; in general, boys are more likely to take sexual risks than girls, as are older youth.

Peers and sexual partners

Researchers have found that if youth believe their friends have sex, smoke or use alcohol or drugs, they are more likely to engage in those behaviors. Power imbalances in a partnership, such as age and income differentials between partners, exchange of money or other goods for sex, and sexual pressure from a partner, also contribute to sexual risks. Conversely, a sense of commitment in a relationship seems to protect young people from undesired health outcomes. There is some evidence that males in same-sex relationships are more likely to take sexual risks than their heterosexual peers.
Families

Children of families with lower educational and economic levels have been found to be more likely to be at sexual risk. Families may also enhance risk by devaluing children's education, encouraging early marriage and childbearing or discouraging young people from getting information and services. However, families can also protect youth from behavioral risks. Living with both parents, having positive family dynamics, feeling supported by parents and other adult family members and experiencing proper supervision by adult family members all seem to protect young people from taking risks. Parental values also influence young people; parents and elders who communicate with young people about their values regarding sex have been found to protect youth from a variety of risks. Research results are less conclusive about the impact of sexual and reproductive health communication between parents and youth on adolescent decision making.

Note: Sexual risks
Sexual risks are sexual behaviors that put an individual at risk for unplanned pregnancy, STIs, HIV infection or health problems related to pregnancy and childbearing. Specific sexual risks include:
• too-early initiation of sexual activity,
• sexual intercourse without the use of contraception,
• sexual intercourse without the use of a condom,
• sexual intercourse with more than one partner, and
• sexual intercourse with a partner infected with an STI or HIV.

Institutions

Connections to institutions that support and provide opportunities to youth seem to protect youth from making risky decisions. For example, youth who feel connected to a religious organization are less likely to take risks. School connectedness is also a protective factor, as are successful school performance and a supportive school environment. In contrast, some institutions in the community may promote adolescent risk-taking.
The presence of a sex industry and widespread access to entertainment venues such as bars and discotheques may enhance young people's risk-taking. Some evidence exists that connections to youth organizations also protect youth from risky behaviors. Access to organizations that provide leisure activities, counseling and services for sexually abused adolescents seems to protect youth from sexual risk-taking. Connections with other adults in the community through social institutions, such as neighborhood groups, are also generally found to be protective.

Communities

Disorganization or instability in a community often influences youth to take risks. High levels of unemployment and migration, low educational levels, poverty, crime, political instability and war all seem to enhance risk-taking. A lack of programs, health and contraceptive services, and educational and economic opportunities in a community also negatively affects young people's reproductive health decision making. Some social norms, while not as well documented by research, also appear to influence youth to make decisions that result in negative reproductive health outcomes. Gender discrimination, community norms that do not value adolescent education, restrictions on girls' mobility and cultural expectations to marry and bear children early in adolescence may negatively impact adolescent reproductive health outcomes. Supportive policies can also protect young people from sexual and reproductive health risks. For example, legalizing contraceptive sales to youth and enforcing a minimum legal age of marriage can be protective actions. Policies that support education and health services for adolescents are also protective. Conversely, illegality of abortion and weak enforcement of laws concerning rape and sexual abuse may promote negative reproductive health outcomes among youth. Finally, the mass media influence community norms and values.
Advertisements and media that provide positive role models and support responsible behavior can be protective factors. Conversely, exposure to pornography and sexually permissive or violent media may enhance risk-taking among youth.

Three Strategies that Promote Youth Reproductive Health

Globally, programs to prevent adolescent sexual behavior and disease have demonstrated limited results. Two realities largely account for this. First, many of the evaluations have been short-term and are thus unable to show changes in sexual behavior and other reproductive health outcomes, such as pregnancy and STI rates. Different strategies are needed to influence the many factors (individual, peers, partners, family, institutions, community) that shape young people's behaviors.

[Figure: Factors that Influence Youth Reproductive Health. Individual characteristics; peers and partners; family and household; institutions; and communities all shape youth decision making and reproductive health behaviors, which in turn determine reproductive health outcomes: fertility, abortion, morbidity (STI/HIV, RTI, anemia), mortality and nutritional status.]

Second, it seems that the programs most often evaluated, those that provide information about sex and reproductive health or those that provide reproductive health clinical services to youth, are by themselves insufficient to reduce young people's risky sexual behavior.
Some studies of adolescent reproductive health programs do, however, suggest directions for future ARH program planning. First, the identification of antecedent risk and protective factors has helped program planners identify and target youth who are at greatest risk of sexual coercion and abuse, unwanted sex, unintended pregnancy, STIs and unsafe childbearing. Evaluations have also found that programs that address a broader spectrum of antecedent influences tend to be more effective at reducing risky behavior or maintaining healthy behavior, and more likely to have a long-term impact.4 For example, some evaluations have shown that youth development programs that strengthen relationships with school and family result in a delayed age of sexual initiation and lower rates of unwanted pregnancy and STIs. Similarly, programs that develop specific skills related to partner negotiation and condom use have also resulted in desired reproductive health outcomes. Researchers are focusing on other developmental assets to predict and target risk behaviors, such as constructive use of time, presence of a caring community and commitment to learning.5 To ensure that they can effectively influence antecedents, programs should also base activities on health promotion, social change and behavior change theories. The three broad strategies described below, when employed simultaneously, can have a maximum impact on young people's reproductive health:

• Increase knowledge, encourage healthy attitudes, develop skills and form or change youth's behaviors.
• Improve the social environment so that young people are supported in making healthy decisions and that programs and services are able to operate.
• Increase access to and utilization of youth programs and health services.

4 Kirby, 1999c. 5 Leffert et al., 1998.

STRATEGY 1: INCREASE KNOWLEDGE, ENCOURAGE HEALTHY ATTITUDES, DEVELOP SKILLS AND FORM OR CHANGE YOUTH'S BEHAVIORS.
This strategy aims to influence individual- and interpersonal-level antecedents of adolescent decision making and risk-taking. By focusing on strengthening the individual characteristics of young people, we can help them make healthy decisions about reproductive health. This focus can also influence antecedents at other levels, for instance, by changing community norms, strengthening institutions that support youth and encouraging adults to communicate effectively with young people.

The transition to adulthood requires specific knowledge and skills. To make the transition to adulthood, youth need to have the knowledge and skills that help them to:

• participate as citizens (as members of a household, the neighborhood and the larger community, and as workers),6
• gain experience in decision making,7
• make decisions based on reason,
• assess risks and consequences,
• assess costs and benefits of decisions and actions, and
• interact and communicate with peers, partners and adults.

This knowledge and these skills should be developed from an early age, starting as young as preschool. They should then be sharpened and strengthened during adolescence in order to make a healthy transition to adulthood. An increasing body of research indicates that youth development programs that promote the knowledge, skills and other individual assets needed to make a healthy transition to adulthood, coupled with reproductive health information and opportunities to discuss sexuality, can result in a broad array of positive health outcomes.8, 9 Youth and adult caregivers need clear and accurate information about sex.
As young people go through physical changes related to human reproduction, they need information and opportunities to discuss sexuality in a safe and open way. With the influence of global media and changing social values and norms, young people get inadequate, mixed and inaccurate messages about sex. In many societies, talking openly about sex is taboo, so young people resort to friends, movies, videos and pornographic materials for information. Parents and adult caregivers are often uncomfortable discussing sexual topics with their children. Taking the cultural context into account, clear and accurate information should be made available to young people and their adult caregivers through a variety of media and channels. Young people's attitudes, intentions and motivations to avoid pregnancy and STIs should be strengthened. Some young people have attitudes, intentions and motivations that encourage them to take sexual risks. For instance, some girls may desire to become pregnant because they think having a baby will bring meaning to their lives or motivate their partners to marry them. Other youth may intend to become sexually active without using condoms because they believe that condoms reduce sexual pleasure. Cultural expectations may encourage young people to marry early and have children soon after. Programs may be able to impact these attitudes, intentions and motivations by providing counseling or small-group discussions for young people, which can help them critically examine their attitudes and change their intentions.
For example, a program can help young people examine traditional gender roles and help them make better decisions about what kind of relationship they want to be in, who and when to marry, how much education they want to achieve and how soon they want to have children. Programs can also increase young people's motivation to avoid pregnancy and STIs once they closely examine these consequences.

Chapter 2: A Framework for ARH Program Monitoring and Evaluation

Note: Sexuality. Sexuality includes not only physical and sexual desires, but also issues of identity, societal and gender roles and human relationships, including those with family, peers and partners.
6 Blum, 1999. 7 Ibid. 8 Kirby, 1999c. 9 Leffert et al., 1998.

Health education activities can affect many of the factors that influence youth decision making. A successful ARH program includes activities that influence how young people make decisions as well as the larger environment in which they operate. Some activities common to reproductive health education programs are:
• sexuality, reproductive health and family life education;
• skills training, including life skills, vocational skills and skills specific to sexual behavior, such as negotiation and condom use;
• counseling;
• peer education and outreach;
• communications and media outreach; and
• referrals to health and contraceptive services.
These activities have interacting and overlapping effects; for instance, communications and media outreach may shape community norms about youth, and skills training may stimulate economic opportunities for young people. Research indicates that the key programmatic elements of health education activities listed in the box "Common Elements of Effective Sexuality Education Programs" will lead to improved reproductive health outcomes for youth.11
STRATEGY 2: IMPROVE THE SOCIAL ENVIRONMENT SO THAT YOUNG PEOPLE ARE SUPPORTED IN MAKING HEALTHY DECISIONS AND SO THAT PROGRAMS AND SERVICES ARE ABLE TO OPERATE.
Improving the social environment for youth reproductive health means influencing antecedents that occur among peers, partners, families, institutions and community members. This strategy aims to change social and cultural norms to support young people's healthy decision making, improve programs and policies that reach youth and support adults and institutions that interact with and support youth. A positive social environment supports healthy lifestyles. Relationships with friends, partners and family members, as well as the influence of community, school and other institutions, all play a role in shaping multiple health outcomes. Some programs aim to improve the social environment for ARH. This includes encouraging critical discussion of the social and cultural norms that may adversely impact ARH, such as norms
10 Kirby, 1999b. This table is based on the analysis of evaluations conducted of sexuality education programs in the United States, and may be more or less relevant in some developing country settings. 11 Kirby, 1997; Choi and Coates, 1994; McKaig et al., 1996; and Houvras and Kendall, 1997.
related to gender roles. Other programs might attempt to strengthen institutions that reach and support youth, such as youth clubs and religious organizations, or develop policies and programs that provide the services youth need.

Common Elements of Effective Sexuality Education Programs10
• A clear focus on reducing one or more sexual behaviors that lead to unintended pregnancy, STIs or HIV infection
• A foundation in theoretical approaches that have been shown to be effective in influencing other health-related risks
• Ongoing reinforcement of clear messages on risky behaviors
• Basic, accurate information about the risks of unprotected intercourse and methods of avoiding unprotected intercourse
• Activities that address social pressure on sexual behaviors
• Modeling and practice of communication, negotiation and refusal skills
• A variety of teaching methods, designed to involve the participants and have them personalize information
• Incorporation of behavioral goals, teaching methods and materials that are appropriate to the age, sexual experience and culture of the youth
• A duration long enough to complete important activities
• Teachers and peer educators who believe in the program they are implementing

Supportive and caring communities can make a difference. For example, community organizing builds communities and institutions in ways that enable members to identify and solve problems and respond to needs. It fosters ownership and participation, and engages community members, adults and youth alike, in social action that considers and addresses young people's reproductive health needs. Family support plays a critical role in young people's decision making. Parents and other adult family members shape young people's aspirations and values. Even when adult caregivers have difficulty discussing sex and reproductive health with youth, support from adults can positively influence a young person's reproductive health outcomes.
Adult caregivers need to be encouraged to value the education of youth, provide supervision and support and communicate effectively with young people. Programs that reach parents might aim to help parents create a harmonious relationship with their children by practicing what they could say to effectively show support. Programs might also raise awareness among adult caregivers of how some cultural traditions, such as early marriage, have a detrimental effect on young people's lives. Programs must identify and address the dynamics of youth's social systems. Understanding and addressing these dynamics is also crucial to improving the environment for youth. Many programs work to improve our understanding of social systems and to strengthen and make more responsive those systems that support youth. For example, a program may find that some young people are at a disadvantage, relative both to adults and to other youth, due to differences in age and experience, gender, income and education. An adolescent reproductive health program may not only try to improve the knowledge and skills of those youth, but also attempt to influence the behavior of those holding power over them.
STRATEGY 3: INCREASE ACCESS TO AND UTILIZATION OF YOUTH PROGRAMS AND HEALTH SERVICES.
This strategy focuses on providing the opportunities, programs and services that allow young people to gain access to youth programs and health services. By strengthening the institutions that support youth, such as youth clubs, recreational facilities, religious organizations, schools and health facilities, this strategy aims to
Strategies to Create a Supportive Environment for Youth
• Mobilize community action, particularly among youth.
• Generate collaborative responses to ARH among youth, community members, and institutions and organizations working in the community.
• Raise awareness of young people's needs and the social, cultural, economic and political issues that contribute to their RH concerns.
• Conduct mass media and social marketing campaigns.
• Gain stakeholder and other adult support for discussions with, and activities and services for, young people.
• Address antecedents that contribute to youth RH risks, such as dropping out of school, gender inequity, early marriage, female genital cutting, the sex industry and drug and alcohol consumption.
• Improve other sectors in related areas, such as female education and vocational training.
• Overcome resistance to providing RH information and services to young people, and ensure that these services are affordable.
• Institute policies to promote access to reproductive health information, education and services. Remove restrictions that limit this access.
• Support networks and coalitions to encourage advocacy, service referrals and broader social changes.
influence individuals' participation. The existence of youth programs may also influence families, institutions and communities as they increase the visibility of youth engaged in positive activities and change adults' attitudes toward them. Youth programs can affect young people's lives on multiple levels. Many youth programs aim to increase the number of young people who participate in activities that build their skills, build positive relationships with peers and adults and provide a creative outlet for their energy. For example, youth programs may attempt to build young people's skills; encourage activism in the community; provide sports, arts or other creative activities; or foster adult mentoring of youth. At the individual level, these programs help to build self-esteem and skills and encourage young people to have aspirations for the future. At the interpersonal level, they encourage the creation of healthy norms among peer groups and positive interaction between young people and adults.
At the community and institutional levels, youth can have a direct influence on changing the environment if encouraged to participate as advocates for youth-related programs and policies. The presence of youth organizations can also influence how adults in the community view youth and help the community see young people as an important asset. Connectedness to schools improves young people's knowledge and skills. Increasing the quality and quantity of education young people receive is another strategy to improve adolescent reproductive health. In places where young people have few educational opportunities, increasing local and national commitment to education can be an important part of addressing adolescent reproductive health. In addition to improving access to education, schools can improve their physical and emotional environments. Programs can address sexual harassment in schools, change school policies that do not allow attendance by pregnant adolescents, improve safety in schools or strengthen extracurricular activities. Religiosity and connection to religious organizations can positively influence youth. Adolescence is a time of rapid change, and religious beliefs can help young people understand and process the challenges they face. Religiosity may have more to do with a young person's strong religious belief than it does with his or her actual attendance or participation in religious activities. In addition, evidence shows that feeling connected to a religious organization can support young people in making healthy decisions. Programs may want to increase these links while respecting individual decisions in this realm. Health services enable young people to act on their healthy decisions. The provision of health services, such as counseling, contraceptives, maternal care and nutrition programs, to youth is crucial. Without such services, young people may not be able to act on the positive decisions they make.
In order to reach more young people with health services, we need to understand how young people prevent reproductive health problems and seek treatment both within and outside the formal service delivery systems.

Families can protect youth from behavioral risks.

Many would agree that, in order to make healthy decisions about illness, it is important to see a trained medical service provider. Yet, reproductive health programs have largely addressed older, married women and sometimes men; young people perceive, often correctly, that family planning and STI clinics would not welcome them. The barriers to youth access to health services are numerous:
• long distances to service locations, and unsafe or unavailable transportation;
• inconvenient hours of operation;
• lack of anonymity;
• concerns about privacy and confidentiality;
• staff attitudes and actions, including scolding and moralizing;
• fear and embarrassment;
• cost of services; and
• laws and policies that make serving youth difficult.
Many youth rely on resources outside the formal health service provision system. These resources may include home remedies, traditional methods of contraception and abortifacients, provision of contraceptives through friends or relatives, clandestine abortion, and contraception and medication purchased without a doctor's prescription from pharmacies or traditional health practitioners. Many programs are trying to increase young people's utilization of reproductive health services through activities and strategies that:
• increase young people's knowledge about the availability of reproductive health services;
• generate demand for services, for example, by promoting services through peer outreach workers; and
• examine where and how young people seek information and treatment, and improve the "youth-friendliness" of those services.
The following items are seen as characteristics of effective "youth-friendly" health services, whether services are provided in a clinic, hospital, pharmacy, youth service organization or other venue:12

Characteristics of "Youth-Friendly" Health Services
Health Provider Characteristics
• Staff specially trained to work with youth
• Respect for young people
• Privacy and confidentiality honored
• Adequate time for interaction between client and provider
• Peer counselors available
Health Facility Characteristics
• Separate space and special times set aside
• Convenient hours
• Convenient location
• Adequate space and sufficient privacy
• Comfortable surroundings
Program Design Characteristics
• Youth involvement in design and continuing feedback
• Drop-in clients welcomed and appointments arranged rapidly
• No overcrowding and short waiting times
• Affordable fees
• Publicity and recruitment that inform and reassure youth
• Boys and young men welcomed and served
• Wide range of services available
• Necessary referrals available
Other Positive Characteristics
• Educational material available on site, which can be taken home
• Group discussions available
• Possible to delay pelvic examinations and blood tests before receiving contraceptives
• Alternative ways to access information, counseling and services outside of a formal health facility

Identifying Appropriate Program Activities
We now have a clearer understanding of the multiple levels of influence on adolescence and the broad strategies that promote healthy reproductive behavior among youth. Using this understanding as our foundation, we can design programs that are more likely to be effective and, thus, worth the effort of good evaluation.
These programs will:
• clearly define desired health outcomes,
• identify the protective and risk-enhancing antecedents that influence those outcomes, and
• use program strategies that respond to more than one of the antecedents that impact adolescent reproductive health outcomes.
One way to design your strategy while keeping these elements in mind is to use a Logic Model. The steps, outlined below, are as follows:
• Define your program's goals and desired behavioral outcomes (the process of defining goals and outcomes is discussed in detail in Chapter 3).
• Identify the antecedents that, according to research, influence, both positively and negatively, the behavioral outcomes your program desires. In many places, there may not be enough research to suggest the whole range of factors that influence youth behavior and decision making. In this case, you may base your assumption of influences either on a review of the research suggesting antecedents in other countries, or you can use your experience with youth to make a "best guess" about the antecedents that influence health outcomes. You should also try to directly ask youth what they think influences their decision making.
• Identify one or more program activities that you think, based on your own experience or on the international literature about what works, will specifically influence each antecedent.

Note: The Logic Model. The concept of a "Logic Model," and its importance to the design and evaluation of youth programs, was introduced by Kirby during a presentation to a meeting on Adolescent Health and Development, Washington, D.C., 4–6 February 1999. It is a simplified version of the logical framework, which emphasizes that outcomes should be pursued based on antecedents identified by research.
12 Senderowitz, 1999.
Define Your Program's Goal:
• Decrease rates of pregnancy and STIs among youth ages 14–19 in our district.
Define Your Program's Desired Behavioral Outcomes:
• Decrease premarital sex
• Increase use of condoms among sexually active youth
• Increase age of sexual initiation
• Increase age of marriage
Identify the Antecedents of the Behavioral Outcomes Your Program Desires:
• Community norms about premarital sex and appropriate age of sexual initiation
• Opportunities for education
• Individual's ability to say "no" to sex
• Individual's ability to use contraception
• Youth's access to condoms, contraception and clinical services in a confidential way
• Community norms about appropriate age of marriage
Identify Program Activities that You Think Will Influence Each Antecedent:
• Develop an education program to encourage adults to discuss norms around premarital sex with youth
• Initiate a community mobilization campaign to change norms that do not value girls' education
• Lobby for expansion of opportunities for secondary education
• Provide life skills education emphasizing how to say "no" to sex in the school health education program
• Establish a peer education program to reach sexually active youth
• Encourage development of national health policies that support provision of services to youth
• Include youth representatives in the clinic advisory committee
• Establish "youth-friendly" and confidential services (e.g., by educating health workers) at both the national and local levels
• Influence community norms to support a later age at marriage

As this example shows, by using the Logic Model to guide program design, your assumptions about what influences your desired outcomes are both clear and specific to your context. You are more likely to think broadly about the factors that influence your desired behavioral outcomes and to include contextual influences.
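The goal-outcome-antecedent-activity chain of a Logic Model can also be sketched as a simple nested data structure, which makes it easy to verify that every antecedent has at least one activity addressing it. The sketch below is a minimal, hypothetical illustration (the entry names are abbreviated inventions, not part of the guide):

```python
# A minimal sketch of a Logic Model as nested data: each desired outcome maps
# to antecedents, and each antecedent maps to the activities meant to influence
# it. All entries are hypothetical examples.
logic_model = {
    "goal": "Decrease rates of pregnancy and STIs among youth ages 14-19",
    "outcomes": {
        "increase condom use among sexually active youth": {
            "youth's confidential access to condoms and services": [
                "establish youth-friendly, confidential services",
                "include youth representatives in the clinic advisory committee",
            ],
        },
        "increase age of sexual initiation": {
            "individual's ability to say 'no' to sex": [
                "provide life skills education in the school health program",
            ],
        },
    },
}

def unaddressed_antecedents(model):
    """Return (outcome, antecedent) pairs that no activity addresses."""
    gaps = []
    for outcome, antecedents in model["outcomes"].items():
        for antecedent, activities in antecedents.items():
            if not activities:  # an antecedent with no linked activity is a gap
                gaps.append((outcome, antecedent))
    return gaps

print(unaddressed_antecedents(logic_model))  # prints [] when every antecedent is covered
```

Walking the structure this way simply restates the Logic Model's central check: no activity without an antecedent, and no antecedent left without an activity.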
Finally, your program activities will be designed to link directly to the influences you think will affect the behaviors you are concerned with.

Key Elements of Successful ARH Programs13
• Strategic planning: Effective programs clearly state process and behavioral objectives before the program begins, as a prerequisite to measuring success.
• Target audience identification: Young people have diverse needs, depending on their varied characteristics, such as age, school status, marital status, gender, family characteristics and experience. In designing programs, it is important to identify the specific target group and address its needs accordingly.
• Needs assessment: Understanding the specific issues and needs of the youth who are expected to participate in or receive services from the program ensures that the program's design and content are appropriately shaped.
• Youth involvement: Youth are best able to identify their own needs and will feel more ownership of a program when they are included in design and implementation.
• Community involvement: Community members, such as policymakers, health professionals and religious leaders, should be involved in program planning to ensure support and acceptance.
• Adult involvement: Involving parents and other adult family members may help to ensure that the program does not meet with resistance and to educate parents about reproductive health issues and adolescent needs.
• Protocols, guidelines and standards: Specific and detailed operational policies governing how a program should serve youth may help to encourage a consistent level of quality, particularly when service providers are unfamiliar with the youth population.
• Selection, training and deployment of staff: Staff providing services to young people require specific qualities, training and supervision to ensure that clients are well-treated and to ensure client retention.
• Monitoring and evaluation: Collecting data helps managers monitor performance, evaluate outcomes and impact and improve program strategies.
13 Israel and Nagano, 1997; Birdthistle and Vince-Whitman, 1997; Senderowitz, 1997a; and Senderowitz, 1997b.

Learning from the International Experience with Youth Reproductive Health Programming
The design of ARH programs can be informed by the experience and evaluation of programs from around the world. The "key elements" of ARH programs listed in the box "Key Elements of Successful ARH Programs" were compiled through a literature review. While these elements have not been systematically tested in field settings, we offer them here because they may help in your program design and implementation.

Note: Additional information on key elements. The key elements of adolescent reproductive health programs are presented in four papers produced by FOCUS on Young Adults. These papers can be accessed and downloaded from the FOCUS Web site at <www.pathfind.org/focus.htm>.

Photo: Marsha McCoskrie, JHU/CCP

PART I: THE HOW-TO'S OF MONITORING AND EVALUATION
Chapter 3: Developing an ARH Monitoring and Evaluation Plan
Establishing Goals, Outcomes and Objectives for Youth Reproductive Health Programs
This section discusses the goals, program outcomes and objectives of an adolescent reproductive health program, which form the basis of your M&E effort. Each is a different expression of the reproductive health outcomes the program is trying to achieve. Goals define the overall impact your program hopes to have. A goal states the impact a program intends to have on a target population. The target population is the specific group of individuals your program is trying to affect, and can include youth as well as the adult service providers, teachers, family members or community members who interact with young people.
ARH programs often have the general goal of improving the reproductive health of young people. Goals may be stated more specifically depending on the reproductive health needs of the youth population. Program outcomes are the specific results that your program hopes to achieve. Your program's intended outcomes are related to your established goals, such as a decrease in STI rates or improvement in nutritional status. To produce these outcomes, programs focus on intermediate behavioral changes, such as the delay of sexual initiation, increased use of condoms or contraception or increased breastfeeding. Programs can establish short-term, intermediate and long-term program outcomes, as detailed below.

Chapter at a Glance
• Defines program goals, outcomes and objectives
• Helps you define the scope of your ARH monitoring and evaluation effort
• Offers guidance on how to plan and conduct a monitoring and evaluation effort, using the rest of this Guide

Note: Terminology. People working in evaluation use many different terms to describe what they do. The existing evaluation terminology is often interpreted differently in different settings, and sometimes evaluators spend too much time debating which term is best to use. In this Guide, we use terms and concepts that are intended to reflect the stages and components of youth programs as they are implemented in the field. We have defined them in ways that we hope will be understandable and accessible to those who do not have a research background.

Objectives are explicit, measurable statements of program outcomes. There are two kinds of objectives: population-level and program-level. Population-level objectives state intended results in terms of the target population and are directly related to the outcomes identified by your program. They describe what impact your program hopes to have in the youth population it aspires to reach, influence or serve.
For example:
• Increase the average age at sexual initiation among youth ages 14–19 in our district by one year.
• Increase the percentage of youth ages 14–19 in our district who are actively involved in youth organizations that provide leisure activities.
Program-level objectives state intended results in terms of the structure, management or operations of a program. They describe the activities you will undertake to achieve the impact your program hopes to have. For example:
• Train 30 peer educators to provide quality counseling to youth every six months.

Measuring Objectives
How you conceptualize and express your objectives and their measures will frame your actions. The measure of an objective should be stated in terms of targets. Targets are the level of the objective you plan to achieve within a stated time.1 Targets may be either quantitative (numeric) or qualitative (descriptive), depending on the nature of the activity and the indicator chosen to measure it.2 Targets may express quantity (how much), quality (how well) or efficiency (least cost per outcome produced). The target of population-level objectives should be defined by referring to baseline information. Baseline information describes the current status or situation in a community before an intervention takes place. Baseline information is important because it provides points of comparison against which you will measure whether your objectives were accomplished. If baseline information is not available, you may need to collect information about the target population and its needs before your program begins. This will provide you with starting measures that can be the basis for an outcome or impact evaluation that the program undertakes later.
1 Targets are quantitative estimates that are used for the purpose of budgeting, planning and tracking changes in outcomes.
They should not be understood as quotas, or used as a basis to coerce any individual to accept services, such as contraception, that are inconsistent with his or her moral, philosophical or religious beliefs. Targets should not be used as a basis for compensation of service providers. All youth reproductive health programs should, of course, safeguard the rights, health and welfare of all individuals who take part in the program.
2 Indicators are discussed in detail in Chapter 4.

Goal: To Improve Young Women's Reproductive Health
Short-Term Outcome: Improve the quality of interactions between parents and youth ages 10–19 in our district.
Intermediate Outcome: Increase the average age at sexual initiation among youth ages 14–19 in our district.
Long-Term Outcome: Decrease pregnancy rates among youth ages 14–19 in our district.

The source of baseline information could be:
• a survey of youth prior to the intervention;
• data documenting prior youth program experience;
• external measures collected by another organization, government agency or donor, such as government health facility utilization data;
• information on youth reproductive health obtained from a national survey, such as a Demographic and Health Survey (DHS); or
• the professional judgment of those who work with youth.
For example, your prior program experience may tell you that only 5 percent of youth are seeking counseling services from peer educators in the schools where your program functions. However, you are aware that a partner organization in a neighboring district found that 8 percent of youth sought counseling. Referring to this baseline information, you might determine that your program objective should be "Increase the percentage of youth ages 10–19 who seek counseling services from peer educators to 10 percent within one year."
The target of program-level objectives should be defined by program experience.
To determine targets of program-level objectives, such as the number of peer educators who should be trained, refer to program experience and resources. For example, you might determine that to reach the 500 youth in your target population, you would ultimately like to train 40 peer educators. Since your budget only allows for one training every six months, and experience has shown that training 20 peer educators at a time is most effective, you might want to set your target as training 20 peer educators every six months. Monitoring and evaluation requires an understanding of measurement and indicators. Measurement is the use of methods and procedures for systematic observation and assessment.3 A variety of methods and procedures are used to collect information about your program and its target population.4 To measure how a program is functioning and what outcomes it is having in the target population, you will use indicators. An indicator is a measure of program objectives and activities.5 Changes in indicators demonstrate that a program is functioning and the effect, positive or negative, it is having on the target population. Information is collected on some of your objectives, both program-level and population-level, in order to measure whether a program's activities are being implemented, the quality of program implementation, to what extent the program is being utilized, or the changes that are taking place in your target population, if any. In general, information collected during a process evaluation will measure program-level objectives. Information collected during an outcome or impact evaluation will measure population-level objectives. To measure changes in objectives, baseline information is compared to data collected after the program has been operating for some period of time.
3 Green and Lewis, 1986. 4 Data collection is discussed in detail in Chapter 7.
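The baseline comparison described above is, at its core, simple arithmetic: an indicator is measured before the program and again after some period, and the follow-up value is compared to the target. The short sketch below illustrates this with hypothetical figures (the function name and numbers are inventions for illustration, not part of the guide):

```python
# Illustrative sketch: judging a population-level objective by comparing
# baseline and follow-up values of an indicator against its target level.
# All figures are hypothetical.

def assess_objective(baseline, followup, target):
    """Return the observed change in the indicator and whether the
    target level was reached by follow-up."""
    change = followup - baseline
    return {"change": change, "target_met": followup >= target}

# Hypothetical objective: raise condom use at first intercourse from a
# baseline of 5 percent to a target of 10 percent within one year.
result = assess_objective(baseline=5.0, followup=11.0, target=10.0)
print(result)  # prints {'change': 6.0, 'target_met': True}
```

Note that a positive change alone does not mean the objective was met; the follow-up value must reach the target level set when the objective was written.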
5 Indicators are discussed in detail in Chapter 4.
Monitoring, process evaluation, outcome evaluation and impact evaluation involve the following steps:
• agreeing on the scope and objectives of your M&E plan with stakeholders,
• selecting indicators,
• systematically and consistently collecting information on those indicators,
• analyzing the information gathered,
• comparing the results with the program's initial goals and objectives, and
• sharing results with stakeholders, including youth.
A strong M&E plan should use indicators to measure both population-level and program-level objectives. Population-level objectives relate most directly to the sexual and reproductive health outcomes your program hopes to achieve. However, they are often difficult to measure because they deal with sensitive issues, such as whether or not young people are having sex. You should always try to measure population-level objectives related to intermediate behavioral outcomes, but you may have difficulty doing so.

How to Translate Goals and Outcomes Into Objectives
Goal:
• Improve the reproductive health of youth ages 10–19 in our district.
To translate the goal into outcomes, describe the specific results your program hopes to achieve.
Outcomes:
• Increase condom use among sexually active youth ages 15–19 in our district.
• Increase the number of sexually active youth under the age of 19 who discuss condoms with their partners.
To translate outcomes into population-level objectives, refer to baseline data: The most recent DHS found that 5 percent of youth ages 15–19 in your district use condoms at first intercourse. A baseline survey conducted by your organization found that 15 percent of sexually active youth ages 15–19 have ever discussed condoms with their current sexual partners.
Population-level objectives:
• Increase condom use at first intercourse by youth ages 15–19 in your district to 10 percent within one year.
• Increase the number of sexually active youth under the age of 19 in your district who have ever discussed condoms with their current sexual partner to 25 percent within one year.
To translate outcomes into program-level objectives, describe the activities you will undertake to achieve the outcomes.
Program-level objectives:
• Train 25 peer educators to hold skills-building sessions with youth ages 15–19 about condom use and negotiation.
• Hold 30 skills-building sessions with youth ages 15–19 about condom use and negotiation.

Measuring short-term objectives related to the risk and protective factors your program thinks influence young people's behavior is important for two reasons. First, in the absence of showing changes in behavior, the achievement of short-term objectives is a good sign that your program is producing outcomes. Second, measuring short-term objectives also helps test your assumptions about the factors that influence the behavior and decision making of young people. This information may provide insights into how your program strategy is working, or not working, to influence the behavior that produces the long-term reproductive health outcomes you are concerned with. Measuring program-level objectives is an important part of understanding how your program is working. Program-level objectives are measured during a process evaluation, and provide information on how a program is functioning. A process evaluation may offer insights into why your program is having an impact (or not) and is important if you plan to scale up or replicate the strategy your program uses.
Defining the Scope of an M&E Effort
Scope refers to the extent of the activity you will undertake in a monitoring and evaluation effort. The scope of your M&E effort is determined by several factors.
Ask yourself six key questions:
• What should be monitored and evaluated?
• When should ARH programs be monitored and evaluated?
• How much will M&E cost?
• Who should be involved in M&E?
• Who should carry out the evaluation?
• Where should M&E take place?

Each is discussed below.

WHAT SHOULD BE MONITORED AND EVALUATED?

M&E can measure each stage of your program's development: design, systems development and functioning, and implementation. After you have developed goals, objectives and activities, your next step is to make decisions about M&E in each of these stages. Your M&E effort can measure each stage to determine how the program is working and its impact on the target population. You can review each stage for ideas and options for M&E efforts.

Program design is measured by process evaluation. A community needs assessment often forms the basis for program design. The process of program design involves developing a strategy or systematic approach to address the community's needs, identifying actions and activities required to implement the strategy, and identifying the resources needed to carry out the activities. Assessing how well a program has been designed is one aspect of process evaluation, because the program design affects the success of a program. Documenting problems with the program design will help explain why a program did not achieve its objectives; conversely, if a program is successful, this documentation will help explain what key design elements contributed to its success. Those elements can then be used to expand or replicate a program. Chapter 5 includes information on how to monitor and evaluate the design stage of a program.
Worksheet 3.1: Identifying Program Goals, Outcomes, Context and Objectives
1. What are the program's goals?
2. What short-term, intermediate and long-term outcomes does your program hope to achieve?
3. What short-term, population-level objectives does your program hope to achieve (including objectives related to antecedent factors)?
4. What intermediate, population-level objectives does your program hope to achieve?
5. What are the program-level objectives? How will you achieve the population-level objectives stated above?
6. What activities will be implemented by the program?
7. Who are the stakeholders of the program?
8. How might the local political or cultural context affect the program?
9. Will current economic conditions affect program implementation or participation by youth?

Systems development and functioning is measured through monitoring and process evaluation. Systems development involves the creation of a management and support system to carry out the program. Support systems include MIS, financial management systems, personnel systems, and commodities and logistics systems. Conducting preparatory activities such as recruiting and training staff, developing curricula, drafting service guidelines and developing IEC or behavior change communication (BCC) materials is an important part of systems development. Systems functioning involves the ongoing performance of the systems used to operate the program and includes issues such as how decisions are made within the program, whether internal and external communication channels are functioning well, how well coordination between regional programs and headquarters is conducted, whether training and supervision are ensuring quality performance, and personnel job descriptions and job performance. If you are able to document how a program's systems are functioning, this will help explain why a program is, or is not, working.
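The record keeping an MIS provides can be sketched in a few lines of code. The sketch below is purely illustrative and is not part of the Guide: the record fields, site names and figures are invented assumptions. It only shows how a simple program log supports basic monitoring questions, such as how many sessions were held at each site and how many youth were reached.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class SessionRecord:
    """One row in a hypothetical program log: a single education session."""
    session_date: date
    site: str
    youth_reached: int

def summarize(log):
    """Aggregate monitoring indicators from the raw log."""
    sessions_per_site = Counter(record.site for record in log)
    total_reached = sum(record.youth_reached for record in log)
    return sessions_per_site, total_reached

# Invented example data, not from the Guide.
log = [
    SessionRecord(date(2000, 1, 10), "Clinic A", 18),
    SessionRecord(date(2000, 1, 17), "Clinic B", 25),
    SessionRecord(date(2000, 2, 3), "Clinic A", 22),
]
per_site, total = summarize(log)
print(per_site["Clinic A"], total)  # prints "2 65"
```

A real MIS would also track staff, finances and commodities; the point is only that monitoring indicators are aggregates computed from routinely collected records.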
To determine how a program's systems are functioning, monitoring and process evaluation should:
• document the development of support systems and determine if they are actually operating once program implementation begins;
• assess the performance of support systems; and
• measure how effective the preparatory activities are in readying program personnel for program implementation.

Implementation is measured through monitoring, process evaluation and outcome/impact evaluation. Implementation is the process of carrying out program activities with the target population and providing services to them, i.e., the actual performance of your planned activities. For example, the activities of a youth center may include hiring and training staff and volunteers, holding educational sessions at the center, involving youth in developing leisure activities and providing counseling services to young people. Monitoring and process evaluation reveal how program implementation is occurring. Outcome and impact evaluation help determine whether your program is achieving its objectives by measuring changes in outcomes in your target population. Together, this information should help you explain why the program is, or is not, reaching its objectives, and contributes to an understanding of program outcomes.

The goals, objectives and activities of your program shape the scope of what will be monitored and evaluated. By identifying every activity your program has undertaken at the design, systems and implementation stages, you define the scope of your M&E effort. At each stage, your activities should be monitored and/or evaluated.
The table below illustrates how activities undertaken at each stage of a peer education program might be monitored and evaluated. Identifying your activities at each stage and defining the possibilities for M&E is the first step in determining scope.

How you plan to use M&E information shapes what will be monitored and evaluated. Your intended use of M&E information will help you determine the scope of your M&E effort. Possible uses include the following:
• Ensuring that program activities are carried out as planned: If so, you will need to track changes in program-level objectives through an effective monitoring system.
• Assessing how well activities are being carried out and making improvements as needed during the course of program implementation: If so, you should undertake a process evaluation.
• Determining whether changes in outcome indicators are occurring in the target population for your program: If so, you should conduct an outcome evaluation. If you have more resources and are interested in showing how much of the observed change in outcome indicators is due to your program, then you should conduct an impact evaluation.
• Responding to donors' requirements: Some donors may require programs to undertake outcome or impact evaluations.
• Understanding how your program is performing and what outcomes it is influencing: This will help you decide whether to continue, change or expand your program strategy.

Monitoring and Evaluating a Peer Education Program at Each Stage

Design Stage
• Activities: Determine whether peer educators are an effective way to reach your target population.
• Monitoring & Process Evaluation: Were youth in the target population consulted about whether they thought peer educators would be effective?
• Outcome & Impact Evaluation: N/A

Systems Development & Functioning Stage
• Activities: Develop curricula to train peer educators. Recruit, select and train peer educators.
• Monitoring & Process Evaluation: How many peer educators are recruited, selected and trained? What is the quality of the training provided to peer educators?
• Outcome & Impact Evaluation: N/A

Implementation Stage
• Activities: Peer educators provide counseling three afternoons a week in five health clinics.
• Monitoring & Process Evaluation: How many youth are counseled by peer educators? What is the quality of the counseling provided by peer educators?
• Outcome & Impact Evaluation: Do changes in knowledge, attitudes and behavior occur among youth who are counseled by peer educators?

Reasons to Monitor and Evaluate: Different Needs for Different Stakeholders
By developing consensus among stakeholders about what information should be collected, given your available resources, you can make an M&E effort more manageable.

Program Managers and Staff
What M&E Measures:
• Quality of activities and/or services
• Why some sites are less successful
• Capacity in M&E techniques
• Program coverage
What M&E Results Identify:
• Priorities for strategic planning
• Training and supervision needs
• How to improve reporting to funding agency
• Feedback from clients
• Why a program is not accomplishing what it set out to do
What Decisions Are Guided by M&E Results:
• Resource allocation
• Replication and scaling up of interventions
• Fund-raising
• Motivating staff
• Policy advocacy
• Community mobilization

Funding Agencies and Policymakers
What M&E Measures:
• Evidence of achievement of program objectives
• Program outcomes and impact
• Program cost-efficiency
• Data about youth reproductive health
What M&E Results Identify:
• Priorities for strategic program funding
• Programs that qualify for donor assistance
• Best practices that donors should require of youth programs
• Impact of donor assistance
What Decisions Are Guided by M&E Results:
• How much funding should be allocated to ARH
• What types of youth programs should be funded
• Which program approaches should be presented as models
• New strategic objectives, activities or results packages
• Replication and scaling up of successful programs

Communities and Youth
What M&E Measures:
• Youth behaviors related to reproductive health
• Young people's needs
• How program funds are being spent
• The process and impact of community participation
What M&E Results Identify:
• Actual and potential benefits of youth programs
• Need for new and better youth services
• Community resources that can be used to support ARH programs
• Need for local support for ARH issues and action
What Decisions Are Guided by M&E Results:
• The degree to which community members and youth should participate in and support the program
• How to better coordinate community actions to address ARH
• How many and what type of local resources should be allocated to ARH

When Should ARH Programs Be Monitored and Evaluated?

Monitoring and process evaluation should occur throughout the life of a program. The information you collect can be used to ensure that you are meeting objectives, to improve program performance and to provide feedback and support to staff and program participants. Outcome and impact evaluations are usually done near the end of a program, although they often use baseline information gathered at the program's start. An impact evaluation has to be included in a program's design from the beginning, or you will not have the type of baseline information needed to measure changes in outcomes and then attribute them to your program. It is very important not to conduct an outcome or impact evaluation prematurely.
For some intended outcomes, such as changes in risk behaviors, program activities need to be carried out for some time, perhaps several years, before changes in the target population can be observed. In this case, outcome or impact evaluation may take place after the program has been fully functioning for some time. When to conduct evaluations should be based on your program's objectives, the needs of various stakeholders for information about the program, your knowledge of the program, available resources and your judgment as a manager.

The point in your program at which you start an M&E effort will determine the type of monitoring and evaluation you can undertake. Starting M&E at the beginning of a program is ideal. Monitoring and evaluation should be planned, and started, at the beginning of any new program. Early planning allows you to define your M&E effort based on your objectives and activities, and to be strategic about what you plan to measure. It also enables you to find existing information and collect baseline information at the ideal time: your program's starting point. This will allow you to conduct either outcome or impact evaluations with greater ease and enhances your ability to measure the program's true impact. Starting monitoring and process evaluation early also allows you to use M&E results to make improvements in the program as it is being implemented. Finally, starting early allows you to ensure that M&E costs are adequately covered by your budget.
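Measuring change in objectives comes down to comparing follow-up measurements against baseline values and targets. The sketch below is illustrative only and is not from the Guide: the indicator names echo the worked example earlier in this chapter, while the follow-up values are invented assumptions.

```python
def evaluate_indicators(baseline, followup, targets):
    """Compare follow-up measurements (percent) against baseline values
    and the targets set at the start of the program.

    Returns, per indicator, the change in percentage points and whether
    the target was reached.
    """
    results = {}
    for name, base in baseline.items():
        change = followup[name] - base
        results[name] = (change, followup[name] >= targets[name])
    return results

# Baseline figures echo the example earlier in this chapter; follow-up
# figures are invented for illustration.
results = evaluate_indicators(
    baseline={"condom use at first intercourse": 5.0,
              "ever discussed condoms": 15.0},
    followup={"condom use at first intercourse": 9.0,
              "ever discussed condoms": 26.0},
    targets={"condom use at first intercourse": 10.0,
             "ever discussed condoms": 25.0},
)
print(results)
# {'condom use at first intercourse': (4.0, False), 'ever discussed condoms': (11.0, True)}
```

Note that a change of this kind shows movement in an indicator but not, by itself, that the program caused it; attributing change to the program is the job of an impact evaluation design.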
If You Start M&E Too Late:
• you may not have baseline information on the status of your target population before your program began;
• the information you collect will be less meaningful;
• the information will not be useful to make improvements in program strategy; and
• your evaluation results will be less conclusive about whether changes in outcomes occurred, or whether changes can be attributed to your program's activities.

Some activities can still be measured if M&E is started in the middle of a program. You may realize that you need an M&E plan later, after the program has started. If you start your M&E effort in the middle of your program, its scope will probably be limited. It may still be possible to conduct an outcome evaluation, but you will probably have to use baseline information taken after the program's start. While the results may not be as clear and strong, they may still be useful. While an MIS can be set up mid-program to track monitoring and process evaluation results, it will be less useful than one launched at the beginning.

Even fewer activities can be measured if M&E is started toward the end of a program. Some program managers may not think about what they are going to monitor or evaluate until the program is almost complete. If you start your M&E effort at the end of your program, your options are severely limited. First, it is of little use to set up a monitoring system at the end of a program. While you can assess program activities in retrospect (by soliciting participant and stakeholder feedback after the program is well underway), you may produce biased results. Finally, while an outcome evaluation is possible, it will have to rely on external standards (estimates of the plausible status in your community before the intervention took place) as comparison data.
These standards may or may not accurately reflect the knowledge, attitudes and behaviors of your target population before the program began, thereby limiting your ability to demonstrate change in outcomes.

Flow of an M&E Effort Started at the Beginning of a Program

Early
• Monitoring: Set up monitoring system (MIS); identify indicators and instruments; plan for tracking program, data analysis and reporting.
• Process: Assess systems development and functioning, including training and supervision of staff. Provide early feedback. Assess if program is responsive to youth or if it needs any additions.
• Outcome/Impact: Identify objectives and indicators. Take baseline measurements. Create an outcome or impact evaluation plan.

Middle
• Monitoring: Assess MIS and data. Modify if original system is inadequate or if program adds new components. If program is not performing as planned, launch process evaluation.
• Process: Conduct more formal mid-term process evaluation to assess quality of program performance. Determine coverage, or whether the program is reaching its intended audience.
• Outcome/Impact: Take mid-term measurements. Analyze short-term outcome measures, such as changes in knowledge, increase in use of programs and changes in contextual factors. Provide feedback to program.

Late
• Monitoring: Analyze data from tracking system to conclude if you conducted the program as planned. Prepare and submit reports.
• Process: Analyze end-of-program measurements. Determine what was done to improve quality of program's implementation. Make recommendations for program replication or expansion.
• Outcome/Impact: Take end-of-program (follow-up) measurements. Examine evidence of changes in outcomes. Depending on study design, conduct impact analysis to conclude whether outcomes are attributable to program activities. Report to donors and other stakeholders.

HOW MUCH WILL M&E COST?

Your financial resources will influence the level of evaluation you take on. Program managers must determine whether the time, effort and cost of an evaluation are justified in light of the expected benefits.
If you have no staff capable of conducting an evaluation, or cannot release trained staff from other duties to concentrate on doing sound M&E, and if you cannot afford to hire an outside evaluator, you may elect to carry out only a very basic review of your program's progress.

If you have few resources, your first priority should be to establish a monitoring system. The best use of limited resources is to establish an effective monitoring system, so that you can ensure and document that your program was implemented according to plan. If additional resources are available, undertake some form of process evaluation. Some types of process evaluation can be done quite inexpensively, e.g., by having supervisors periodically observe service delivery or interview program clients as part of their duties. More systematic process evaluations (such as conducting focus groups with youth) require more resources.

Outcome evaluations require a moderate to high level of resources. You will need to decide early if you are going to do an outcome evaluation so that you can budget accordingly. The cost will largely depend on how many outcomes you want to measure and the level of difficulty involved in measuring them. It will also depend on what data sources already exist and how much new data you will need to collect. The following steps can help you contain the costs of an outcome evaluation:
• Limit the outcomes to be examined to only the most important ones for your program.
• Choose outcomes that can be measured using less costly data collection methods.6
• Choose indicators for which data already exist.

Impact evaluations require an even higher level of financial and technical resources. Impact evaluations should only be undertaken when there is a compelling reason for doing so, such as to demonstrate the efficacy of a program strategy in a particular target population, or to meet government or donor requirements.
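A rough calculation can make these cost trade-offs concrete. The sketch below is an invented illustration, not guidance from the Guide: the cost model (per-interview field costs plus fixed costs) and all figures are assumptions, and you would substitute your own local unit costs.

```python
def survey_round_cost(n_respondents, cost_per_interview, fixed_costs):
    """Rough cost of one round of new data collection: per-interview
    field costs (interviewer time, travel) plus fixed costs (training,
    printing, analysis). All inputs are placeholders."""
    return n_respondents * cost_per_interview + fixed_costs

# Invented figures: 400 respondents, $5 per interview, $1,500 fixed costs.
# An outcome evaluation typically needs both a baseline and a follow-up round.
baseline_round = survey_round_cost(400, 5.0, 1500.0)
followup_round = survey_round_cost(400, 5.0, 1500.0)
print(baseline_round + followup_round)  # prints 7000.0
```

Even this toy model shows why the cost-containment steps above matter: fewer outcomes and existing data sources shrink the per-interview and fixed terms directly.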
6 Data collection methods are discussed in Chapter 7.

Whatever resources you have, be creative in using them. There are many ways to collect data. Programs often collect too much data: data about too many issues, data that do not relate to their objectives or activities, or the same data collected from the same target population using different methods. Spending a lot of resources on data collection does not guarantee that you will end up with results that help you better understand your program and participants.

Managing an M&E effort requires planning and creativity. Think carefully about the types of information you need to collect. Find ways to collect data that relate to the outcomes you hope to achieve, the meaning of your program for participants and the factors that influence why your program is succeeding.

Budgeting for an M&E effort is an important part of planning. The worksheet below will help you think about how to calculate the costs of each category in an M&E budget and can be used as a reference when preparing detailed estimates for each budget item. However, some decisions, such as what indicators and data collection methods will be used, and the frequency and timing of data collection, will be based on material discussed in Chapters 4–8 and should also be considered before you finalize your M&E budget.

WHO SHOULD BE INVOLVED IN M&E?

M&E efforts should involve many stakeholders, as many people in the community have an interest in M&E. Stakeholders may include program staff, youth, school administrators and teachers, parents, community leaders, local government officials, service providers and donors.
They may be active in, or want to be involved in, some or all phases of an evaluation: planning and design; collecting and analyzing data; identifying the key findings, conclusions and recommendations of an evaluation; disseminating the results; and, finally, planning how evaluation results can be used to improve a program. Stakeholder involvement can make M&E efforts more relevant and effective. Participatory evaluation facilitates the identification of local needs and priorities, and places evaluation issues in the context of people's lives. Involving stakeholders can help you achieve the following M&E goals:
• Develop consensus about the key issues to be addressed in an evaluation.
• Identify what information stakeholders need about the program.
• Ensure that program staff understand the need for evaluation, their role in its implementation and how the results will be used to improve the program.
• Avoid intrusive or inappropriate evaluation methods.
• Create open lines of communication among stakeholders for later dissemination and discussion of evaluation results.

It is important to involve staff and stakeholders such as community members and youth in the discussion of how M&E information will be used.
Worksheet 3.2: Preparing an M&E Budget
For each item below, estimate the amount of funds needed and note the source of financial support and the source of in-kind support.
• Salaries: For the personnel needed for technical assistance, data collection, data entry and analysis (staff, interviewers, supervisors, drivers, etc.)
• Per diem: Daily costs for lodging and food
• Travel: Bus or taxi fares, gasoline, vehicle rental and maintenance
• Printing: Survey questionnaires, interview guides, final reports, etc.
• Equipment: E.g., bicycles and computers (including maintenance)
• Communication: Telephone, fax, computer, radio, postage, etc.
• Supplies: Paper, computer diskettes, pencils, portfolios, etc.
• Dissemination activities: Seminar or conference costs, refreshments, materials, portfolios, presentation supplies, etc.
• TOTAL

Stakeholders can also help increase the knowledge of external evaluators about the program context and develop opportunities for continued contact between those conducting the evaluation and those affected by it.7

Participatory evaluation is one way to involve the most important stakeholders: youth. Young people targeted by the program are its most important stakeholders. However, some adult program managers and staff may find it difficult to work with youth on a regular basis, given the many differences that can exist between the generations in terms of attitudes, behaviors and beliefs. Participatory evaluation is a set of techniques that emphasize community involvement in gathering knowledge and help place issues of concern in the context of people's lives. This experiential knowledge aids in directing appropriate responses and defines the array of services offered.
Participation generally takes place throughout all phases of the evaluation: planning and design, gathering and analyzing data, disseminating results and preparing an action plan to improve program performance.8 Program planners in the United States have found some effective strategies for working with youth that have application across many social settings; these are presented in the tips box below.

Involving stakeholders and youth can raise problems. Disadvantages to involving stakeholders, especially those from other organizations, include the following:
• It may be difficult to be objective in selecting representative young people and organizations to participate in the evaluation.
• Stakeholders may not know much about how a program works.
• Organizations may hold competing perceptions and concerns that are difficult to resolve or prioritize.

7 Lawrence, 1989.
8 USAID CDIE, 1996.
9 Adapted from Clark, Haughton-Denniston, Flinn, et al., 1993, cited in Brindis and Davis, 1998b: Volume 4, p. 49.
• The ability of an evaluation to be independent may be compromised by including diverse organizations.
• More participants may require a greater allocation of your staff time and resources.10

Tips for Involving Youth in Participatory M&E9

• Tip: Integrate young people into program efforts and M&E planning.
  Examples: Schedule meetings in accessible locations. Maintain communication and convey needed information. Encourage full participation and voting rights.
• Tip: Be open and nonjudgmental about young people's insights and suggestions.
  Examples: Guard against dismissing or reacting negatively to young people's suggestions. Make time for them to feel comfortable and participate fully. Solicit their ideas and opinions.
• Tip: Take advantage of the expertise young people offer.
  Examples: Encourage youth to share their knowledge and perspectives about positive or negative program effects.
• Tip: Be honest about expectations for the program, young people's contributions and benefits of youth participation.
  Examples: Do not claim that the program can solve all problems. Be realistic about what you can tackle.
• Tip: Offer support for young people.
  Examples: Provide mentoring, financial assistance, transportation, training, supervision and information.
• Tip: Make work interactive and fun.
  Examples: Be creative and allow youth to be creative. Design programs that are informative, fun and fulfilling.
• Tip: Help build young people's skills so they can become more involved.
  Examples: Provide information and build skills that increase youth's confidence. Allow them to practice ways to communicate with different audiences.

WHO SHOULD CARRY OUT THE EVALUATION?

Evaluations can be done by your own staff, by those outside your program or by a combination of the two. When deciding who will carry out an evaluation, you should consider several issues. First, what is the most appropriate structure for the evaluation team? Second, what is feasible? What are you able to afford, given your budget? You may find that it is simply too cumbersome or inefficient to involve all stakeholders in every M&E activity. Using staff to carry out evaluation has advantages.
In-house staff members are familiar with the program and can be trained quickly. They also may be aware of particular program strengths or weaknesses that require attention. Finally, the results of the evaluation will be most useful to program staff, who are positioned to modify and improve the program accordingly. Using staff may also be more financially feasible, as outside evaluators are often more expensive. Also, for financial or logistical reasons, outside evaluators may only be available for a limited time. Using outside evaluators is more appropriate in some situations. Funding agencies sometimes require that evaluations be carried out, at least in part, by outside evaluators. Since they have less stake in the outcome of the evaluation, outside evaluators are perceived to be more objective in drawing conclusions and tend to have more credibility. However, while maintaining objectivity, outside evaluators must be sensitive to program goals and the local context within which the program is implemented. Rather than posing a threat, evaluators should be considered in light of their role as part of the support system for the program. When staff resources are limited, using outside evaluators may be more feasible. Whether to use in-house staff or outside evaluators also depends on the available time and expertise of your program staff, as evaluations can be very demanding. You will need to assess the experience and skills of your staff in conducting M&E, and how much time they will have to spend on these efforts. You will also need to consider which staff must be involved, how vacations and holidays may affect their availability, and whether you need any outside help. Ideally, youth and other significant stakeholders should participate to the extent possible. In some cases, evaluation may be coupled with technical assistance as part of a broader approach to enhance the effectiveness of the program and to train in-house staff. 
10 Lawrence, 1989.

WHERE SHOULD M&E TAKE PLACE?

If your program has only one or two sites or covers a small geographic area, you can more easily conduct monitoring and evaluation efforts for the entire area or set of sites. However, if your program covers a larger area or multiple sites, you may need to narrow the geographic scope of the effort. How you select the sites or areas to be included in your M&E effort will depend on your information needs and financial and human resources.

Make an effort to monitor each program site. As monitoring is essential for effective program management, you should try to include all program sites in the collection of basic information, such as whether planned activities have been completed and the number and sex of clients that have been served by your program. This will provide a picture of how program implementation is progressing, as well as allow you to compare how sites perform in relation to one another. If some of your program sites have greater capacity to collect data than others, you might consider having them gather additional monitoring data that will be helpful in answering other questions about program implementation. If it is not possible to collect the same monitoring data from all sites, you probably should not implement a program there unless you are absolutely sure that the strategy will work without monitoring. For many strategies, peer education for example, monitoring is essential to ensure that the program is being implemented as planned. If you determine that monitoring is not needed for a program to work well, you can choose to monitor only parts that you think are "representative" of the sites in your program. How to choose a representative sample of sites is discussed in Chapter 6.
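The simplest way to pick a "representative" subset of sites is a random draw. The sketch below is illustrative only: the site names are invented, and it shows just an unstratified simple random sample, whereas Chapter 6 discusses sampling (including stratified designs) properly.

```python
import random

def sample_sites(sites, k, seed=0):
    """Draw a simple random sample of k program sites.

    A fixed seed makes the draw reproducible, so the same sites are
    selected each time the plan is re-run.
    """
    rng = random.Random(seed)
    return sorted(rng.sample(sites, k))

# Invented site list for illustration.
sites = ["Clinic A", "Clinic B", "Clinic C", "School 1", "School 2", "Youth Center"]
chosen = sample_sites(sites, k=3)
print(chosen)
```

A simple random draw gives every site an equal chance of selection; if sites differ systematically (urban versus rural, clinic versus school), a stratified draw within each group would better preserve representativeness.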
You probably will have to limit data collection for evaluation. For example, it is rarely possible in process evaluations to evaluate every service contact or obtain feedback about the program from every participant. In larger programs, you might also have to limit process evaluations to only a sample of your program sites. Most outcome and impact evaluations also require that some restrictions be applied about where data will be collected. This is especially true when program objectives pertain to outcomes measured for the general population of youth. Here, it will usually be necessary to collect data from youth in only a sub-set or sample of the geographic areas covered by a program.

Choosing sites for conducting evaluations requires careful consideration. Sometimes a program does not have a clearly defined geographical area of influence. If the area of influence of a program is defined very broadly, such as an entire city or region, then it may be more difficult to measure changes in objectives even if the program performs optimally.

In choosing sites or geographic areas for conducting evaluations, ask yourself these key questions:
• What geographic area does the program reach?
• How many sites or geographic areas do I need to conduct a strong evaluation?
• Do these sites represent the characteristics of the youth target population and the program being implemented?
• How many observations do I need per site or geographic area?
• How should I go about choosing the sites or geographic areas?

Many of these issues are addressed in Chapters 5 and 6. The following practical considerations will also likely influence your decision about where to evaluate:
• Are existing data available in all sites?
• How easy is it to collect new data in each site?
• How will data collection affect the performance of regular program activities?
• Are there any other resources available to help collect and analyze the data (e.g., local universities or research groups)?

Determining the Type of M&E Effort You Undertake

The following checklist and flow chart can help you determine the type of M&E activity you might undertake. Complete this checklist before using the flow chart:
☐ Are the goals of your program clear?
☐ Are your objectives related to your goals and intended outcomes?
☐ Are your objectives expressed in measurable terms?
☐ Are your activities defined?
☐ Do your activities relate to your program objectives?

Use the flow chart to determine the type of M&E effort you should undertake. When you reach a box in bold, that is the most appropriate type of M&E effort for your program. [The flow chart is not reproduced in this text version.]

What Is Involved in Carrying Out Each Type of Evaluation? (How to Use the Rest of This Guide)

The following table will help you determine how to use the rest of this Guide. Depending on the scope of your M&E effort, it guides you to relevant ARH indicators (Chapter 4 and the Indicator Tables), data sources and data collection methods (Chapter 7) and corresponding instruments (Part II of this Guide). For outcome and impact evaluations, study designs are suggested in Chapter 5. The table also refers to relevant sampling issues, which are explained in Chapter 6, and to types of analysis, described in Chapter 8.
How to Use the Rest of This Guide

Monitoring (level of resources needed: Low)
• Selecting indicators (Chapter 4): Indicator Table II: Program Systems Development and Functioning Indicators; Indicator Table III: Program Implementation Indicators
• Data sources, methods and collection (Chapter 7): Service statistics; administrative reports and documents; event logs; other types of logs; surveys
• Instruments and data collection tools (Part II of this Guide): Instrument 2: Tally Sheets; Instrument 3: Reporting Forms; Instrument 5: Composite Indices; Instrument 6: Inventory of Facilities and Services
• Study design (Chapter 5): N/A
• Sampling (Chapter 6): Limited use
• Data collection and analysis (Chapters 7 and 8): Comparing systems development and functioning and implementation indicators against targets; comparing indicators for different program sites; assessing trends of indicators over time

Process evaluation (level of resources needed: Low to Moderate)
• Selecting indicators (Chapter 4): Indicator Table I: Program Design Indicators; Indicator Table II: Program Systems Development and Functioning Indicators; Indicator Table III: Program Implementation Indicators
• Data sources, methods and collection (Chapter 7): Administrative reports and documents; event logs; other types of logs; document review; site visits; direct observation; interviews with key informants (e.g., service providers, managers); unstructured feedback from clients; exit interviews with clients; mystery clients; focus groups or informal listening sessions
• Instruments and data collection tools (Part II of this Guide): Instrument 1: Checklists; Instrument 2: Tally Sheets; Instrument 6: Inventory of Facilities and Services; Instrument 7: Observation Guide for Counseling and Clinical Procedures; Instrument 8: Interview Guide for Staff Providing RH Services; Instrument 9: Guide for Client Exit Interview; Instrument 10: Questionnaire for Debriefing Mystery Clients; Instrument 14: Assessing Coalition Effectiveness Worksheet
• Study design (Chapter 5): N/A
• Sampling (Chapter 6): Choosing samples of program sites, participants, service providers, service transactions, etc. for the measurement of systems development/functioning and implementation indicators
• Data collection and analysis (Chapters 7 and 8): Comparing systems development and functioning and implementation indicators against targets or standards; comparing indicators for different program sites; assessing trends in indicators over time

Outcome evaluation (level of resources needed: Moderate)
• Selecting indicators (Chapter 4): Indicator Table IV: Program Intervention Outcome Indicators
• Data sources, methods and collection (Chapter 7): Service statistics; surveys; population surveys; focus groups or informal listening sessions
• Instruments and data collection tools (Part II of this Guide): Instrument 3: Reporting Forms; Instrument 6: Inventory of Facilities and Services; Instrument 11: Community Questionnaire; Instrument 12: Comprehensive Youth Survey; Instrument 13: Focus Group Discussion Guide for In-School Adolescents
• Study design (Chapter 5): N/A
• Sampling (Chapter 6): Choosing samples of youth, program sites, communities, etc. for the measurement of outcome indicators
• Data collection and analysis (Chapters 7 and 8): Comparing outcome indicators against targets; comparing indicators for different program sites; assessing trends in indicators

Impact evaluation (level of resources needed: High)
• Selecting indicators (Chapter 4): Indicator Table IV: Program Intervention Outcome Indicators
• Data sources, methods and collection (Chapter 7): Service statistics; surveys; population surveys; focus groups or informal listening sessions
• Instruments and data collection tools (Part II of this Guide): Instrument 3: Reporting Forms; Instrument 6: Inventory of Facilities and Services; Instrument 11: Community Questionnaire; Instrument 12: Comprehensive Youth Survey; Instrument 13: Focus Group Discussion Guide for In-School Adolescents
• Study design (Chapter 5): See Chapter 5
• Sampling (Chapter 6): Choosing samples of youth, program sites, communities, etc. in each experimental or comparison group for the measurement of outcome indicators
• Data collection and analysis (Chapters 7 and 8): Comparing outcome indicators for "treatment" and "control" groups

Chapter 4: Indicators

What Is an Indicator?

An indicator is a measurable statement of program objectives and activities. Once you have defined a program's objectives and activities, you can develop indicators, or measures, for each objective and activity. Some programs may have single indicators, and others have multiple indicators. Generally, it is preferable to have several indicators to capture the multiple dimensions of your program. However, you should carefully select a manageable number of indicators so that they accurately reflect your program objectives and activities and your evaluation priorities.
Chapter at a Glance
• Defines and explains indicators
• Provides examples of how to select and modify indicators to match your program objectives and activities

Continuing with the example presented in Chapter 3, the table below shows some indicators that can be used to measure the objectives and activities associated with delaying the age at sexual initiation through a peer education program.

Indicators can be expressed in different forms. As you can see in the example, indicators can be expressed in different ways. Numeric indicators are expressed as counts, percentages, ratios, proportions, rates or averages. The following indicators are counts:
• Number of radio advertisements aired
• Number of clients who seek peer counseling services

In evaluation terms, it is usually more informative to state indicators as percentages, ratios and proportions. These measures allow you to see what was achieved in relation to the denominator, or total possible number, while counts simply give you an idea of the number of events that took place, or the number of people reached, without indicating the total possible number. For example, you may count the number of youth who have delayed sexual initiation, but if you have a denominator, i.e., the total number of youth in a given geographic area, you can calculate the proportion of youth in that area who delayed sexual initiation. This allows you to measure the coverage of your program and its effects on behaviors at the population level.

Note: Later in Part I of this Guide, we provide a definition for each of these terms and give instructions for how to calculate different types of numerical indicators.
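The arithmetic behind converting a count into a proportion or percentage is simple but worth making explicit. A minimal sketch in Python, using invented figures:

```python
# Invented figures for illustration only; real values would come from your
# survey data and local population estimates.
youth_delayed = 1200   # count: youth ages 14-19 in the district who delayed initiation
youth_total = 8000     # denominator: all youth ages 14-19 in the district

proportion = youth_delayed / youth_total   # count divided by total possible number
percentage = 100 * proportion
print(f"{percentage:.1f}% of youth in the area delayed sexual initiation")
```

The count (1,200) says nothing about coverage on its own; the percentage (15%) places the count against the total possible number, which is what lets you compare sites or track change over time.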
Determining Objectives, Indicators and Activities

Population-level objectives:

Objective: Delay age of sexual initiation among youth ages 14–19
Possible outcome indicator: Average age of sexual initiation among youth ages 14–19

Objective: Increase the percentage of youth ages 14–19 who seek counseling services from peer educators to 25 percent
Possible outcome indicator: Percentage of youth ages 14–19 who seek counseling services from peer educators

Activities and possible program indicators:
• Promote availability of peer counseling services through radio ads: number of radio advertisements aired; number of new radio advertisements aired
• Implement "peer counseling corner" in five health clinics: number of clinics that have peer counseling corners; number of days peer counseling corner is staffed per week; number of clients who seek peer counseling services
• Have peer educators give informational talks at schools twice a week: number of informational talks by peer educators in schools; number of youth who attend informational talks; quality of peer educators' presentations (based on criteria for curricula)
• Have peer educators provide quality counseling services: proportion of clients who rate counseling as high-quality during exit interviews; quality score of peer counselors (based on counseling criteria) given by observers

Program-level objectives:

Objective: Increase capacity of peer educators to provide counseling to youth
Possible outcome indicator: Number of peer educators who are competent to provide counseling to youth

Activities and possible program indicators:
• Recruit peer educators from pool of adolescents who attend clinic: recruitment completed? (Yes/No)
• Select 30 peer educators: number of peer educators selected
• Develop training curricula: training curricula developed? (Yes/No); number of "key topics" training curricula cover as compared to checklist
• Train peer educators to provide counseling: proportion of peer educators who demonstrate effective counseling skills during role plays

Non-numeric indicators are expressed in words.
They are also referred to as qualitative or categorical indicators. These indicators usually denote the presence or absence of an event or criterion. The following are non-numeric indicators:
• Peer educator recruitment completed? (Yes/No)
• Training curricula included topic on relationships and sexuality? (Yes/No)

Non-numeric indicators can also be used to summarize descriptions or assess quality or comprehensiveness. You can do this by creating an index of items that can each be assigned a number, which are then totaled to produce a score. In the table below, for example, each of the items would be assigned a point, and those points would be totaled to determine the overall score of the presentation.

Like objectives, indicators should be specific. The more specific your indicator, the more likely you will accurately measure your objectives and activities. Indicators should specify:
• the characteristics of the target population you intend to reach, such as gender, age and residential, marital and schooling status;
• the location of the target population, such as rural or urban youth, youth in a certain city or district, youth who participate in your program or youth who attend certain schools or clinics; and
• the time frame within which you intend to achieve your objectives.

An indicator should have the same scale as its corresponding program objective. For example, if your objective is to delay the average age at sexual initiation among youth ages 14–19 who live in your district, then the indicator should measure "average age at sexual initiation among youth ages 14–19 who live in district X." If your indicator's scale differs from your objective's, your results will be misleading.

Types of Indicators

Once you have decided on the scope of your M&E effort, different indicators should be developed for each component of the program to be measured.
For example, if you plan to conduct a process evaluation, you should develop indicators for design, systems development and functioning, or implementation. If you plan to conduct an impact evaluation, you should develop indicators for program implementation and outcomes.

Using Non-numeric Indicators to Measure Quality

Indicator: Peer educator's presentation is comprehensive.
Index and quality score: Observe the presentation. Check each topic that is covered accurately. Give one point for each item checked, and total the points to determine the quality score.
☐ Anatomy and reproduction
☐ Abstinence
☐ Contraception
☐ How to use a condom
☐ Making the decision to have sex
☐ How to say "no" to sex
☐ Resisting peer pressure to have sex
☐ Where to get counseling
☐ Where to get health services
Total: _________

In this Guide, we have categorized indicators based on what component of the program will be monitored and evaluated. Chapter 10 provides four Indicator Tables, each containing examples of indicators based on program aspects. You can use these tables to select and adapt indicators to match your program.

Design indicators are related to "key elements." Youth programs should be designed based on "key elements" of quality. The international experience of youth programs; lessons from the fields of maternal-child health, family planning and HIV/AIDS; and practitioner intuition and experience have produced a number of recommended key elements of youth program design. Some examples are:
• existence of clearly defined goals and objectives,
• involvement of local stakeholders in program planning, and
• assessment of the needs and preferences of the target young adult audience for reproductive health services.

Systems development and functioning indicators are related to programmatic objectives and activities.
Programmatic objectives state results in terms of the organizational structure, management or operations of a program, and the corresponding activities involve the development and functioning of your systems. Systems development and functioning indicators measure whether an organization's or program's systems are operating and how effectively they have prepared program personnel for implementation. Examples of systems development and functioning indicators include:
• number of peer educators trained to provide youth counseling,
• existence of a clear organizational structure, and
• number of partnerships, networks or coalitions established to support the ARH program.

Implementation indicators are related to both programmatic and population objectives and activities. Both programmatic and population objectives will be met by the implementation of program activities. Implementation indicators measure whether and how many planned activities have been conducted, and the quality of the implementation of those activities. Examples of implementation indicators include:
• number of youth who seek peer counseling services,
• number and type of stakeholder involvement in the ARH program, and
• number and type of communication products developed for the target audience.

Outcome indicators are related to population objectives. Population objectives state results in terms of the program participant and are measurable statements of the outcomes you hope to achieve in your target population. Outcome indicators measure the changes in outcome that your program's activities are trying to produce in your target population. Examples of outcome indicators include:
• average age at sexual initiation;
• percent of youth who say they would advocate healthy behaviors among their peers and friends;
• pregnancy rate among female youth during a specified time period; and
• incidence rate of STIs for young adults during a specified time period.

How Should Indicators Be Stated?
Precision and clarity about your indicators will produce meaningful results from your M&E effort.

Assess indicators in terms of their importance and ease of data collection. Indicators are considered of high importance if one or more of the following applies:
• The indicator is a priority, given the purpose and scope of the evaluation.
• The indicator tests a new approach.
• Staff members want to know about the indicator.
• Youth have identified the indicator as important.
• A donor requires information that the indicator will measure.

If you determine that the data needed to calculate your indicators are not available, then new information will need to be collected. It is important to assess how easy or difficult collecting these data would be. Factors to consider in determining ease of data collection are:
• sensitivity of topics (especially in terms of local norms and cultural context),
• staff resources and expertise,
• logistical requirements (e.g., transport, printing, vehicles),
• time,
• cost, and
• slang, vernacular and professional terms used to refer to the subject.

State indicators in clear and precise language. It is important to use clear and precise words and phrases to state your indicators. General indicators may be open to many interpretations and will hinder your ability to interpret M&E results. For example, a general indicator might be "Number of youth who seek peer counseling services." This indicator would be more precisely stated as "Number of youth ages 14–19 who reside in our district who seek counseling services from peer educators during a six-month period."

Avoid changing the wording of indicators after an M&E effort has begun. Changing the wording of your indicators during program implementation may hinder your ability to interpret M&E results.
For example, assume your indicator is "Number of youth ages 14–19 who reside in our district who seek counseling services from peer educators during a six-month period." If in the middle of your program you change this to count the number of youth ages 14–16 who seek counseling services, it may appear that the number of clients has gone down. Your results would then suggest that fewer youth are using your program, when in fact this may not be true.

If you have already begun your M&E effort and discover that your indicators are not specific enough, it is advisable to add indicators rather than to change existing ones. For example, if you found that youth who seek counseling services are mostly between the ages of 12 and 15, you could add the indicator "Number of youth ages 12–13 who reside in our district who seek counseling services from peer educators during a six-month period." You would then continue to measure the original indicator for youth ages 14–19, in addition to the new indicator for youth ages 12–13.

Indicators should be consistent over time. The indicators you use should be consistent for the duration of the monitoring and evaluation effort. If you drop, add or modify indicators during the program's implementation, you may not be able to assess why changes are occurring in your target population. For example, consider the following indicator on STIs:
• Percent of young adults who report specific symptoms of STIs

Suppose that to measure this indicator, you initially developed a checklist of six symptoms that peer counselors use to record what their clients report. After six months, you review clinic records at four clinics in your catchment area and find that youth whose symptoms are different from those on your checklist are being diagnosed with STIs.
You then add another four symptoms to the checklist used by peer educators. This means that peer educators may begin to record youth who mention any of these four additional symptoms, whereas before these youth would not have been included. Therefore, if the percentage of youth reporting symptoms of STIs subsequently increases, you will not know whether this change occurred because of a true increase in the prevalence of STIs or simply because you added four more possible criteria to the checklist.

Carefully determine the time dimension of outcome indicators. Most outcome indicators refer to medium- or long-term desired outcomes. For example, it may take several years to document changes in the pregnancy rate among female youth. What you define as medium- and long-term will vary according to the nature and complexity of the program's objectives and activities.1 For example, some programs may define medium-term outcomes as those achieved within one year, and long-term outcomes as those achieved in five years.

1 Many of the indicators included in the Indicator Tables at the end of Part I of this Guide are medium-term (e.g., number of times young adults have had STIs in the past year).

You should make sure to establish a reasonable length of time to achieve desired outcomes. Youth programs are often under pressure to demonstrate outcomes and therefore try to measure changes in an unrealistic amount of time. Your results might then falsely indicate that you have not met your objectives. Once you determine the amount of time you think it will take to achieve your objectives, you can state the time dimension of your outcome indicators. You will then need to
track your outcome indicators for a sufficient period of time to be able to observe changes.

Indicators should be valid and reliable. Indicators should be valid, which means that they accurately measure the concept or event they are supposed to measure. They should also be reliable, measuring the issue or event consistently every time. Assessing the validity and reliability of indicators helps ensure that you minimize measurement error.

Two steps can strengthen the validity of your indicators:

1. Develop indicators whose content adequately samples all possible meanings of a concept. For example, to measure the quality of interactions between youth and their parents, think about all the possible meanings of quality of interaction. You might determine that how often youth communicate with their parents, how long their conversations last, what topics they discuss and the young person's perception of the interaction all contribute to its quality. You therefore might develop a series of indicators that together measure the quality of interactions, such as:
• frequency of youth communication with a parent over the past week,
• average length of time of a parent-child communication,
• topics discussed by youth and their parents, and
• youth's perception of the quality of parent-child communication in the last week.

2. Develop indicators that explore the relationship between two measures of the same phenomenon. For example, in exploring a parent-child relationship you might consider two related indicators:
• youth's perception of whether their parents understand them, and
• youth's perception of what types of problems they are able to discuss with their parents.

By measuring both of these indicators, you would be able to assess the extent to which hypothesized relationships between related concepts can be verified.
For example, you could measure whether all youth who say their parents understand them also say they are able to talk to them about a variety of their problems.

You can increase the reliability of indicators by reducing the chance that random, temporary conditions in a person, situation or set of measurement procedures affect your results:

• Check the consistency of an individual's responses by asking him or her similar questions more than once during a survey or interview. For example, a young man who reports having quality interactions with his parents but also says that he cannot talk to his parents when he has problems shows inconsistency in his answers. In data analysis, you could check how many youth gave similarly inconsistent answers. If many youth did, you would have identified an unreliable measurement of these indicators. If only a few youth did, you would have identified an error in those individuals' understanding of the questions.

• Collect data at different times and check how consistent youth's answers are. For instance, you might ask the same series of questions about the quality of interactions with parents on surveys given every six months.

• Assess the data you collect by looking for inconsistencies due to error in observation, coding or data entry. For example, check whether youth interviewed by interviewers of different ages give significantly different answers. Also check whether answers to open-ended questions are coded correctly, for example, whether "happy" and "joyful" are coded as the same or different responses.

Rigorously testing validity and reliability may require outside assistance to perform statistical tests. At a minimum, it is important to consider these issues as you develop indicators.

Worksheet 4.1: Preparing a List of Possible Indicators

1. Write your objectives in the table.
2. For each objective, write the activities you have planned to achieve the objective. Refer to the Logic Model you developed to ensure that activities addressing all antecedent factors are included.
3. For each activity, note who will participate (for example, youth ages 8 to 12; boys; vulnerable populations) and where it will take place.
4. For each activity, refer to the Indicator Tables (Program Design, Program Systems Development and Functioning, Program Implementation and Program Intervention Outcome) to list all possible indicators, or develop your own indicators.

The worksheet is a blank grid with columns for Objectives, Activities, Target Population, Location and Possible Indicators, with space for several activities and indicators under each objective.

Worksheet 4.2: Assessing Possible Indicators

1. List indicators from Worksheet 4.1 in the first column.
2. Clarify the scope of the program. Is it a large-scale effort to reach all members of the target population, or a smaller, more limited intervention that will reach only those who participate in specific services or activities?
3. For each indicator, write the possible sources of the data needed, such as a survey or focus group.
4. For each source of data, circle whether data are available or will need to be collected.
5. Rate ease of data collection, based on availability, time and cost to collect.
6. Rate importance of the indicator (high or low).
7. Determine priority based on ease of data collection and importance of the indicator.

For each indicator, the worksheet provides columns to circle: Scope of Program (L = Large, S = Small); Are Data Available Now? (Y = Yes, N = No); Need to Collect New Data? (Y = Yes, N = No); Sources of Data; Ease of Data Collection (E = Easy, F = Feasible, D = Difficult); Importance of Indicator (H = High, L = Low); and Priority, where 1 is highest (1 = EH, 2 = FH, 3 = DH, 4 = EL, 5 = FL, 6 = DL).

Chapter 5: Evaluation Designs to Assess Program Impact

Why Should I Conduct an Impact Evaluation?

An impact evaluation will reveal the extent to which any observed changes in outcome indicators are due to your program activities. If your evaluation only measures changes in outcome indicators, your findings may not be fully credible, for several reasons:

• Other events or conditions may contribute to changes in outcome indicators: Your program is only one of a number of factors that might affect the outcomes you are trying to influence. For example, changes in economic conditions or other social changes might influence how young people think about an acceptable age to begin having sex or whether to use condoms. Other programs may be directed to the same target audience and conducted at the same time as your program. These types of external events could make the effects of your program appear larger or smaller than they really were. These factors are referred to as extraneous events.
• Changes may take place within the individuals being studied over time: Children's growth and development, or maturation, affects their attitudes and physical status. This threatens the internal validity of an evaluation that aims to link changes in outcomes such as knowledge, attitudes or skills to health education or health promotion programs.

• Program participants may have a predisposition to particular outcomes: It is possible that your program attracts young people who are predisposed to the positive outcomes encouraged by the program activities. For example, your program might attract mostly youth with high educational aspirations, who might be less inclined toward risky behaviors. If so, simply measuring changes in outcome indicators would overstate the effectiveness of your program, since many of your program participants would have realized positive outcomes even without being exposed to your program. This problem is referred to as selection bias.

Chapter at a Glance
• Offers guidance on considerations around the need for impact evaluation
• Reviews study designs you can use to carry out an impact evaluation
• Outlines the technical requirements and resources needed for each type of evaluation
• Presents options for initiating evaluations after a program is underway

Types of Study Designs for Impact Evaluations

The three major types of study designs for impact evaluations are:
• randomized experiments,
• quasi-experiments, and
• non-experimental designs.

(These different study designs are explained in detail later in this chapter.)
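The common logic behind these designs is to compare the change observed where the program operated with the change observed in a similar area not exposed to the program, so that extraneous events and maturation can be subtracted out. A rough numeric sketch of that logic, with invented figures (a real analysis would also test whether the difference is statistically significant; see Chapter 8):

```python
# Invented outcome values: percent of youth reporting a protective behavior,
# measured before and after the program in intervention and comparison areas.
intervention = {"before": 30.0, "after": 42.0}
comparison = {"before": 31.0, "after": 35.0}

change_intervention = intervention["after"] - intervention["before"]  # 12.0 points
change_comparison = comparison["after"] - comparison["before"]        # 4.0 points

# The comparison group's change estimates what would have happened anyway
# (extraneous events, maturation); the remainder is the apparent program effect.
program_effect = change_intervention - change_comparison
print(program_effect)  # prints 8.0
```

Without the comparison group, you would credit the program with the full 12-point change, 4 points of which occurred even where the program was absent.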
In general, several factors differentiate one design from another:

• Whether a "control" or "comparison" group is used: A control or comparison group is a group of persons, facilities or communities similar to those who receive an intervention but who have not been exposed to the intervention. The purpose of a control or comparison group is to provide an estimate of what would have happened had you not implemented your program. A control group is randomly assigned, while a comparison group has similar characteristics but is not randomly assigned.

• How participants are assigned to intervention and control groups: In some evaluation studies, participants are assigned to intervention and control groups through random assignment. In others, control groups are selected, rather than randomly assigned, to match the characteristics of the intervention group, with the exception of their exposure to the intervention being evaluated.

• The timing of data collection in relation to program implementation: An evaluation may collect data before, during and/or after program implementation.

• The complexity of statistical analysis required: Some study designs require more sophisticated statistical analysis.

There are several factors you should take into account when selecting an appropriate study design for measuring program impact:

• Ethical issues: Conduct a study only if it can be done ethically. If it compromises individual rights or denies the control group a chance at receiving a program or services they would otherwise have received, change or abandon the study design. Note, however, that resource constraints often make it necessary to limit the target audience for a program, making the use of control and comparison groups more ethical. "Pilot" programs and phased-in programs provide opportunities to use experimental designs.
You should budget plenty of time before attempting to measure changes in outcomes, and ensure that your objectives clearly state the outcomes that you expect to produce.

• The importance of being able to demonstrate impact: If community support and/or funding depends on a demonstration of impact, you may be better served by using stronger study designs.

• Validity: Validity refers to the ability of a study design to measure the "true" impact of a program or intervention. The strongest study designs are those that are least vulnerable to threats to validity.

