Chapter 8 - Program Evaluation
Part 4 - Staff Services/Special Programs
Title | Section |
---|---|
Introduction | 4-8.1 |
Purpose | 4-8.1A |
Scope | 4-8.1B |
Authorities | 4-8.1C |
Policy | 4-8.1D |
Principles | 4-8.1E |
Definitions | 4-8.1F |
Evaluation Training and Support | 4-8.1G |
Evaluation and Regulatory Compliance | 4-8.1H |
Meaningful Use of Evaluation Findings | 4-8.1I |
4-8.1 INTRODUCTION.
- Purpose. This chapter establishes the policy and procedures for planning, funding, and using information from program evaluations to assess the impact of Indian Health Service (IHS) health care services, as well as functions related to the delivery of IHS health care services. This policy applies to programs operated by IHS and by IHS grantees, if specified in a grant program’s funding announcement.
- Scope. A number of statutes, regulations, and memoranda direct IHS to use evaluative information (e.g., data and evidence) in the ongoing management of federal programs. This chapter clarifies the definition and use of program evaluation to meet these requirements, including but not limited to: evaluation planning; evaluation funding and support; training and support for effective evaluations; and the appropriate use of evaluation findings and evidence in management decision making and in establishing or revising policies and strategic goals and objectives.
- Authorities.
- The Snyder Act of 1921 (25 United States Code (USC) § 13)
- The Transfer Act (42 USC § 2001)
- The Indian Health Care Improvement Act (25 USC § 1601, et seq.)
- The Government Performance and Results Act of 1993 (Public Law 103-62), as amended by the GPRA Modernization Act of 2010 (Public Law 111-352)
- Policy. The IHS is committed to conducting and using well-designed, rigorous evaluations on a routine basis to enable programs to adhere to performance and accountability mandates, validate outcomes, and improve program effectiveness. It is IHS policy to use program evaluation to determine the accessibility and quality of the health care services it delivers. The IHS also uses program evaluation to assess the manner and extent to which federal programs achieve intended objectives and use evaluative information to make management decisions.
- Principles.
- Rigor. Evaluations should use the most rigorous methods that are appropriate and feasible within statutory, budget, and other constraints. Rigor is required for all types of evaluations, including impact and outcome evaluations, implementation and process evaluations, descriptive studies, and formative evaluations. Rigor requires ensuring that inferences about cause and effect are well founded (internal validity); requires clarity about the populations, settings, or circumstances to which results can be generalized (external validity); and requires the use of measures that accurately capture the intended information (measurement reliability and validity).
- Relevance. Evaluation priorities should take into account legislative requirements and Congressional interests, and should reflect the interests and needs of leadership, specific agencies, and programs; program office staff and leadership; and IHS partners such as states, territories, tribes, and grantees; the populations served; researchers; and other stakeholders. Evaluations should be designed to address IHS's diverse programs, customers, and stakeholders; and IHS should encourage diversity among those carrying out the evaluations.
- Transparency. Unless otherwise prohibited by law, information about evaluations and findings from evaluations should be broadly available and accessible, typically on the Internet. This includes identifying the evaluator, releasing study plans, and describing the evaluation methods. The IHS will release the results of all evaluations except those specifically focused on internal management, legal, or enforcement procedures, or those otherwise prohibited from disclosure. Evaluation reports will present all results, including favorable, unfavorable, and null findings. The IHS will release evaluation results in a timely manner (usually within two months of a report's completion) and will archive evaluation data for secondary use by interested researchers (e.g., public use files with appropriate data security protections).
- Independence and Impartiality. To ensure its credibility, the evaluation process will be independent from any process involving program policy making, management, or activity implementation. The evaluation function will be located separately from other management functions so that it is free from undue influence and so that unbiased and transparent reporting is assured.
- Ethics. IHS-sponsored evaluations will be conducted in an ethical manner and will safeguard the dignity, rights, safety, and privacy of participants. Evaluations will comply with both the spirit and the letter of relevant requirements, such as regulations governing research involving human subjects.
- Definitions. The following definitions are applicable to this Chapter.
- Accountability. The responsibility of program managers and staff to provide evidence to stakeholders, as well as authorizing and funding agencies, that a program is effective and in conformance with its expectations and requirements.
- Activities. The actual events or actions that take place as a part of the program.
- Data Collection Method. The way facts about a program and its outcomes are amassed. Data collection methods often used in program evaluations include literature search, file review, natural observations, surveys, expert opinion, and case studies.
- Evaluation (program evaluation). The systematic collection of information about the activities, characteristics, and outcomes of programs (which may include interventions, policies, and specific projects) to make judgments about that program, improve program effectiveness, and/or inform decisions about future program development.
- Evaluation Design. The logic model or conceptual framework used to depict a program’s theory of change and how program resources are expected to lead to the program’s intended outcomes. The evaluation design drives evaluation planning by focusing on evaluation questions.
- Evaluation Plan. A written document describing the overall approach that will be used to guide an evaluation, including why the evaluation is being conducted, how the findings will likely be used, and the design and data collection sources and methods. The plan specifies what will be done, how it will be done, who will do it, and when it will be done.
- Experimental (or randomized) Designs. Designs that aim to establish causal attribution by ensuring the initial equivalence of a control group and a treatment group through random assignment. Some examples of experimental or randomized designs are randomized block designs, Latin square designs, fractional designs, and the Solomon four-group design.
- Funded Recipient. Grantees or others receiving IHS program funding to carry out specific prevention or intervention activities.
- Impact. The effect that interventions or programs have on people, organizations, or systems to influence health. Impact is often used to refer to effects of a program that occur in the medium or long term with an emphasis on ones that can be directly attributed to program efforts.
- Large Program. Any program with a budget that exceeds $1 million within a given year or cohort.
- Logic Model. A visual representation showing the sequence of related events connecting the activities of a program with the program’s desired outcomes and results.
- Outputs. The direct products of program activities; immediate measures of what the program did.
- Outcomes. The results of program operations or activities; the effects triggered by the program. (For example, increased knowledge, changed attitudes or beliefs, reduced tobacco use, reduced morbidity and mortality.)
- Program. Any set of related activities, broadly defined, undertaken to achieve an intended outcome. It encompasses environmental, system, and media initiatives; preparedness efforts; and research, capacity, and infrastructure efforts.
- Stakeholders. People or organizations that are invested in the program or that are interested in the results of the evaluation or what will be done with results of the evaluation.
- Evaluation Training and Support.
- This chapter establishes an agency-wide evaluation work group. The work group will be led by the Office of Public Health Support (OPHS), Division of Planning, Evaluation, and Research (DPER), with meeting support provided by DPER staff.
- Membership:
- Office of Public Health Support
- Office of Management Services
- Office of Clinical and Preventive Services
- Office of Tribal Self-Governance
- Office of Direct Service and Contracting Tribes
- Office of Urban Indian Health Programs
- Other Offices as determined by the IHS Director
- Responsibilities:
- Advise, support, and monitor program evaluation efforts at IHS.
- Review program evaluation plans to ensure they are appropriate for the program and follow established program evaluation ethics and best practices.
- Integrate program evaluation efforts into routine IHS practice:
- Develop standard processes to use evaluation results to improve program development, implementation and monitoring.
- Incorporate evaluation criteria into the grant award process.
- The DPER will work with the Division of Regulatory Affairs, and others, as appropriate, in the creation of an expedited clearance process for evaluation-related data collections under the Paperwork Reduction Act.
- The DPER will create and maintain IHS evaluation internet/intranet sites that provide evaluation resources for IHS personnel, partners, and stakeholders, including but not limited to:
- Examples of position descriptions for evaluation positions;
- Evaluation training materials and opportunities;
- Links to key federal evaluation resources;
- Repository of internal evaluation activities and materials including webinars, podcasts, and planning/workgroup meeting notes; and
- Previous and existing evaluation projects (from IHS and related/similar public health programs).
- Evaluation and Regulatory Compliance.
- The IHS Offices and large programs will ensure that sufficient evaluation capacity and resources are made available to: assess program effectiveness; identify opportunities for program improvement; and inform future management decision-making. Where appropriate, IHS Offices and large programs will consult with Tribes to identify and prioritize programs for evaluation, consistent with the Tribal Consultation Policy.
- Beginning in FY 2019, a program falling within the threshold of this policy, whether or not it has an active evaluation component, shall initiate evaluation activities in compliance with this policy in preparation for its next funding cycle/cohort.
- Programs shall use program resources to cover the costs of evaluation planning, implementation, and analysis. For planning purposes, the industry standard at the low end is 5-10% of program funding (a hypothetical worked example follows the list below). The cost, or range of costs, for any individual program evaluation will be determined by a variety of factors, including, but not limited to, the following:
- Resource availability: Staff may be available or may need to be brought in via a contract or some other vehicle.
- Data availability and collection: While program data are generally available, data specific to the program evaluation may or may not be. New data (types, sources, collection methods, etc.) increase time and costs; building program evaluation data needs and collection into the original program plan can minimize these costs.
- Nature of outcomes: Measuring longer-term or more complex program impacts will require longer or more complex evaluation designs.
- Evaluation rigor/quality: As evaluation plans increase rigor or move towards seeking causal relationships (i.e., research), the evaluation can become more complex.
- Purpose of evaluation: Evaluations can be designed for program improvement or more complex internal/external accountability or cost-benefit models. These differences can increase complexity and costs.
- Working with partners: May involve multiple data systems or different chains of command, and can also affect the language used for dissemination.
- Dissemination plans: Different dissemination products for different audiences at different or variable frequencies can increase costs.
- Evaluator's familiarity with program: Evaluators new to the program may require time to learn the program function, structure and goals. This may also initially require more program input into the evaluation design.
- Paperwork Reduction Act: Reviews require time and resources.
- Cybersecurity: Can require addressing data security, access, or systems-match issues, all of which take time and may limit who can do the required work.
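As a hypothetical worked example (the budget figure below is an assumption for illustration only, not drawn from any IHS program), the 5-10 percent guideline gives an initial planning range:

```latex
% Hypothetical planning figures only -- not taken from any IHS program budget.
% Assume an annual program budget of $2,000,000 and the 5-10 percent guideline:
\[
0.05 \times \$2{,}000{,}000 = \$100{,}000
\qquad
0.10 \times \$2{,}000{,}000 = \$200{,}000
\]
```

The factors listed above would then move an individual program's working estimate up or down within, or beyond, that initial range.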
- Evaluation planning activities will be started as soon as reasonable during program planning and development. Best practice suggests this planning should occur as soon as the program’s mission and goals are established.
- Early planning defines the scope, data requirements, and use of the evaluation at the national level. The program planning office can then identify and assess possible cost factors for the evaluation and identify the resources needed to complete it.
- This planning process also should include identifying and clarifying the program outcomes and evaluation expectations to include in the program’s funding announcement.
- Prospective grantees should be aware of and agree to these expectations before applying for and receiving the award.
- Evaluation expectations could include: evaluation design, program logic model, baseline data needs, data collection plan and instruments, evaluation costs, technical assistance available, identity of the evaluation provider, and data use/dissemination plan.
- Programs will work with DPER on evaluation planning and implementation and consult as needed for regulatory and policy compliance, contract oversight, and guidance for grantees pre-/post-award.
- Programs may choose any mechanism to complete the evaluation, including:
- Partnering with DPER to use the Evaluation Services Indefinite Delivery/Indefinite Quantity contract
- An independent/separate contract
- Program resources/staff
- Other resources/staff
- The IHS Offices and large programs will use evaluative information to help inform the annual budget justification process, including addressing the following requirements:
- The OMB Circular A-11 budget submission process, including a thorough discussion of the evidence and examples of innovation for a given program.
- The OMB Circular A-123 language regarding improvement of the accountability and effectiveness of Federal programs and operations.
- Performance measurement requirements: the Government Performance and Results Act and the GPRA Modernization Act.
- Planning, Performance and Program Integrity management improvement and similar initiatives, specifically relating to:
- Ensuring that evaluation plans reflect the organization’s strategic goals and objectives;
- Tracking data trends and illustrating evaluation findings and other evidence for use in decision-making and program improvement; and
- Identifying and addressing areas of risk that limit the impact of programs.
- Examining program efforts to achieve health impact, apart from and in addition to the evaluations of funded recipients’ performance.
- Ensuring evaluation findings are timely and relevant, so as to maximize their use in the organization’s planning, performance reporting, budgeting, and priority-setting processes.
- Ensuring that evaluation findings are easily accessible to users, major constituencies, and stakeholders.
- Ensuring that new public health programs or major health initiatives present an evaluation plan/approach that includes evaluations across the lifecycle of the effort so that findings can be deployed for program improvement even in early stages.
- Involving DPER evaluation staff early in the development of new Funding Announcements and large contracts to ensure that evaluation findings inform program improvement and accountability.
- Coordinating and communicating evaluation activities across the IHS organizational units with overlapping or complementary missions.
- Ensuring a process for tracking how evaluation findings are used to improve program planning, administration, implementation, and oversight, and outlining how evaluation findings will affect program decisions and modifications.
- Meaningful Use of Evaluation Findings.
- Headquarters Responsibilities:
- Identify how the funded effort contributes to agency and Departmental strategic plans and government-wide management priorities.
- Indicate which populations are disproportionately affected by the health issue and whether they are being addressed or targeted by the funded program, with special attention to vulnerable populations and people with disabilities.
- Match evaluation designs and methods to the size and scope of the funded initiative, purpose of the evaluation, and capability of the funded recipients.
- Use a logic model or other method of presentation to present a uniform set of outputs and short-term, intermediate, and long-term outcomes within the funding announcement.
- Specify outcome and supporting measures within the funding announcement.
- Provide standards, definitions, and format(s) for reporting results.
- Funded Recipients’ Responsibilities:
- Be aware of, and agree to, program outcomes and evaluation expectations as described within a program’s funding announcement (if applicable).
- Provide clarity on resources, activities, outputs, and short, intermediate, and long-term outcomes of their health promotion effort (using a logic model or other method of presentation).
- Develop plans for dissemination and use of evaluation findings to maximize program improvement for health impact (including how the dissemination effort will be evaluated).
- Provide the number and capabilities of staff assigned to evaluation and performance measurement (although the number and type may vary with the level of grantee capacity).
- Develop plans for the engagement of stakeholders in helping shape evaluation and measurement design.
- Provide clarity on evaluation and measurement design and data collection sources and methods.