18 FAM 301.4
DEPARTMENT OF STATE PROGRAM AND PROJECT DESIGN, MONITORING, AND EVALUATION
(CT:PPP-34; 12-26-2024)
(Office of Origin: BP and F)
18 FAM 301.4-1 PURPOSE
(CT:PPP-34; 12-26-2024)
a. The Department of State is committed to using design, monitoring, evaluation, and data analysis best practices to achieve the most effective, evidence-informed U.S. foreign policy outcomes and greater accountability to our primary stakeholders, the American people. This policy establishes a linkage from objectives documented in strategic plans to the programs and projects that support them. This policy also provides principles and requirements for program and project design, monitoring, and evaluation.
b. This policy applies to bureaus and independent offices programming diplomatic engagement and foreign assistance funds. Bureaus and independent offices must maintain documentation on the completion of these requirements, as they are subject to audit by the Office of Inspector General (OIG) or Government Accountability Office (GAO). This policy covers data generated for monitoring and evaluation purposes; bureaus and independent offices should consult the Department's Data Policy in 20 FAM 101-103 for information on required data management and governance. The program design, monitoring, and evaluation policy detailed here provides the fundamental requirements and tools for the entire Department. Some bureaus and offices may have additional guidance or requirements for program design, monitoring, or evaluation, such as the particular statutory authorities listed under section A below.
18 FAM 301.4-1(A) Authorities
(CT:PPP-34; 12-26-2024)
The authorities relevant to the Design, Monitoring, and Evaluation Policy include:
· The United States Information and Educational Exchange Act of 1948 (“Smith-Mundt Act”)
· The Foreign Assistance Act of 1961
· The Mutual Educational and Cultural Exchange Act of 1961 (“Fulbright-Hays Act”)
· Information Technology Management Reform Act of 1996 (ITMRA) (Clinger-Cohen Act)
· Government Performance and Results Act Modernization Act of 2010
· The Federal Information Security Modernization Act of 2014
· Foreign Aid Transparency and Accountability Act of 2016
· Program Management Improvement Accountability Act of 2016
· Foundations for Evidence-Based Policymaking Act of 2018
· Section 5603 of the Department of State Authorization Act of 2021 (Div. E, P.L. 117-81)
· OMB Memorandum M-18-04, Monitoring and Evaluation Guidelines for Federal Departments and Agencies that Administer United States Foreign Assistance
18 FAM 301.4-1(B) Definitions
(CT:PPP-34; 12-26-2024)
Assessment: A general term for a task that involves information collection and review. An assessment may refer to an examination of country or sector context to inform program or project design.
Baseline: Data that are collected before or at the start of a program, project, or process and provide a basis for planning and/or assessing subsequent progress and impact.
Collaborative evaluation: Also “joint evaluation.” An evaluation funded or commissioned jointly by more than one bureau, office, agency, or international partner for which a written arrangement defining the roles and responsibilities for the collaboration is in place.
Data quality assessment (DQA): An examination of the quality of performance indicator data in light of the five standards of data quality (validity, integrity, precision, reliability, and timeliness) to ensure that decision-makers are fully aware of data strengths and weaknesses, and the extent to which data can be relied upon when making management decisions and reporting progress. (See “Data Quality Standards” below.)
Data quality standards: Standards for determining the quality of performance indicator data for evidence-based decision-making and reporting credibility. The five standards of data quality are 1) validity, 2) integrity, 3) precision, 4) reliability, and 5) timeliness.
Evaluation: The systematic collection and analysis of information about the characteristics and outcomes of programs, projects, or processes as a basis for making judgments, improving effectiveness and informing decisions about current and future programs, projects, and processes. Evaluation is distinct from assessment.
Evidence: The available body of facts, or information indicating whether a belief or proposition is true or valid. Evidence can be quantitative or qualitative and may come from a variety of sources, including research, performance measurement, policy analysis, and program evaluation.
External evaluation: An evaluation conducted by an individual or organization that is not managing or implementing the program being evaluated. A Department of State external evaluation refers to an evaluation commissioned by the State Department to a third-party organization that is not implementing the program being evaluated. An implementing partner external evaluation refers to an evaluation commissioned by an organization implementing a Department of State program to an outside individual or firm.
Impact: A result or effect that is caused by or attributable to a program, project, process, or policy. Impact may also refer to higher-level effects that occur in the medium- or long-term and can be intended or unintended and positive or negative.
Impact evaluation: Assesses the causal impact of a program, policy, or organization, or aspect thereof, on outcomes relative to those of a counterfactual. This type of evaluation estimates and compares outcomes with and without the program, policy, or organization, or aspect thereof. Impact evaluations include both experimental (i.e., randomized controlled trials) and quasi-experimental designs.
Internal evaluation: An evaluation conducted by an organization’s own staff. A Department of State internal evaluation refers to an evaluation led by a Department of State employee. An implementing partner internal evaluation refers to an evaluation led by a staff member of a Department of State implementing partner.
Logic model: A methodology used for program or project design that focuses on the causal linkages between project inputs, activities, outputs, short-term outcomes, and long-term outcomes. It is a visual representation that shows the sequence of related events connecting a planned program’s or project’s objectives with its desired outcomes.
Major project: A project that aims to achieve a goal, objective, or sub-objective of a Joint Regional Strategy or Functional Bureau Strategy, and is defined by a major funding stream or supported by a major portion of a bureau or independent office. A major project is time-bound and may or may not be nested under a program.
Milestones: Measures that divide progress into a series of steps or define a single desired end state to mark a key achievement. Milestones require expected timelines and criteria for assessing whether they have been achieved.
Monitoring: See Performance Monitoring.
Performance evaluation: An evaluation of a program, project, or process that does not include either an experimental or quasi-experimental design. All evaluations that are not impact evaluations (including formative, process or implementation, and outcome evaluations) are considered types of performance evaluations for the purpose of this policy.
Performance indicator: A particular characteristic or dimension used to measure intended changes. Performance indicators are used to observe progress and to measure actual results compared to expected results.
Performance indicator reference sheet (PIRS): A tool used to define performance indicators; it is key to ensuring indicator data quality and consistency.
Performance management: The systematic process of collecting, analyzing, and using performance monitoring data and evaluations to track progress, influence decision-making, and improve results.
Performance monitoring: The ongoing and systematic collection of performance indicator data and other quantitative or qualitative information to reveal whether strategy, program, or project implementation is on track and whether expected results are being achieved.
Pilot program: A program that includes any new, untested approach that is implemented to learn about its potential feasibility and efficacy/effectiveness because it is anticipated to be replicated or expanded in scale or scope.
Process: A systematic series of steps taken to achieve a particular end.
Program: A major line of effort that includes a set of activities, processes, initiatives, or projects aimed at achieving a goal or objective of a Joint Regional Strategy or Functional Bureau Strategy. A program may be ongoing or time-bound, implemented by a single party or several parties, and cut across sectors, themes, and/or geographic areas.
Program design: The process of analyzing the context, identifying the root causes of issues to be addressed, and constructing logic and a theory of how and why a proposed program, project, or process will work.
Program/project/process goal: The highest-order outcome or end state to which a program, project, or process is intended to contribute.
Program/project/process objective: A statement of the condition or state one expects to achieve toward accomplishing a program, process, or project goal.
Project: A time-bound set of activities or actions intended to achieve a defined product, service, or result with specified resources. Multiple projects often make up the portfolio of a program and support achieving a program goal or objective. (See also Major Project.)
Project charter: A document that defines the project justification, scope, stakeholders, and key deliverables.
Situational analysis: A review of the current state or conditions that could affect the design, implementation, or outcome of a program, project, or process.
Theory of change: A brief statement that ties a logic model together by summarizing why, based on available evidence and consideration of other possible paths, the changes described in a logic model are expected to occur. It explains why we believe our program activities will result in particular outcomes.
18 FAM 301.4-1(C) Bureau Performance Management Coordinator
(CT:PPP-34; 12-26-2024)
Each bureau or independent office must identify a point of contact to serve as the Bureau Performance Management Coordinator (BPMC) to coordinate the bureau or independent office’s design, monitoring, and performance reporting activities and ensure that monitoring data and other forms of evidence are integrated into the planning and decision-making processes of their bureau or independent office. The BPMC will interact with BP and F on implementation of this policy. The BPMC may serve other roles in their bureau, including Bureau Planner or Bureau Evaluation Coordinator. The BPMC is not required to be a direct-hire employee. Bureaus and independent offices are encouraged to designate sufficient staff with skills in design and performance management to execute bureau design and monitoring functions.
18 FAM 301.4-1(D) Identifying and Defining Programs and Major Projects Within a Bureau or Independent Office
(CT:PPP-34; 12-26-2024)
a. To implement 18 FAM 300, bureaus and independent offices must first identify their programs and/or major projects by reviewing major lines of effort they undertake to achieve the broader outcomes specified in the goals or objectives of their strategic plan. Bureaus and independent offices may identify programs and major projects based on factors such as the characteristics of the accounts managed, organizational structure, the countries in which the bureau or independent office executes activities, portfolio definitions within a particular bureau or independent office, or other factors. A bureau’s or independent office’s programs and major projects may correspond to the objectives in their strategic plan.
b. When a new Functional Bureau Strategy (FBS) or Joint Regional Strategy (JRS) is developed or major changes are made to an existing FBS or JRS, bureaus and independent offices must determine whether to retain existing programs and major projects from their previous strategy, realign them into new programs or major projects, or establish new programs or major projects.
c. When identifying, updating, or adding to their list of programs or major projects, bureaus and independent offices must consult with F/DMEL and/or BP/PRE.
d. Bureaus and independent offices must submit their program and major project list to F/DMEL and BP/PRE at MFR@state.gov following approval of a new or updated bureau strategy or other occasions when a new program or major project is introduced or updated. Bureaus and independent offices should follow the program and major project submission process and timelines as established in the Guidance for the Design, Monitoring, and Evaluation Policy at the Department of State.
18 FAM 301.4-2 PROGRAM/PROJECT DESIGN
(CT:PPP-34; 12-26-2024)
a. Program and project design involves determining how the program or project will align to higher-level strategies, analyzing the context in which the program will operate, identifying the root causes of the issues or problems to be addressed, establishing program or project goals and objectives, and creating a plan to achieve those objectives. Program or project designs should draw on the best available evidence on which approaches effectively address the identified problem.
b. Bureaus and independent offices must develop, maintain, and regularly update a design for each of their programs, major projects, and foreign assistance funded projects. Program designs must include the required elements described in 18 FAM 301.4-2 paragraph c. Major project designs must include the required elements described in 18 FAM 301.4-2 paragraph d. For non-major foreign assistance funded projects, bureaus and independent offices have the flexibility to design the project according to good practice in a manner appropriate to its scale and scope.
c. Program design must include:
(1) Alignment to higher-level strategies: When initiating a program, assess how it can best align with and advance existing strategies or other high-level directives. In addition to relevant national- and agency-level guidance or strategies, these include the relevant Joint Regional Strategy (JRS), Functional Bureau Strategy (FBS), and Integrated Country Strategy, as well as the Sustainable Development Goals;
(2) Situational analysis: Conduct a review of the current state or conditions surrounding the program idea that could affect its design, implementation, and outcome. External factors to assess could include community participation and experiences of marginalized communities; political, legal, and other power structures; and conditions pertaining to security, culture, gender equality, economy, environment, infrastructure, institutional capacity, and other relevant considerations. Consultation with members of impacted groups, including marginalized populations, should be integral to all situational analyses and implementation. Analysis tools may include but are not limited to: Community Participation Analysis Tool [Department Equity Council]; gender analysis of conflict [CSO]; gender, equity, and inclusion framework [INL]; and intersectional gender analysis [S/GWI], among others;
(3) Goals and objectives: Programs must have clearly stated program goals and objectives that reflect an understanding of the problem, need, or issue to be addressed; and
(4) Logic model and theory of change: Programs must include both a logic model (or equivalent) and a theory of change as defined in 18 FAM 301.4-1(B) that articulate how and why the program is expected to work.
d. Major project design must include:
(1) Alignment to higher-level strategies (as described in 18 FAM 301.4-2 paragraph c);
(2) Situational analysis (as described in 18 FAM 301.4-2 paragraph c);
(3) Goals and objectives (as described in 18 FAM 301.4-2 paragraph c);
(4) Project charter: A document that defines the project justification, scope, stakeholders, and key deliverables. A project charter may also include a logic model and theory of change, if appropriate; and
(5) Project schedule: A milestone schedule or Gantt chart with an appropriate amount of detail for the complexity of the major project.
e. The Guidance for the Design, Monitoring, and Evaluation Policy at the Department of State, the Department's Program Design and Performance Management Toolkit, and the TeamWork@State website provide guidelines, examples, and templates for these steps.
18 FAM 301.4-3 MONITORING
(CT:PPP-34; 12-26-2024)
a. Bureaus and independent offices must monitor the performance of their programs and projects.
b. Monitoring involves regular, ongoing data collection against performance indicators or milestones to gauge the direct and near-term effects of activities and whether desired results are occurring as expected during implementation. Monitoring includes assessing the quantity, quality, and timeliness of outputs and outcomes. Monitoring should be:
(1) Objective and supported with unambiguous and unidimensional indicators;
(2) Based on data and information that meet the Department's data quality standards (see 18 FAM 301.4-3(A));
(3) Logically linked to program or project efforts, measuring changes plausibly caused by the program or project; and
(4) Useful to inform course corrections during implementation.
c. Bureaus and independent offices should ensure that monitoring data are collected, managed, analyzed, and reported to support management needs.
d. The Guidance for the Design, Monitoring, and Evaluation Policy at the Department of State and the Department's Program Design and Performance Management Toolkit provide information on approaches, guidelines, examples, and templates for monitoring.
18 FAM 301.4-3(A) Data Quality Standards for Monitoring
(CT:PPP-34; 12-26-2024)
a. High-quality data are the cornerstone for evidence-based decision-making. To ensure that performance indicator data are credible and sufficient for decision-making, bureaus and independent offices must review data reported by their implementing partners (contractors, grantees, etc.) or collected by staff for these five standards of data quality:
(1) Validity: the data accurately represent the intended measure;
(2) Integrity: safeguards are in place that minimize risk of data manipulation or error to ensure data are accurate and consistent throughout their lifecycle;
(3) Precision: data have a sufficient level of detail for decision-making;
(4) Reliability: data collection processes and data analyses are consistent over time; and
(5) Timeliness: data are as current as possible and available at a useful frequency for decision-making.
b. A performance indicator reference sheet (PIRS) is one tool for helping to ensure the provision of high-quality data. A PIRS includes the indicator definition, source of the indicator data, the frequency with which it will be collected, and any necessary disaggregation of data, among other elements. The PIRS should be accessible to all parties collecting, analyzing, or using indicator data (see the illustrative sketch after the list below).
(1) A PIRS is required for:
(a) Standard Foreign Assistance Indicators; and
(b) Publicly reported performance indicators (e.g., bureau-specific public reports, APP/APR, and APGs).
(2) A PIRS is recommended for all performance indicators that are included in program and project monitoring plans.
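The sketch below is a minimal, hypothetical illustration (in Python) of how the PIRS elements listed in paragraph b above, together with flags for the cases in which a PIRS is required, might be captured as a structured record. The field names and the example indicator are illustrative assumptions only; they are not a Department system, schema, or official PIRS format. Consult the Program Design and Performance Management Toolkit for the authoritative PIRS template.

# Minimal illustrative sketch of a PIRS record. Field names and the example
# indicator are hypothetical; this is not a Department system or official template.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PerformanceIndicatorReferenceSheet:
    indicator_name: str               # short title of the performance indicator
    definition: str                   # precise definition of what is measured
    data_source: str                  # where the indicator data come from
    collection_frequency: str         # e.g., "quarterly" or "annually"
    disaggregation: List[str] = field(default_factory=list)  # e.g., by sex or location
    is_standard_fa_indicator: bool = False  # PIRS required when True
    is_publicly_reported: bool = False      # PIRS required when True (e.g., APP/APR, APGs)

# Example: a hypothetical publicly reported indicator, for which a PIRS is required.
example_pirs = PerformanceIndicatorReferenceSheet(
    indicator_name="Number of participants trained",
    definition="Count of unique individuals completing the full training course",
    data_source="Implementing partner attendance records",
    collection_frequency="quarterly",
    disaggregation=["sex", "location"],
    is_publicly_reported=True,
)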
c. Data quality assessments (DQAs) are another tool for helping to ensure that reported data meet the Department's data quality standards. DQAs are used for assessing data quality, documenting any limitations in quality, and establishing a plan for addressing those limitations. DQAs are required for:
(1) Standard foreign assistance indicators; and
(2) Publicly reported indicators (e.g., bureau-specific public reports, APP/APR, and APGs).
d. Consult the Program Design and Performance Management Toolkit for further guidance and templates.
18 FAM 301.4-3(B) Monitoring Plans
(CT:PPP-34; 12-26-2024)
a. Monitoring plans are tools for recording and tracking data on program and project performance. Bureaus and independent offices must develop and maintain a monitoring plan for each program, major project, and foreign assistance funded project and incorporate its use into program and project management. Monitoring plans should be scoped appropriately with considerations for purpose, scale, timeline, feasibility, and available resources. Monitoring plans should be reviewed at least annually to ensure that the selected indicators continue to be relevant and useful for management needs.
b. Monitoring plans for programs and major projects must include the required elements described in 18 FAM 301.4-3(B) paragraph c. Bureaus and independent offices have flexibility to develop monitoring plans for non-major foreign assistance projects according to good practice in a manner appropriate to the scale and scope of the project.
c. Program and major project monitoring plans must include:
(1) Performance indicators to monitor progress and to measure actual results compared to expected results;
(2) Indicator information documented in a PIRS for Standard Foreign Assistance Indicators and publicly reported indicators. Please see 18 FAM 301.4-3(A) paragraph b. The PIRS should be completed before data collection begins;
(3) Performance indicator baseline values. Baseline data should usually be collected before or at the start of a program or project to provide a basis for planning and monitoring subsequent progress. Baseline data are used to inform the identification of outyear targets;
(4) Targets for each performance indicator to indicate the expected change over the course of each period of performance; and
(5) DQA documentation for standard foreign assistance indicators and publicly reported indicators. Please see 18 FAM 301.4-3(A) paragraph c.
d. Bureaus and independent offices must store performance indicator data in a monitoring tracking table or information system that allows structured storage of indicator data. Performance indicator data in the tracking table must include the following (an illustrative sketch appears after the note below):
(1) Baseline values;
(2) The baseline time frame;
(3) Actual values;
(4) Targets; and
(5) Data collection frequency.
NOTE: Tracking tables or information systems to store performance indicator data may also include narrative fields for describing a rationale for each target and deviations from a target.
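The sketch below is a minimal, hypothetical illustration (in Python) of how a single tracking-table record containing the required fields above, plus the optional narrative fields described in the note, might be structured. Field names and example values are illustrative assumptions only and do not reflect any Department information system; bureaus and independent offices should use the tracking-table templates in the Program Design and Performance Management Toolkit or their own systems of record.

# Minimal illustrative sketch of a tracking-table record containing the required
# fields in paragraph d (baseline value, baseline time frame, actual values, targets,
# and data collection frequency) plus the optional narrative fields from the NOTE.
# Field names are hypothetical and do not reflect any Department system schema.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class IndicatorTrackingRecord:
    indicator_name: str
    baseline_value: float
    baseline_time_frame: str                  # e.g., "FY 2024 Q1"
    collection_frequency: str                 # e.g., "quarterly"
    targets: Dict[str, float] = field(default_factory=dict)  # reporting period -> target
    actuals: Dict[str, float] = field(default_factory=dict)  # reporting period -> actual
    target_rationale: Optional[str] = None        # optional narrative field
    deviation_narrative: Optional[str] = None     # optional narrative field

# Example record for a hypothetical indicator.
record = IndicatorTrackingRecord(
    indicator_name="Number of participants trained",
    baseline_value=0,
    baseline_time_frame="FY 2024 Q1",
    collection_frequency="quarterly",
    targets={"FY 2024 Q2": 50, "FY 2024 Q3": 75},
)
record.actuals["FY 2024 Q2"] = 42
record.deviation_narrative = "Q2 actual below target due to a delayed course start."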
e. The Guidance for the Design, Monitoring, and Evaluation Policy at the Department of State and the Department's Program Design and Performance Management Toolkit provide additional guidelines, examples, and templates for these steps.
18 FAM 301.4-4 EVALUATION
(CT:PPP-34; 12-26-2024)
a. Bureaus and independent offices should conduct evaluations to examine the performance and outcomes of their programs, projects, and processes at a rate commensurate with the scale of their work, scope of their portfolio, and the size of their budget.
b. Bureaus and independent offices are required to complete at least one evaluation per fiscal year whether they receive foreign assistance funding, diplomatic engagement funding, or a combination of both.
c. Bureaus and independent offices that receive and directly manage foreign assistance program funds must evaluate each foreign assistance funded program (as identified according to 18 FAM 301.4-1(D)) once in its lifetime or every five years, whichever is shorter. Evaluating a subset or component of a program is acceptable for meeting this requirement provided the evaluation addresses critical questions related to the program.
d. Bureaus and independent offices that receive and directly manage foreign assistance program funds should complete an impact evaluation of foreign assistance funded pilot programs before replicating or expanding the pilot in scale or scope. If an impact evaluation is deemed to be impracticable or inappropriate for a particular pilot program, a performance evaluation must be conducted with a justification of the methodological choice documented in an annex to the evaluation report.
e. Bureaus and independent offices may conduct internal evaluations, commission external evaluations, or conduct collaborative evaluations. Evaluations conducted with a bureau's or independent office's own staff (i.e., internal evaluations) only count toward the evaluation requirements in 18 FAM 301.4-4 if:
(1) The bureau or office has trained evaluation staff with the requisite knowledge and experience commensurate with the complexity of the evaluation proposed, as determined by the bureau or office; and
(2) The evaluation team leader is not the manager directly overseeing the program or project to be evaluated nor directly supervised by the manager(s) directly overseeing the program to be evaluated.
f. Bureaus and independent offices that receive and directly manage foreign assistance program funds may count evaluations commissioned by implementing partners to an outside organization toward the evaluation requirements in 18 FAM 301.4-4. Internal evaluations conducted by implementing partners with their own staff will not count toward the evaluation requirements in 18 FAM 301.4-4.
g. Bureaus and independent offices may request an exception to the requirements for evaluations as described in 18 FAM 301.4-4. Reasons may include the size of the resources managed by the bureau or independent office, a lack of directly managed programs, small programs with limited budgets, or a plan for multi-year evaluations. Following discussions with BP/PRE and/or F/DMEL regarding grounds for an exception, bureaus and independent offices may request a time-limited exception from the evaluation requirements described in 18 FAM 301.4-4 in an Action Memo to the Department of State Evaluation Officer(s). This memo should be sent to evaluation@state.gov and state the grounds for the exception. The Department of State Evaluation Officer(s) may approve this memo based on a review of the specific circumstances outlined in the memo and the bureau's or independent office's overall adherence to 18 FAM 301.4-4 paragraph a.
h. GAO and OIG reports are not considered bureau evaluations for the purposes of this policy, but bureaus and independent offices are encouraged to use such reports to inform the planning of evaluations, as applicable.
i. For additional detail and guidance on evaluation, please consult the Guidance for the Design, Monitoring, and Evaluation Policy at the Department of State.
18 FAM 301.4-4(A) Bureau Evaluation Coordinators
(CT:PPP-34; 12-26-2024)
Each bureau or independent office must identify a point of contact to serve as the Bureau Evaluation Coordinator (BEC) to ensure that the evaluation function is integrated into the planning and decision-making processes of their bureau or independent office. The BEC will serve as the main point of contact in the bureau on evaluation and will coordinate evaluation activities and interact with BP and F on the bureau's implementation of this policy. The BEC may serve in other roles in their bureau, including the role of Bureau Performance Management Coordinator. The BEC is not required to be a direct-hire employee. Bureaus are encouraged to designate sufficient staff with skills in evaluation methods and management to execute bureau evaluation functions.
18 FAM 301.4-4(B) Bureau Evaluation Plans and Department Annual Evaluation Plan
(CT:PPP-34; 12-26-2024)
a. All bureaus and independent offices are required to develop and submit an annual Bureau Evaluation Plan (BEP) to the BP and/or F systems of record (i.e., the Evaluation Management System (EMS) and FACTSInfo Evaluation Registry (EVR)), providing details on evaluations completed in the past fiscal year, ongoing evaluations, and evaluations planned for the current and next fiscal year. Bureaus and independent offices should consult the annual Evaluation Data Call guidance from BP and F for specific timelines on when the BEP is due, what system it should be submitted in, and what specific information the BEP should include.
b. BEPs support the development of the Department of State Annual Evaluation Plan (AEP). The Department of State AEP describes the evaluation activities that the Department intends to conduct in the fiscal year following the year the plan is submitted. The Department of State Evaluation Officer(s) lead, coordinate, and develop the AEP.
18 FAM 301.4-4(C) Considerations for Evaluation
(CT:PPP-34; 12-26-2024)
a. Bureaus, independent offices, and posts should consider the following when planning, conducting, commissioning, and managing evaluations:
(1) Relevance and utility: Evaluations should address questions of importance and serve the information needs of the Department in general, and the commissioning units in particular. Evaluations should present findings that are actionable and available in time for use. Evaluations should present information in ways that are understandable;
(2) Rigor: Evaluations should produce findings that the Department can rely on while providing clear explanations of limitations. Evaluations should be managed by qualified staff with relevant education, skills, and experience for the methods undertaken. Evaluations, regardless of method (qualitative, quantitative, or mixed), should have the most appropriate design and methods to answer key questions while balancing their goals, scale, timeline, feasibility, and available resources;
(3) Independence and objectivity: While bureaus and independent offices commissioning evaluations have an important role in identifying evaluation priorities, the implementation of evaluations should be appropriately insulated from political and other undue influences that may affect their objectivity, impartiality, and professional judgment. Evaluators should strive for objectivity in the planning and conduct of evaluations and in the interpretation and dissemination of findings, avoiding conflicts of interest, bias, and other partiality;
(4) Transparency: Evaluations should be transparent to relevant stakeholders in the planning, implementation, and reporting phases to enable accountability and help ensure that aspects of an evaluation are not tailored to generate specific findings. Once evaluations are complete, reporting of the findings should be released in a timely manner to relevant stakeholders and provide sufficient detail so that others can review, interpret, or replicate/reproduce the work; and
(5) Ethics: Evaluations should be conducted to the highest ethical standards to protect the public and maintain public trust in the government's efforts. Evaluations should be planned and implemented to safeguard the dignity, rights, safety, and privacy of participants and other stakeholders and affected entities. Evaluators should abide by current professional standards pertaining to treatment of participants. Evaluations should be equitable, fair, and just, and should take into account cultural and contextual factors that could influence the findings or their use.
18 FAM 301.4-4(D) Collaborating with Other Bureaus, Offices, Agencies, and Organizations on Evaluations
(CT:PPP-34; 12-26-2024)
a. The evaluation policy recognizes that bureaus and independent offices do not always directly implement programs. In many cases, they provide funds to other bureaus, offices, agencies, or international organizations to carry out a program. In such cases, there are two options:
(1) Ensure the implementing organization carries out evaluations consistent with the policy and disseminates a final evaluation report; or
(2) Conduct collaborative evaluations with the implementing partners or organizations.
b. Bureaus and independent offices are encouraged to undertake collaborative evaluations with other entities, including other bureaus or offices, U.S. Government agencies, universities and colleges, non-governmental organizations, and bilateral or multilateral partners. Collaborative evaluations count as one full evaluation toward the policy's evaluation requirements (18 FAM 301.4-4) for each bureau or independent office that is party to the collaborative evaluation.
18 FAM 301.4-4(E) Dissemination and Reporting Requirements for Evaluations
(CT:PPP-34; 12-26-2024)
a. When planning an evaluation, the office responsible for the evaluation must develop an evaluation dissemination plan that identifies all stakeholders and ensures potential users of the evaluation will receive copies of the report or have ready access to it.
b. All bureaus, independent offices, and posts must maintain copies of their final evaluation reports.
c. Evaluation reports must include an executive summary; a description of the program evaluated; the evaluation purpose; evaluation questions; evaluation design, methodology, and their limitations; key findings; conclusions; and (if requested by the commissioning office) recommendations.
d. Unclassified evaluation reports that warrant administrative control and protection from public or other unauthorized disclosure should be marked Sensitive but Unclassified (SBU) in accordance with 12 FAM 541. Classified evaluations must be marked with appropriate classification markings.
e. Unclassified evaluation reports (including SBU evaluations) must be posted internally to the BP and/or F systems of record (i.e., the EMS and FACTSInfo EVR), where they will be accessible to all State bureaus and independent offices. For SBU evaluation reports that are too sensitive for sharing beyond those with a need-to-know (e.g., if they contain personally identifiable information or law-enforcement-sensitive information), the commissioning bureau or independent office should consult with F/DMEL and BP/PRE at evaluation@state.gov regarding internal posting.
f. Unclassified foreign assistance-funded evaluation reports must be accompanied by a separate stand-alone evaluation summary report. The evaluation summary report should include (at minimum) a description of the program evaluated; the evaluation questions; and a summary of the methodology, key findings, and (if requested by the commissioning office) key recommendations. The evaluation summary report should be 1-10 pages in length. If the full evaluation report is determined to be SBU, the commissioning bureau or independent office should endeavor to exclude SBU information from the evaluation summary report so it can be shared publicly.
g. Unclassified foreign assistance-funded evaluation reports and evaluation summary reports must be submitted to the F system of record (i.e., the FACTSInfo EVR) within 30 days of completion. Foreign assistance-funded evaluation reports and evaluation summary reports must meet Department of State standards for accessibility under Section 508 of the Rehabilitation Act of 1973 (as amended) prior to submission to the F system of record. Unclassified foreign assistance-funded evaluation reports will be posted publicly within 90 days of completion by F unless they are SBU, in which case an evaluation summary report should be posted, as determined by F in consultation with the commissioning bureau or independent office.
18 FAM 301.4-4(F) Evaluation Use
(CT:PPP-34; 12-26-2024)
a. Bureaus, independent offices, and posts should consider evaluation findings to make decisions about policies, strategies, priorities, and delivery of services, as well as for planning and budget formulation processes.
b. Once the evaluation is completed, the office responsible for the evaluation must develop a post-evaluation action plan that: (1) summarizes recommendations from the evaluation team and other actions proposed by Department of State staff based on the completed evaluation, (2) indicates whether or not the relevant office concurs with each recommendation or action, (3) describes a plan for implementing each accepted recommendation or action, and (4) designates a point of contact and timeframe for implementing each accepted recommendation or action. The post-evaluation action plan should be maintained until accepted recommendations and/or actions are resolved.
c. Recommendations proposed by an evaluation team (whether internal or external) are not required to be implemented. Recommendations and other proposed actions based on an evaluation should only be implemented if they are judged to be empirically sound and accepted by the responsible office.
18 FAM 301.4-5 ANALYSIS, USE, AND LEARNING
(CT:PPP-34; 12-26-2024)
a. Bureaus and independent offices should use different forms of evidence, including monitoring data and evaluation findings, for making decisions about policies, strategies, program priorities, and delivery of services, as well as for planning and budget formulation processes.
b. Learning takes place when a team engages in thoughtful discussion of evidence in order to look for opportunities to make positive changes. Bureaus and independent offices should regularly discuss available data and evidence to determine whether the right data are being collected to inform decisions, or if ongoing monitoring and/or evaluation plans should be modified to collect information more useful to decision makers.
c. The Department of State Learning Agenda (Evidence-Building Plan) is a Department-wide multi-year plan for systematically identifying and addressing priority questions relevant to the programs, policies, and regulations of the Department of State. The Department of State Evaluation Officer(s) are responsible for coordinating, developing, and implementing the Learning Agenda. Bureaus and independent offices should contribute to the development and implementation of the Department Learning Agenda, when applicable.
d. The Department of State Capacity Assessment for Research, Evaluation, Statistics, and Analysis (Capacity Assessment) is a Department-wide assessment that documents the Department's capacity to carry out the evidence-building activities supporting research and analysis efforts, including the Learning Agenda. The Department of State Evaluation Officer(s) are responsible for leading the Capacity Assessment in conjunction with the statistical official, chief data officer, and other Department personnel. Bureaus and independent offices must participate in the Capacity Assessment when requested by the Department of State Evaluation Officer(s).
18 FAM 301.4-6 IMPLEMENTATION
(CT:PPP-34; 12-26-2024)
18 FAM 301.4-6(A) Budgeting for Program/Project Design, Monitoring, and Evaluation Activities
(CT:PPP-34; 12-26-2024)
Bureaus and independent offices should ensure that they have adequate resources to meet the design, monitoring, and evaluation requirements of this policy through early planning and by including these costs in their resource requests. The Department's grant and contract regulations allow performance monitoring and evaluation as program or project costs. Consult the Guidance for the Design, Monitoring, and Evaluation Policy at the Department of State and contact BP/PRE and F/DMEL for more information on estimating costs.
18 FAM 301.4-6(B) Transfer of Foreign Assistance Funds
(CT:PPP-34; 12-26-2024)
a. When a Department of State bureau or independent office transfers foreign assistance funds to another Department of State bureau or independent office, the responsibility for executing the requirements of 18 FAM 301.4 rests with the receiving bureau or independent office, unless the division of responsibilities is otherwise decided by both bureaus or independent offices and documented.
b. Transfers of foreign assistance between State and other federal agencies or institutions typically take place using one of two types of agreements. Per the Foreign Assistance Act:
(1) Section 632(a): this authority covers transfers of funds to another agency. Under this authority, the recipient agency takes on responsibilities for program accountability; and
(2) Section 632(b): this authority involves interagency agreements when one agency is "buying" services from another agency. Under this authority, the buying agency retains responsibilities for program accountability.
c. When a Department of State bureau or independent office transfers foreign assistance funds to other federal agencies or institutions under 632(b), the State bureau or independent office is responsible for ensuring the appropriate procedures are in place for managing, monitoring, and evaluating the outcome(s) pertaining to the use of those funds broadly commensurate with 18 FAM 301.4-2, 18 FAM 301.4-3, and 18 FAM 301.4-4, and for establishing what information the receiving institution must supply to the State Department to ensure sound management of the resources. At a minimum, the State Department bureau must obtain from the receiving institution records of how the funds were used, sufficient monitoring data associated with the funds to determine if adequate progress and results are being achieved, and any evaluation findings related to the outcomes achieved with the funds.
d. When a Department of State office transfers foreign assistance funds to a public international organization, the State bureau or independent office is responsible for ensuring appropriate procedures are in place at the receiving organization for monitoring and evaluating the use of those funds. The State bureau or independent office is also responsible for establishing what information the receiving institution must supply to the Department of State.
e. When foreign assistance funds are transferred to a Department of State bureau or independent office from another federal agency or institution, the State bureau or independent office must ensure the appropriate procedures for managing the funds in a way commensurate with 18 FAM 301.4-2, 18 FAM 301.4-3, and 18 FAM 301.4-4 are established and executed.
18 FAM 301.4-6(C) Programs or Projects Fully Designed and Managed at Post
(CT:PPP-34; 12-26-2024)
When post is responsible for designing and managing programs and major projects, then post staff are responsible for executing the design and monitoring requirements of 18 FAM 301.4, in consultation with the appropriate functional or regional bureau.