Version: v2026.1.0

Executive Summary

Please note: this website is under development, and the first version of the AEA will not be finalized until February 2026.

The Annual Evaluation Agenda (AEA) is developed and managed by the Los Angeles County Department of Homeless Services and Housing (HSH). It sits alongside systemwide performance indicators and Measure A metrics as a central pillar of HSH’s approach to understanding whether Measure A investments are working, for whom, and why. The AEA translates the measure’s accountability requirements into a concrete plan for generating evidence that can drive system improvement by enabling better decisions about funding and program design.

The AEA is designed as a living, versioned agenda. It links Measure A goals to a prioritized list of studies so the County can learn what to change to meet its mission of preventing and ending homelessness. The AEA also establishes evaluation principles – shared expectations for how County and external evaluators will carry out their work to ensure community participation, equity, research ethics, transparency, and commitment to continuous improvement.

How the agenda was developed

The first version of the AEA was created in the second half of 2025 through an intentionally bottom-up development process. Our goal was to build shared ownership of what the County chooses to study and to avoid blind spots by surfacing the questions whose answers were most urgently needed to drive the system forward.

To achieve this goal, HSH co-designed the questions on the AEA in partnership with people with lived experience, providers, researchers, and program administrators and funders. We developed the agenda in two main phases:

  • A broad co-design phase that generated a longlist of nearly 300 evaluation questions drawn from community workshops, focus groups, surveys, governance partner input, and existing research agendas.
  • A structured prioritization phase that combined Measure A requirements, governance partner priorities, and a community preference survey to produce a ranked shortlist of high-priority evaluations.

What is included in this version

This agenda presents:

  • A set of ongoing evaluations already underway.
  • A prioritized list of upcoming evaluations, sequenced to generate learning as capacity and funding allow.
  • For each evaluation: the core question, its connection to Measure A goals, the learning strategy, the methodological approach, and whether causal evidence is expected. Each evaluation uses one of three learning strategies:
    ◦ Measure, improve – measuring an existing program to inform an immediate decision.
    ◦ Test, improve – testing a new approach.
    ◦ Learn, test, improve – proceeding in two phases: first building foundational understanding of a problem, then testing promising solutions identified in that phase to see whether they should be scaled.
  • Evaluation principles that establish expectations for community participation, equity, research ethics, transparency, and commitment to continuous improvement.
  • An appendix that includes the full longlist of questions developed in the co-design phase, a glossary of acronyms and technical terms, and a record of previous versions of the AEA.

How the agenda will be used over time

The AEA is intended to be revisited, updated, and used as Measure A implementation evolves. Over time, the agenda will:

  • Inform which evaluations are launched internally and which are procured externally
  • Shape future revisions to Measure A performance metrics and targets
  • Provide a transparent record of how community input and evidence influence system decisions
  • Create continuity across evaluation cycles, even as specific questions and priorities change

As required by Measure A, future versions of the AEA will reflect new evidence, emerging system challenges, and updated input from governance and co-design partners.