Evaluation Agenda
The list below presents the highest-priority upcoming evaluations, as identified through the Annual Evaluation Agenda (AEA) development process.
All parties implementing these evaluations – whether internal or external to the County – are expected to uphold the evaluation principles described below.
Upcoming evaluations are ordered by priority, reflecting a combination of Measure A requirements and input from governance and co-design partners. Higher-priority questions are intended to be initiated earlier in the evaluation cycle, based on available funding and bandwidth.
For each evaluation, we present the question it answers, the problem it addresses and its connection to Measure A goals, a brief explanation of why the question was prioritized relative to others, the methodology, and whether it is intended to generate causal evidence.
Each evaluation also fits into one of three “learning strategies” that clarify how it will generate system improvement:
- “Measure, improve” is used when programs or policies are already in place and the goal is to understand how well they are working and where performance or equity can be strengthened, using existing or easily collected data.
- “Test, improve” is used when a new or modified approach is ready to be tried and the goal is to learn whether it leads to better outcomes before scaling.
- “Learn, test, improve” is used when the right solution is not yet clear, and the agenda first focuses on understanding patterns, gaps, and inequities in current data before designing and testing targeted changes.
We also clarify whether each evaluation could conceivably be carried out using staffing and resources internal to the County. That determination rests on two factors: whether the evaluation exceeds the methodological capacity of the County, and whether it requires independence to credibly evaluate County success. Evaluations that require extensive original data collection or specialized methods beyond current County capacity are generally not flagged as possible to conduct internally. Separately, evaluations that assess the success of major investments on key outcomes are typically not considered ripe for internal evaluation, since independence is important to producing credible and publicly accountable results in such cases. Evaluations focused on internal process improvement may be appropriate for employing internal capacity.
Upcoming Evaluations
Evaluation priority 1: Increasing systemwide impact
Evaluation question: What is the systemwide causal impact of Measure A investments on key homelessness outcomes? Where does current system capacity fall short of what is required to meet Measure A goals? For which subpopulations are gaps widest, including people with mental health and substance use disorders?
Learning strategy: Measure, improve
Motivation: This evaluation addresses two closely linked system-level questions required for effective stewardship of Measure A funds. First, what is the causal impact of Measure A investments on core homelessness outcomes, including unsheltered homelessness, homelessness among people with mental health and substance use disorders, and permanent exits from homelessness? Second, given that impact, where does current system capacity fall short of what is required to meet Measure A goals and broader system aspirations?
The evaluation combines a systemwide impact assessment with a capacity gap analysis. The impact component estimates the progress achieved per dollar invested, drawing on existing administrative data, prior evidence, and transparent modeling assumptions. The gap component compares the current system capacity to that needed to meet feasible and stretch goals. The difference is the capacity gap that needs to be filled to meet those goals, accounting for constraints such as housing supply. By pairing a systemwide gap analysis that identifies where capacity is most needed with an impact analysis that estimates which investments most effectively produce progress, this evaluation supports more informed decisions about where additional resources are required and how existing funds should be allocated to best address identified shortfalls.
Results will be produced on a recurring basis, at least annually.
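The gap component's core arithmetic is straightforward. A minimal sketch follows; the intervention types and figures are hypothetical placeholders, not actual County capacity data:

```python
# Illustrative sketch of the capacity gap calculation: the shortfall between
# current capacity and the capacity needed to meet a goal, per intervention.
# All names and numbers are hypothetical, not actual County data.

def capacity_gap(current_capacity: dict, goal_capacity: dict) -> dict:
    """Return goal minus current capacity for each intervention type,
    floored at zero where current capacity already meets the goal."""
    return {
        intervention: max(goal_capacity[intervention] - current_capacity.get(intervention, 0), 0)
        for intervention in goal_capacity
    }

# Hypothetical example: capacity by intervention type.
current = {"interim_housing_beds": 14_000, "psh_units": 20_000}
feasible_goal = {"interim_housing_beds": 16_500, "psh_units": 23_000}

print(capacity_gap(current, feasible_goal))
# {'interim_housing_beds': 2500, 'psh_units': 3000}
```

In practice the gap calculation would also incorporate constraints such as housing supply, as the description above notes; this sketch shows only the basic subtraction.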
Why prioritized: This evaluation is prioritized because it directly supports the core accountability and adjustment mechanisms embedded in Measure A. The ordinance requires regular assessment of whether system goals are being met, identification of highly effective programs, and the use of evidence to inform funding reallocations and future baseline and target metrics. A systemwide assessment that jointly examines impact and capacity is essential to meeting these requirements in a credible and actionable way.
Governance partners have indicated this evaluation is high priority: the need for a clear, systemwide assessment of Measure A impact was explicitly raised by members of the Executive Committee for Regional Homeless Alignment (ECRHA) in July 2025 and separately requested by executive leadership at the Department of Homeless Services and Housing (HSH). Related questions on identifying gaps in capacity were among the most popular for our co-design partners as reflected in the community preference survey, receiving support from more than two-thirds of respondents when paired against randomly selected alternatives. This question consolidates more than a dozen crowdsourced questions from the longlist.
- Related longlist questions: 266, 272, AV, 41, 51, 128, 136, 137, 148, 149, 150, 209, 210, 227, 233, 239, 129, 286
- Causal: Yes
- Methodology: Quantitative impact modeling and gap analysis
- Potentially conducted internally: No - requires original data collection and independence
Evaluation priority 2: Testing coordination and connection pilots
Evaluation question: To what extent do pilot programs funded through the Measure A Innovation budget – namely, community liaisons, faith-based regional coordinators, veteran call centers, and veteran resource centers – strengthen coordination across providers and / or improve connection to services for the groups they are designed to serve?
Learning strategy: Test, improve
Motivation: The evaluation will focus on a small set of new programs funded through the Measure A “Innovations” budget and designed to improve coordination and service navigation. We expect it will employ mixed methods, with findings delivered early enough to inform decisions about whether and how these pilots should be scaled or redesigned.
Why prioritized: Measure A intends Homelessness Solutions Innovations funds “to incubate and test new ideas for future, larger-scale spending”. Their inclusion on the AEA reflects the need to ensure they are tested. Three larger pilots in this section of the budget already have standalone evaluations ongoing. These smaller pilots are best assessed together given their common goals and scale.
- Related longlist questions: 259, 111, 262
- Causal: Yes
- Methodology: Mixed methods: primarily qualitative, with quantitative components where possible
- Potentially conducted internally: No - requires original data collection and independence
Evaluation priority 3: Tracking and improving referral pathways
Evaluation question: How do referral and intake pathways currently function across the homelessness system? Where do breakdowns occur that prevent people from connecting to or remaining engaged with services? What does the qualitative experience of service participants reveal about where pain points lie? What changes to workflows and data entry would be needed to track handoffs between providers?
Learning strategy: Measure, improve
Motivation: This evaluation focuses on understanding how people actually move through referral and intake pathways across the homelessness system, and where those pathways break down. It supports Measure A goals by identifying points where people disengage, experience delays, or fall out of care, particularly during transitions between outreach, interim housing, behavioral health services, and permanent housing. The work will use qualitative mapping of participant and provider experiences alongside descriptive analysis of available administrative data to document current workflows, identify pain points, and assess what changes to processes and data entry would be needed to reliably track handoffs between providers. Findings will be delivered early to inform near-term system improvements and data infrastructure decisions, with follow-up work as needed to refine measures.
Why prioritized: Governance partners have repeatedly identified limited visibility into referral pathways as a core barrier to system improvement. Members of the subcommittee on Best Practices for Standardization of Care (BPSC) of ECRHA specifically requested targeted fact-finding in this area, with the Data Subcommittee of the Executive Steering Committee for Homelessness IT and Data Governance noting in its feedback to the BPSC that referral tracking is currently not possible and requires foundational analysis before system improvements and performance measures can be designed and implemented. Members of the Equity Subcommittee of the Leadership Table for Regional Homeless Alignment (LTRHA) also recommended investigation into where different racial or minority population groups experience the greatest bottlenecks or drop-offs. In the community preference survey, the question on referral pathways ranked among the most popular, preferred more than two-thirds of the time when compared against randomly selected alternatives. It consolidates several questions crowdsourced from co-design partners focused on how people move through the system, where referrals stall, and how those experiences differ across populations, including groups facing disproportionate drop-offs.
- Related longlist questions: 273, 134, 271, 9, 52, 108
- Causal: No
- Methodology: Mixed methods: qualitative behavioral mapping and quantification of pathways
- Potentially conducted internally: No - qualitative component requires original data collection and journey mapping
Evaluation priority 4: Improving permanent housing retention and graduation
Evaluation question: Using longitudinal, integrated administrative data, which factors predict permanent housing retention, successful graduation from supports, and returns to homelessness? Where are the largest racial inequities, and what appears to be driving them? Do targeted interventions informed by these predictors improve housing stability, reduce inequities, and support safe transitions out of intensive services? How should housing, service, and retention strategies be redesigned or scaled to free supportive housing capacity without creating harm?
Learning strategy: Learn, test, improve
Motivation: This evaluation focuses on understanding what helps people remain stably housed over time, successfully step down from intensive supports, and avoid returning to homelessness, and why these outcomes differ across racial groups. As resources become more constrained, stabilizing people in permanent housing and safely graduating those who no longer need intensive services has become increasingly important to sustaining system capacity and preventing harm. The evaluation supports Measure A goals by identifying factors that drive permanent exits from homelessness and helping to free up scarce supportive housing capacity without causing harm. The work will proceed in two stages following a “learn, test, improve” approach: an initial, rapid-cycle internal analysis using linked administrative data to identify predictors of retention, graduation, and inequities, followed by an externally-led test of targeted interventions informed by those findings. Early results from the first stage will be delivered quickly to inform program design, with the second stage generating causal evidence to guide scaling and redesign decisions.
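The first-stage analysis described above could begin with simple descriptive breakdowns of retention in linked administrative data before moving to predictive modeling. The sketch below is illustrative only; the field names, groupings, and generated records are hypothetical, not actual County data:

```python
import random
from collections import defaultdict

# Illustrative sketch of the first ("learn") stage: descriptive retention
# rates from linked administrative records. All field names and generated
# values are hypothetical stand-ins, not actual County data.

def retention_rates(records, key):
    """Share of placements retained at 24 months, broken out by `key`."""
    tallies = defaultdict(lambda: [0, 0])  # key value -> [retained, total]
    for r in records:
        tallies[r[key]][0] += int(r["retained_24mo"])
        tallies[r[key]][1] += 1
    return {k: round(ret / tot, 2) for k, (ret, tot) in sorted(tallies.items())}

# Synthetic records with a built-in association between service contact
# frequency and retention, so a disparity is visible in the output.
random.seed(1)
records = []
for _ in range(1000):
    visits = random.choice(["monthly", "quarterly"])
    p_retained = 0.85 if visits == "monthly" else 0.70
    records.append({
        "visit_frequency": visits,
        "retained_24mo": random.random() < p_retained,
    })

print(retention_rates(records, "visit_frequency"))
```

Descriptive rates like these only flag candidate predictors; the second stage's randomized or quasi-experimental test is what would establish whether acting on them causes better outcomes.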
Why prioritized: Governance partners have elevated this topic repeatedly: the BPSC of the ECRHA recommended multiple evaluations focused on permanent supportive housing retention, and the Equity Subcommittee of the LTRHA recommended in its March 2025 report “conducting studies on tenant retention in PSH and other permanent housing, especially for Black and AIAN tenants who have higher returns to homelessness after permanent placement”. The evaluation question consolidates a large number of crowdsourced questions focused on retention, eviction risk, graduation from supports, and racial inequities, reflecting both the volume and consistency of concern raised by co-design partners during the agenda development process. Related questions were among the most popular in the community preference survey.
- Related longlist questions: 232, AG, Y, 269, 112, 66, 105, 152, 204, 48, 55, 64, 249, 292, 108
- Causal: Yes
- Methodology: Two-stage evaluation: predictive analytics / machine learning followed by RCT or QED.
- Potentially conducted internally: Yes - first stage relies on administrative data and may not require independence. Second stage should be external.
Evaluation priority 5: Reducing administrative burden in data entry
Evaluation question: How many staff hours do data entry and reporting requirements consume, where does duplicate or dual entry account for the largest share of those hours, and which changes would free up the most staff time while improving participant outcomes? What do staff- and participant-centered qualitative journey maps through services and housing reveal about where changes would be most feasible and impactful?
Learning strategy: Measure, improve
Motivation: Co-design and governance partners frequently expressed a concern that fragmented data systems divert staff capacity away from direct service delivery. At a moment when the new department is designing a regional “blueprint” for data integration, this evaluation would examine how much staff time is consumed by data entry and reporting requirements, where duplication across systems creates the greatest burden, and which changes would most effectively free up time while improving participant experience and outcomes. The work supports Measure A goals by identifying ways to reduce delays in housing pathways, improve continuity of care for people with mental health and substance use disorders, and accelerate progress toward permanent housing. The evaluation will combine staff- and participant-centered journey mapping with quantitative measurement of time burden to identify feasible, high-impact changes.
Why prioritized: This question consolidates a large set of concerns, crowdsourced from co-design partners, related to duplicate data entry, administrative burden, data quality, and client experience. The issue has been repeatedly elevated as a systemwide barrier to effectiveness by governance partners, including Homeless Policy Deputies of the Board of Supervisors and members of the BPSC of ECRHA and the LTRHA. While it did not rank among the most popular questions in the community preference survey, time burden and fragmentation surfaced consistently from co-design partners in workshops with providers and people with lived experience.
- Related longlist questions: OP, AI, O, 278, 275, 191, 25, 277, 36, 113, 118, 126, 171, 178
- Causal: No
- Methodology: Mixed methods: qualitative staff- and participant-centered journey mapping with quantitative survey measurement of time burden
- Potentially conducted internally: No - requires original data collection and journey mapping
Evaluation priority 6: Strengthening the provider workforce
Evaluation question: Which workforce conditions and practices predict staff retention, burnout, and service quality across homelessness programs, and how do these workforce dynamics affect client outcomes and equity? Do targeted investments in compensation, training, leadership, and work-life balance improve workforce stability and service quality? How should workforce strategies be redesigned or scaled to sustainably build provider capacity across the system?
Learning strategy: Learn, test, improve
Motivation: Meeting Measure A goals and broader aspirations for the LA County homelessness system cannot be achieved without sustaining a capable and stable provider workforce. This evaluation examines how workforce conditions and management practices shape staff retention, burnout, and service quality across homelessness programs, and how those workforce dynamics affect client outcomes and equity. The evaluation will follow a “learn, test, improve” approach in two stages: first, descriptive and predictive work to identify the workforce conditions most strongly associated with turnover, service quality, and client outcomes across program types and geographies; and second, a targeted test of scalable investments such as compensation changes, training and supervision improvements, leadership supports, and work-life balance interventions. Findings will inform how workforce strategies should be redesigned or scaled to build provider capacity sustainably and improve system performance.
Why prioritized: This question consolidates 16 crowdsourced questions spanning pay, workload, leadership, training, cross-training, and retention across multiple program areas, reflecting how consistently this issue surfaced during the co-design phase. It was also among the most popular questions in the community preference survey and is high-leverage given its structural impact across the system.
- Related longlist questions: M, 17, 18, 37, 69, 85, 89, 155, 205, 219, 220, 221, 222, 234, 39, 23
- Causal: No
- Methodology: Mixed methods: qualitative interviews and quantitative predictive analytics in first phase; quantitative testing of interventions in second
- Potentially conducted internally: No - requires original data collection
Evaluation priority 7: Reducing returns to homelessness
Evaluation question: What are the key factors that drive reentry into homelessness after prior successful exits? How can these insights inform prevention strategies?
Learning strategy: Learn, test, improve
Motivation: Data on homelessness primarily stems from individuals’ encounters with service providers. As such, there is limited visibility into what happens after individuals cease to have contact with service providers. This blind spot inhibits prevention of returns to homelessness, which is essential to sustaining progress toward Measure A goals on permanent exits and reducing strain on the system from inflow. This evaluation focuses on understanding why some people return to homelessness after previously exiting to housing and what can be done to prevent those returns. It will follow a “learn, test, improve” approach, beginning with original data collection and longitudinal analysis to understand post-exit trajectories, followed by targeted testing of prevention strategies informed by those findings.
Why prioritized: The question was the most popular among co-design partners in the community preference survey, selected three quarters of the time when paired against alternative evaluation questions. It consolidates several crowdsourced questions focused on reentry, post-exit outcomes, and how to measure stability outside of active service participation.
- Related longlist questions: AG, Z, 21, 177, 204, 153, 175
- Causal: Yes
- Methodology: Quantitative - using admin and possibly survey data
- Potentially conducted internally: No - requires original data collection
Evaluation priority 8: Improving coordinated entry
Evaluation question: How do the systems and workflows that implement Coordinated Entry System (CES) prioritization and matching shape system efficiency and equity, as reflected in unit-level outcomes (e.g., time vacant) and participant-level outcomes (e.g., timeliness and equity of placement across populations and geographies)? Using simulation models that reflect real-world operational constraints, how would alternative implementation designs affect throughput and equitable access to housing resources?
Learning strategy: Measure, improve
Motivation: The CES is the primary gateway into housing and services. As such, even small implementation changes can have large effects on system efficiency and fairness. This evaluation examines how implementation of CES prioritization and matching rules affects who gets housed, how quickly housing is filled, and whether access is equitable across populations and geographies. The work will combine retrospective, quasi-experimental analysis of recent changes with simulation modeling to test how alternative prioritization and matching approaches would perform under current housing supply and resource constraints.
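To make the simulation component concrete, a stylized matching simulation might compare prioritization rules under a fixed housing supply. The sketch below is purely illustrative: the rules, acuity scores, and arrival counts are hypothetical and do not reflect the County's actual prioritization policy:

```python
import heapq
import random
from itertools import count

# Stylized sketch of simulating alternative CES matching rules under a fixed
# per-period housing supply. All rules and numbers are hypothetical.

def simulate(arrivals, units_per_period, periods, rule):
    """Match the highest-ranked waiting participants to available units each
    period; return (number housed, mean periods waited before housing)."""
    tiebreak = count()  # stable tie-breaker so the heap never compares dicts
    waiting, housed, total_wait = [], 0, 0
    for t in range(periods):
        for p in arrivals.get(t, []):  # new arrivals this period
            heapq.heappush(waiting, (rule(p), t, next(tiebreak), p))
        for _ in range(min(units_per_period, len(waiting))):
            _, arrived, _, _p = heapq.heappop(waiting)
            housed += 1
            total_wait += t - arrived
    return housed, (total_wait / housed if housed else 0.0)

random.seed(0)
# Hypothetical arrivals: five participants per period, acuity scored 1-10.
arrivals = {t: [{"acuity": random.randint(1, 10)} for _ in range(5)] for t in range(20)}

by_acuity = lambda p: -p["acuity"]  # highest acuity served first
fcfs = lambda p: 0                  # first come, first served (ties by arrival)
print("acuity rule:", simulate(arrivals, 3, 20, by_acuity))
print("FCFS rule:  ", simulate(arrivals, 3, 20, fcfs))
```

A real model would add the operational constraints named above (unit vacancy timing, geographic eligibility, subpopulation-specific resources), but even a toy version like this shows how alternative rules trade off who is served first against aggregate wait times.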
Why prioritized: HSH executive leadership have expressed a desire to better understand the impact of CES decisions on systemwide equity and throughput, and other governance partners have emphasized the importance of understanding entry points into services, particularly for unsheltered populations. This question also consolidates multiple co-design partner questions focused on prioritization rules, assessment bias, geographic access, and racial and population-level bottlenecks, reflecting consistent concern about how entry into the system is structured.
- Related longlist questions: T, 71, 75, 79, 202, 108
- Causal: Yes
- Methodology: Quantitative simulation modeling and quasi-experimental impact evaluation
- Potentially conducted internally: Yes - can be done using administrative data and may not require independence
Evaluation priority 9: Improving outreach engagement
Evaluation question: Which street outreach approaches are associated with better engagement and connection outcomes across subpopulations? How do people experiencing unsheltered homelessness describe their experiences with these approaches, including trust, rapport, and the value of referrals and supports? Which approaches should be refined or tested for scaling?
Learning strategy: Learn, test, improve
Motivation: Outreach is a primary entry point into services for people sleeping outdoors, so differences in outreach approach can have outsized effects on who connects to services and who does not. As outreach team capacity tightens due to funding reductions, understanding which models build trust, generate meaningful referrals, and lead to sustained engagement is therefore critical. The evaluation supports Measure A goals by identifying outreach practices that more effectively move people from encampments into services and housing, particularly for people with mental health and substance use disorders, and by informing which approaches should be refined or scaled. The work will follow a “learn, test, improve” approach, combining analysis of administrative data with qualitative interviews to capture participant perspectives on rapport, trust, and value, followed by targeted testing of promising outreach models.
Why prioritized: Related questions ranked among the most popular among co-design partner responses to the preference survey, reflecting strong interest in improving outreach quality and impact.
- Related longlist questions: R, 120, 134, 138, 139
- Causal: Yes
- Methodology: Two-stage evaluation: predictive analytics / machine learning followed by RCT or QED.
- Potentially conducted internally: Yes - first stage relies on administrative data and may not require independence. Second stage should be external.
Evaluation priority 10: Extending local resources through federal funds
Evaluation question: What are the systemwide savings from connecting clients to non-housing federal benefits (SSA, VA, IRS)? Where are the largest opportunities for drawing down federal funds, and what are the equity implications of targeting such opportunities?
Learning strategy: Measure, improve
Motivation: Connecting clients to non-housing federal benefits (SSA, VA, IRS) has the potential to extend limited local dollars through cost avoidance. This evaluation would estimate the systemwide savings from such connections, identify the largest opportunities for drawing down federal funds, and assess the equity implications of targeting those opportunities. It can be conducted at relatively low cost and has the potential to further progress toward Measure A goals across the system.
Why prioritized: This question consolidates several crowdsourced questions focused on federal benefit enrollment, referral efficiency, and the fiscal effects of drawing down external resources. While it was not explicitly prioritized by governance or co-design partners, it is recommended for inclusion because it can be conducted at relatively low cost and has the potential to identify meaningful opportunities for revenue generation and cost avoidance. As Measure A emphasizes accountability, effective use of funds, and systemwide learning, this evaluation may offer a practical way to extend limited local dollars.
- Related longlist questions: B, 94, 95, 96, 267, 268
- Causal: No
- Methodology: Mixed methods: qualitative interviews and quantitative modeling
- Potentially conducted internally: No - requires original data collection
Evaluation priority 11: Reducing negative exits from interim housing
Evaluation question: What key factors predict unknown or unfavorable exits from interim housing and accelerated move-ins to permanent housing? Where are the greatest inequities in interim housing placement and throughput? How can these insights inform interventions to prevent negative exits and improve the speed, quality, and equity of positive exits from interim housing?
Learning strategy: Learn, test, improve
Motivation: Accelerating positive exits from interim housing and preventing loss to follow-up is essential to achieving Measure A goals because interim housing is a critical bridge between unsheltered homelessness and permanent housing. This evaluation examines why some people leave interim housing without a stable outcome, why others move more quickly into permanent housing, and where inequities in interim housing placement and throughput reside. This work supports Measure A by identifying participant-, provider-, and program-level factors that drive unfavorable or unknown exits, highlighting where people with mental health and substance use disorders are most likely to fall out of the pipeline, and informing interventions that improve the speed, quality, and equity of transitions into permanent housing. The evaluation will follow a “learn, test, improve” approach, beginning with analysis and original data collection to diagnose drivers of negative exits and delays, followed by targeted testing of interventions designed to reduce drop-offs and improve throughput.
Why prioritized: Consolidates a large number of crowdsourced questions focused on unfavorable exits, self-exits, provider practices, and inequities in interim housing outcomes, reflecting widespread concern across the co-design process. Related questions were among the most popular in the community preference survey, indicating strong stakeholder interest in understanding and improving interim housing throughput.
- Related longlist questions: 156, L, 5, 59, 7, 10, 60, 61, 159, 264, 108
- Causal: Yes
- Methodology: Two-stage evaluation: predictive analytics / machine learning followed by RCT or QED.
- Potentially conducted internally: Yes - first stage relies on administrative data and may not require independence. Second stage should be external.
Evaluation priority 12: Leveraging service bundles
Evaluation question: Which combinations of housing models, case-management intensity, and health supports are associated with better long-term housing stability and equity across subpopulations? Do targeted improvements to these service combinations improve stability, and how should programs be adjusted based on the results?
Learning strategy: Learn, test, improve
Motivation: Participants in the homelessness system receive simultaneous services from different providers who may not be in direct coordination. Programs therefore should not be considered in isolation, as they may produce differing outcomes depending on what other services they are bundled with. Yet there is limited evidence on how different housing and service components interact and how those combinations shape outcomes for people with varying levels of acuity. This evaluation would follow a “learn, test, improve” approach, beginning with analysis to identify promising pairings of service bundles and populations, followed by a causal test of targeted improvements to those service bundles to inform program redesign and scaling decisions.
Why prioritized: Related questions were among the most popular in the community preference survey, reflecting strong interest from co-design partners in moving beyond single-program evaluations toward system-relevant learning.
- Related longlist questions: AN, 97, 151, 164
- Causal: Yes
- Methodology: Two-stage evaluation: predictive analytics / machine learning followed by RCT or QED.
- Potentially conducted internally: Yes - first stage relies on administrative data and may not require independence. Second stage should be external.
Evaluation priority 13: Improving case manager effectiveness
Evaluation question: To what extent do long-term housing stability and related outcomes vary across case managers, after accounting for client needs and program context? Which case management practices are associated with stronger and more equitable outcomes across subpopulations, such as transition-aged youth?
Learning strategy: Measure, improve
Motivation: Despite the centrality of case managers to the experience and journey of homelessness system participants and the considerable volume of administrative data tracking service encounters, current guidance for supervision, training, and hiring does not leverage data-based insights to their fullest potential. Using existing administrative data, this evaluation would examine whether long-term housing stability and related outcomes vary across case managers after accounting for client needs and program context, and which case-management practices are associated with stronger and more equitable outcomes. The effectiveness of case management models adapted to the needs of specific subpopulations, such as transition-aged youth, will be of central interest.
Why prioritized: Reflects several questions from co-design partners focused on understanding what works at the front line. Related questions ranked among the most popular in the community preference survey.
- Related longlist questions: AN, 97, 151, 164
- Causal: No
- Methodology: Predictive analytics with machine learning on administrative data
- Potentially conducted internally: Yes - relies on administrative data and may not require independence.
Evaluation priority 14: Increasing impact of needs assessments
Evaluation question: How accurately do tools such as the 5x5 capture participant needs? Where do assessed needs and service placements diverge? What factors inhibit or encourage case worker follow-through on referrals and next steps?
Learning strategy: Measure, improve
Motivation: Assessments are the primary way need is documented and referrals are justified. Therefore, gaps in accuracy or execution can undermine housing pathways and service effectiveness. This evaluation would examine whether assessment tools such as the 5x5 accurately capture participant needs, where assessed needs diverge from actual service placements, and what system constraints limit follow-through after needs are identified. The evaluation would use measurement calibration and validation techniques, including comparison to clinical indicators, service utilization, and participant-reported needs, to ground-truth assessment results and identify where tools systematically over- or under-estimate acuity. This would be combined with qualitative input from staff and participants to identify concrete changes to assessment use, training, and referral workflows.
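One basic calibration check described above is whether higher assessed score bands actually correspond to higher observed need. The sketch below is illustrative only; the score bands, the observed-need field, and the records are hypothetical stand-ins, not actual 5x5 data:

```python
from collections import defaultdict

# Illustrative calibration check: mean observed-need indicator by assessed
# score band. A well-calibrated tool should show observed need rising with
# the assessed band. All fields and records below are hypothetical.

def calibration_table(records):
    """Mean observed-need indicator (0/1) for each assessed score band."""
    bands = defaultdict(lambda: [0.0, 0])  # band -> [sum of indicator, count]
    for r in records:
        bands[r["score_band"]][0] += r["observed_high_need"]
        bands[r["score_band"]][1] += 1
    return {band: round(s / n, 2) for band, (s, n) in sorted(bands.items())}

# Hypothetical ground-truthed records (e.g., clinical indicators or
# participant-reported needs, as described above).
records = [
    {"score_band": "low", "observed_high_need": 0},
    {"score_band": "low", "observed_high_need": 1},
    {"score_band": "high", "observed_high_need": 1},
    {"score_band": "high", "observed_high_need": 1},
]
print(calibration_table(records))
# {'high': 1.0, 'low': 0.5}
```

Systematic divergence in a table like this, for example a "low" band whose observed need rivals the "high" band, would flag where the tool over- or under-estimates acuity for particular groups.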
Why prioritized: This question was prioritized based on a clear recommendation from governance partners, particularly members of the BPSC, who identified assessment accuracy and follow-through as persistent system concerns. It consolidates multiple crowdsourced questions focused on whether tools like the 5x5 and CES assessments meaningfully reflect participant needs and whether those assessments translate into appropriate services. The evaluation is descriptive rather than causal and has a less direct line to outcomes than some other questions.
- Related longlist questions: S, U, 11, 245, 28, 70, 12, 201, 246, 63
- Causal: No
- Methodology: Assessment validation and calibration using administrative data and participant-reported outcomes
- Potentially conducted internally: No - requires original data collection and assessment validation techniques.
Evaluation priority 15: Improving client health outcomes
Evaluation question: According to participants and providers, which kinds of clinical support innovations (such as risk-based targeting for earlier engagement, adjusted dosage, or continued follow-up after stabilization) best improve housing retention, health outcomes, and service continuity among participants with similar acuity and circumstances? What is the quantitative, causal evidence for the most promising practice on these outcomes?
Learning strategy: Learn, test, improve
Motivation: The Measure A ordinance directs the County to ensure that people experiencing homelessness have access not only to housing, but also to medical care, mental health services, substance use treatment, and other supportive services needed to achieve long-term stability. This vision of an all-encompassing, person-centered service model makes clinical supports a core component of housing success. As Measure A scales these integrated services, it is essential to generate clear, causal evidence on which clinical support innovations actually improve housing retention, health outcomes, and continuity of care, so resources are focused on approaches that deliver durable and equitable results. The evaluation will first use participant and provider perspectives to identify and refine promising clinical support practices. It will then rigorously test the most promising approach using a randomized or quasi-experimental design to estimate its impact on housing retention, health outcomes, and continuity of care.
Why prioritized: This question consolidates a large set of questions from co-design partners about early intervention, discharge planning, dosage, and post-stabilization follow-up, many of which relate to recent or ongoing program changes. It was prioritized because it builds directly toward causal learning while addressing a high-cost, high-impact part of the system. With clinical resources under pressure, the ability to target supports more precisely could generate system efficiencies.
- Related longlist questions: WW, AW, N, V, 255, 252, 253, 257, 287, 288, 290, 291, 293, 294, 295, 254
- Causal: Yes - in the second stage, which tests the most promising practice with a randomized or quasi-experimental design.
- Methodology: Two-stage evaluation: predictive analytics / machine learning, followed by a randomized controlled trial (RCT) or quasi-experimental design (QED).
- Potentially conducted internally: No - requires original data collection and independence.
Evaluation priority 16: Leveraging culturally responsive and trauma-informed care
Evaluation question: How are trauma-informed and culturally responsive care models currently defined and operationalized across homelessness and housing services? To what extent are these models implemented with fidelity in practice, as reflected in service delivery, staff training, referral patterns, and participant experience? How does variation in implementation relate to engagement, trust, and housing outcomes across racially and culturally diverse populations? Where should practices be strengthened to improve equity?
Learning strategy: Learn, test, improve
Motivation: Service experiences vary widely across the system, particularly for participants from racially and culturally marginalized communities. A central aim of this evaluation is to clarify what culturally responsive and trauma-informed care actually looks like in practice within homelessness and housing programs, rather than treating it as an abstract principle. The study would examine how service models are defined and implemented on the ground, including staff training and supervision, referral and case-conferencing practices, and the role of participant goals and culturally rooted approaches in housing and service decisions. It would then assess how these models relate to trust, engagement, and housing outcomes, with the goal of identifying concrete practices that can be strengthened or scaled to improve equity and effectiveness.
Why prioritized: This question was prioritized in response to concerns raised by the Equity Subcommittee of the LTRHA, which emphasized the need to implement specific practices to drive progress on reducing racial disparities.
- Related longlist questions: A, 207, 101, 110, 123, 124, 125, 181
- Causal: No
- Methodology: Qualitative implementation and fidelity evaluation
- Potentially conducted internally: No - requires original data collection.
Evaluation Principles
Include voices with relevant lived experience
Involve people with lived experience throughout the research lifecycle: setting priorities, designing research and data collection instruments, collecting data, and interpreting findings
Treat research participants as active partners in the research, not just as advisors or subjects
Center equity and cultural responsiveness
Center racial and other forms of equity in evaluation goals and methods
Prioritize outcomes that correct historical disadvantages and reflect the values of impacted communities
Design evaluations that respect the culture, language, and lived realities of participants
Adapt tools and measures to reduce bias
Elevate qualitative data as essential to interpretation. Use both numbers and stories to understand what’s happening
Employ humility in interactions with research participants. Treat people with direct experience as the experts.
Protect people and do no harm
Ensure informed consent with clear, accessible language and explanation of rights
Protect participants’ privacy: secure handling of personal information, limiting access, and collecting only what’s essential to the research
Minimize harm and design studies with participant well-being in mind
Use randomization when it can improve fairness and strengthen findings, but employ ethical randomization frameworks: e.g., ensure that no group is denied essential support solely to create a comparison group
Create a trauma-informed environment that gives participants control, choice, and space to engage on their own terms
Apply principles of fairness and equity in who is included in research and how
Uphold transparency and accountability
Publicly pre-register evaluations prior to data collection, including goals, methods, and planned analyses, irrespective of methodology or research questions
Share results publicly and accessibly, in as disaggregated a form as privacy protections will allow
Use independent evaluators when evaluating County success on key outcomes
Commit to continuous improvement
Design evaluations to inform decisions by program administrators and funders
Use findings to adjust funding, redesign programs, and correct blind spots
Close the loop with community. Report back what was learned and how/whether it’s being applied