AI Systems for Social Good
Preventing Homelessness in Los Angeles
Role: Lead Researcher
Overview/Problem Statement
In California, homelessness continues to be a humanitarian issue affecting roughly 185,000 residents. Los Angeles County’s Homelessness Prevention Unit, in collaboration with UCLA’s California Policy Lab, needed a solution that could predict who is most at risk of becoming unhoused, enabling early intervention. The AI initiative studied in this case aims to leverage machine learning to identify at-risk individuals by analyzing variables such as emergency room visits, jail stays, and food assistance usage. My research explored the ethical implications, effectiveness, and potential biases of this system, focusing on how technology can be used responsibly to address social welfare challenges.
Impact & Recommendations
- Ongoing evaluations of the AI model to monitor and prevent bias
Bias, if left unchecked, could deepen existing inequalities. Regular assessments will ensure fairness and improve the system's reliability.
- Increase transparency measures around data use to build public trust
People deserve to know how their personal information is being used, especially when the stakes involve housing and personal security.
- Expand the scope of the AI system's metrics beyond housing retention
Also evaluate the psychological, social, and economic well-being of individuals over time to create a holistic approach to preventing homelessness.
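The "ongoing evaluations" recommendation above can be made concrete with a recurring drift check. The sketch below is purely illustrative: the scores are synthetic, and the PSI threshold of 0.2 is a common rule of thumb, not a value used by the real system.

```python
# Illustrative sketch of a recurring model-health check: comparing the
# current risk-score distribution against a baseline period. All data and
# thresholds here are assumptions for demonstration only.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between two score samples; > 0.2 is a common review trigger."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores
    b = np.histogram(baseline, edges)[0] / len(baseline)
    c = np.histogram(current, edges)[0] / len(current)
    b, c = np.clip(b, 1e-6, None), np.clip(c, 1e-6, None)  # avoid log(0)
    return float(np.sum((c - b) * np.log(c / b)))

rng = np.random.default_rng(1)
baseline_scores = rng.beta(2, 5, size=5000)   # scores at deployment time
current_scores = rng.beta(2.5, 4, size=5000)  # this quarter's scores (shifted)

psi = population_stability_index(baseline_scores, current_scores)
print(f"PSI = {psi:.3f}", "-> review model" if psi > 0.2 else "-> stable")
```

A shifted score distribution does not prove bias, but it signals that the population or the data pipeline has changed and that a fairness re-audit is due.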
Research Process
Why This Approach?
Given the sensitive nature of predicting homelessness, an ethical review was essential. My approach was grounded in assessing the fairness and transparency of the algorithm, ensuring that the system did not perpetuate biases or further marginalize vulnerable communities.
Research Focus
Question 1: How can AI be ethically applied to predict homelessness without violating privacy or reinforcing societal biases?
Question 2: What mitigation measures are necessary to ensure that the system remains fair and accurate for all communities, particularly marginalized groups?
Methods Used
Pattern Recognition and Data Analysis
I explored the system’s ability to recognize over 580 factors contributing to homelessness, such as emergency room visits and food assistance.
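To make the pattern-recognition step concrete, here is a minimal sketch of how a risk classifier can surface which administrative-record signals drive its predictions. The feature names, synthetic data, and model choice are all assumptions for illustration; the real system draws on more than 580 county service-use variables.

```python
# Hypothetical sketch: ranking administrative-record features by importance
# in a homelessness-risk classifier. Data and feature names are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["er_visits", "jail_stays", "food_assistance_months", "prior_evictions"]

# Synthetic stand-in for linked county records (2000 people x 4 features).
X = rng.poisson(lam=[2.0, 0.5, 3.0, 0.3], size=(2000, 4)).astype(float)
# Synthetic outcome: risk rises with service use (purely illustrative weights).
logit = 0.4 * X[:, 0] + 0.8 * X[:, 1] + 0.2 * X[:, 2] + 1.0 * X[:, 3] - 3.0
y = rng.random(2000) < 1 / (1 + np.exp(-logit))

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Feature importances hint at which signals drive predicted risk.
ranked = sorted(zip(features, model.feature_importances_), key=lambda t: -t[1])
for name, score in ranked:
    print(f"{name}: {score:.2f}")
```

Inspecting importances like this is one way an ethics review can check whether the model leans on proxies for protected characteristics rather than genuine risk signals.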
Ethical Framework Analysis
I examined the ethical frameworks guiding the system, including privacy concerns and the need for informed consent when using personal data.
Stakeholder Interviews
I conducted interviews with experts from the California Policy Lab and the LA County Department of Health to gain insight into the real-world application and ethical challenges of using AI in social services.
Findings & Insights
- While the system was effective in predicting homelessness, concerns emerged about privacy and the lack of transparency around how personal data was being used. Interviewees noted that individuals were often unaware that their data was being analyzed, raising questions about informed consent.
The ethical use of AI in social services hinges on transparency and accountability. Without it, AI systems risk undermining public trust, particularly among vulnerable populations.
- My analysis, supported by interviews with data scientists, revealed that although the model was not biased against historically disadvantaged communities, it required ongoing evaluation to prevent the introduction of bias over time.
In particular, false negatives (failing to identify at-risk individuals) could disproportionately affect marginalized groups. Preventing bias is critical in any AI system, but especially in one dealing with life-altering consequences like homelessness. Addressing these concerns upfront can reduce harm and improve system accuracy.
- The AI system helped 86% of participants retain housing, demonstrating its potential for scalability. However, human judgment and ongoing support remained necessary to complement the system's recommendations.
While AI can assist in identifying risk, human oversight is essential to ensure the recommendations are executed compassionately and effectively.
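The false-negative concern described in the findings above can be audited directly by comparing miss rates across demographic groups. The sketch below uses entirely synthetic groups, labels, and predictions; it shows the shape of the audit, not the real system's data.

```python
# Sketch of a bias audit: comparing false-negative rates (at-risk people
# the model missed) across demographic groups. All data here is synthetic.
import numpy as np

def false_negative_rate(y_true, y_pred):
    positives = y_true == 1
    return float(np.mean(y_pred[positives] == 0)) if positives.any() else 0.0

rng = np.random.default_rng(2)
n = 1000
group = rng.choice(["A", "B"], size=n)  # hypothetical demographic split
y_true = rng.random(n) < 0.3            # actually became unhoused

# Toy predictor that misses group-B positives more often -- exactly the
# failure mode this audit is designed to catch.
miss_prob = np.where(group == "B", 0.4, 0.15)
y_pred = y_true & (rng.random(n) > miss_prob)

for g in ["A", "B"]:
    mask = group == g
    fnr = false_negative_rate(y_true[mask].astype(int), y_pred[mask].astype(int))
    print(f"group {g}: FNR = {fnr:.2f}")
```

A persistent gap between the two rates would mean one group's at-risk members are systematically overlooked, which is why this check belongs in the recurring evaluations rather than a one-time review.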
Reflection & Lessons Learned
This case study demonstrates the immense potential for AI to drive social good, but also shows the need for responsible, ethical AI practices. From ensuring data privacy to preventing algorithmic bias, my work reinforces the idea that although technology can assist in solving societal issues, it must be human-centered and transparent to succeed.