Moat was Highly Commended in the Artificial Intelligence category of the Housing Technology Awards 2026.
Seeing harm only after it has formed
Damp and mould rarely presents itself as a single, dramatic event. More often, it develops quietly through repeated repairs, ageing stock or patterns that only become obvious in hindsight. By the time a case is formally logged, whether through a resident’s report, an inspection or a sequence of reactive visits, the opportunity for an early, proportionate intervention may already have narrowed.
Across social housing, this creates an uncomfortable reality. Housing providers can respond diligently once a problem is visible, yet they still struggle to answer a harder question: are we seeing risk early enough to act before conditions escalate?
At Moat, this tension became increasingly clear. The challenge wasn’t effort or intent. It was that the signals of emerging damp and mould risk were fragmented across our systems, teams and time.
Resisting the instinct to jump straight to AI
Faced with this challenge, there was an understandable temptation to jump straight to predictive analytics. AI promised earlier signals and sharper prioritisation.
However, we decided to postpone that enthusiasm until we were confident the basics would survive it.
Instead, we stepped back and asked a more basic question: were our processes and data reliable enough to support any form of prediction at all? Without that foundation, any model, however sophisticated, would struggle to earn our trust or deliver meaningful change.
This choice set the direction for the work that followed.
Investing in discipline, not novelty
In 2023, our damp and mould handling was typical of many organisations. Cases were managed with care but the workflows varied, triage was inconsistent and any insights were largely retrospective. Patterns such as repeat visits, long-running cases or emerging hotspots were difficult to see until they became acute.
Our first step was intentionally unglamorous. It involved standards, consistency and follow-through, which rarely attract much excitement but tend to matter later.
We introduced a structured, low-code workflow to ensure damp and mould cases were logged consistently, triaged in a standard way and then followed through with clear audit trails.
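To make the shape of that workflow concrete, the sketch below models a consistently logged case with a standard triage band and a timestamped audit trail. The field names and triage bands are illustrative only, not Moat's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Triage(Enum):
    """Standard triage bands applied to every new case (illustrative)."""
    URGENT = 1
    STANDARD = 2
    MONITOR = 3

@dataclass
class DampMouldCase:
    """One consistently logged damp and mould case with an audit trail."""
    property_ref: str
    reported_via: str          # e.g. "resident", "inspection", "reactive visit"
    triage: Triage
    audit_trail: list = field(default_factory=list)

    def log(self, actor: str, action: str) -> None:
        """Append a timestamped entry so every step is traceable."""
        self.audit_trail.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
        })

case = DampMouldCase("PROP-001", "resident", Triage.URGENT)
case.log("triage_officer", "case logged and triaged")
case.log("surveyor", "inspection booked")
```

The value of a structure like this is less the code than the discipline: every case carries the same fields, so later reporting and modelling can trust the data.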
The trade-off was time and focus. This work didn’t feel innovative and delivered little immediate insight. However, it removed a critical source of uncertainty: whether the underlying data and processes could be trusted at all. Only after this baseline was in place did it make sense to move forward.
Turning hindsight into shared visibility
In 2024, with workflows embedded, we focused on visibility. A Power BI reporting suite brought together case volumes, repeat visits, property archetypes and geographic clustering.
For the first time, our teams could see clearly where damp and mould cases were concentrating, how long they stayed open and which homes were experiencing repeat interventions. Operational conversations shifted from anecdote to evidence, enabling more informed planning and prioritisation.
This was transformational in its own right but it also exposed a limitation. The data showed where problems had already occurred but it didn’t show where risk might be quietly building next; that gap became the catalyst for our next phase.
Using AI as decision support, not authority
By 2025, working closely with Moat’s property services department, we asked whether historical patterns could help to identify homes with a greater risk of damp and mould before problems were reported.
This was framed explicitly as decision support, not automated decision-making. Any signal would be used to prioritise inspections and conversations, never to replace professional judgement.
We combined data from multiple sources, including repairs history, property characteristics, EPC data, voids information, CRM records and selected environmental indicators. Features such as construction era, property type, repeat repair patterns and previous damp-related activity were engineered as potential risk indicators.
We tested several machine-learning (ML) approaches using Python, selecting XGBoost for its balance of performance and interpretability. Model development and evaluation were tracked using MLflow, creating a transparent record of experiments, metrics and versions.
From the outset, explainability was non-negotiable. Damp and mould is a safety-critical area and opaque predictions would not be acceptable to surveyors or residents. We embedded SHAP explainability into the model so our staff could see which factors most influenced each risk score.
In practice, this allowed the surveyors to understand why a home was flagged and to challenge the output where professional judgement suggested otherwise. All outputs remain subject to human review and the model’s role is clearly advisory.
What responsible AI looks like in practice
The model now highlights properties with elevated risk indicators, enabling earlier inspections and more proactive engagement. However, surfacing risk is only half the battle. Integrating these signals into an environment stretched by reactive demand and finite surveyor capacity is a significant operational hurdle.
We’re currently navigating the tension between what the data tells us and what the service has the capacity to do. Crucially, this creates a virtuous cycle. Every time a surveyor validates or challenges a prediction, that professional judgement is captured as ‘labelled data’.
We aren’t just predicting risk; we’re building a feedback loop where human expertise refines the algorithm. The model doesn’t just support the surveyor; the surveyor matures the model.
This collaboration has grounded our conversations about risk. Three lessons stand out:
Trust the foundation: AI only works when the workflows and data are already trusted. Business intelligence delivered value long before predictive analytics entered the picture.
Explainability is non-negotiable: In safety-critical contexts, transparent explanation methods (such as SHAP) build the trust staff need to challenge the outputs.
Respect the maturity journey: Moving from structured workflows to BI and then to ML reduces risk and increases adoption. It is an incremental evolution, not a leap.
Predictive analytics will never replace professional judgement in social housing, nor should it. But when built on strong foundations, it helps us have evidence-based conversations about how we prioritise our most limited resource: time.
For us, the most important outcome isn’t the model itself, but the confidence that comes from knowing why a home is flagged and who remains accountable for the decision that follows.
That is what responsible AI looks like in practice.
Chi Fox is the head of data and business systems at Moat.