Machine learning (ML) can improve data applications in disaster risk management, especially when coupled with computer vision and geospatial technologies, by providing more accurate, faster, or lower-cost approaches to assessing risk. At the same time, we urgently need a better understanding of the potential negative or unintended consequences of using these technologies.
The Responsible AI track of the Open Cities AI Challenge asked participants to consider the applied ethical issues that arise in designing and using ML systems for disaster risk management. How might we improve the creation and application of ML to mitigate biases, promote fair and ethical use, inform decision-making with clarity, and build safeguards to protect users and end beneficiaries? The three winning submissions below examine the practical ethics of ML and its impacts on data for urban decision-making.
Fairness in Machine Learning: How Can a Model Trained on Aerial Imagery Contain Bias?
Stop Pretending Technology Is Value-Neutral
Thomas Kavanagh and Alex Weston
Contributed Geographic Information: Gray Zones in Collection and Usage