The Responsible AI for DRM report highlights ethics and responsibility concerns in AI- and ML-supported projects, such as algorithmic bias, lack of transparency, privacy risks, and reduced roles for local participation and expert judgment.
When community mapping meets artificial intelligence
Artificial intelligence can help with geospatial data collection, and those data can save lives. But AI can also have unintended consequences for marginalized groups. That’s where Responsible AI comes in.
Open Cities AI Challenge: Segmenting Buildings for Disaster Resilience
The Open Cities AI Challenge, organized by GFDRR with Azavea and DrivenData, recently concluded with over 1,100 participants, 2,100 submissions, and $15,000 in total prizes awarded. Across two competition tracks, the Challenge produced global public goods — open-source data, code, research, and know-how — that will support mapping efforts for disaster resilience.
Perspectives on Responsible AI for Disaster Risk Management
Machine learning (ML) can improve data applications in disaster risk management, especially when coupled with computer vision and geospatial technologies, by providing more accurate, faster, or lower-cost approaches to assessing risk. At the same time, we urgently need to develop a better understanding of the potential for negative or unintended consequences of these technologies.
Machine Learning for Disaster Risk Management
This guidance note explains how the World Bank Group uses machine learning algorithms to collect better data, make more informed decisions, and, ultimately, save lives.