The Responsible AI for DRM report highlights the ethics and responsibility concerns in AI- and ML-supported projects, such as algorithmic bias, transparency and privacy issues, and reduced roles for local participation and expert judgment.
Artificial Intelligence can help with geospatial data collection — and those data can save lives. But AI can also have unintended consequences for marginalized groups. That’s where Responsible AI comes in.
The Open Cities AI Challenge, organized by GFDRR with Azavea and DrivenData, recently concluded with over 1,100 participants, 2,100 submissions, and $15,000 in total prizes awarded. Across two competition tracks, the Challenge produced global public goods — open-source data, code, research, and know-how — that will support mapping efforts for disaster resilience. This includes: Increasing…
Machine learning (ML) can improve data applications in disaster risk management, especially when coupled with computer vision and geospatial technologies, by providing more accurate, faster, or lower-cost approaches to assessing risk. At the same time, we urgently need to develop a better understanding of the potential for negative or unintended consequences of its use. The…
This guidance note explains how the World Bank Group uses machine learning algorithms to collect better data, make more informed decisions, and, ultimately, save lives.