The topic of AI for social good had a large presence at the 2018 Conference on Neural Information Processing Systems, better known as NeurIPS. XPrize, a non-profit organization with a mission to enable a better, safer, more sustainable world, kicked off the workshop with a Sunday-morning presentation that provided an update on its $5M AI for Good competition. Joined by The Ocean Project, XPrize also addressed the ability of AI to solve humanity’s greatest problems. In short, AI has enormous potential to address real-world challenges, but it is critical that we mitigate any unintended consequences.
There were many talks on ethics, fairness, diversity, and inclusion in AI; nearly half of the keynotes at NeurIPS and a large number of workshops were devoted to the subject. In fact, Sean McGregor of XPrize provided conference attendees with a handy guide to some of the top talks, though it was hard to capture them all. This post (and a future post on ethics in AI) will provide a high-level overview of these topics at the conference. Our sincere apologies if we omit your favorite talk or poster, and even more so if that work is your own.
Though they are not mutually exclusive, we will break the content into projects that address the use of AI for social good and projects that address the ethical application of AI. AI for social good projects have the potential to positively impact people, animals, or the environment. Examples include projects that protect children from online predators, use drones to study the health of whales, help to grow crops more efficiently, or enable quadriplegics to control a motorized wheelchair using 3D facial gesture recognition. A common way to assess projects is to use the United Nations Sustainable Development Goals, or SDGs. The 17 SDGs exist as a way to “address a range of social needs including education, health, social protection, and job opportunities while tackling climate change and environmental protection.”
Over the course of the week, there were many projects that helped to address one or more of the UN SDGs. In particular, many were presented in the Black in AI, AI for Social Good, Machine Learning for the Developing World, Medical Imaging meets NeurIPS, and Machine Learning for Health workshops. The projects varied both in how they helped society and in the types of technological solutions they applied.
However, a few key takeaways were consistent across these projects. Above all, the project groups worked closely with the affected communities: they made sure the communities’ needs were well understood, confirmed that the communities wanted their assistance and understood what the groups were trying to do, and/or engaged local experts in close communication with the communities who could speak on their behalf. For instance, it can be relatively straightforward to build a computer vision classifier to identify whiteflies on a cassava leaf (or any leaf). The students and professors building such a classifier at the AI and Data Science research lab at Makerere University in Kampala, Uganda ensured the project’s success by working closely with farmers and with the scientists studying the crops. They built a solution that users needed, not one that researchers thought they needed. This was a common takeaway, and one that is very important to remember.
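To make the classifier example concrete, here is a minimal sketch of the kind of binary classification underneath such a system (healthy leaf vs. whitefly-infested), using synthetic feature vectors and plain logistic regression. This is purely illustrative: the Makerere team’s actual pipeline is not described here, a real system would use a convolutional network on photographs of leaves, and all data, dimensions, and labels below are made up for the example.

```python
import numpy as np

# Toy stand-in for leaf images: in practice these would be feature
# vectors extracted from photos of cassava leaves, labeled
# healthy (0) or whitefly-infested (1). Here we fabricate both.
rng = np.random.default_rng(0)
n, d = 200, 64  # 200 "images", 64 features each (e.g. flattened pixels)
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)          # hidden rule generating the labels
y = (X @ w_true > 0).astype(float)   # synthetic labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression trained by gradient descent -- a simple stand-in
# for the CNN a production leaf-disease classifier would use.
w = np.zeros(d)
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w)          # predicted probability of infestation
    grad = X.T @ (p - y) / n    # gradient of the cross-entropy loss
    w -= lr * grad

accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The technical step is the easy part; as the workshop projects emphasized, the hard part is gathering labeled data with the farmers and scientists who know what a damaged leaf actually looks like in the field.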
It was quite exciting to see all the ways the broader AI community is speaking about AI for social good. Intel was a proud sponsor of the AI for Social Good workshop, along with the WiML, Black in AI, and LatinX workshops, as well as NeurIPS itself. As the Head of AI for Social Good at Intel, I am thrilled to see the range of societal problems being tackled. You can find a comprehensive overview of all our NeurIPS activities here and our AI for Social Good projects here, and in a future blog I’ll provide an overview of #AI4Good’s second component at NeurIPS: Ethics in AI.