How AI intersects with social good

Over the past two years, AI has had a transformational effect on the business landscape. The technology’s potential to increase creative output, enhance worker productivity, and eliminate jobs has been debated ad nauseam. But one aspect of AI’s proliferation has flown under the radar, or at least not gotten the attention it deserves: how it will affect social impact.

At Fast Company’s annual Impact Council meeting, which took place on June 3 at the New York Stock Exchange, we brought together a collection of top business leaders from within the Impact Council and elsewhere to explore whether this transformative wave of technology can unlock creative solutions and help mission-driven companies achieve their goals beyond the bottom line. How can it be used to build a diverse workforce and an equitable workplace? How can AI enhance productivity in a way that makes work not just more efficient but also safer? How can it be harnessed in a way that’s consistent with a company’s values? This experienced group came at these questions from various perspectives—but with a consistent focus on practical solutions. Here are some of their insights.

[Portrait illustrations: Kagan McLeod]

Navrina Singh, founder and CEO, Credo AI
“We really need a multi-stakeholder oversight mechanism for artificial intelligence. What we are seeing is if you just put in AI experts as the stakeholders of managing oversight, they are going to be far removed from the business outcomes, such as reputational damage or regulatory risk.”

Amy Webb, founder and CEO, Future Today Institute
“AI’s energy demands pose a paradox, offering climate solutions but also contributing to carbon emissions, a concern in energy-constrained areas. The unequal distribution of AI advancements risks deepening global inequalities, with the Global South facing significant disadvantages.”

Tara Chklovski, founder and CEO, Technovation
“Currently, less than half the workforce is equipped to use generative AI technology, but I’ve seen a huge increase in the number of students explicitly showing interest in AI. [Meanwhile], there are only 3 million women in tech across the world—and as a result, the [AI] solutions you see are very narrow.”

Jon Cook, global CEO, VML
“I’m more excited about AI than afraid of it, but I can’t project those feelings onto my employees, who may be worried about their job security. AI will be able to handle repetitive tasks and let employees focus on their creativity and come up with new ideas.”

Greg Harrison, chief creative officer, MOCEAN
“AI is so great at interrogating data. But I think the elephant in the room is the training data. There are legal and ethical concerns there. That feels wrong to me, as someone who has been in maker culture for 30 years. We’re handing tools to creators that work on the backs of the outputs of other creators. That has not been resolved legally.”

Laura Maness, global CEO, Grey
“[AI] feels systemic in a way. It doesn’t feel like a tool looking for a problem to solve. It definitely feels like a new foundation from which to build.”

Amy Merrill, cofounder, Plan C and Eyes Open
“It seems increasingly like Google AI summaries are going to be the new way we access information online. It’s convenient, but it can spread misinformation. We don’t know what resources the AI is trained on. This can lead to confusion when people want to access information about abortion. What abortion headlines is the AI looking at? It’s total chaos.”

Raphael Ouzan, cofounder and CEO, A.Team
“Generative AI has reminded people that most companies have no idea how to build digital products. It’s not just about AI, but how do you use building blocks of AI to create new workflows that serve a particular purpose? This new era that we’re getting into [will allow] builders to put together these building blocks.”

Disney Petit, founder and CEO, LiquiDonate
“The nonprofit sector is often left behind. AI doesn’t seem to be accessible for those spaces. Nonprofits don’t really have the time or the expertise or the financial stability to invest in these technologies. So, how do we make sure that their needs are being taken into consideration?”

Kojin Oshiba, cofounder, Robust Intelligence
“As with traditional software, there is immense value in promoting the development of open-source AI models. That does, however, come with specific risks, so it’s essential that advancements are made in parallel to ensure the safety and security of companies using such models. This includes the ability to develop AI security standards and efficiently operationalize them with deep tech.”

Robert Sheen, CEO, Trusaic
“HR can use AI to close the gap—to pinpoint what is causing discrimination in an organization, for example. But the onus is really on organizations leveraging these technologies to use them responsibly. The vendor is just providing a tool.”

Peter Smart, chief experience officer and managing partner,
