Finding the AI balance: how to use AI for good without trashing the planet and its people

As entrepreneurs eager to use artificial intelligence for the greater good grapple with AI’s negative impacts on people and planet, speakers at the ChangeNow conference offer practical solutions to create a truly sustainable AI – from driving inclusion to cutting carbon emissions.

Under the glass roof of Paris’s Grand Palais during the ChangeNow conference in April, dozens of people rushed to listen to a 190-minute discussion on one of the flagship themes of the event: “AI for impact”. 

In the audience were many impact entrepreneurs and professionals, and ideas for using AI as a tool to create positive social and environmental impact were not lacking. But a question remained: given AI’s well-documented negative effects, how can “AI for impact” also be sustainable AI?

Speakers at the session, AI at a Crossroad: Progress Comes with Responsibility – which featured entrepreneurs, academics and big business – offered some practical solutions.

 

1. Shape it

While AI has been criticised for its negative impacts, the technology is still in the making, which creates an opportunity to shape it for the better. 

To start with, the type of people involved in developing new AI tools will have a direct impact on their ethics, speakers argued. In particular, there was a risk that, because of a lack of diversity in the workforce developing AI, models were rife with biases detrimental to many communities, said Shani Gwin (pictured), founder of Canadian-based pipikwan pêhtâkwan, an Indigenous-owned, led and majority-staffed public relations agency.

She explained that her company was developing an AI assistant built in collaboration with Indigenous people in Canada, including a “guidance circle” of Indigenous elders from across the country. “We’re trying to ensure that it has the perspective of Indigenous people. A lot of our histories have been rewritten; AI tends to honour the perspective of a colonial white, cis-male. So when you go to ask it questions, it doesn’t necessarily know the truths of our people.”

AI tends to honour the perspective of a colonial white, cis-male

The availability of open-source technology – common in AI development, where developers share the source code of their software for all to use – created an opportunity for more diverse people to get involved, Gabriela Ramos, Mexico’s candidate for Unesco director-general, said in a pre-recorded video message.

“Initially, [AI] advancements were concentrated in the hands of few players with deep pockets in the US, in China, in the UK. Now… the recent open-source AI dynamics are contributing to trigger [a] turning point, one where a broader range of actors can engage in the development, and even the governance of these systems,” she argued.

Making sure to drive AI, rather than be driven by it, was key, said Antoine Petit, CEO of research organisation CNRS. “The main danger is to let AI standardise thinking.”

 

2. Be frugal

In this context, the best way to limit the damage was to take a “frugal” approach to AI, said James Martin, founder of online media organisation Better Tech. Frugal AI means using AI only when it’s necessary, and taking all available steps to keep models as “lean” as possible – for example, by not storing unnecessary data, or by using smaller models that are less energy-hungry.

French authorities have issued guidelines on what frugal AI looks like, and the government has committed to favour businesses demonstrating a frugal AI approach in public procurement, Martin added.

 

3. Educate

With artificial intelligence expected to affect 40% of jobs globally, educating people to develop AI skills will be crucial to manage its social impact.

“We need to skill people with new digital and AI skills,” said Anthony Virapin, worldwide leader at Microsoft Entrepreneurship for Positive Impact. Microsoft is deeply involved in AI development, including with its own AI assistant Copilot, and last year made a US$13.75bn investment in OpenAI, the company running the now-ubiquitous ChatGPT. 

Virapin (pictured) said it was “part of our responsibility to help people and train people to be ready for this next generation of jobs”. Microsoft had to date provided AI training and qualifications to 14m people around the world, with a focus on underserved communities, he claimed.

Reaching underserved communities was no easy feat, however, Indigenous entrepreneur Shani Gwin reminded the audience. “[In] Canada and the US, a lot of our folks [Indigenous people] do not have access to high-speed internet, so we can’t actually even try to play around with AI, never mind trying to develop our skills,” she said.

Building relationships and creating opportunities for these communities to learn about AI and what it can do is “really important” as a first step towards equipping people with AI skills, she added.

 

4. Disclose the impact  

The first step to tackle the impact of artificial intelligence is to measure it – but information from key players is lacking, according to speakers.

Data on broader trends is available: energy consumption by data centres is expected to double by 2030 – growing four times faster than global energy consumption – according to a report from the International Energy Agency using a “scenario-based approach”.

The main danger is to let AI standardise thinking

However, Better Tech’s Martin explained that big AI platforms such as OpenAI did not report their negative impacts, making it difficult to assess how they could be tackled.

“The most impactful thing we could start with is measuring,” Axelle Lemaire (pictured), group executive at IT consultancy Sopra Steria and a former French minister, said. “Because the reality is, we can share some facts and figures, but we do not have access to the environmental information from the AI platform providers [such as OpenAI].”

In particular, providers do not disclose the impact of training AI models to perform their tasks. “We do not have the carbon impact of that training, to the extent that we altogether do not realise now the reality of what's behind AI in terms of environmental impact.”

 

5. Regulate

Lemaire said regulating the use of AI would be a powerful tool to steer the private sector in the right direction. “Regulation for business is not necessarily a burden,” the former centre-left minister added, “and when it comes to ESG topics, we actually need it – not only to make sure that private companies work in responsible ways and produce value in responsible ways, but also, I’m convinced that it is a source of better competitiveness.”

Lemaire lamented that the EU’s recent AI Act, which aims to provide a governance framework for AI, including on ethics, saw its requirements on environmental impact diluted under corporate pressure.

“We, I think, need rules on the environmental impact of AI, and that could be a huge differentiator for companies who have activities in Europe compared with other regions of the world.”

 

Top picture: from left: Anthony Virapin, Shani Gwin and moderator Kristen Davis at the ChangeNow conference 2025. All pictures courtesy of ChangeNow. 
