Over the past few months, we have seen an avalanche of news and analysis about artificial intelligence (AI) and how it threatens to reshape how we work and access information as profoundly as the internet once did. But don't worry; today, we won't talk about Skynet and the war between machines and humans. Instead, let's take a look at the opportunities and risks AI offers nonprofits, how AI regulation is taking shape, and how it could involve our organizations.

First, we must understand that AI is a broad family of technological developments that, in general terms, aim to enable machines to learn and perform tasks that previously only a human could accomplish. This technology is not new; it has been developed since the 1950s, hand-in-hand with data science. If you want some nerdy entertainment, I recommend this IBM post about the history of AI and this MIT resource about the relationship between data science, machine learning, and AI.

In the nonprofit world, the conversation about the role of AI is all the rage. For example, The Chronicle of Philanthropy reports that during the last Nonprofit Technology Conference, held in April of this year, “Few events attracted as much attention as a smattering of sessions that explored the potentially transformative application of artificial intelligence, and other simpler forms of automation, to the nonprofit world (…) ‘A.I. has been coming for decades, and suddenly A.I. is here overnight,’ Alex Kasavin, senior product manager at Microsoft, told a rapt audience, before explaining how new and emerging A.I. tools could help nonprofits compose emails, manage relationships with donors, and synthesize meetings.”

Artificial intelligence also offers some nonprofit organizations an alternative for addressing issues like workforce shortages or resource limitations, because it provides a way to automate tasks such as response emails, donor interaction, data processing, report preparation, and image, audio, and video creation, among many others.

When Christopher Washington, Executive Vice President and Provost of Franklin University, asked ChatGPT how AI could impact nonprofit organizations’ operations, the answer was strikingly on target: “Improved efficiency: AI can help automate certain tasks and processes, freeing up small staff with limited resources to focus on more important and value-added activities. Enhanced decision-making: AI can help nonprofit organizations analyze large amounts of data and extract insights that can inform decision-making. Increased donor engagement: AI can help nonprofit organizations better understand donor preferences and behaviors and tailor their outreach and fundraising efforts to better meet the needs and interests of donors. Improved communication and outreach: AI can analyze data on the effectiveness of different channels and messages to better understand what works and what doesn’t, and optimize their communication strategies.”

However, AI also poses a significant number of risks for nonprofit organizations. On one hand, the nonprofit sector could experience job losses due to the automation of tasks that a human being used to do, like writing this blog, for example. According to Goldman Sachs, approximately 300 million jobs around the world could be automated. Nevertheless, from a historical perspective, technological disruptions have created more new jobs than they have destroyed. For example, between 1910 and 1950, the arrival of the automobile was a threat to the employees of the horse economy. In the end, the balance was positive: 7.5 million new jobs were created and only around 623,000 were lost, with the benefit of increased productivity and reduced travel times for society. It remains to be seen whether this pattern will continue.

But in addition to the changes in the labor market, AI brings serious risks around fake content, intellectual property, and discrimination. Here are some examples: the famous fake photo of Pope Francis in a puffer coat; the case of Andersen v. Stability AI et al., in which three artists are suing artificial intelligence companies for using their work to create new content; and algorithmic wage discrimination against drivers.

However, regardless of your opinion about the virtues or risks of AI disruption, two things are clear: one, it is a reality, and two, public policymakers know it. In October 2022, The White House released a “Blueprint for an AI Bill of Rights,” which offers a roadmap for the federal government to respond to the use of artificial intelligence and its possible risks in terms of safety, algorithmic discrimination, and data privacy, along with clear notice when AI is being used and access to human alternatives.

In Congress, the House Committee on Oversight and Accountability hosted a hearing about AI on March 8. The same day, on the Senate side, the Homeland Security and Governmental Affairs Committee convened a hearing titled “Artificial Intelligence: Risks and Opportunities,” where one of the conclusions was that “Some AI models, whether because of the data sets they are trained on or how the algorithm is applied, are at risk of generating outputs that discriminate based on race, sex, age, or disability.” And there is a term on which the nonprofit sector must reflect carefully: data privacy.

Data is the main fuel of AI. As you might remember from our previous blog, we have been calling attention to the need for a conversation about data privacy and its regulation. State regulation is in motion, and the possibility of reintroducing the American Data Privacy and Protection Act is on the table. On the other side of the Atlantic, the European Union is already considering dedicated AI rules to complement the data protections of the GDPR.

The question worth asking is whether the debate about AI regulation in the US will prompt federal regulation on data privacy, and what that regulation would mean for nonprofits. New rules in this area could mean more protocols, requirements, and resources needed to comply with the law, and could limit or restrain nonprofits’ access to this new technology. They also could provide a useful framework to prevent the misuse of all the capabilities that AI offers. As for Independent Sector, we will continue to be attentive to how this discussion evolves and how it could affect the operation and future of the nonprofit sector – and we’ll keep you informed.

Have you seen additional opportunities and risks that AI presents for nonprofits? We’d love to hear from you.

Manuel Gomez is Manager, Public Policy at Independent Sector.

The post Artificial Intelligence: Why the nonprofit sector should pay attention appeared first on Independent Sector.