photo: PopTika/shutterstock

Our 2018 Philanthropy Awards named artificial intelligence as the year’s “Hottest Science Giving Trend,” writing that “year after year, machine learning and other forms of AI have been a white hot topic among research funders, and those concerned about legal and ethical implications.”

With the new year upon us, I’d like to take this idea a step further: 2019 will find philanthropic funders, domestic and foreign governments, and businesses dramatically scaling up their investments in AI. And a key question looking forward is whether donors worried about the “legal and ethical implications” of AI can possibly keep pace with the unrelenting profit-driven forces of private industry.

I’ll explore these issues momentarily. But first, let’s turn to an instructive case study out of Boston, where Northeastern University announced a $50 million endowment gift from alumnus and trustee Amin Khoury in support of its College of Computer and Information Sciences, which will be renamed Khoury College of Computer and Information Sciences. The gift, which aims to “advance the frontiers of knowledge in the age of artificial intelligence,” comes less than two months after Stephen Schwarzman made a $350 million AI gift to neighboring MIT.

“Given the growing importance of artificial intelligence, machine learning, robotics, and cybersecurity in global economic activity,” reads Northeastern’s press release, “the university’s academic plan, Northeastern 2025, envisions a world in which humans are empowered to become agile learners, thinkers, and creators beyond the capacity of any machine.” During a private meeting with computer science faculty and staff, Khoury suggested that he was doubling down on that vision.

“Imperative for Success”

Khoury is an accomplished entrepreneur who earned his MBA at Northeastern and launched companies in a variety of fields, including medical products, medical services, aerospace manufacturing, aerospace distribution, and oilfield services. He was a trustee of the Scripps Research Institute and also serves on the board of directors of the West Palm Beach, Florida-based Raymond F. Kravis Center for the Performing Arts and is the chair of the center’s Investment Committee.

In 2017, Khoury sold his company BE Aerospace to Rockwell Collins (now part of United Technologies Corporation) in a multi-billion-dollar deal. A year later, he sold his aerospace distribution business to the Boeing Company in another multi-billion-dollar transaction. His net worth stands at roughly $2.8 billion.

He and his wife, Julie, met at Northeastern while she was also earning her MBA. In 2017, Khoury was honored with Northeastern’s inaugural Distinguished Entrepreneur Award. In 2003, the couple established the Amin J. and Julie E. Khoury Endowed Scholarship Fund for undergraduate students pursuing technological entrepreneurial studies, which has granted scholarships to 22 students.

Northeastern’s Khoury College of Computer and Information Sciences is a national leader in cybersecurity and privacy. Mirroring exploding demand for computer science nationwide, and in the Boston region in particular—40 percent of MIT undergrads are majoring in computer science or pursuing a joint major that includes it—computer science enrollment at Northeastern has skyrocketed over the past decade, and now stands at 3,474 students.

The gift finds Khoury—and I’m quoting the press release here—“anticipating trends and making transformative decisions” as AI increasingly interfaces with fields like cybersecurity and privacy. This isn’t a coincidence. As a recent Investor’s Business Daily piece notes, computer security has become an AI hot spot for tech companies concerned that bad actors will use AI to launch more potent cyber attacks. What’s more, many U.S. companies face a shortage of computer security personnel to thwart and detect threats, which helps to explain the proliferation of university cybersecurity gifts across the past few years.

Sounding the Alarm

On the surface, the AI fundraising boom resembles other higher ed areas where donors seek to make the recipient university a magnet for researchers and students. Recent examples include Kenneth Griffin’s $125 million gift to the University of Chicago’s economics department, a spate of gifts for medical research and patient care, and the University of Washington and Carnegie Mellon University’s efforts to build new engineering buildings.

But the burgeoning AI space is more than your typical donor-driven higher ed arms race.

First, a handful of disproportionately influential billionaire donors control the funding. A recent Chronicle of Higher Education study found that since 2015, nine wealthy donors have given a total of about $583.5 million to nonprofit institutions that are developing new AI tools and studying the effects of AI on human life. (The analysis did not include Khoury’s gift.)

Roughly 81 percent of the $583.5 million given since 2015 came from just two donors: Stephen Schwarzman and the late Paul Allen, who gave $125 million to his Allen Institute for Artificial Intelligence, a Seattle nonprofit he launched in 2013. This should be a cause for concern in an era in which mega-donors exert a disproportionate influence over society.

Second, if we’re to believe AI’s critics, the stakes are unimaginably high. How high? Nothing less than the possible extinction of the human race, according to Stephen Hawking. The same can’t be said about a University of Chicago economics class on Milton Friedman (though some progressives would beg to differ).

This distinction helps to explain the surge in gifts earmarked to ensure the responsible and ethical development of AI technology, many of which are coming from techies who understand the gravity of the threat.

Examples include an AI-centered philanthropic fund, formed with a $27 million pool of donations from the Knight and Hewlett Foundations, Reid Hoffman, the Omidyar Network, and investor Jim Pallotta. Hoffman also gave nearly $2.5 million to the University of Toronto to create a professorship to study how future artificial intelligence will affect people’s lives.

Elon Musk, who called AI “humanity’s biggest existential threat,” gave $10 million in 2015 to the Future of Life Institute, a research organization that studies the risks from advanced technologies and seeks to harness them in ways that will help people flourish rather than self-destruct. In 2016, the Open Philanthropy Project, anchored by the wealth of Dustin Moskovitz and Cari Tuna, made a $5.5 million gift toward the launch of the Center for Human-Compatible Artificial Intelligence.

Schwarzman, meanwhile, urged MIT president Rafael Reif to focus on the ethical issues raised by automated decision-making in everything from medical diagnosis to self-driving cars, as well as the workplace impact. “We really need to try to understand this technology, not just get hit by it,” he said. Last year, Schwarzman also gave $5 million to the Harvard Business School to support the development of case studies and other programming that explore the implications of artificial intelligence on industries, business, and markets.

And in a preview of Khoury’s gift, earlier this year, Northeastern University commissioned a survey exploring Americans’ views of AI. The survey, conducted by Gallup, found that most U.S. adults have an overall positive view of AI, but believe they are ill-prepared to deal with AI’s expected impact on the global digital economy.

“The answer to greater artificial intelligence is greater human intelligence,” said Northeastern president Joseph E. Aoun. “The AI revolution is an opportunity for us to reimagine higher education—to transform both what and how we teach. If colleges and universities can adapt and modernize, we can ensure that tomorrow’s learners will be robot-proof.” Northeastern also rolled out a video in conjunction with Khoury’s gift further elucidating its vision for AI in the world of higher education and beyond.

A Burgeoning Multi-Trillion-Dollar Industry

While donors’ desire to pump the brakes on AI is certainly heartening, the relentless march of technology and markets suggests their efforts may be quixotic at best.

A recent KPMG report found that venture capital (VC) investment in AI doubled to $12 billion in 2017. VC investment in China climbed to a record high of $40 billion in 2017, up 15 percent from the previous year. China’s 2030 plan envisions a $1 trillion artificial intelligence industry. Meanwhile, the U.S. military plans to spend $2 billion on “next generation” AI.

In comparison, AI-related gifts by donors across the past four years—not all of which, mind you, have been earmarked for oversight—are a drop in the bucket.

As for the tech giants, Facebook continues to invest heavily in AI, Amazon is rebuilding itself around AI, and, as one would expect, Microsoft, Google, and Apple are all betting big on the technology. Gartner, Inc. estimates that $3.9 trillion in AI-derived business value will be created by 2022.

If Facebook’s recent missteps have taught us anything, it’s that tech companies, faced with the ever-looming specter of quarterly earnings reports and buttressed by acquiescent politicians in Washington, believe that when it comes to rolling out potentially disruptive—if not downright destructive—technology, it’s easier to ask forgiveness than permission. In other words, it would be a mistake to assume that tech companies and the Chinese government are uniformly driven by the common good.

Compounding this issue is that unlike other sectors of society, AI isn’t regulated by a governing body. For some commentators, this is a problem. “I’m increasingly inclined to think there should be some regulatory oversight [of AI], maybe at the national and international level,” Musk said. But this government oversight, should it even come to fruition, will take years to develop, leaving civil society as a vital bulwark against the downsides of AI.

Prescient Funders

While some imagine AI as a sinister agent that will usher in some Terminator-like dystopian hellscape, AI’s more certain impacts on society are likely to be plenty disruptive. A recent report by McKinsey Global Institute claims that as many as 800 million jobs—and 73 million in the U.S. alone—could be lost worldwide to automation. (To be fair, some research suggests AI and automation will actually generate a net gain in job growth.)

Among the first casualties in this shift will be those who drive for a living, as autonomous vehicles eliminate jobs in trucking and ride-hailing services that are often held by less-skilled men—the same group that’s already been whacked by other economic changes ushered in by automation and globalization. These job losses could help to further stoke the fires of populism and resentment of elites. Given the past failure of philanthropy to tune into the devastation of America’s white working class, and how that story has played out, you’d think that any number of funders would be looking just over the horizon at how the disappearance of millions more jobs might play out—both in people’s lives and in politics.

But so far there’s not a lot of funding action in this space. The Ford Foundation’s initiative on the Future of Work, for example, seems only partly concerned with the elephant in the room that is automation, while many of the foundations operating in the workforce development space don’t seem to be paying much attention to this trend.

Perhaps unsurprisingly, many of the donors who are paying attention hail from the same tech industry that’s about to put millions of Americans out of work. Last year, for instance, Keith Block, vice chairman, president and COO of Salesforce, and his wife, Suzanne Kelley, VP of operations & PMO, global business units at Oracle Corporation, gave Carnegie Mellon $15 million to examine “the future of work” and the “impact of technology on the ways in which workers at all skill levels will make their living in the 21st century.”

Meanwhile, any number of techies—from Mark Zuckerberg downward—have expressed interest in a universal basic income. These elites know what’s coming in terms of technology and, clearly, they’re afraid that the pitchforks may soon follow.

The Dystopian Future Is Already Here

But getting back to the higher ed fundraising space, it’s easy to see why donors are so bullish on AI. Alumni donors want to see their alma maters take the lead in a surging research field, equip students with critical skills, and attract top-flight faculty. But as this juggernaut moves forward, it will be important for universities to keep an eye on the larger issues raised by AI.

This is why the Northeastern and MIT gifts are so important. As for the latter, it will be interesting to see how MIT’s work evolves from press release platitudes—“technological advancements must go hand in hand with the development of ethical guidelines,” said president Reif—to more concrete action items.

Meanwhile, concerned donors making gifts explicitly earmarked for AI oversight will have to dig a lot deeper if they wish to catch up with actors driven by profit and geopolitical dominance—and they need to do it soon. The genie’s already out of the bottle.

Writing in the New York Times, Timothy Egan looked, in his piece “The Deadly Soul of a New Machine,” at how the recent crash of Lion Air Flight 610 was caused by the plane’s AI-driven “cyber-pilot.” After takeoff, the cyber-pilot “sensed something was wrong” and started to force the plane down (i.e., “automated decision-making”). The human pilot tried to intervene and stabilize the plane. The cyber-pilot overrode the human pilot, and the jetliner crashed, killing all 189 people on board.

Egan argues that AI and social media algorithms—our modern-day Frankensteins—have already usurped their human creators. The dystopian future has arrived, and donors would be wise to heed Egan’s warnings.

“It’s not Luddite,” he writes, “to see the be-careful-what-you-wish-for lesson from Mary Shelley’s era to our own, at the cusp of an age of technological totalitarianism. Nor is it Luddite to ask for more screening, more ethical considerations, more projections of what can go wrong, as we surrender judgment, reason and oversight to our soulless creations.”
