The general trend in tech writing has been towards future-casting, or synthesizing a leadership lens with return on integration (rather than investment). Verity Harding, however, turns on the heels of history to cast a glance forward while giving full treatment to our tech-influenced past. Her respect for our inevitable spin into another revolution on the fulcrum of AI is among the most important underpinnings of her new book, AI Needs You: How We Can Change AI’s Future and Save Our Own. Writing on the impacts of what she terms nascent technologies, AI among them, often trades in the currency of prediction: it may rely on history for comparison, yet fails to justify leaning on a Magic 8-Ball in the creation of nation-shaping policy. Harding’s book, by contrast, demonstrates a deep respect for where we are: at the nexus of another technological revolution, with a wealth of recent experience in the peaks and valleys of similar shifts.
Populist, Republican, and Conservative movements tend to call for small government, and twenty-first-century oligarchic brands of libertarianism demand even less regulation. Harding’s voice is a clarifying salve, balancing a dominant technological discourse that leans towards the ends of profit rather than the means of an AI that might benefit everyone. Using the space race, IVF and stem cell policy, and pre- and post-9/11 technology in service of national defense and “public safety”, she tells the story of our technological moment. She also resists fearmongering narratives that cast artificial intelligence in a villain origin story marking the beginning of the end of our humanity. AI Needs You delineates a historical precedent worth examining as we turn toward the future.
Historical champions of small government, among them Margaret Thatcher, George W. Bush, Ronald Reagan, and Richard Nixon, feature prominently in AI Needs You. As world leadership takes a more conservative turn that favors similarly minimal oversight, there is no small genius at work in an analytical treatment of prior outcomes under similar partisan dominance. One of the book’s strengths is that it speaks across the political spectrum: Harding has produced a well-researched policy case for regulation that commands our attention.
It’s a book best understood by way of the questions it asks and tarries with, leaving enough room for applicability regardless of political stripe or socioeconomic rank. Among its questions: What governance methods did polities use during previous technological leaps, and how did they serve the interests of the people? How were governments influenced by corporate or outside interests, if not led by them altogether, and how has that worked out for us, whether by direct outcome or legal precedent? Who got left behind when for-profit, deregulatory policies were prioritized during other society-altering technological revolutions? What bridges needed to be built between party majorities in government (e.g. Republican and Democrat) to ensure a consistent focus on the public interest amidst significant change? How did once-hardline leaders change their tune over time (as with Margaret Thatcher’s pivot on IVF legislation where it met with stem cell research)? And, relatedly, are governments creating policies with fewer for-profit loopholes and more windows of opportunity to pivot toward majority interests rather than minority funding interests as AI evolves and our understanding of its impact grows?
Harding reminds us that there is reason to hope: we’ve been here before (at least from a Global North perspective). She highlights the importance of advocating for regulatory policy that ensures AI does not integrate, absent consent and in the name of the ever-nebulous “public safety”, with a for-profit surveillance complex (any more than it already has, à la Surveillance Capitalism and as outlined in Atlas of AI). She also offers Canadian readers reason to hope that our leaders are more attentive to these issues than not. Harding cites a dinner with Prime Minister Trudeau’s former Chief of Staff Gerald Butts, who spoke of an urgent need to make the right policy decisions regarding AI “before it’s too late” (p. 57): in other words, before it moves beyond the reach of government policy and regulation oriented toward the public good rather than private profit. Work has already begun, with the Blueprint for an AI Bill of Rights in the United States and the AI Act in the European Union. Her book demonstrates the necessity of building shared definitions of the public good; without them, we may struggle to build consensus about what merits protecting (not least what it means to be human) and how to define our freedom to decide how we engage with AI.
The adage is that we tend to fear what we do not know, but Harding demonstrates that we have reason to fear political ignorance of what we do know. The book uses the space race, IVF science, and pre- and post-9/11 encounters with technology to demonstrate that policymakers know more than they may let on (or at least have reason to know more). With AI Needs You, Harding ensures that it isn’t only government leaders who have reason to know more; thanks to her book, we too can ask better questions of our leaders as the future unfurls and folds into the present. There is much uncertainty about whether we need AI, but Harding’s title acts as a clarion call. What we do know about AI is that its presence requires our active participation to ensure it carries us into the future rather than holding us hostage to our ignorance: AI needs us.