So What Should We Do About AI? Let’s Count The Ways

The Society Library
Nov 23, 2023


We map debates. Currently, we’re mapping debates about AI. Here’s the hot take on the 11 main points of view.

For those who haven’t encountered us before, we’re a nonprofit that maps debates. That means we identify different points of view, then work to find all the evidence, argumentation, claims, and explanations that support and refute each side. We put them in a graph of linked data, render it as an interactive briefing document (among other formats), and make it searchable [links are to a 5,000-point collection about nuclear power in California]. We also link to all the references we extract from. Then we give the collections away for public educational use.
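For the technically curious, here’s a minimal sketch of the kind of structure we mean. This toy Python model is illustrative only: the node kinds, relation labels, and field names are simplified stand-ins, not our actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One unit in the debate graph: a position, argument, claim, or evidence."""
    id: str
    kind: str      # "position" | "argument" | "claim" | "evidence"
    text: str
    sources: list[str] = field(default_factory=list)  # reference URLs

@dataclass
class Edge:
    """A typed link: `src` bears on `dst` by supporting or refuting it."""
    src: str
    dst: str
    relation: str  # "supports" | "refutes"

class DebateMap:
    def __init__(self) -> None:
        self.nodes: dict[str, Node] = {}
        self.edges: list[Edge] = []

    def add(self, node: Node) -> None:
        self.nodes[node.id] = node

    def link(self, src: str, dst: str, relation: str) -> None:
        self.edges.append(Edge(src, dst, relation))

    def bearing_on(self, node_id: str, relation: str) -> list[Node]:
        """All nodes that support (or refute) the given node."""
        return [self.nodes[e.src] for e in self.edges
                if e.dst == node_id and e.relation == relation]
```

Everything that follows, from top-level positions down to individual pieces of evidence, can hang off a graph like this.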

Why do we do this? Sincere inquiry after truth.

We map at the societal scale. Basically, this means we aim to show what an entire state or country has to say about a debate, but we deduplicate, clean, steel-man, label, link, and perform as much fact-checking as we can, so the map isn’t filled with a bunch of redundant points. These graphs can be enormous (see below), so much so that we can’t even show the whole thing unpacked at once.

This “debate map” was created by The Society Library and shows only the first three layers of a map containing 5,862 arguments, claims, and pieces of evidence extracted from more than 5,000 references. It expresses the collective economic, environmental, safety, political, and social concerns of stakeholders in the debate over whether Diablo Canyon, the last remaining nuclear power plant in California, should stay open or close.

So — Let’s Talk about AI

Right now, we’re working on mapping AI debates, particularly the ever-evolving AI Safety debate, while at the same time automating our debate-mapping process with AI itself. We’ve detected over 800 AI-related topics currently under debate, but let’s focus on one simple question: what should we do about AI?
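To give a flavor of that automation: one early step is asking a large language model to pull discrete claims out of a source passage. The sketch below assumes an OpenAI-style chat completions API; the prompt and model name are placeholders, and our real pipeline adds deduplication, steel-manning, and human review on top.

```python
import json
from openai import OpenAI  # assumes the `openai` Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_claims(passage: str) -> list[str]:
    """Ask an LLM to list the distinct, checkable claims in a passage."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Extract the distinct factual claims from the user's "
                        "text. Respond with a JSON array of strings only."},
            {"role": "user", "content": passage},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(extract_claims(
    "AI systems with human-competitive intelligence can pose profound risks "
    "to society, and many labs admit they cannot fully predict their behavior."
))
```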

So, what should we do about AI?

So far, we believe there are about 11 points of view at the highest level of abstraction.

  1. We should advance the development of AI as much as possible, and/or make it as openly accessible as possible to both develop and use, with no restrictions.
  2. AI should continue to be developed naturally, meaning based on demand, actual use, market dynamics, and technological capabilities.
  3. We cannot allow unrestricted AI development; there need to be some guidelines, restrictions, and/or regulations.
  4. We need to slow AI development beyond certain levels of capabilities to ensure its safety.
  5. We need to pause all AI development beyond certain levels of capabilities so we can find out how to ensure its safety.
  6. We don’t need to slow down or pause AI development, but we need to make sure AI safety and alignment capabilities keep up with development.
  7. There is no safe way to proceed with advanced AI development; therefore, we should stop/ban all development beyond certain capabilities immediately.
  8. We need to research more, learn more, engage with the public more, discuss more, and experiment more in order to figure out what to do about AI.
  9. We likely should pause, ban, control, or regulate [certain kinds of] AI, but doing so is unlikely, impossible, and/or too late; so we should prepare to mitigate the impact or prepare for the (potentially catastrophic) consequences.
  10. Without implying we should pause, ban, control, or regulate certain kinds of AI, we should prepare to integrate AI, because it will have an impact.
  11. We don’t know the best course of action with regard to certain kinds of AI right now, and perhaps we never will; it’s difficult to know/predict what will happen or what to do, so let’s just wait and see.


Of course, people can take many positions at once. For example, someone may believe that we should both restrict and regulate AI after a pause, or even after a ban. We disambiguate the positions because they are meaningfully distinct in what they call for, and the argumentation and evidence supporting each is also meaningfully distinct.

Ok, so what happens next?

Next, we identify the highest-level arguments that support each of these positions.

For example —

In support of the position that we need to pause AI development, one argument is:

  • “We shouldn’t let AI labs and companies race to develop and deploy risky AI with human-competitive intelligence faster and faster without the guarantee that this AI will be developed and deployed safely, which can’t be guaranteed if they don’t understand or can’t predict the behavior of the technology, and many labs admit they don’t.”

Then what happens next? We break the argument down into premises related to truth and relevance. Each premise needs to be proved. Keep in mind, this is a very ‘high-level’ argument (so relax, logicians).

Here’s the breakdown:

  • “AI systems with human-competitive intelligence can pose profound risks to society and humanity.”
  • “Contemporary AI systems are now becoming human-competitive at general tasks.” [needed to describe why this is relevant]
  • “There are incentives driving AI labs and companies to develop and deploy AI faster and faster.” [needed to describe why this is relevant]
  • “AI labs and companies themselves do not understand AI well enough to be able to control and predict its behavior.”
  • “Without being able to understand, predict, and control AI, AI labs and companies cannot guarantee the safety of the systems they are releasing to the world.”
  • “Therefore, we should implement a pause until we can establish the safeguards to ensure human-competitive AI will be developed and deployed safely.”

Note: this was deconstructed from this source.
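In graph terms, each premise becomes its own claim node supporting the conclusion, with evidence then attached for and against each claim. Here’s how that might look in the toy DebateMap sketch from earlier (the IDs, abbreviated texts, and URL are made up for illustration):

```python
m = DebateMap()
m.add(Node("arg-pause", "argument",
           "We should pause until safeguards ensure human-competitive AI "
           "is developed and deployed safely."))

premises = [
    ("clm-risk",    "Human-competitive AI can pose profound risks."),
    ("clm-capable", "Contemporary AI is becoming human-competitive."),
    ("clm-race",    "Incentives drive labs to develop and deploy faster."),
    ("clm-opaque",  "Labs cannot control or predict AI behavior."),
    ("clm-unsafe",  "Without that, safety cannot be guaranteed."),
]
for node_id, text in premises:
    m.add(Node(node_id, "claim", text))
    m.link(node_id, "arg-pause", "supports")

# Evidence (for or against) hangs off each premise the same way:
m.add(Node("ev-1", "evidence", "A lab's own statements on predictability.",
           sources=["https://example.org/placeholder"]))  # placeholder URL
m.link("ev-1", "clm-opaque", "supports")

print([n.text for n in m.bearing_on("arg-pause", "supports")])
```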

Then what?

Then, we break the language down into more precise claims, find evidence for and against each premise, and link it all together. Here’s an example from our nuclear energy collection:

Recall the video up top — these debates are massive. This was just one simple high-level argument, supporting one simple high-level position. The chains of reasoning can get very long (see below, again nuclear energy):

So as you can see, we have a lot of work ahead of us. However, AI is helping us automate our debate-mapping methods, which were, as of last year, mostly manual. To read more about our work automating this process, check out these updates:

Society Library founder hosting AI hackathons at the Internet Archive.

Thanks for reading; we hope this was interesting. Let’s build an AI system to keep up with the AI Safety debate. It may be the most important AI we ever build, and the time to build it is now.

We also have a lot more to share, so please subscribe on our website. We also have a $150k matching grant offer: if we raise $150k, we unlock $150k, and it will go toward mapping these AI debates. We are a nonprofit, so we hope you’ll include us in your 501(c)(3) charitable giving this year.

P.S. Worried about bias? That’s understandable, but we are only interested in inquiry after truth. We are a multi-year, platinum star-rated, transparent 501(c)(3) dedicated to inquiry after truth and hearing from all sides. Please see our Mission and Vision, Virtues and Values, and Knowledge Policies.
