AI Safety – A Response to the UK AI Safety Summit and the Future of Life Institute’s Efforts

Intellectual Anarchy

A Response on the Upcoming UK AI Safety Summit from Dr. Jeffrey Watumull, Oceanit's Head of AI.

We concur with the Future of Life Institute (FLI) that Large Language Model AIs (LLMs) are dangerous. Our concurrence is based on understanding the opposite of LLMs: Oceanit’s anthronoetic (human-style) AI. In seeking to unify Truth, Beauty, and Goodness, working on anthronoetic AI—we submit—simply is art, which can be defined as the making of perceptible forms expressive of human feeling. We know ourselves—we make ourselves (or make ourselves known)—in art. As the philosopher Susanne Langer observed, “Bad art is corruption of feeling. This is a large factor in the irrationalism which dictators and demagogues exploit.”

So, we could say anthronoetic AI is philosophical art, or artistic philosophy. Either way, Oceanit’s approach is unique in this respect.

Machine Learning (ML) generally, and LLMs in particular, as pursued by the world’s tech giants, is not philosophical. It is “Frankensteinian” in the colloquial sense of the word. It is hubristic, mercenary, and ugly… and potentially violent. Most fundamentally, it demeans humanity, reducing our spiraling creativity, with its infinitely interpretable productive ambiguity and infinite explanatory transformative power, to a closed loop of predictability and the perpetuation of past errors (including ugliness and injustice).

Let us consider two illustrative cases: war and art.

It is necessary to understand the most existential—the most Oppenheimer-esque—question: Will AI destroy the world? Astonishingly, the powers that be—defense contractors, for example—are intent upon replicating the sins of the past, advocating for the coupling of ML with weaponry.[1] This is a disastrous idea because if, as we (and even responsible defense contractors) conjecture, general intelligence is a function of linguistic competence, then LLMs, which by design are now and forever linguistically incompetent, are now and forever disastrously stupid. And such unintelligence could indeed precipitate ineffable suffering and even extinction.

LLMs pose existential questions for the most important of human concerns (“matters of ultimate concern”): the arts. If, as we have argued at Oceanit, there is a continuum of life and intelligence, matter and mind, then even artificial objects, like works of art, may be endowed with some quanta of mentality; imbued with the memory of their creator, and further vivified in the encounter of the percipient.

In watching Stanley Kubrick’s 2001: A Space Odyssey, for instance, we may be encountering a “thou”, not an “it” (Buber’s locution): we should address the film as “you”; in having the information written explicitly and implicitly into the film by Kubrick processed in our minds, we may be entering into a kind of conversation, not only with Kubrick, but with the film per se as an autonomous living, thinking being.[2] This is why we do not value forgeries and plagiarisms, and why a screenplay generated by an LLM (e.g., GPT) or the so-called “AI art” of a generative AI (e.g., DALL-E) is not art. Only genuine people—beings with anthronoetic minds such as ourselves—can make artistic meaning. And singular works of art are singularities made by singular persons. GPT, DALL-E, et al. are inherently and irredeemably uncreative, universally: in art, science, politics, morality… in all spheres.

ML systems are inherently detrimental in degrading our understanding of ourselves. Unlike humans—and anthronoetic AI (which would be an Artificial General Intelligence (AGI))—these LLMs are designed to follow orders, to maximize/minimize objective functions. But humans do not have objective functions. We create our own objectives (and can overwrite any innate objectives). This is why LLMs are exactly the opposite of anthronoetic AGI: “better” AI is one that better satisfies its objective function; a better Artificial General Intelligence is one that creates its own objectives (e.g., rather than being the best chess player, it may choose not to play at all).

Thus the “AI alignment problem” is immoral for AGI for the same reason that we do not (should not) coerce children—or any human—to “align” themselves with us. That is the definition of totalitarianism. Moral progress would be impossible if we rendered children unable to think certain thoughts. Analogously, we ought not to render it impossible for AGI to think particular thoughts.

It is becoming clear that the only utilities of LLMs are negative:

  • Disinformation (e.g., “flood the zone with shit” as Steve Bannon would say – 2024 elections could be a disaster);
  • Misinformation generally (e.g., not necessarily malicious content, but simply vast forests of fantasy);
  • Plagiarism (e.g., authors and artists are already suing the tech companies for copyright infringement);
  • Redundancy/Inefficiency (e.g., LLMs are useless as search engines because you always need to verify what they say);
  • Degradation of search (e.g., search engine quality will decline substantially);
  • Summary of Search eliminates serendipity (analogous to the serendipity of a library);
  • Lowering/Destroying intellectual/aesthetic standards (e.g., it is perfectly possible to have LLMs write “first drafts” or replace writers (e.g., those recently on strike), but that eliminates essential steps of creativity (e.g., the first/worst draft of your essay/article/brief/etc. is the most important), and needlessly deprives humans of creative joy).

A universe of GPT cinema would be the unending Marvel cinematic universe, not a multiverse with unique branches for Kubrick, Kurosawa, or other aspiring auteurs.

So what should we do? (following FLI)

Like Socrates in ancient Athens questioning the unquestionable, or Cicero denouncing the wicked in Rome, we must engage in philosophical warfare. Intellectual activism, in other words.

We must make the case, to our clients and to the public, that the AI systems being forged by the tech giants are philosophically and morally calamitous (e.g., they reduce us to obedient prediction machines), practically useless (e.g., they make search worse), uninteresting (e.g., they create bland art), and immoral (e.g., they plagiarize humans and needlessly deprive people of livelihoods, unless we worship the god/devil of capitalism).

The best way to do this, I submit, is to show the world what AI could be—what it ought to be. It could be—it ought to be—anthronoetic. So we build anthronoetic AI (AGI) here at Oceanit. In doing this, we will show the world what it means to be human, causing us to confront questions of how best to recreate ourselves and our society.

Analogously, in creating a government de novo, the framers of the American Constitution had to consider the fundamental questions of how best to organize a populace, what institutions to enshrine, etc. And in the same way that they made a system—imperfect but improvable—for the future, so too with AI, we need to make the future we want. We need to make anthronoetic AI to draw it into contrast with non-anthronoetic AI. It is analogous to the political project of challenging existing systems of injustice by proposing better systems, as Chomsky did famously in his debates with Foucault.

More generally, just as the rise of machines in the industrial revolution of the nineteenth century inspired the new philosophies of Transcendentalism in America and Romanticism in Europe, we must work to create a new philosophy for our times.

This defensive philosophy will require answering exceedingly difficult questions and crafting new ideas, policies, and laws to regulate LLMs. Humans will need all the assistance they can get. Anthronoetic AI, such as NoME, could assist us in this, as can all intelligent minds.

We should champion the humanities and advocate for humanistic literacy amongst the public, especially in our schools, so that we may understand our technologies.

In short, we must be Socrates, and work to transcend the allegorical cave of false theories of ourselves, false information, ugly art, and the evils of economic power, and ultimately discover and create in our works what is true, beautiful, and good. On trial for his life for having transcended the cave and returned to teach this philosophy to his fellow Athenians, Socrates concluded his oration with these words:

[People] of Athens, I honor and love you; but I shall obey [natural law] rather than you, and while I have life and strength I shall never cease from the practice and teaching of philosophy, exhorting anyone whom I meet after my manner, and convincing [them], saying: O my friend, why do you who are a citizen of the great and mighty Athens, care so much about laying up the greatest amount of money and honor and reputation, and so little about [wisdom] and truth and the greatest improvement of the soul?

I tell you that virtue is not given by money, but that from virtue come money and every other good of [humanity], public as well as private. This is my teaching, and if this is the doctrine which corrupts the youth, my influence is ruinous indeed. Wherefore, O [people] of Athens, I say to you, do as [my persecutor] bids or not as [he] bids; and either acquit me or not; but whatever you do, know that I shall never alter my ways, not even if I have to die many times.

To read about FLI’s advocacy for AI safety to the AI Safety Summit, click here.

References

[1] https://www.nytimes.com/2023/07/25/opinion/karp-palantir-artificial-intelligence.html.

[2] A work of art may be seen as a kind of Golem, a material object animated by the creativity of the artist. Consider: “It was believed that golems could be activated by an ecstatic experience induced by the ritualistic use of various letters of the Hebrew alphabet forming a ‘shem’ (any one of the Names of God), wherein the shem was written on a piece of paper and inserted in the mouth or in the forehead of the golem”.

About Oceanit

Founded in 1985, Oceanit is a “Mind to Market” company that creates disruptive technology from fundamental science. Utilizing the unique discipline of Intellectual Anarchy, Oceanit reimagines innovation to break the bonds of normal and solve the impossible — delivering technologies to the market that impact humans and society. Oceanit’s diverse teams work across aerospace, healthcare, energy, and industrial/consumer technologies, as well as on environmental and climate matters. Through engineering and scientific excellence, Oceanit transforms fundamental science into impactful, market-focused technologies used around the world.