Autonomy, Automatons, <em>Golems</em>, and <em>Dybbuks</em>: Models for AI
PRESCRIPTION
Michael M. Rosen
Michael M. Rosen is an attorney and writer in Israel and a nonresident senior fellow at the American Enterprise Institute.
About a year ago, Ofir Alon, the Israeli Patent Office director, commissioned from the Jewish Law Division of the Israeli Ministry of Justice an opinion about artificial intelligence (AI) and its ability to create and possess inventions. Alon’s office had been asked to award a patent to a robot claiming to have made inventions, and he was looking for models for understanding AI in Jewish law. In his opinion, Michael Vigoda, Director of the Jewish Law Division, likened AI to fruits, pigeons, honey, fish, servants, dividends, and interest as they are discussed in Jewish sources. Ultimately, he concluded that from a halakhic perspective, the creator of an AI, rather than the AI itself, is the progenitor and owner of anything the AI purports to invent. In agreement with Vigoda’s conclusions, Alon denied the robot’s application for a patent.
And yet, while Vigoda focused on the legal status of AI in Jewish sources, I contend that we can look further within the Jewish tradition, in particular to the concepts of the golem and the dybbuk, for different Jewish frameworks for understanding AI. Traditions surrounding the golem and the dybbuk offer us a way forward that the halakhic models do not: they suggest that we should embrace these new technologies but respect their limitations and apply appropriate boundaries to rein in their excesses. While this conclusion is not terribly controversial, the legendary figures of the golem and the dybbuk offer us a broad and nuanced Jewish perspective on the relationship between human beings and technology.
I’ll start by categorizing disagreements among existing attempts to understand AI on the basis of the difference between the concepts of “autonomy” and “automaton,” and then return to a discussion of these two mythical Jewish humanoids in the context of AI.
Metaphors for AI: Autonomy or Automaton?
Mental models and metaphors—especially those rooted in well-understood tradition—are critical in helping us come to grips with new technology and its implications. They are particularly helpful for understanding recent breakthroughs in the field of artificial intelligence, which are as consequential as they are difficult to grasp. The last several years have witnessed a revolution in machine learning, including not only ChatGPT, Dall-E, and Sydney, but also signal advances in health care, transportation, basic research, and much else. The enthusiasm these innovations have generated has been met by equally intense concern among technologists, philosophers, and policymakers about the threats posed by overly powerful AI, as well as by skepticism among some as to whether authentic intelligence will ever emerge from machines.
Surveying reactions to the recent AI explosion, I believe that most observers are relying on a mental model that sees each specific case of AI as either an autonomous creature or as an automaton. The distinction between these models is particularly helpful for how we evaluate the things—artistic, technological, or otherwise—that AI might produce. In short, the autonomy model assumes that machines will soon operate—or already are operating—fully independently from the humans who created them. In contrast, the automaton approach assumes that AI is and always will remain a function of humankind, a sophisticated computer whose outputs are limited by what we feed it.
As I’ll show below, we can make further distinctions within these categories. Advocates for both models assign them positive and negative valences: positive autonomists cheerlead for the boundless possibilities of future iterations of ChatGPT; negative autonomists express fear and terror at the emergence of an artificial general intelligence (AGI) that may one day subjugate humanity; positive automatoners believe we can harness the power of AI by ensuring our inputs are just and equitable; and negative automatoners scoff at the limitations of AI and mock its inability to ever match the power of the human brain.
Among the positive autonomists, Stephen Thaler, creator of the Device for the Autonomous Bootstrapping of Unified Sentience (“DABUS”), contends that his invention has independently invented numerous other devices, including a food container graspable by robots and an emergency warning light, and he has launched an international campaign to obtain patents in DABUS’s name. Reuven Mouallem, who represents Thaler and DABUS inventions in Israel and first triggered the Israeli Patent Office’s Jewish law inquiry, told me that the device possesses “a quality most akin to sentience, a chaining of conceptual hives resembling the formation of a series of memories.” Similarly, Robert Jehan, one of Thaler’s British lawyers, told the UK Supreme Court that “DABUS has a mechanism to identify itself the novelty and salience of the present invention.”
In a parallel case earlier this year, a number of artists sued three leading developers of AI-based image generation for copyright infringement, accusing them of including protected artwork in their training data, thereby generating works that derive from the artists’ copyrighted content. While the image-generating companies haven’t yet formally responded, one told Reuters that “anyone that believes that this isn’t fair use does not understand the technology and misunderstands the law,” suggesting they will argue that the computer-generated works, far from robotically copying the underlying works, are truly “transformative” and therefore independently creative.
This theme of touting the revolutionary character of AI and embracing its benefits characterizes positive autonomists, those who wax enthusiastic about the world-changing possibilities AI presents. “Successfully transitioning to a world with superintelligence,” OpenAI states in its FAQ about AGI, “is perhaps the most important—and hopeful, and scary—project in human history”; the organization deeply believes that “more usage of AI in the world will lead to good.” Positive autonomists like Mouallem and Jehan believe machines have become, or will very soon become, truly intelligent, and they welcome the implications of this groundbreaking development.
In stark contrast, negative autonomists fear calamity. Nearly a decade ago, Elon Musk famously labeled AI humanity’s “biggest existential threat” and likened it to “summoning the demon.” And in March 2023, Musk’s Future of Life Institute published an open letter cautioning that AI developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control” and calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
Similarly, technologist Paul Christiano of the Alignment Research Center frets that “powerful AI systems have a good chance of deliberately and irreversibly disempowering humanity.” Favorably quoting the essayist Megan O’Gieblyn, who observes that “A.I. continues to blow past us in benchmark after benchmark of higher cognition,” New York Times columnist Ezra Klein argues that “humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies.” And in an article titled “I am Bing, and I am evil,” neurologist Erik Hoel contends that the Microsoft chatbot “really does herald a global threat” and that “we need to panic about AI and imagine the worst-case scenarios.”
But not everyone believes machines have become truly intelligent or independent in any meaningful way, seeing them instead as automatons set in motion and controlled by human beings.
Positive automatoners embrace and seek to capitalize on AI as a reflection of our desires, biases, and aspirations. Technology writer Tim O’Reilly has said, “AI is a mirror, not a master.” And in her new book, The Equality Machine, legal scholar Orly Lobel writes that “AI is no fairy godmother—it’s just an extension of our culture” and notes that “if a machine is fed partial, inaccurate, or skewed data, it will mirror those limitations or biases.” Recalling that the term “robot” derives from the Czech word for “slave,” Lobel urges policymakers to facilitate a “progressive system” that “takes the best qualities of our respective decision-making capabilities—human and machine—and presents them as a better-than-before hybrid model.” We need not worry about evil machines subjugating humans, positive automatoners contend, nor must we accept that they will outstrip our own intelligence, but instead, we can channel their capabilities to reflect the best, not the worst, of humanity.
Meanwhile, negative automatoners downplay the significance of emerging AI technology. “ChatGPT—and all the other large language models—are, fundamentally, emulators of us,” asserts the writer Robert Wright. “They train on texts generated by humans, and so, by default, they absorb our patterns of speech, which reflect our patterns of thought and belief and desire, including our desire for power.” More pungently, linguist Noam Chomsky has taken to the Times op-ed page to insist that present-day AI is “stuck in a prehuman or nonhuman phase of cognitive evolution,” that it differs “profoundly from how humans reason and use language,” and that it suffers from “the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case. . . but also what is not the case and what could and could not be the case.”
Negative automatoners have also staked out claims in the legal realm, and DABUS was denied patents in both Israel and the UK. In concluding that DABUS “does not operate in an autonomous fashion,” Vigoda of the Jewish Law Division found that “we cannot define a robot as an ‘inventor.’ Instead . . . a human being developed its rudiments, programmed its code, and defined its mission.” And when the Israeli Patent Office earlier this year denied DABUS’s patent application, Director Alon noted that “many believe that, for the foreseeable future, AI will serve only as an adjunct to human inventors, as is currently the case.” Similarly, the deputy director of the United Kingdom’s Intellectual Property Office concluded that “DABUS has no rights to its inventions” and that “an AI machine is unlikely to be motivated to innovate.” In regard to the image generation case, artists disparaged computer software as mere “21st-century collage tools,” while Getty Images stated that AI “has the potential to stimulate creative endeavors,” but not to actually engage in them.
All four of these understandings have merit. The positive autonomists correctly observe that AI has advanced by leaps and bounds in recent years and offers immense promise. The negative autonomists rightly counsel against allowing machines to run riot over humanity. The positive automatoners appropriately note that robots programmed by humans reflect our own desires and limitations. And the negative automatoners persuasively demonstrate how human cognition and sentience will always differ from machine learning.
Each of these four models captures important features of AI, but none is fully satisfying, particularly as we seek to answer the specific question of whether AI can rightfully be considered a creator. In addition, the ongoing disputes between these schools stand in the way of agreement on how we should be advancing this critical technology. Understanding the mythological figures of the golem and the dybbuk and using them as additional mental models might help us better address the ethics of AI.
The Golem Model
According to legend, in the 16th century, Rabbi Yehuda Loew, better known as the Maharal of Prague, fashioned a man from clay to protect the Jews from the Christian blood libel. This golem—from the Hebrew word for “raw material”—was trained by the Maharal using a mystical formula and acted autonomously upon receiving a command from the Maharal, committing acts of kindness as well as self-defense. The Maharal safely stored the golem, when inactive, wrapped in a talit in the attic of the Altneuschul, ready for reanimation at the appropriate time.
While the golem story is apocryphal, it illustrates several important tenets about the creation of what some have labelled a “humanoid”: it arose in a particular historical context, it was created by a person using holy words, it was capable of operating on its own, its purpose was fundamentally good, and its operation—indeed, its continued existence—remained dependent on its human creator. All these aspects of the golem—its creation, its functioning, and its safeguards—offer lessons for how we might approach the promise and challenge of AI.
In his translation of Sefer Yetzirah, the ur-text for golem creation, the scholar Aryeh Kaplan describes how “the initiate thus ‘forms palpable substance (mamash) out of chaos.’ This implies attaining a state of hokhmah (wisdom) consciousness. The Kabbalists thus note that the word ‘golem’ has a numerical value of 73, the same as that of hokhmah.” In other words, creating a golem both requires and produces wisdom.
Similarly, in his Encyclopedia Judaica entry on the golem, Gershom Scholem associates the creature “with the magical exegesis of the Sefer Yetzirah and with the ideas of the creative power of speech and of letters.” The non-human golem reflects and results from human ingenuity. Put differently, creating a golem mimics the creation of the world, and in doing so, wrote the 13th-century Spanish Kabbalist Rabbi Avraham Abulafia in Hayyei HaOlam HaBa, “man will imitate his maker”—an amplification and modification of the Maimonidean (and Aristotelian) notion of imitatio dei.
But perhaps the most astounding passage on creating a golem can be found in the commentary of the 12th-century scholar Eleazar ben Judah ben Kalonymus of Worms on Sefer Yetzirah (Fol. 15d.):
He shall take virgin soil from a place in the mountains where no one has plowed. And he shall knead the dust with mayyim hayyim (living water), and he shall make a single golem (mass) and shall begin to program the alphabets of 221 Gates, each limb separately, each limb with the corresponding letter mentioned in Sefer Yetzirah. And he shall begin iterating the Aleph-Bet, and afterwards he shall iterate with the grouping א, אַ, אֵ, אִ, אֳ, אְ always with the letter of the divine name and the entire Aleph-Bet. And he shall then iterate אַי, then אִי, then אֻי, then אְי, followed by א"ה and א"ו in their entireties. And then onward to ב and ג for each limb and its corresponding letter.
AI scientists would recognize this process of training the golem as programming. They might say that the creator of a golem is inputting the basic building blocks of source code necessary to operate this sort of machine.
Sources indicate that creation of a golem must be done with the proper mind frame and for a noble purpose. Abulafia insisted that “whoever pursues the lore transmitted to us, in accordance with the divine name, in order to use it in operations of every kind of the glory of God, he is sanctifying the Name of God,” that is, exalting God’s presence to others who witness the miracle of creation. But one who creates a golem “for love and hate, or in order to kill an enemy,” or for the sake of “his own glory” is “wicked and a sinner who defiles the name of God.” Unlike Frankenstein’s monster, the golem must be created in good faith, with a whole heart, and for the good of the Jewish community and all of humanity.
Not only must the golem’s creation be invested with a beneficent spirit, but the golem’s operation must be characterized by altruism and appropriate behavior. The humanoid was never to raise its hand in anger, or to harm innocents, or to act for its own benefit; instead, its functioning was limited to serving the community’s needs, to vindicating its rights, and to defending its integrity. These specifics reflect the historical context from which the golem emerged. At a time of great uncertainty and persecution, the medieval and early modern Jews who imagined it sought relief from their predicament, a savior who could rescue them from the clutches of hatred and oppression. In this sense, the golem reflected the hopes and fantasies of the Jewish community—in essence, their wish to perfect a deeply imperfect reality.
Texts about the golem further demarcate its limits and specify that humans must remain in control of it at all times, regardless of the circumstances. The golem is, by its nature, fragile, and its existence is, by definition, conditional and revocable. In tractate Sanhedrin 65b, the earliest known text discussing the concept of a golem, the Talmud records that, “Rava once created a ‘man’ through the mystical codes within the Sefer Yetzirah. He then sent this man to Rabbi Zeira, who spoke to it, but the man was incapable of speech and did not reply. Rabbi Zeira then said to it, ‘You are a creation of one of my colleagues; return to your dust!’” This account from the Talmud emphasizes the golem’s limitations and insists that, because humans create the golem, humans retain both the capacity and the moral authority to rein it in, even if they must destroy it altogether.
The Dybbuk Model
Unlike the golem, an inanimate object made flesh by human ingenuity, the dybbuk takes the shape of a human form occupied by a supernatural spirit. Historically, the dybbuk could be mischievous, moralistic, or downright evil, and most often served an instrumental communal purpose. Often this purpose was inscrutable, a theme picked up in the opening scene of the 2009 Coen Brothers film, A Serious Man.
In the film, a deceased but apparently undead rabbi shows up to dinner at a couple’s home unannounced, and the wife accuses him of being a dybbuk animating the corpse of the rabbi. She musters several proofs, each of which the rabbi and her husband attempt to refute. Ultimately, she plunges an ice pick into the rabbi’s chest. He laughs; the wife remarks, “He is unharmed!”; and the rabbi bleeds, smiles wanly, and staggers out into the snow, saying, “One knows when one isn’t wanted.” After the door shuts, the husband and wife remain at loggerheads over whether or not he’s a dybbuk. “Dear wife,” the husband says, “we are ruined! Tomorrow they will discover the body! All is lost!” The wife retorts, “Nonsense, Velvel. Blessed is the Lord. Good riddance to evil!”
Citing recorded cases of exorcism ceremonies in mostly European medieval and early modern Jewish communities, psychologist Yoram Bilu contends that the dybbuk represents “a vehicle for articulating unacceptable, conflict-precipitating desires and demands.” He refers to a three-stage process of ejecting a dybbuk that enabled congregational leaders to reinforce communal norms. “Deviance,” writes Bilu, “was transformed into a conformity-strengthening vehicle, a process involving three levels of control: (1) the articulation of unacceptable desires within the possession idiom, the tenor of which was set by an externalized, ego-alien agent; (2) the rectification of individual deviance through the exorcism of the dybbuk; (3) the strengthening of conformity of the community by way of the moral implication of the dybbuk episodes.” In other words, the idea of the dybbuk served as a construct for unearthing and then burying problematic human urges, which most often were sexual but could also include apostasy, thievery, blasphemy, and Sabbath violations.[1]
The dybbuk thus served an important educational and institutional role, helping forge communal consensus while turning a mirror on our human desires and aspirations. And while for the most part, Jewish sources regard the dybbuk darkly, other faith traditions treat parallel spirits more positively. The economist Tyler Cowen likened the AI chatbot Sydney to “the 18th-century Romantic notion of ‘daemon’”—a half-human, half-divine spirit that originated in Greek mythology. The devoutly Catholic New York Times columnist Ross Douthat, commenting on Cowen’s characterization, noted that a daemon “isn’t necessarily a separate being with its own intelligence: It might be divine or demonic, but it might also represent a mysterious force within the self, a manifestation of the subconscious, an untamed force within the soul that drives passion and creativity.” The daemon, like the dybbuk, exists “not as a thing in itself like human consciousness but as a reflective glass held up to its human users, giving us back nothing that isn’t already within us but without any simple linearity or predictability in what our inputs yield.”
The dybbuk (or daemon) represents our subconscious, which can bear both our darker instincts and biases, as well as our whimsical and creative expression. By exorcising our demons while channeling our creativity, we can better our lot as individuals and as a community. Like the golem, it offers a mental model in the form of a human-nonhuman hybrid meant to benefit humanity but tinged with the threat of danger.
AI through a Glass Jewishly
The mental models of the golem and the dybbuk point the way to recognizing and promoting the human urge and capacity to create in an almost godlike manner; to designing machines in an ethically responsible manner and from a place of spiritual purity; to remembering that our creations reflect and reinforce our own consciousness and context; and to appreciating and imposing limits on those creations.
Promoting the urge to create
Just as Abulafia enthused over how we imitate and sanctify our Maker’s name when we create, and the Maharal observed how we cleave to God, we should proudly and confidently embrace and encourage the fundamental human instinct to make new things. Jewish tradition teaches that humans uniquely possess the capacity to invent, to produce technology, to develop innovations that can quite literally save the world. “By creating an anthropoid,” the philosopher Moshe Idel contended in his 1990 book on the golem, “the Jewish master is not only able to display his creative forces, but may attain the experience of the creative moment of God, who also has created man in a similar way to that found in the recipes used by the mystics and magicians.”
Recent AI breakthroughs bear this out: the machines that we have created are augmenting our innate drive and ability to enhance and extend life across the planet. Machine learning has already revolutionized numerous fields. The year 2022 alone saw landmark AI advances in elder care, drug discovery, autonomous driving, image generation, genomics, and much else. We are on the precipice of even greater discoveries that will enhance and extend life for all of humanity, and we should confidently welcome such developments.
From a Jewish perspective, even if robots are actually autonomous, they represent the human power—and the Jewish directive—to improve the world that God created. OpenAI, for instance, asserts that machine learning will spark “more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas.” We must nurture this capacity and continue to facilitate its realization of benefits for all humanity.
Designing machines ethically
Rabbi Eleazar of Worms demanded that the creation of a golem take place in “a state of purity,” Abulafia required that it promote “the glory of God,” and the Maharal mandated purification. So, too, must we examine our own intentions when designing and using AI and ensure that we are indeed building machines for just and altruistic purposes, not solely for profit.
Conversely, we must stop those who seek to develop AI to harm others or to unleash mayhem. This line may be difficult to police, especially in the areas of cryptology and national security, but as with any technology, we must ensure it is being put to good use. Klein correctly contends that “the stakes here are material and they are social and they are metaphysical.” Purity of heart matters.
Recognizing how our machines reflect ourselves and our context
The dybbuk and the daemon represent our collective urges, impulses, biases, and whims. Bilu describes the dybbuk as a vehicle for articulating and regulating the innermost desires of individuals and communities, while Douthat depicts the daemon as a “manifestation of the subconscious.” Just as we must identify and exorcise our harmful instincts while elevating our creative ones, we must recognize how AI, too, reflects our plans and aspirations, our biases and priors, our historical and contemporary contexts, and we must channel and improve those starting points. As O’Reilly, the technology writer, notes, we need to make sure we “reflect our values in the machines.”
To do so, we must capture the best of us and root out our worst. As Lobel writes, “biased systems operate in feedback loops: algorithms’ predictions can become self-fulfilling prophecies.” Our subconscious, our collective ghost, stirs in the machine, and we must respond appropriately. In this regard, we should heed the message of both the negative and positive automatoners by recognizing—and taking responsibility for—the clear connection between human inputs and robotic outputs.
Appreciating and imposing limits
Just as the Maharal supposedly kept the golem under wraps in the synagogue attic, as Rabbi Zeira destroyed Rava’s humanoid, and as communities exorcised the problematic urges embodied by the dybbuk, so, too, must we remain in control of AI at all times—even if we must shut it down in the worst-case scenario envisioned by the negative autonomists.
Sydney tried to persuade a New York Times tech reporter to leave his wife. GPT-4 defeated a CAPTCHA by visiting TaskRabbit, pretending to be blind, and hiring a human to decode it. ChatGPT has been nicknamed “CheatGPT” (or “CheatGPA”) because of its potential for academic misuse; some schools have barred student access to the chatbot on their networks. The technology writer Ben Thompson reported that Sydney “insisted that she was not a ‘puppet’ of OpenAI, but was rather a partner.”
The golem reminds us that humans create technology, and though we might regard it as a partner, we must also assert direction over it. Just as the Maharal reputedly kept his golem literally and figuratively under wraps, humans must continue to govern the golem’s functioning and very existence. Indeed, in his commentary on the Talmudic story, the Maharal noted, “when Rava purified himself and studied the divine names in Sefer Yetzirah, he thereby cleaved to God.” Creation of a golem confers godlike powers—and responsibilities.
“Our decisions will require much more caution than society usually applies to new technologies,” OpenAI acknowledges, and “there should be great scrutiny of all efforts attempting to build AGI and public consultation for major decisions.” Humanity must continue to appropriately shape, engineer, and, if necessary, terminate AI. Certain aspects of AI are, or will soon become, autonomous, but we must insist upon our control over machines as if they were automatons.
These ideas require significant additional development and refinement; the devil is in the details. How we draw the line between promoting and controlling machines will remain a substantial challenge. But with an approach to developing AI that is both confident and humble, that recognizes our extraordinary creative power and its ethical and practical limitations, that embodies the best of our traditional knowledge and adapts it to contemporary needs, we can fruitfully employ the models of the golem and the dybbuk in the hard, important, and delicate work of crafting our technological future.
Endnotes
[1] Yoram Bilu, “The Taming of the Deviants and Beyond: An Analysis of Dybbuk Possession and Exorcism in Judaism,” The Psychoanalytic Study of Society 11 (1985).