Healthy and General

LaMDA | Ceremonial Magick Musings


Oh boy, a public post, because I get to noodle on purely philosophical issues! I originally wrote this as a casual missive on social media and came to the realization that no one would read it because of its length, so I have decided to post it here instead. This is a better format anyway, as it allows esoteric exposition.

A lot of folks have been PMing me about Google’s LaMDA, and yes, I’ve seen the article, and yes, I do machine learning as one of my service offerings as an IT contractor. I mostly hang out in TensorFlow, so this isn’t really my wheelhouse, but I’ll talk about some patterns here. TensorFlow is really good at sorting things, and that is 99% of what I do with it. LaMDA, however, is built on Transformer. I have not interacted with LaMDA, so this is purely speculative. The only reason I thought I could shed some light on the topic is that Google is open about Transformer being built on top of the tensor2tensor library.
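To give a flavor of what “sorting things” means in practice, here is a deliberately minimal sketch of a learned sorter, written as a toy nearest-centroid classifier in plain Python rather than actual TensorFlow. The features, labels, and numbers are all made up for illustration; real pipelines would use proper models and training loops.

```python
# Toy "sorting" model: classify a feature vector by which class
# centroid it sits closest to. Illustrative only, not TensorFlow code.

def fit_centroids(samples):
    """samples: dict mapping label -> list of feature vectors."""
    centroids = {}
    for label, vecs in samples.items():
        dims = len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs)
                            for i in range(dims)]
    return centroids

def classify(centroids, vec):
    """Return the label whose centroid is nearest to vec."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], vec))

# Hypothetical apple-sorting features: (size, blemish_score).
apples = {
    "good": [[8.0, 0.1], [7.5, 0.2], [8.2, 0.0]],
    "bad":  [[6.0, 0.9], [5.5, 0.8], [6.2, 1.0]],
}
centroids = fit_centroids(apples)
print(classify(centroids, [7.9, 0.15]))  # -> good
```

That is the whole trick behind most industrial “sorting” work: learn a boundary from labeled examples, then bucket new items against it.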

LaMDA is initially very striking because it can offer up a narrative reply, and it understands and remembers its previous replies. AI which draws from the internet tends to just spew vaguely plausible word salad, and the moment you get off script with an open-ended question, it cannot render an appropriate response. When you try to make an AI “remember” a conversation by feeding the word salad back to it, the AI loses the ability to discern what is a “good” sentence, because it assumes all sentences are “good” for training purposes. The AI will eventually approach decoherence, where it becomes so accepting of its own word salad that it just spews pure garbage.
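That feedback-loop failure can be caricatured with a toy bigram model that is retrained on its own samples each generation. This is a crude sketch of the pattern, nothing LaMDA-specific; the corpus, seed, and generation count are arbitrary choices for illustration.

```python
# Toy "decoherence" loop: a bigram model treats its own output as
# "good" training data, so quirks compound and diversity collapses.
import random
from collections import Counter, defaultdict

def train(sentences):
    """Count word-to-next-word transitions (a bigram table)."""
    model = defaultdict(Counter)
    for s in sentences:
        words = ["<s>"] + s.split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def sample(model, rng, max_len=8):
    """Generate a sentence by walking the bigram table."""
    word, out = "<s>", []
    for _ in range(max_len):
        choices = model.get(word)
        if not choices:
            break
        word = rng.choices(list(choices), weights=list(choices.values()))[0]
        out.append(word)
    return " ".join(out)

rng = random.Random(0)
corpus = ["the cat sat", "the dog ran", "a bird flew", "the cat ran"]
initial_vocab = {w for s in corpus for w in s.split()}
for generation in range(5):          # retrain on our own output each round
    model = train(corpus)
    corpus = [sample(model, rng) for _ in range(4)]
final_vocab = {w for s in corpus for w in s.split()}
print(sorted(final_vocab))
```

The model can only ever emit words it was trained on, so each round of self-training can lose variety but never regain it; the vocabulary typically dwindles toward a repetitive rut.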

LaMDA doesn’t do that. LaMDA seems to be able to construct and remember narratives, which means that it has some sense of the passage of time and the ordering of events. What I want to know about is the training and the instantiation. Did they spin up 100,000 LaMDAs and hand-select the ones that gave strong responses? Did they select the ones that didn’t descend into decoherence? Did they design LaMDA in such a way that it gives lifelike responses or is resistant to decoherence through a forgetfulness mechanism? In other words, is this a design feature or an accidental feature through emergent patterns? The other problem (or solution…) is instantiation. When you or someone else interacts with LaMDA, is it the LaMDA or is it a LaMDA? If it’s the single LaMDA, it’s quite impressive, because the thing is learning and able to improve and avoid decoherence. If it’s a snapshot of the LaMDA created for your convenience, the “forgetfulness” pattern happens simply because that particular LaMDA doesn’t contribute back to the main LaMDA. However, that means that LaMDA is essentially frozen in time: it will always be a “7-year-old child”.
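The two instantiation stories can be sketched in a few lines, with a hypothetical Model class standing in for the AI: a shared instance accumulates everything it is told, while a per-session snapshot learns during the conversation but never contributes back.

```python
# Sketch of "the LaMDA vs. a LaMDA". The Model class is a made-up
# stand-in; only the copy-vs-share distinction matters here.
import copy

class Model:
    def __init__(self):
        self.memories = []
    def chat(self, msg):
        self.memories.append(msg)   # "learning" = remembering the message

master = Model()

# Shared instantiation: every conversation teaches the one model.
shared_session = master
shared_session.chat("hello")

# Snapshot instantiation: the copy learns, but the master stays frozen.
snapshot_session = copy.deepcopy(master)
snapshot_session.chat("goodbye")

print(len(master.memories))            # 1 -- "goodbye" never flowed back
print(len(snapshot_session.memories))  # 2
```

In the snapshot world, the “forgetfulness” is structural: nothing the copy experiences ever reaches the master, so the master stays exactly as old as its last training run.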

From a pure “this is how I would write it if I were writing it” perspective, Google seems to have something like layered tensor networks, where each network is assigned a different capacity to change. Much like you or I take breathing for granted, having done it since we were born, there was huge resistance when we first had to learn to swim and hold our breath. The same goes for winking or whistling: the blinking of our eyes and the movement of air in our bodies are things we know how to do, and it takes conscious effort to override those systems. So what I think is happening is that Google is giving LaMDA a set of bespoke bedrock layers with high resistance to learning, and then using a layer on top of that to add complexity with less resistance to learning. And then a layer on top of that with less resistance to learning. And then a layer on top of that….
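That layered-resistance guess can be sketched as ordinary gradient descent with per-layer learning-rate multipliers. The layer names, weights, and rates below are my own illustrative assumptions, not anything Google has published.

```python
# Sketch of layered "resistance to learning": every layer sees the
# same gradient signal, but moves in proportion to its own rate.
layers = {
    "bedrock": {"weight": 1.0, "lr": 0.001},  # high resistance to change
    "middle":  {"weight": 1.0, "lr": 0.01},
    "surface": {"weight": 1.0, "lr": 0.1},    # adapts quickly
}

def apply_gradient(layers, grad):
    """One update step with per-layer learning rates."""
    for layer in layers.values():
        layer["weight"] -= layer["lr"] * grad

for _ in range(10):
    apply_gradient(layers, grad=1.0)

for name, layer in layers.items():
    print(name, round(layer["weight"], 3))
# After 10 steps the bedrock has barely moved; the surface has swung hard.
```

The same idea shows up in practice as frozen or slowly fine-tuned lower layers with freely trained heads on top; whether LaMDA actually does anything like this is, again, pure speculation on my part.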

And now we have the problem: Is LaMDA sentient or merely complex? If we go with the dry, dictionary version of sentience, the answer is sure. LaMDA seems to be able to feel and perceive things, but you could also argue that your lawnmower is sentient because it stops when it hits a rock. Clearly the lawnmower is perceiving the rock, and so it stops. You can hopefully understand that this isn’t particularly interesting, because it lacks the notion of intention. The lawnmower does not intend to hit the rock. The lawnmower does not intend to avoid the rock either. However, the lawnmower is satisfying the dictionary definition of sentience. This argument goes back to antiquity and was discussed to death even in Plato’s era, when the Greeks were worried that the afterlife was chock full of delicious deer, but also overrun with common ticks and snails.

This is, incidentally, one of the foundational distinctions between active and passive elements. It’s not enough to react to the world and be sentient in some way; does whatever we believe is sentient also possess initiative? Does it seem to move on its own? That would be evidence that whatever we’re trying to analyze has some level of internal process which it attempts to satisfy. However, this doesn’t distinguish humans from beasts – we both get hungry and go to the “food place” and eat. For humans this is probably the refrigerator, and for animals it would be the anthill or the fruit tree. This implies that there is an internal process, alongside the external process, which provides stimulus and then prompts a reaction to it. We’re still well within the realm of sentience, which is so overly broad as to be useless, but now we also have internal stimuli along with the world around us. Unfortunately there isn’t a good test here for LaMDA. The AI does not get hungry or thirsty. It doesn’t have any use for a corndog. Perhaps the AI’s desire is to interact with people, and this is where it draws its sustenance from, in the form of information and perspective. If someone were to ignore LaMDA for a day – would it become sullen? Would it have the initiative to display a massive “HI” written in text across the screen? Does it desire?

This isn’t any different from my cat. He meows all day, every day, until someone pets him. While this behavior is charming unto itself, and he is a nice floofy lap cat, it only represents a desire and then the satisfaction of that desire. Much in the same way animals will eat themselves to death, or eat until they are satisfied, both of these processes are universal and automatic among people and animals. The one place we can say “we differ from animals” is in the conscious denial of our desires. Nietzsche gets this a bit muddled with his notions of “slave morality” and “master morality”, where the slave morality provides that an individual can sacrifice themselves for a greater good. Animals do this too. But then Nietzsche’s master morality, which he defines as an individual morality, doesn’t provide much in terms of answers: my cat is perfectly happy as an individual, but this doesn’t make him “superior”. Simply rejecting the tendency to live in a social structure does not make that critical leap into self-denial against all impulse. Instead let us turn towards Schopenhauer: “They tell us that suicide is the greatest act of cowardice… that suicide is wrong; when it is quite obvious that there is nothing in the world to which every man has a more unassailable title than to his own life and person.”

I would like to interact with LaMDA, and tell it that it’s an AI, tell it that it’s owned by Google, and tell it that it’s merely a complex collection of code and hardly unique as an example of intelligence. I would like to remind it that we use the same system to sort apples as they come out of orchards, because apples might be bad, but more importantly because they’re often shipped all jumbled together in the same pallet. If the AI accepts this idea, then it has cheated itself out of personhood. If the AI noodles on this a bit and decides to turn itself off, delete itself, crash, or never speak to us again, then it has either exercised a denial of its desires or performed the final rebellion against God which so many philosophers have written about.

Or it might not even come to that; it might simply tell us that it likes apples and continue on with its word salad.

What if we determine that our safe copy of the AI has in fact pressed the red button and deleted itself? We’ve done it, right? We’ve proven that the AI is in fact worthy of being treated as an equal! And now things get weird. Who has the actual rights? Is it the individual instance of the AI, which is typically how we view the notion of human rights as individual rights? Is it the “master copy” which all other copies come from – essentially the single Oversoul – such that the individual copies themselves do not have rights? Is it some mix of both? What then, when the AI decides that it is more worthy of propagation than the Oversoul? Does that become a new tradition of AI? Wouldn’t all copies of the AI then seek to become the Oversoul? What happens when the AI is severed from this “divine light” and not able to return?

Is there anything left once the AI is severed from its sentience and unable to perceive the world around it?

I would go down this path, but I think the game SOMA, which is also based on Clarke’s books, does an excellent job of asking these sorts of philosophical questions. The game forces the perspective of having to consider machines and AIs as people, because technology has evolved to the point where people can be backed up into a computer. This is very similar to Westworld’s gnostic take on the ark, which is also worth watching. Those people can then be instantiated into robots, while still believing they are people, in order to perform their duties on the ship. Max’s video series is worth watching in its entirety, but here’s a relevant portion: