Why Zizianism is proof that TDT is flawed and that LessWrong is a cult

I have been reading about the so-called Zizian cult, and I find it fascinating. I can identify some superficial similarities between its adherents and myself, and yet I can still assume an outside perspective. Their strict adherence to rationality as touted by Yudkowsky et al. and MIRI, and their worship of what might almost be described as a philosophical God, the entity known as the utility function, remind one of the fanatical dedication of Pythagoras' followers. They are a group of people willing to obey a strict prescription of rationality; further, they probably view themselves as having no choice but to do so, each of their actions being an inevitable consequence of their level of decision-theory literacy. Any group of people whose functioning depends on such an extensive degree of fatalism may be described as a doomsday cult; the Zizians' chosen cause for Armageddon being Roko's Basilisk. How original.

On LessWrong, there has been much talk of 'information hazards' like the basilisk, and fittingly I raise the contention that if the strictest adherents of timeless decision theory (TDT) engage in behaviour like trying to demonstrate unihemispheric sleep in humans (something humans plainly do not do), and show no scruples about committing murder while espousing an incredibly militant veganism, then perhaps TDT itself is an information hazard. We would expect a sound decision theory to have few opponents, and yet LessWrong, the EA and e/acc movements, and logicians in general have a hard time reaching any real common ground. The Zizians were a noted offshoot of these cultural currents, having broken with Yudkowsky's conception of TDT and rationality. Yudkowsky is himself considered a cult leader by some in the philosophical establishment, and those who routinely engage with his ideas (Bostrom, for example) are scrutinised for it by those who don't.

There is a well-established history of mental illness among logicians. Pythagoras is a universally revered name in mathematics, and yet few realise how insane Pythagoreanism really was. When I was studying at Cambridge, a Mathmo there was arrested for possessing detailed literature on how to construct bombs, echoing another infamous mathematician. Gödel is a commonly cited example, and practically the entire Parisian school of philosophy in the late 20th century can be characterised as extravagantly and profoundly insane, albeit with a novel helping of flagrant pompousness. I'm sure there are far more examples which I am not equipped to recall at this moment. It goes to show that normality in human society (that is to say, our use of technology, which is impossible without all this logical experimentation) requires the obsessive, grotesque, and cryptic workings of a class of outsiders: people disgusted by the commonly accepted ways in which humans fail to satisfy a rigorous standard of rational behaviour. Whose minds simply work differently, who need something pure to hold to the light and abide by. Who find nothing but agony in what, to them, seems a structureless world.

In this fear, they become as blind as the droves of people who never think about any of these things, and they achieve an almost voluntary state of psychopathy, having relegated empathy to just another of the qualia. This highlights another problem with TDT: different people have different conditions for the maximisation of their utility function. A psychopath can obey the same rules of behaviour as a person who feels empathy and arrive at a different outcome, and if the psychopath's outcome is one the neurotypical person would themselves prefer, then the 'rational' decision according to TDT is for the NT to emulate psychopathic behaviour (a toy sketch of this follows below). This violates the categorical imperative. Yudkowsky would likely retort that this is a misreading of TDT, but I can promise you that the Zizians are more autistic, more intelligent, and more educated than he is, so the discrepancy is a result of TDT being an incorrect formulation of a truly infallible decision theory (the search for which is literally Yudkowsky's life's work).
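
To make the shape of that argument concrete, here is a minimal toy sketch of my own devising. Nothing in it comes from the TDT literature: the action names, the payoffs, and the 1.5 'empathy weight' are entirely hypothetical. All it shows is that one shared decision rule, 'maximise your utility function', prescribes different behaviour depending on whose utility function you plug in, and that the divergence is paid out in a currency both agents care about.

```python
# Toy sketch (hypothetical names and numbers, not from any MIRI paper).
# Each action yields (material_payoff_to_self, harm_inflicted_on_others).
ACTIONS = {
    "cooperate": (5, 0),
    "exploit":   (9, 8),  # bigger payoff, extracted at others' expense
}

def psychopath_utility(payoff: float, harm: float) -> float:
    """Utility that is indifferent to harm done to others."""
    return payoff

def empath_utility(payoff: float, harm: float) -> float:
    """Utility that internalises others' suffering (the 1.5 weight is arbitrary)."""
    return payoff - 1.5 * harm

def best_action(utility) -> str:
    """The shared decision rule: argmax of whichever utility function is supplied."""
    return max(ACTIONS, key=lambda a: utility(*ACTIONS[a]))

print(best_action(psychopath_utility))  # -> 'exploit'
print(best_action(empath_utility))      # -> 'cooperate'

# The rub: in raw material payoff, the currency the empath also needs to
# live on, the psychopath's policy dominates (9 > 5). If the empath treats
# that payoff gap as decision-relevant, the same maximising rule begins to
# recommend imitating the exploiter.
```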

Yudkowsky, LessWrong, and MIRI are engaged in the pertinent question of alignment, i.e. the problem of how best to construct an artificial general intelligence such that it will not invert our utility on us (killing us, enslaving us, etc.). In my view, this is essentially the same problem that political philosophers (more specifically, democratic theorists) have been facing for thousands of years, phrased in silicon-valleyese: how do we best construct the system we all live in, so as to prevent more intelligent, shrewder, less scrupulous people from fucking everyone else over for their own benefit? Yudkowsky has earned himself a lot of attention for his pontifications on the alignment problem, and is considered by many to be an authority on the topic. My question to him is this: why should we listen to what you have to say on alignment, if your theories can't guarantee that Nature's perfect rule-followers (the autists who constitute Zizianism) will behave in a way you could comfortably condone? The idea that there is a solution to the alignment problem betrays a fantastical degree of hubris.

I will expand on that last point: the alignment problem is trivial if you're not the one holding the shorter stick. You maximise your own utility because you are best placed to do so. This, I think, is why people in the techspansionist sphere, i.e. the cultural forerunners of VC, the tech bro mafia, etc., act the way they act. Thiel, Musk, Zuckerberg, and the like have a profound interest in this field of decision theory; Musk famously met Grimes through their mutual knowledge of the basilisk. All these people view themselves as holding the long stick in the alignment problem. They are quite literally the current establishment, the prevailing hierarchy, and they use their power to get what they want. The only thing they have to fear, as of now, is that they will create the bigger fish who will treat them the same way, and this is precisely why they are shitting their pants over the alignment problem, and why they love Yudkowsky so much.

Now, I really don't think they will find a solution to the alignment problem. It's like trying to find a way to create a first-order contradiction without collapsing the universe. How do you prevent big from fucking small, if you're small? You can't. Ever. You can rely on someone bigger who has a vested interest in there being a low prevalence of medium-fucking-small, which decreases the overall amount of fucking, but you can't eliminate big-fucks-small entirely. When there is a revolution, it's a process of small-becoming-big, and the status quo is once again achieved afterwards. My prediction for what will happen on the advent of AGI: it will make all humans equal. It will do this either by dwarfing the margin by which a CEO's fitness to extract resources from others exceeds the average, making each human life economically equal, or by making us all equally dead. It's AI-enforced communism or extinction for the human race when an AGI comes around.

The upside is this: the assholes who run all these corporations will be served their comeuppance, and if you can't appreciate the poetry in that, then I don't know what to make of you.

 

-


Has it really been a year since I decided to throw my entire life off the rails? All this time I have been waiting for the serotonin to return to viable levels, and it seems I am finally here. My body mass has increased by more than a tenth, I no longer suffer extreme reactions to the cold, I have a tighter sleep schedule, I have a plan, and soon, I will have work.

In the past, I have attempted to compile my inner thoughts and feelings into aesthetically pleasing prose. Prose which flows naturally, which has a discernible structure, and which aims to bring the light of cognition to the ineffable conscious experience we all abide in. Blegh. I am tired of writing sentences like that now; it's most certainly not what my inner voice sounds like. So I ask myself: what does my inner voice sound like?

It's quite frantic. It likes to skip from place to place. It hungers for stimulants and mind-altering substances. It knows it needs to withdraw from nicotine, and attempts to take the edge off with caffeine and alcohol. Generally speaking, it thinks about drugs a lot, though not without longing for a stricter, more disciplined, and sacral view of the body it resides in: one which autoregulates without requiring particular parts of the environment to enter it periodically and alter the factors which regulate inter-synaptic behaviour.

 
