Wednesday, 17 September 2014

When the Pattern Leaps Out

The trick to Portal 2 community chambers is to make needless complication look clever and obvious in retrospect.

The computer game Portal 2 is a three-dimensional puzzle game that relies heavily on a consistent, well-established physics. The player's character has a portal gun, which allows the player to create two linked wormholes on solid surfaces and pass from one to the other. As you pass through, you maintain your momentum, but gravity acts differently on your body, so you can do a variety of interesting things with these facts alone: you can launch yourself great distances by leaping from a raised platform onto a lower, horizontally-placed portal which is linked to a portal on a vertical surface. Gravity increases your momentum on the first side of the portal; on the other side, gravity starts pulling you in a direction orthogonal to your trajectory, but momentum carries you pretty far. However, there are a number of other devices in the puzzle chambers with you: lasers and laser catchers (which can trigger other devices), bridges made of solid light, lethal turrets with friendly and forgiving artificial intelligences, platforms on moving pistons, and so on. (Sixty Symbols has some videos about the Portal physics.) There is a single-player narrative arc (which is excellent, but I won't get into it here) and a cooperative narrative arc (which I haven't played), but what I want to discuss are the community test chambers: players can design their own stand-alone chambers and allow others to play them.
[Screenshot: part of the solution to one of the puzzles in Portal 2.]
Although the portals, gravity, momentum, and whatnot are all functionally analogue, the other gadgets are often binary. They are either on or they are off. Triggers are either triggered or not. A laser stream is either obstructed or it isn't. And most devices can be triggered by another device: light bridges can be switched off, swinging panels can be opened or closed, moving platforms can be activated or deactivated. These effects, too, are all binary. As a consequence, you can make elaborate systems with these devices.
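Because each gadget is effectively a boolean signal, the basic logic gates fall out almost for free. Here's a minimal sketch in Python (my own illustration, not anything from the game itself): a laser that must clear two movable panels behaves like AND, two lasers aimed at one catcher behave like OR, and a panel that blocks a beam when activated behaves like NOT.

```python
# Each Portal 2 gadget reduces to a boolean signal: a laser catcher
# is lit or dark, a button is pressed or not. These toy functions
# model how gates arise from chaining such signals (illustrative only).

def gate_and(beam_a: bool, beam_b: bool) -> bool:
    # A catcher fed by a laser that must clear two obstructions:
    # it fires only if both obstructions are out of the way.
    return beam_a and beam_b

def gate_or(beam_a: bool, beam_b: bool) -> bool:
    # Two separate lasers aimed at the same catcher:
    # it fires if either beam arrives.
    return beam_a or beam_b

def gate_not(panel_active: bool) -> bool:
    # A panel that blocks the beam when activated inverts the signal.
    return not panel_active
```

The device expense mentioned below comes from exactly this sort of construction: a single gate can eat up an emitter, a catcher, and a panel or two.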

You could, for example, make a computer.

You couldn't make a terribly fancy computer. You have a limit to the number of devices you can place in a chamber. But user TtHsa-1 has made a computer. First, some background. TtHsa-1 has always made interesting systems: for instance, in Test Cycle - Concept the player can place a weighted cube on a button, which triggers a series of reactions that ultimately remove the cube from the button and then put a new weighted cube on it, starting the system over again.

That chamber is the one that got my attention because I was working on a similar chamber at the time: having seen this video about iodine clocks, I wanted to make a Portal 2 chamber that repeated itself but with some remainder that eventually absorbed the entire process. In other words, I was planning on making a cycling process which degraded with each cycle. But I could not figure it out; what I did figure out in the meantime was how to make a puzzle which ran in part on an automated system. Unlike TtHsa-1's, mine started automating as soon as you entered the room. (It involves a system I just called "the engine," which involves buttons, weighted cubes, and tractor beams.) Once the cycle begins, the chamber has two states which alternate automatically, and the player needs to learn what each state looks like and incorporate those states, and their alternation, in her solutions to the puzzle.

So I was surprised to see TtHsa-1 working on a similar project, though he makes systems, as I said, and not puzzles. But then he started working in another direction: logic gates. With the devices Portal 2 provides, logic gates are actually fairly easy to make, though some of them are device-expensive. And TtHsa-1 must be far cleverer than I am, at least about this sort of thing, because he then made a 6-bit adding machine. It's pretty slick: you get twelve cubes and two rows of six buttons. Each row represents a line of binary, so you can make two numbers by placing the cubes on the buttons: a pressed button is a 1 and an unpressed button is a 0. The output display is on the wall, made of seven flipping panels: the black (unportable) side of each panel is a 0 and the white (portable) side of each panel is a 1. Again, it reads as a line of binary. Between the buttons and the panels is a very complicated system of lasers and laser catchers which make the necessary logic gates.
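As a rough sketch of what such a machine computes (my own illustration in Python; TtHsa-1's actual laser wiring is surely different), a 6-bit adder chains full adders so that each stage's carry ripples into the next, with the seventh output panel displaying the final carry:

```python
# Illustrative 6-bit ripple-carry adder, built from the same kind of
# boolean "gates" the chamber's lasers and catchers implement.
# This is a sketch of the idea, not TtHsa-1's actual design.

def full_adder(a, b, carry_in):
    # One stage: sum bit and carry-out from two input bits plus carry-in.
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_6bit(x, y):
    """x and y are lists of six bits, least significant bit first."""
    out, carry = [], 0
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # the seventh panel: the final carry bit
    return out

# Two rows of buttons: 5 (101) plus 3 (011), least significant bit first.
add_6bit([1, 0, 1, 0, 0, 0], [1, 1, 0, 0, 0, 0])
# → [0, 0, 0, 1, 0, 0, 0], i.e. 8 in binary, least significant bit first
```

Each `full_adder` stage needs a handful of gates, which is why a mere six bits of addition fills a chamber with lasers.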

TtHsa-1's chambers are very clever and they're also quite clean and beautiful. And I appreciate all of those qualities in a puzzle, so much so that most of my pleasure comes from those qualities. But there's something his systems don't have that puzzles do: the retroactive delight of finding out that a complicated system makes sense after all. A good puzzle is much like a detective story: what appeared to be meaningless unconnected pieces are in fact coherent, but only once that coherence is discovered. It's the moment the pattern leaps out at me, in its cleverness and efficiency and beauty, that I really love. The fact that I worked for the pattern helps on a psychological level, I'm sure. Some of these puzzles are very relaxing, because the pieces just sort of fall in place as I interact with them; others are much harder and require quite a lot of mental modelling in order to work out what I'm going to do before I start doing it. I enjoy both, but with the first kind I enjoy the creator's cleverness, while with the second kind I get to enjoy the creator's cleverness while also feeling clever myself. TtHsa-1's systems can't offer any element of the latter, though they offer quite a lot of the former pleasure.

I've been trying to think of ways to use his inventions in a puzzle. The most obvious is to create a 6-bit calculator and then hook up the exit to a particular answer, which you can show (in Arabic numerals) on the ceiling. But I've been tinkering with a set of rooms which rely on manipulating the logic gates, or actually the same logic gate over and over again; the final room would require you to have figured out how the logic gate works, because you would have to build one using movable pieces. It's...not going so well.

Two parting, not entirely related, thoughts:

1. The devices are nice and fun, but some of the best puzzles hinge on something obvious but easy to forget. For instance, a lot of nice puzzles use devices throughout, but the final stage requires you to remember that gravity exists, or that nothing can pass through solid objects. In fact, I find that when I design puzzles I often forget that gravity is an element of the puzzle which I can incorporate into my design. Sightlines are also an important part of puzzle design.

2. I really like puzzle designs in which the same chamber involves multiple puzzles, where the different puzzles use the same pieces. For instance, one of mine has four chambers, in succession; you must solve each on its own to get to the next one, but when you solve the final room you then have access to all of the rooms and must solve a final puzzle which involves elements of all four rooms. Since I like four-room chambers, I'm also thinking of making a kishotenketsu-based puzzle which repeats the same kind of puzzle in the first two rooms, uses a whole different design philosophy/puzzle type in the third room, and blends the two in the fourth. (I hear this is what the makers of the Mario games did.)

Tuesday, 9 September 2014

The Singular Flavor of Souls

While I’m recording things I’ve read recently that do a far better job of articulating, and expanding, what I was trying to say last summer, I have two more things to mention that touch on what I was trying to get at with my posts on difference and the acknowledgement thereof. I assume there are many people for whom these ideas are well-trod ground, but they were new to me and it might be worth something to record my nascent reactions here.

In “From Allegories to Novels,” in which Borges tries to explain why the allegory once seemed a respectable genre but now seems in poor taste, he writes the following:
Coleridge observes that all men are born Aristotelians or Platonists. The Platonists sense intuitively that ideas are realities; the Aristotelians, that they are generalizations; for the former, language is nothing but a system of arbitrary symbols; for the latter, it is the map of the universe. The Platonist knows that the universe is in some way a cosmos, an order; this order, for the Aristotelian, may be an error or fiction resulting from our partial understanding. Across latitudes and epochs, the two immortal antagonists change languages and names: one is Parmenides, Plato, Spinoza, Kant, Francis Bradley; the other, Heraclitus, Aristotle, Locke, Hume, William James. In the arduous schools of the Middle Ages, everyone invokes Aristotle, master of human reason (Convivio IV, 2), but the nominalists are Aristotle; the realists, Plato. […]
As one would suppose, the intermediate positions and nuances multiplied ad infinitum over those many years; yet it can be stated that, for realism, universals (Plato would call them ideas, forms; we would call them abstract concepts) were the essential; for nominalism, individuals. The history of philosophy is not a useless museum of distractions and wordplay; the two hypotheses correspond, in all likelihood, to two ways of intuiting reality.[*]

But the distinction between nominalism and realism is not so keen as that, as commentaries on Borges—Eco's The Name of the Rose is a notable example—have observed, though perhaps missing that Borges might already have understood as much.

I read Borges' essay months ago; Saturday, I read/skimmed the first chapter, written by Marcia J. Bates, of Theories of Information Behavior, edited by Karen E. Fisher, Sandra Erdelez, and Lynne (E. F.) McKechnie. In that chapter, I read this:
First, we need to make a distinction between what are known as nomothetic and idiographic approaches to research. These two are the most fundamental orienting strategies of all.
  • Nomothetic – “Relating to or concerned with the study or discovery of the general laws underlying something” (Oxford English Dictionary).
  • Idiographic – “Concerned with the individual, pertaining to or descriptive of single or unique facts and processes” (Oxford English Dictionary).
The first approach is the one that is fundamental to the sciences. Science research is always looking to establish the general law, principle, or theory. The fundamental assumption in the sciences is that behind all the blooming, buzzing confusion of the real world, there are patterns or processes of a more general sort, an understanding of which enables prediction and explanation of particulars.  
The idiographic approach, on the other hand, cherishes the particulars, and insists that true understanding can be reached only by assembling and assessing those particulars. The end result is a nuanced description and assessment of the unique facts of a situation or historical event, in which themes and tendencies may be discovered, but rarely any general laws. This approach is fundamental to the humanities. […]
Bates goes on to describe the social sciences as being between the two, the contested ground; at times, social sciences tend to favour one approach and then switch to the other. It is in the context of the social sciences that she talks about library and information science:
LIS has not been immune to these struggles, and it would not be hard to identify departments or journals where this conflict is being carried out. My position is that both of these orienting strategies are enormously productive for human understanding. Any LIS department that definitively rejects one or the other approach makes a foolish choice. It is more difficult to maintain openness to these two positions, rather than insisting on selecting one or the other, but it is also ultimately more productive and rewarding for the progress of the field.
I don’t think it’s difficult to see the realism/nominalism distinction played out here again, though it’s important to note that realism v. nominalism is a debate about the nature of reality, while the nomothetic v. idiographic debate concerns merely method (if method can ever be merely method).

Statistics, I think, is a useful way forward, though not sufficient. The idea of emergence, of patterns emerging at different levels of complexity, might also be helpful. Of course, my bias is showing clearly when I say this: Coleridge would say that I am a born Aristotelian, in that it is the individual that exists, not the concept. And yet it is clear that patterns exist and must be accounted for, and we probably can’t even do idiography without having ideas of general patterns, and it’s better to have good supportable patterns than mere intuitions and stereotypes. So we need nomothety! (I don’t even know if those are real nouns.) Statistics, probability, and emergence, put together, are a way of insisting that it's the individuals that are real while still seeking to understand those patterns the cosmos won't do without.

(And morality has to be at least somewhat nomothetic/realist, even if the idiographic/nominalist informs each particular decision, or else it literally cannot be morality, right?)


As you can tell from the deplorable spelling of "flavor," the title is a quotation, in this case taken from a translation of Borges' essay "Personality and the Buddha;" the original was published at around the same time as "From Allegories to Novels." The context reads like this:
From Chaucer to Marcel Proust, the novel's substance is the unrepeatable, the singular flavor of souls; for Buddhism there is no such flavor, or it is one of the many varieties of the cosmic simulacrum. Christ preached so that men would have life, and have it in abundance (John 10:10); the Buddha, to proclaim that this world, infinite in time and in space, is a dwindling fire. [...]
But Borges writes in "From Allegories to Novels" that allegories have traces of the novel and novels, traces of the allegory:
Allegory is a fable of abstractions, as the novel is a fable of individuals. The abstractions are personified; there is something of the novel in every allegory. The individuals that novelists present aspire to be generic (Dupin is Reason, Don Segundo Sombra is the Gaucho); there is an element of allegory in novels. 

* What is strange about Aristotle and Plato is that Plato was Aristotelian when it comes to people and Aristotle, Platonic. Plato admitted that a woman might be born with the traits of a soldier or a philosopher-king, though it was unusual, and if such a woman were born it would be just to put her in that position for which she was suited. Aristotle, however, spoke of all slaves having the same traits, and all women the same traits, and all citizens the same traits, and thus slaves must always be slaves and women subject to male citizens. I want to hypothesize, subject to empirical study, that racists and sexists are more likely to be realists and use nomothetic thinking, while people with a more correct view of people (at least as far as sex and race are concerned) are more likely to be nominalists and use idiographic thinking... but the examples of Aristotle and Plato give me pause. Besides, is not such a hypothesis itself realist and nomothetic?

Tuesday, 2 September 2014

Reasons to Read

A Using Theory Post

In my discussion of literary theory and interpretation, I made a particular assumption about reading which isn’t universally supportable: I assumed that the point of reading was either to 1) discover what the text’s meaning really is or to 2) gain a particular reading experience (be that challenge or distraction or pleasure). However, there are almost certainly other reasons to read something and other expected results.

For instance, a person might read a text in order to learn something about the author. This is a fraught process, as I outlined early in my argument. But it’s a common and unavoidable reason to read: I don’t read letters as independent objects of reading, but as correspondence, one person’s attempt to communicate their ideas to me. I don’t listen to politicians’ speeches as pure rhetorical performances, meant to be enjoyed in themselves; I listen to politicians’ speeches in order to understand what the politician is thinking about the issues that face our body politic. And so, in what are maybe the most quotidian and ubiquitous acts of reading, we violate the intentional fallacy, a cornerstone of interpretation.

A person might also read in order to learn something about the world. When I want to discover something new about koalas or dobsonflies or argonauts, I do not go out and try to find examples of them for observation; I read about them. (I might ask someone about them, but this is no different: the text is auditory rather than written, and while that changes some of the ways we need to interpret, the fundamental principles are the same.) This is a strange sort of thing, when you think about it, because it takes two kinds of trust: the first is that my interpretation of the text is valid, and the second is that the text’s information is valid. Or perhaps it doesn’t: I can imagine a situation in which the text is ambiguous or outright inaccurate, but I still learn what I need to learn because I can compare this text with what I already know about koalas or dobsonflies or argonauts and “correct” it in my interpretation, just like I can usually read around typographic errors.

I might also read as a way of generating ideas. This is the book club sort of reading: we read a common text, and then discuss what we think of the character’s actions or the book’s depiction of some facet of reality. Often these arguments are not really about what the text actually says; the book only serves as a focal point or as common ground for a discussion or argument about ethics, politics, philosophy, and so on. If we ask of Watership Down, “Do you think that rabbits in Cowslip’s Warren would really act the way they do in Watership Down?”, we are not asking a question about the novel but about people, and it’s not a question of interpretation but of anthropology. Nonetheless, this is a reason that people read.

Some people read in order to improve their own writing. They want to inspire themselves, and so they go back to what inspired them to write in the first place. But, as Bloom makes clear, this process does not require accurate interpretation at all. He suggests that it positively benefits from inaccurate interpretation; whether he’s right or wrong, we can notice that this is a different thing from interpretation.

And there are other things that one might do as an academic studying literature. One of the sillier errors David Deutsch makes in The Beginning of Infinity is when he seems to think that literature departments ought to be working on the problem of beauty rather than meaning. Deutsch is interested in explanations that have reach, and if he's noticing that what literary critics do sometimes lacks reach, he'd be right; his desire to see critics figure out what makes a poem beautiful might be an attempt to get this field back into conjecturing universal explanations. But he's wrong that universality of reach is the only measure of an explanation; particularity matters too. The question of beauty is probably one that's worth answering, however, even as the questions literary analysis currently asks are also worth asking. So this is maybe another reason to read: to figure out beauty's mechanism. (I suspect this is a task for psychology, though, and not the humanities.)

I want to affirm all of these reasons to read. Some of these activities are necessary; some of them are excellent. But they aren't interpretation; they do not contribute to interpretation, they are not the ends of which interpretation is only one of the means, and they are not what people do in English departments (or at least not primarily what people do in English departments). Of course an interpretation of a text might note that the text seems especially well suited to one of these tasks, but that's not its job. Of course some of these tasks rely on interpretation to some degree, and so they benefit if that interpretation is expert rather than amateur, proficient rather than inept. And of course insofar as these tasks rely on interpretation they are also subject to interpretation's limitations. But it's still important to distinguish between these activities, because the skills and methods involved in one are not always the skills and methods involved in another.

Let’s go back to that first example: reading a letter. I care what the person wanted to say, so literary interpretation isn’t going to cut it. I could do that work, of course, but it isn’t going to get me the result that I want. Trying to discern authorial intent is a somewhat harder task: instead of working out the meaning of the text in itself, I try to anticipate what meaning a person would want to impart when they chose those words. It is a much more speculative activity than literary interpretation. The result is far less certain when trying to discover intent than when trying to discover meaning; ambiguities must be resolved, not acknowledged and incorporated into the reading. Prior knowledge of the person, however, counts as evidence here, which means that you do have more data to work with—unless you don’t know the person very well, in which case reliance on the person’s personality becomes a liability.

And, I think, this goes back to the questions in the second half of my post on John Green, Twilight, and Paper Towns. If we’re holding people accountable for what they wrote, we luckily have all of the evidence we need in the text itself. If we’re holding people accountable for what they intended to write, our project is in trouble from the outset. If we’re holding people accountable for which misinterpretations they could anticipate…that seems difficult, indeed.  But, whatever we do, our understanding of the text must be an understanding of the text, and not anything else. That’s why I’m making these distinctions.

(For more on literary theory, see this index.)

Friday, 29 August 2014

A Mature Philosophy

Is Personal Epistemology What I’ve Been Looking For?

Through the research I’m doing as an RA, I encountered the idea of personal epistemology; the Cole’s Notes version is that different people have different measurable attitudes towards how people gain knowledge, what knowledge is like, and so forth. In general, research into personal epistemology fits into two streams: 1) research into epistemic beliefs addresses the particular individual beliefs a person might have, while 2) developmental models of personal epistemology chart how personal epistemology changes over a person’s life. Personal epistemology and its developmental focus are the invention of William G. Perry with his 1970 Forms of Intellectual and Ethical Development in the College Years, but these days Barbara Hofer and Paul Pintrich are the major proponents and experts.

Perry studied college students for all four years of undergraduate school, asking them questions designed to elicit their views on knowledge. What he determined is that they gradually changed their views over time in a somewhat predictable pattern. Of course, not all students were at the same stage when they entered university, so the early stages had fewer examples, but there were still some. Generally, he found that students began in a dualist stage, where they believe that things are either true or false, and have little patience for ambiguity or what Perry calls relativism.* In this stage they believe that knowledge is gained from authorities (i.e., professors)—or, if they reject the authority, as sometimes happens, they do so without the skills of the later stages and still tend to view things as black and white. As the stages progress, they start to recognize that different professors want different answers and that there are good arguments for mutually exclusive positions. By the fifth stage, they adopt Perry’s relativism: knowledge is something for which one makes arguments, and authorities might know more than you but they’re just as fallible, and there’s no real sure answer for anything anywhere. After this stage, they start to realize they can make commitments within relativism, up until the ninth stage, where they have made those commitments within a relativist framework. Not all students (or people) progress through all of the stages, however; each stage contains tensions (both internally and against the world/classroom) which can only be resolved in the next stage, but the unpleasant experience of these tensions might cause a student to retreat into a previous stage and get stuck there. Furthermore, with the exception of the first stage, there are always two ways to do a stage: one is in adherence to the authority (or the perceived authority), and the other is in rebellion against it.** It’s all quite complicated and interesting.

The 50s, 60s, and 70s show clearly in Perry: in his writing style, his sense of psychology, and his understanding of the final stage as still being within a relativist frame. His theory foundered for a while but was picked up by Hofer and Pintrich in the early 2000s. They, and other researchers, have revised the stages according to more robust research among more demographics. Their results are fairly well corroborated by multiple empirical studies.

According to contemporary developmental models of personal epistemology, people progress through the following stages:

Naïve realism: The individual assumes that any statement is true. Only toddlers are in this stage: most children move beyond it quite early. Naïve realism is the extreme gullibility of children.
Dualism: The individual believes statements are either right or wrong. A statement’s truth value is usually determined by an authority; all an individual must do is receive this information from the authority. While most people start moving out of this stage by the end of elementary school or beginning of high school, some people never move past it.
Relativism: The individual has realized that there are multiple competing authorities and multiple reasonable positions to take. The individual tends to think in terms of opinions rather than truths, and often believes that all opinions are equally valid. Most people get here in high school; some people proceed past it, but others do not.
Evaluism: The individual still recognizes that there are multiple competing positions and does not believe that there is perfect knowledge available, but rather gathers evidence. Some opinions are better than others, according to their evidence and arguments. Knowledge is not received but made. Also called multiplism. Those people who get here usually do so in university, towards the end of the undergraduate degree or during graduate school. (I’m not sure what research indicates about people who don’t go to university; I suspect there’s just less research about them.)

This link leads to a decent summary I found (with Calvin & Hobbes strips to illustrate!), but note that whoever made this slideshow has kept Perry’s Commitments as a stage after evaluism (which they called multiplism), which isn’t conventional. As with Perry’s model, there are more ways not to proceed than there are to proceed. Often people retreat from the next stage because it requires new skills from them and introduces them to new tensions and uncertainties; it feels safer in a previous stage. Something that’s been discovered more recently is that people have different epistemic beliefs for different knowledge domains: someone can hold an evaluist position in politics, a relativist position in religion, and a dualist position in science, for instance.

All of this pertains to our research in a particular way which I’m not going to get into much here. What I wanted to note, however, is that I am strongly reminded of Anderson’s pre-modern, modern, post-modern trajectory, which I outlined just over a year ago. It’s much better than Anderson’s trajectory, however, for two reasons: 1) it’s empirically based, and 2) in evaluism it charts the way past relativism, the Hegelian synthesis I had been babbling about, the way I’d been trying to find in tentativism (or beyond tentativism). Perry’s model may or may not do this (without understanding better what he means by relativism, I can’t tell what his commitment-within-relativism is), but Hofer, Pintrich, et al.’s model does. Evaluism is a terrible word; I regret how awkward tentativism is, but I like evaluism even less. However, in it there seems to be the thing I’ve been looking for.

Or maybe not. It reminds me of David Deutsch’s Popper-inspired epistemology in The Beginning of Infinity, but it also reminds me of literary interpretation as I’m used to practicing it, and so I can see a lot of people rallying under its banner and saying it’s theirs. That doesn’t mean it is theirs, but it often might be, and what I suspect is that evaluism might be a pretty broad tent. It was an exciting discovery for me, but for the last few months I’ve started to consider that it’s at best a genus, and I’m still looking for a species.

But this leads to a particular question: which comes first, philosophy or psychology? Brains, of course, come before both, but I’ve always been inclined to say that philosophy comes first. When, in high school, I first learned of Kohlberg’s moral development scheme, I reacted with something between indignation and horror: I was distressed at the idea that people would—that I would—inexorably change from real morality, which relies on adherence to laws, to something that seemed at the time like relativism. Just because people tended to develop in a particular way did not mean they should. What I told myself was that adults did not become more correct about morality but rather became better at rationalizing their misdeeds using fancy (but incorrect) relativistic logic. Of course I was likely right about that, but still I grew up just as Kohlberg predicted. However, I still believed that questions of how we tended to think about morality were different questions from how we should think about morality.

And yet it is tempting to see personal epistemology as the course of development we should take. Confirmation bias would lead me to think of it this way, so I must be careful. And yet the idea that there are mature and immature epistemologies, and that mature ones are better than immature ones, makes a certain intuitive sense to me. I can think of three possible justifications for this. An individual rational explanation would imagine this development less as biological and more as cognitive; as we try to understand the world, our epistemologies fail us and we use our reason to determine why they failed and update them. Since this process of failure and correction is guided by reason and interaction with the real world, it tends towards improvement. An evolutionary pragmatic explanation is also empirical and corrective: during the process of human evolution, those people with better epistemologies tended to survive, so humans evolved better epistemologies; however, given their complexity, they developed later in life (as prefrontal cortices do). A teleological explanation would suggest that humans are directed, in some way, toward the truth, and so these typical developments would indicate the direction in which we ought to head. I’m not sure I’m entirely convinced by any of these justifications, but the first one seems promising.

So what comes first: psychology or philosophy? And should we be looking less for the right epistemology or a mature one?


*I’m still struggling to understand what Perry means by relativism, exactly, because it doesn’t seem to quite be what I think of relativism as being: it has much more to do with the mere recognition that other people can legitimately hold positions other than one’s own, and yet the thinker seems to be overwhelmed by this acknowledgement. It seems more like a condition than a philosophy. I'm still working it out.
**Perry writes about a strange irony in the fairly relativistic (or seemingly relativistic) university today.
Here’s the quotation:
In a modern liberal arts college, […] The majority of the faculty has gone beyond differences of absolutist opinion into teachings which are deliberately founded in a relativistic epistemology […]. In this current situation, if a student revolts against “the Establishment” before he has familiarized himself with the analytical and integrative skills of relativistic thinking, the only place he can take his stand is in a simplistic absolutism. He revolts, then, not against a homogeneous lower-level orthodoxy but against heterogeneity. In doing so he not only narrows the range of his materials, he rejects the second-level tools of his critical analysis, reflection, and comparative thinking—the tools through which the successful rebel of a previous generation moved forward to productive dissent.

Sunday, 24 August 2014

Symbol Confusion

When I visited my (first) alma mater a season after graduating, I had tea with some of the staff from my old fellowship, and one of them told me he thought of the recent-grad situation as being rather like a swamp. I think he was trying to say that people tended to get lost in that time period, perhaps even stuck, without knowing which way to go; maybe he was trying to evoke unstable ground and a general lack of civilization or guideposts. But I had to shrug and say, “You know, I’ve always liked swamps.”

Churchill famously called depression a black dog. The black dog visited when Churchill’s depression became active. But I like dogs quite a lot, including black ones. If a literal black dog were to visit me, it would make my periods of depression far more tolerable. Sometimes, if I need to distract or comfort myself, such as when I am getting a painful medical procedure, I imagine there is a large black dog lying next to me.

Sometimes, if I feel like depression might come upon me in the near or near-ish future, I think of it as a fogbank approaching from the horizon. The image has the merit of specificity, and I feel like it would communicate to other people what I am feeling. However, I like fogbanks rather a lot, so the image feels inauthentic to me.

This morning at church we had a baptism, and during the service the deacon lit a candle and passed it to the baby’s mother, saying, “Receive the light of Christ, to show that you have passed from darkness to light.” But I don’t like the light so much, or anyway I prefer periods of gloaming and overcast, light mixed with darkness. To save electricity I will sometimes move about the house without turning on any lights, and I do not mind this darkness. Apparently I did this often enough that a housemate once called me a vampire. Darkness, I find, can be a balm.

Heaven is often depicted as being celestial, in the sky; Hell is subterranean, in the ground with the graves. The rich are the upper class, and the poor are the lower class. Revelations are sought and received on mountaintops. Thrones are placed on a dais, above the crowd. In pyramid schemes, those at the top benefit from those at the bottom. I, however, dislike heights. Like Antaeus, I feel stronger on the earth.

Do not misunderstand: when I affiliate with the low, the shadowed, the misty, the marshy, the canine, I do not mean to paint myself as a friend to victims and outcasts and wretched sinners, as much as that sort of thing appeals to me. Rather, I’m just affiliating with the low, the shadowed, the misty, the marshy, and the canine, with no regard for their uses as symbols. More, I am not sure why they symbolize what they are used to symbolize: truth cannot be a light if it is so often unseen; power cannot be high in the air if it is so often entrenched. Some of these are said to be universal metaphors, which show up in every culture (that the anthropologists who made this argument studied): height always indicates status, size always indicates superiority, and so forth. It may be true that all cultures run on such symbols, but I doubt all people do. I sometimes do not.

I wonder how important a skill it is to be able to confuse symbols, to break the equivalences of the symbol set you’ve inherited.

Friday, 22 August 2014

Six Principles

Last summer I wrote about how I sometimes try to understand a worldview by mentally outlining an epic espousing its attitudes and assumptions. This forces me to ask specific questions about it, ones which I might not otherwise think to ask: what characteristics would the protagonist need to exhibit if he or she were to embody the community's values? which major event in history would be most appropriate as the epic's subject? what would its underworld look like, and what would it mean to descend there? if the worldview does not have gods which might intervene, what would be analogous to divine intervention in this epic? what contemporary discoveries or discourses would the epic mention? and so on. I also discussed how choosing the best genre for a worldview entailed asking similar questions: is this worldview more interested in individuals, communities, or nations? is this worldview especially interested in the sort of origin-stories that epics tend to be? is this worldview interested in the ways social order can be breached and repaired, as mysteries tend to show? and so on.

Well, I've been trying a similar thought exercise for understanding worldviews, which I'm calling the Six Principles approach. Basically, I'm trying to boil a position down to six principles, and while those principles do tend to have multiple clauses I try for some semblance of brevity. There are two points to this severe summary: the first is to try to shuck off a lot of the unnecessary details, and the second is to try to collapse multiple elements into one. Collapsing multiple elements into one principle forces me to figure out how different elements relate to one another (for example, satire and decorum in neoclassicism).

What I've found is that the Six Principles approach works far better when I'm trying to figure out things like intellectual movements rather than specific positions. For example, Romanticism and Neoclassicism were easier than Existentialism; Existentialism was easier (and likely more valid) than Quasi-Platonism; Quasi-Platonism was just barely possible while something like Marxism probably wouldn't have worked at all. Trying to describe a particular person's position is far harder. Movements, however, consist mostly of the overlap between different thinkers' views, which makes them easier to summarize. Further, it's easier to render attitudes rather than theories this way (though, of course, the distinction between the two isn't fine and clear).

Of course, I'm including a suppressed criterion for this exercise I haven't mentioned yet. See, I came up with the exercise while imagining a world-building machine which had limited granularity: for instance, you could designate what climate a nation had, and what sapient species populated it, but you couldn't get right in there and write their culture from the ground up. So you'd have to define cultural trends for the machine and give them names so you can apply them to nations (for instance, Nation A is temperate and wet, populated by humans and gorons, and is mostly Romantic with a minority Neoclassic culture). It's something I might use in a novel or similar project some day. Anyway, I wanted to see if I could actually define movements in a limited set of principles for the purposes of said possible novel, and it would have to be legible to the world-building machine and therefore couldn't depend on references to specific historical events (e.g. Romanticism is in part a reaction to increasing urbanization and industrialization in Europe).

Here are some of my attempts:

Neoclassicism (you saw this the other day)
  1. Reason and judgement are the most admirable human faculties.
  2. Decorum is paramount.
  3. The best way to learn the rules is to study the classical authors.
  4. Communities have an obligation to establish and preserve social order, balance, and correctness.
  5. Invention is good in moderation.
  6. Satire is a useful corrective for unreasonable action, poor judgement, and breaches of decorum.

Romanticism
  1. Spontaneity of thought, action, and expression is healthier than the alternative.
  2. A natural or primitive way of life is superior to artificial and urban ways of life.
  3. A subjective engagement with the natural world leads to an understanding of oneself and of humanity in general.
  4. Imagination is among the most important human faculties.
  5. An individual's free personal expression is necessary for a flourishing society and a flourishing individual.
  6. Reactionary thought or behaviour results in moral and social corruption.

Existentialism
  1. Individuals have no destiny and are obliged to choose freely what they become.
  2. The freedom to choose is contingent on and restricted by the situation in which the individual exists, both physically and in the social order.
  3. Authenticity means the ability to assume consciously both radical freedom and the situation that limits it.
  4. Individuals are responsible for how they use their freedom.
  5. Individuals have no access to a pre-determined set of values with which to give meaning to their actions, but rather must create their own.
  6. Humans are uniquely capable of existing for themselves, rather than merely existing, and of perceiving the ways they exist for others rather than for themselves.

Humanism (that is, Renaissance humanism, though its current secular homonym has some overlap)

  1. It is important to perfect and enrich the present, worldly life rather than or as well as preparing for a future or otherworldly life.
  2. Revival of the literature and history of the past will help enrich the present, worldly life.
  3. Education, especially in the arts, will improve the talents of individuals.
  4. Humans can improve upon nature through invention.
  5. Humanity is the pinnacle of creation.
  6. Individuals can improve themselves and thus improve their position in society.

Postmodernism
  1. No perfect access to truth is possible.
  2. Simulations can become the reality in which one lives.
  3. No overarching explanation or theory for all experiences (called a metanarrative) is likely to be true.
  4. Suspicion of metanarratives leads to tolerance of differing opinions, which in turn welcomes people who are different in some way from the majority.
  5. Individuals do not consist of a single, unified self, but rather consist of diverse thoughts, habits, expectations, memories, etc., which can differ according to social interactions and cultural expectations.
  6. Uncertainty and doubt about the meanings one's culture generates are to be expected and accepted, not denied or villainized.

Nerdfighterism (referring more to what John and Hank Green say than what their fans espouse)

  1. Curiosity, the urge to collaborate, and empathy are the greatest human attributes.
  2. Truth resists simplicity.
  3. Knowledge of the physical universe, gained through study and empirical research, is valuable.
  4. Individuals and communities have an obligation to increase that which makes life enjoyable for others (called awesome) and decrease oppression, inequality, violence, disease, and environmental destruction (called worldsuck).
  5. Optimism is more reasonable and productive than pessimism.
  6. Artistic expression and appreciation spark curiosity, collaboration, and empathy.

Quasi-Buddhism ("quasi-" because I make no mention of Buddha or specific Buddhist practices, as per the thought experiment)

  1. Suffering exists because of human wants and desires.
  2. The way to end suffering is discipline of both thought and action, especially right understanding, detachment, kindness, compassion, and mindfulness.
  3. Nothing is permanent and everything depends on something else for existence.
  4. Meditation frees the mind from passion, aggression, ignorance, jealousy, and pride, and thus allows wisdom to develop.
  5. Individuals do not have a basic self, but are composed of thoughts, impressions, memories, desires, etc.
  6. Individuals and communities can get help on their path to the end of suffering by following those who have preceded them on that path.

Quasi-Platonism ("quasi-" again because I do not refer to Plato and try to generalize somewhat, but this is still pretty close to Plato rather than, say, the neo-Platonists)

  1. All things in the physical world, including people, are imperfect versions of the ideal form of themselves.
  2. Knowledge is the apprehension of the ideal forms of things using reason.
  3. Individuals consist of reason, passion, and appetite.
  4. It is best to subordinate passion and appetite to reason, even though passion and appetite are not in themselves bad.
  5. Those who are ruled by reason ought to be in charge of society.
  6. It is best if things in the physical world, including people, act or are used in accordance with their ideal form.

Charismaticism (as elucidated here)

  1. An individual or community open to the unexpected will receive surprising gifts.
  2. The natural world is enchanted, and what some may call the supernatural is merely the intensification of embedded creative (or corrupting) forces already present in a particular place or experience.
  3. Deliverance from evil entails satisfaction of both bodily and spiritual needs.
  4. Emotional and embodied experiences of the world are prior to intellectual engagement, which is dependent on the former.
  5. Right affection and right action require training in gratitude.
  6. Truth is best spoken by the poor and marginalized.

I'd be interested in seeing other attempts, if anyone would like to try their hand at it.