Friday, July 12, 2019

Some Theories Of Being

Euclid Of Alexandria

The best known discussion of being is in Euclid - that is to say, from the schools of Eudoxus and Plato.

There exist infinitely many prime numbers! Is there a more beautiful cluster of concepts to a Platonist? The proof is simple: Take the least common multiple of a finite list of primes and add one. This new number can only have prime factors not on the list, because dividing by anything on the list leaves remainder one. Therefore, your old list was too small. Since no finite list contains every prime, the set of primes is infinite.
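For the unconvinced, the engine of the proof fits in a few lines of Python (a sketch; the function name is mine and math.lcm needs Python 3.9+):

```python
from math import lcm

def witness(primes):
    """Given a finite list of distinct primes, return a number whose
    prime factors are all absent from the list."""
    n = lcm(*primes) + 1
    # dividing n by any listed prime leaves remainder one
    assert all(n % p == 1 for p in primes)
    return n

print(witness([2, 3, 5, 7]))  # 211, which happens to be a new prime itself
```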

To say “there exist infinitely many” one ought to be able to say what “there exist” means. In truth, Being is one of the oldest problems of philosophy, a problem so difficult that some of the greatest philosophers in history despaired of getting a handle on it.

Aristotle Of Stagira

Perhaps it really was Aristotle who first recognized that what the convincing arguments of Socrates had over the unconvincing pleas of the Sophists was a certain combinatoriality, but certainly Aristotle is credited with “discovering logic” anyway. For thousands of years students have stared glassy-eyed as their professors dully intoned the sacred logoi:
  1. Major Premise: Each man is a mortal.
  2. Minor Premise: Socrates is a man.
  3. Conclusion: Socrates is a mortal.
Aristotle was a genius. His examples are always correct, even if some of his definitions cry out for cleanup. Of course, our copies of Aristotle's works are posthumous lecture notes and it is quite possible that Aristotle wouldn't have been an Aristotelian in the modern sense.
But what might Aristotle have sounded like if he were a fool?
  1. Major Premise: Each man exists.
  2. Minor Premise: Socrates is a man.
  3. Conclusion: Socrates exists.
Aristotle knows this doesn’t work. Aristotle used “quantifiers”, counting words, to handle existential statements. Aristotle introduced a variety of counting words - “Each”, “No”, “Some” and “Not every”. He showed two quantifiers - “Each” and “No” - were sufficient for his purposes.

Already we are in deep trouble. “Some set of primes is infinite” is a perfectly well formed sentence in term logic. But there is no syllogistic proof. Induction cannot be captured syllogistically, because some infinite sequences of true statements have a true limit (the sequence “there is a number less than n” for n > 2) and some do not (the sequence “there is a number greater than n” for n > 2). Presumably Aristotle discussed this in the lost "On Mathematics". Equally troubling, Euclid’s proof is not syllogistic. Valid reasoning about being went beyond Aristotle even before him.

Aristotle was a genius, but not a logician. Aristotle was a philosopher. His account of being, of existence, of a proof of “Socrates is a man” was philosophical rather than logical. Aristotle’s account of being was genetic as well as linguistic. A definition is words that govern the what-it-was-to-be (this is difficult even in the original Greek). Each object is a hylomorphic compound of defined things (some of which may themselves be hylomorphic compounds). This compounding “causes” (or “explains”) the object. Ultimately, this chain goes back - in finitely many steps - to the (possibly unique) unmoved mover (Aristotle relies on religious faith for the unity).

This is a complicated and tortured account of being, and I have not touched on one tenth of the subtleties Aristotle analyzes. Gathering Aristotle into a single, memorable and inaccurate slogan:

TO BE IS TO BE CAUSED!

One could further assume, with the early Donald Davidson, that each object is uniquely determined by its causes. Aristotle himself does not assume this, at least not as a logical principle. He admits there could be multiple unmoved movers, for instance.

Before moving on from Aristotle, let’s look at a semi-Aristotelian account of the standard syllogism.
  1. Major Premise: Each man is a mortal. 
    1. Each man is a complex. Each complex is hylomorphically composed of elements. Each man is hylomorphically composed of elements.
    2. No thing composed of elements out of their sphere is immortal. Each thing hylomorphically composed of elements is a thing composed of elements out of their sphere. No thing hylomorphically composed of elements is immortal.
    3. No thing hylomorphically composed of elements is immortal. Each man is hylomorphically composed of elements. No man is immortal.
  2. Minor Premise: Socrates is a man.
    1. Each thing with a rational soul is a man. Socrates has a rational soul. Socrates is a man.
  3. Conclusion: Socrates is a mortal.
Of course, one could continue to make finer and finer subarguments until - Aristotle believes - one hits the bottom. Aristotle further has very strong opinions on what the bottom will look like, but I pass over the remainder.

Santo Tommaso d'Aquino

I cannot and will not give an account of post-Aristotelian theories of being. I will say that Aristotle’s followers did not follow even his carefully wrought example. They tried to reason about existence as a predicate like “a man” or “a mortal” - “a being”.

Immanuel Kant

Immanuel Kant woke us from our sub-Aristotelian slumbers. He told us “Existence is not a predicate!” or, more verbosely, “Being is evidently not a real predicate, that is, a conception of something which is added to the conception of some other thing. Logically, [Being] is merely the copula of a judgement”.

This can be illustrated by considering an incoherent description, such as the false geometric sentence “ABCD is a round square.” No such shape exists, on pain of contradiction. Now consider the following sentence of sophistical geometry: “ABCD is an existent round square.” On pain of contradiction such a round square must exist, but also on pain of contradiction the shape must not exist. Sophistry is sophistry after all.

As wise as Kant’s argument is, his analysis has certain flaws. One is that it conflates two parts of a sentence: the quantifier and the copula. The extremely simple structure of term logic encourages this confusion. Every sentence in term logic has exactly four distinct parts: one quantifier, one subject, one copula and one predicate. If one were simply careful about order, the copula could be dropped (since it never changes).

A more fundamental flaw is that the argument is essentially negative. It tells us what being is not but little about what being is. Kant’s argument read positively is essentially a gloss on Aristotle’s logical/genetic hybrid conception of being.

The same is true of post-Kantian philosophy. Schopenhauer, for instance, relies explicitly on Aristotle’s genetic/logical hybrid to give his account of being (or as he phrases it, his “root of the principle of sufficient reason”). Hegel, for another instance, tries to purify being into a purely genetic account.

William Shakespeare

Euclid’s argument shows we ought not leave well enough alone. Shakespeare (a former classics teacher) was a famous fan of modern philosophy. The Bard was certainly thinking of Aristotle when he said “There are more things in Heaven and Earth... than are dreamt of in your philosophy”.

I cannot and will not cover the history of being in post-Aristotelian philosophy. Instead, I will simply outline a Quinean theory of being. I will begin by introducing a short begriffsschrift.

Jan Łukasiewicz

Let

\[ R_n^m \]

be a relation with arity \( n \) (or 0 if \( n \) is blank). The symbol \( m \) is either blank or a numeral. This gives us a potentially infinite list of relations to choose from. I use the Łukasiewicz or “Polish” style. In addition to removing lots of needless syntactical blathering about parentheses, this notation forges a direct connection between relation syntax and trees. As syntactic sugar, I allow other capital letters to represent relations as well.

The variables/constants of relations will be denoted by lower case letters \( x_i \) where \(i\) is a number. As further syntactic sugar, I allow other lower case letters as well.

There are now many ways of writing “Socrates is a man.” For instance,

\[ M_1s \]

where \( M_1 \) is a 1-ary relation (a predicate) “is a man” and \( s \) is the constant Socrates. Closer to Kant’s analysis is

\[ E_2sm \]

where \( E_2 \) is the copula relation, \( s \) is the constant Socrates and \( m \) is the constant Man. That is, Socrates is in the set of men (see Quine’s New Foundations and Mathematical Logic for his analysis of the copula as set membership).

Augustus De Morgan


The "logical constants", so called because they don’t change their meaning across different theories, are particularly important relations. Examples of logical constants are: and, or, not, material implication and logical equivalence. I will denote these by \( A_2xy \), \(V_2xy\), \(N_1x\), \(T_2xy\) and \(L_2xy\). The syntax here is quite intuitive.

The modern idea of the logical constant dates to the followers of the algebraist George Boole: Jevons, De Morgan, Venn. Interestingly, Boole’s own conception was quite different.

As an example, let’s look at congruence in geometry. This relation - \( C_2 \) - is an equivalence relation. This means that

  1. Reflexivity: For all \( a \), \(C_2aa\)
  2. Symmetry: For all \(a,b\), \(T_2C_2abC_2ba\)
  3. Transitivity: For all \(a,b,c\), \(T_2A_2C_2abC_2bcC_2ac\)
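Because the notation is parenthesis-free, arities alone determine the parse tree. Here is a minimal sketch (Python; the token spellings are my own) that rebuilds the tree for the transitivity formula above:

```python
# Each symbol's arity; terms like a, b, c take no arguments.
ARITY = {"T2": 2, "A2": 2, "C2": 2, "a": 0, "b": 0, "c": 0}

def parse(tokens):
    """Consume one Polish-notation formula from the front of the token list."""
    head = tokens.pop(0)
    return (head, [parse(tokens) for _ in range(ARITY[head])])

# Transitivity: T2 A2 C2ab C2bc C2ac
tree = parse("T2 A2 C2 a b C2 b c C2 a c".split())
print(tree)
```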
With all this in mind, we can now state Quine's slogan for being:

TO BE IS TO BE THE VALUE OF A VARIABLE

That is: Łukasiewicz notation suggests objects as first class citizens and relations as hylomorphic combinations of objects. David Lewis, for instance, endorses this reading in his defenses of both modal realism and Humean supervenience. The root of this is an analogy between a constant \( s \) for Socrates and a variable \( x_i \).

Hermann Weyl

Łukasiewicz notation is beautiful, but it has some unfortunate aspects. I will name two. First, it isn't easy to understand this analogy between constants and variables. Second, identity of argument is denoted syntactically by identity of variable.

This syntactical choice doesn’t only introduce unnecessary verbosity - the relation in transitivity has only three variables but six places for them to enter - but also creates “scope” ambiguity. It is sometimes perfectly licit to let \(x_i\) mean different things in different parts of a sentence - but not always.

To get rid of scope, we will use functors which map relations to relations. The general idea of a “variable free” notation was originally invented by Weyl. Unfortunately, his notation was a bit clumsy and was profoundly uninfluential. A very powerful variable free logic with an attractive begriffsschrift called “Combinatory Logic” was introduced by the unfortunate Moses Schönfinkel.

The functors we will use were christened “predicate functors” by Quine. I will denote these with boldface letters. These break up into three groups. The simplest are the variable permuters:
  1. Major Inversion: \( L_2\mathbf{maj} R_n^p x_1 ... x_nR_n^p x_n x_1 ... x_{n-1} \)
  2. Minor Inversion: \( L_2\mathbf{min} R_n^p x_1 ... x_nR_n^p x_1 ... x_{n-2} x_n x_{n-1} \)
  3. Reflection: \( L_2\mathbf{ref} R_n^p x_1 ... x_{n-1}R_n^p x_1 ... x_{n-1} x_{n-1}\)
Notice that the arity subscript in the case of reflection is the arity of the relation in the range of the functor.

The next group are the ‘logical’ or connective functors
  1. Negation: \(L_2\mathbf{neg} R_n^p x_1 ... x_nN_1R_n^p x_1 ... x_n\)
  2. Cartesian Product: \(L_2\mathbf{car} R_n^p R_m^q x_1 ... x_n y_1 ... y_mA_2R_n^p x_1 ... x_nR_m^q y_1 ... y_m\)
Finally, there is the all important Derelativization operator
  1. Derelativization: \(\mathbf{der}R_n^p x_1 ... x_{n-1}\) if and only if there is something \( x_n \) such that \( R_n^p x_1...x_n \)
Notice again, the arity subscript is the arity of the range. The arity of the basic functors is one, except for Cartesian Product, which has an arity of two.

We take as an example the congruence relation \( C_2 \) in geometry. It is an equivalence relation, that is
  1. Reflexivity: \(\mathbf{neg \ }\mathbf{der \ }\mathbf{neg \ } \mathbf{ref} C_2\)
  2. Symmetry: \(\mathbf{neg \ }\mathbf{der \ }\mathbf{neg \ }\mathbf{neg \ }\mathbf{der \ }\mathbf{neg \ }\mathbf{maj \ }\mathbf{ref \ }\mathbf{maj \ }\mathbf{maj \ }\mathbf{ref \ }\mathbf{car \ }C_2\mathbf{neg \ }C_2\)
  3.  Transitivity: \(\mathbf{neg \ }\mathbf{der \ }\mathbf{neg \ }\mathbf{neg \ }\mathbf{der \ }\mathbf{neg \ }\mathbf{neg \ }\mathbf{der \ }\mathbf{neg \ }\mathbf{maj \ }\mathbf{ref \ }\mathbf{maj \ }\mathbf{ref \ }\mathbf{maj \ }\mathbf{ref \ }\mathbf{maj \ }\mathbf{min \ }\mathbf{neg \ }\mathbf{car \ }\mathbf{neg \ }\mathbf{car}C_2C_2\mathbf{neg}C_2\)
In the low-probability event that I did the predicate functors right, this demonstrates the algebraic structure of a general declarative sentence - a list of relations (usually with a tedious list of predicate functors) followed by a, possibly empty, list of variables. The verbosity is only apparent: one can easily reduce it by, e.g., defining a proper syntax for equivalence, defining new functors out of combinations of old ones to reduce necessary repetition, etc.
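As a sanity check on at least the reflexivity line, here is a minimal sketch of some of the functors as operations on Python predicates (the encoding is mine, and I assume a finite domain so that derelativization can search it):

```python
DOMAIN = range(5)  # a hypothetical finite universe

def maj(R):  # (maj R)(x1,...,xn) = R(xn, x1, ..., x(n-1))
    return lambda *xs: R(xs[-1], *xs[:-1])

def ref(R):  # (ref R)(x1,...,x(n-1)) = R(x1, ..., x(n-1), x(n-1))
    return lambda *xs: R(*xs, xs[-1])

def neg(R):  # (neg R)(x1,...,xn) = not R(x1,...,xn)
    return lambda *xs: not R(*xs)

def der(R):  # (der R)(x1,...,x(n-1)) = there is an xn with R(x1,...,xn)
    return lambda *xs: any(R(*xs, x) for x in DOMAIN)

C2 = lambda a, b: a == b  # a stand-in congruence
# Reflexivity as written above: neg der neg ref C2, i.e. "it is not the case
# that something fails to be congruent to itself"
print(neg(der(neg(ref(C2))))())  # True
```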

Quine's method gives us a new slogan, in many ways superior to the first two:

TO BE IS TO DERELATIVIZE A RELATION!

This slogan has certain advantages. First of all, the notion of a "variable" is secondary to the notion of a relation ("There exists an x such that x" is word salad). Next, perplexities of scope, quantification and so forth are absorbed into the algebraic properties of the functors. But the most important advantage is that this definition easily generalizes into a modern algebraic theory not tied to special assumptions about first order logic.

William Lawvere


Categorical semantics is a vast generalization of the preceding section, a generalization which is Weyl in neither sense of the phonemes. Introduced by Lawvere in a paper so popular that I have heard him complain "You know that I have written other things, right?", the concept is essentially to abstract the predicate functors while preserving the structure.

Major Inversion and Minor Inversion, for instance, are just ways of capturing the concept of a permutation group. The Cartesian Product is a way of getting, well, dang, products - not a big shock there. These can obviously be generalized to groups and products in general.

What about the "all important Derelativization operator"? Well, there is a general reading, very natural in category theory, that I have yet to translate into simple words. We have a category with finite products and some functor F from it to the category of truth values. Let X be the type of some variable. Derelativization is a particular case of a left adjoint of the natural transformations from F(X x -) to F(-) (i.e. it gets rid of one variable of some type by returning a truth value).
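In the standard symbols (my gloss of the usual presentation, not a quotation from Lawvere): write \( \pi : \Gamma \times X \to \Gamma \) for the projection and \( \pi^* \) for substitution along it; then derelativization at the type X is the left adjoint characterized by

\[ \exists_X \varphi \vdash_{\Gamma} \psi \quad \text{if and only if} \quad \varphi \vdash_{\Gamma \times X} \pi^{*}\psi . \]

That is, \( \exists_X \varphi \) is the freest way of discarding a variable of type X while still returning something truth-valued.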


If the "category of reality" (perhaps a sort of category of relations) has an initial object (perhaps the empty relation), then existence has a less natural but conceptually simpler gloss: each morphism from an initial object is denotes an object.


These brief notes of course don't amount to anything like a history of theories of being. But it's really very simple, as the fellow says:

There is nothing you can see that is not a flower;
There is nothing you can think that is not the moon.

Monday, July 9, 2018

Shalizi On Jaynes’ Arrow Of Time

Though early, grossly Orientalist, commentators* described Jnana Yogi Swami Cosma Rohilla Shalizi in terms similar to Descartes, it is clear from the historical record that Sw. Shalizi wrote his works in robes of saffron and kermes. As a small contribution to restoring historical reality, this paper demonstrates how the recent publication by Carnegie-Mellon of Sw. Shalizi’s deutero-canonical writings throws new light on one of the canonical Shalizi texts in the Cornell Canon.

Some historical background is necessary. It is well known that Sw. Shalizi was a controversialist in the school of Guru Josiah Gibbs, opposing the faction led by Adhyapakah Edwin Jaynes. Gershom Scholem’s wonderful essay on the politics of saints, “Religious Authority & Mysticism”, gives a fine examination of how Sw. Shalizi likely felt about his work.

Adhyapakah Jaynes proclaimed, in seeming harmony with the tradition from Laplace to Schopenhauer, that the so-called “thermodynamic functions” - heat, pressure, etc. - had a mere conventional existence. Reality - the microstate - is something much stranger.

Against this preaching, Sw. Shalizi proposes that the true tradition would affirm the existence of heat even when there was no observer present. There seems to be an error of transcription in the Carnegie-Mellon text, as at times it implies that without thermodynamic functions there would be no history prior to consciousness. But evolution of the objective microstate is not a function of the subjective macrostate. Fortunately, examination of the Cornell Canon is sufficient to demonstrate that Sw. Shalizi was aware of this.

Sw. Shalizi’s objection was much more subtle. Adhyapakah Jaynes essentially denied Mahapandita Ludwig Boltzmann’s claim that large numbers of particles play a role in thermalization, arguing that subjective ignorance is sufficient. Of course, Sw. Shalizi could not stand such a rejection of tradition.

We now move to the main text, Sw. Shalizi’s controversial article “False Jnanachakra”. In it, Sw. Shalizi imagines a contest between a mighty asura Andhaka and  Bhagvan Shiva. The Mahayogi bets that if Andhaka can turn the Jnanachakra then Andhaka may have a night with Parvati. Of course, Andhaka cannot resist such a carnal opportunity. The price he pays is this: if Andhaka cannot but Shiva can, then Andhaka must die and live the rest of his lives in non-violence and celibacy.

The Jnanachakra is a weightless jade wheel with a single needle along its rim which may be lowered or raised. The Jnanachakra, as its name suggests, is merely a metaphor for the player’s knowledge; it has no effect on the pendulum. Beneath the Jnanachakra is an n-ary pendulum whose fobs beyond the first are invisible. Based only on macroscopic observation, the player attempting to turn the wheel must raise or lower the needle, which will be pushed by the pendulum in the correct direction.

The observable state space of a single visible fob is a cylinder: one axis, up-down, is the velocity of the fob and the other axis, around, is the angular position of the fob. The same for the Jnanachakra. Holding the current velocity of the Jnanachakra constant gives us a circle around the phase space. We want the needle down only when the fob is above the current circle in phase space.

Andhaka’s strategy is to lower the needle in front of the fob when the fob is swinging right faster than the Jnanachakra. He believes this will allow him to rotate the Jnanachakra counterclockwise. For a short time, this works. But the hidden fobs cause the visible fob to bounce randomly on a long timescale - a timescale which can be calculated from the coupling of the fobs (cite pg 44 Chaos & Coarse Graining In Stat Mech). Eventually, Andhaka’s predictions will be no better than chance. Therefore Andhaka’s strategy will fail in the long run.

Now the Mahabuddhi goes to the wheel. He follows a similar strategy. At first the Jnanachakra seems to thermalize - bounce randomly. But soon the wheel begins turning counterclockwise. Why? Shiva can *learn* the motion of hidden fobs by observing the system at large. Since the state space of the system is fixed, in the long run Shiva learns the whole system perfectly. Following the teachings of Guru Shannon, Sw. Shalizi calculates the rate at which one may learn and finds it is either 0 (for Andhaka) or exponentially rapid (for Shiva).

Now Sw. Shalizi makes his attack. Nrityapriya’s dance requires only learning a finite state space and converges rapidly. Why cannot Andhaka do the same? Sw. Shalizi challenges the anti-Boltzmannian to explain without reference to the enormous size of the state space. If he cannot, then the anti-Boltzmannian has admitted that an objective property of reality - the dimension of the state space - plays the role Adhyapakah Jaynes has denied.

The case for Bhagavan Shiva and against the asura Andhaka can be made explicit. Start by recalling that the rate of learning is either zero or exponential. 
Andhaka’s capacity to learn is constant. As the dimension of the state space - the number of fobs - grows large, one soon finds that the asura cannot converge on the true state because he doesn’t have enough memory to hold the state in his mind. Therefore his learning rate is zero.
But Akshayaguna is different. His capacity to learn grows large holding the dimension of the state space constant. Therefore his learning rate is exponential.

There is no denying that Sw. Shalizi’s logic is absolutely sound. But before the publication of Three Toed Sloth, the full extent of his argument couldn’t be appreciated. Sw. Shalizi's implicit claim is that subjective arrows of time must have a consistent direction only if the state space of possibilities is much larger than the capacity of any relevant learner. But Sw. Shalizi’s Cornell Canon piece doesn’t explicitly argue why the learning capacity of a mere asura is insufficient.

Only now are we learning that Sw. Shalizi intended this work as part of a campaign of Arhat Ashby against the Jaynesians. Sw. Shalizi intends to use Ashby’s Law Of Requisite Variety, showing that a model of a system must be as complex as the system itself. This demonstrates that nobody but one who is unified with Brahman may have a backwards arrow of subjective time.

In sum, Carnegie-Mellon’s publication of Shalizi’s deutero-canonical work has opened new fields for scholarship. Already his brief note has been revealed to be more than a mere grumbling of a lover of controversy. His is a philosophical distinction between the monotonic subjective arrow of time of an ordinary being and the free arrow of time of an extraordinary one. All philologists of this canon must take note.
*In case it isn't clear: this is written as a parody of Western commentary on Eastern religion - thus all the Sanskrit jargon despite none of the figures speaking the language. Why? Well, it seemed funny at the time. Of course, this blogpost shouldn't be considered scientific, much less serious theology.

Saturday, March 17, 2018

Two Dogmas Of Bayesianism

W V O Quine

There are different levels of disagreeing with a theory. To illustrate, we can imagine ourselves the central planner dispensing funds for researchers. The lowest level of disagreement is outright rejection - if a plan for a spaceship begins with "Assume we can violate the laws of thermodynamics..." I would simply not disburse any funds. Above this, there is non-fundamentalness. If I see a numerical simulation for a spaceship engine which I know doesn't respect energy conservation exactly, I might not dismiss it out of hand but would pay for investigation into whether the simulation flaws are fundamental. Off to the side, above rejection and below non-fundamentalness but not between either, there is suspicion. To speak ostensively: WVO Quine's attitude toward Carnap's dogmas was suspicion - he neither thought them to be wastes of paper nor without flaw. The notion of "analyticity" - words being true by definition - seemed both worth investigating and a bad foundation for the reconstruction of scientific investigation.

As Quine was suspicious of any empiricism that took analyticity as atomic, I am suspicious of the philosophies that in the main call themselves "Bayesianism". What exactly Bayesianism consists of and when it began is not easy to say, because self-described Bayesians do not all agree with one another.

The followers of de Finetti and Savage would trace Bayesianism to the logicians Ramsey and Wittgenstein. And indeed, whoever you think deserves credit as the origin of Bayesian philosophy, one must grant that Ramsey & Wittgenstein gave unusually clear statements of Bayesian philosophy. In Wittgenstein, the intuitions of Bayesian philosophy were expounded in the set of propositions 5.1* of his Tractatus. Ramsey gave these notions a more explicit development in his paper Truth And Probability. The particular style of the arguments on pages 19 - 23 of this paper has become known as "Dutch Book Arguments".

The general concept is simple. We start with a Wittgensteinian metaphysics: each possible state of the world corresponds to some proposition p. The propositions can be "arranged in a series" - there is an ordering Rpq such that there is an isomorphism between the propositions and the real numbers that respects the probability calculus. This ordering is something like "q is at least as good as p". Further, Ramsey realized, the relation R comes pretty near to an intuitive idea of rational behavior. And even more interesting is that the converse also holds (or nearly holds): "If anyone's mental condition violated these laws ... [he] could have a book made against him by a cunning bett[o]r and would then stand to lose in any event". This philosophical interpretation of two way equivalence between orderings on propositions and probability calculus is what I call the first dogma of Bayesianism.

Many other Bayesians - such as the followers of Jaynes - would trace it to the great physicists Laplace and Gibbs. The Bayesianism of early physicists is, granting that it exists at all, implicit and by example. Gibbs asked us to imagine a large collection of experiments floating in idea space. We should expect* that our actual experiment could be any one of those experiments, the great mass of which are functionally identical. Consider a classical ideal gas held in a stiff container. In the ensemble of possible experiments, there are a few where the gas is entirely in the lower half of the bottle. But in the great mass of possible experiments the gas has had time to spread through the bottle. Therefore, for the great mass of possibilities, the volume the gas occupies would be the volume of the bottle. This means that if we measure the temperature, we get the pressure for free. Using Iverson Brackets, we see that for each possible temperature t and pressure p, Prob(P=p|T=t) = [p = (nR/V)t] or near abouts. The information we get out of measuring the temperature - that is, putting a probability distribution on temperature - is a probability distribution on pressure. The lesson the Gibbs example teaches ostensively is - supposedly - that what we want out of an experiment is a "posterior probability". This is the second dogma of Bayesianism.
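A toy version of the Gibbs lesson (every number below is my invention): a distribution on temperature pushes forward, through \( p = (nR/V)t \), to a distribution on pressure.

```python
import numpy as np

n, R, V = 1.0, 8.314, 0.0224                # one mole in a stiff ~22.4 liter bottle
rng = np.random.default_rng(0)
t_samples = rng.normal(300.0, 5.0, 10_000)  # a "posterior" on temperature
p_samples = (n * R / V) * t_samples         # the induced posterior on pressure
print(p_samples.mean(), p_samples.std())    # about 111 kPa, i.e. ~1.1 atm
```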


Both dogmas are open to question. Let us question them.


It is almost too easy to pick on the Dutch Book concept. Ramsey himself expresses a great deal of skepticism about the general argument - "I have not worked out the mathematical logic of this in detail, because this would, I think, be rather like working out to seven places of decimals a result only valid to two." Ramsey even gives a cogent criticism of the application of Dutch Book Arguments (one foolishly tossed aside as ignorable by Nozick in The Nature Of Rationality):

"The old-established way of measuring a person's belief is to propose a bet, and see what are the lowest odds which he will accept. This method I regard as fundamentally sound; but it suffers from being insufficiently general, and from being necessarily inexact. It is inexact partly because of the diminishing marginal utility of money, partly because the person may have a special eagerness or reluctance to bet, because he either enjoys or dislikes excitement or for any other reason, e.g. to make a book."

The assumption "Value Is Linear Over Money" implies the Ramsey - von Neumann - Morgenstern axioms easily, but it is also a false empirical proposition. Defining a behavioristic theory whose in terms of behavior towards non-existent objects is the height of folly. Money doesn't stop becoming money because you are using it in a Bayesian probability example.

This brings us to the deeper problem of rationality in non-monetary societies. Rationality was supposed to reduce the probability calculus to something more basic. But now it seems to imply rationality was invented with coinage. Did rationality change when we (who is this 'we'?) went off the gold standard? These are absurd implications, but phrasing a theory of rationality in terms of money seems to imply them. What about Dawkins' "Selfish Gene" - isn't its rational pursuit of "self-interest" (in abstract terms) one of the deep facts about it? Surely this is a theory that might be right or wrong, not a theory a priori wrong - much less a priori wrong because genes don't care about money.

Related to this, the Dutch Book hypothesis seems to accept that people are "irrationally irrational" - they take their posted odds far too literally. As phrased in the Stanford Encyclopedia Of Philosophy article on Dutch Books:

"An incoherent agent might not be confronted by a clever bookie who could, or would, take advantage of her, perhaps because she can take effective measures to avoid such folk. Even if so confronted, the agent can always prevent a sure loss by simply refusing to bet."

This argument is clarified by thinking in evolutionary terms. Let there be three kinds of birds: blue jays, cuckoos and dodos. Further assume that we've solved the problem from two paragraphs ago and have an understanding of what it means for an animal to bet. Dodos are trusting and give 110%. Dodos will offer and accept some bets that don't conform to the probability calculus (and fair bets). Cuckoos - as is well known - are underhanded and will cheat a dodo if they can. They will offer but never accept bets that don't conform to the probability calculus (and accept fair bets). Blue jays are rational. They accept and offer only fair bets. Assuming spatially mixed populations there are seven distinct cases: Dodos only, Blue Jays only, Cuckoos only, Dodo/Blue Jay, Dodo/Cuckoo, Cuckoo/Blue Jay and Dodo/Cuckoo/Blue Jay. Contrary to naive Bayesian theory, each of the pure cases is stable. Dodo-dodo interaction is a wash, every unfair bet lost is an unfair bet won by a dodo. Pure blue jay and pure cuckoo interactions are identical. Dodo/blue jay interactions may come out favorably to the dodo but never to the blue jay, the dodos can do favors for each other but the blue jays can neither offer nor accept favors from dodos or blue jays. The dodo is weakly evolutionarily dominant over the blue jay. Dodo/cuckoo interaction can only work out in the cuckoo's favor, but contrariwise a dodo can do a favor for a dodo but a cuckoo cannot offer a favor to a cuckoo (though it can accept such an offer). We'll call this a wash. Cuckoo/blue jay interaction is identical to cuckoo/cuckoo and blue jay/blue jay interaction, so neither can drive the other out. Finally, a total mix is unstable - dodos can drive out blue jays. The strong Bayesian blue jay is on the bottom. Introducing space (a la Skyrms) makes the problem even more interesting. One can easily imagine an inner ring of dodos pushing an ever expanding ring of blue jays shielding them from the cuckoos. There are plenty of senses of the word "stable" under which such a ring system is stable.

What does the above analysis tell us? We called Dodo/Cuckoo interactions a wash, but it really depends on the set of irrational offers Dodos and Cuckoos accept & reject/offer. As Ramsey says, for a young male Dodo "choice ... depend[s] on the precise form in which the options were offered him...". Ramsey finds this "absurd". But to avoid this by assuming that we are living in a world which is functionally only blue jays seems an unforced restriction.


The logic underlying Bayesian theory itself is impoverished. It is a propositional logic lacking even monadic predicates (I can point to Jaynes as an example of Bayesian theory not going beyond propositional logic). This makes Bayesian logic less expressive than Aristotelian logic! That's no good!

Such a primitive logic has a difficult time with sentences which are "infinite" - have countably many terms - or even just have infinite representations. (An example infinite representation of the rational number 1/4 is a lazily evaluated list that spits out [0, ., 2, 5, 0, 0, ...]) Many Bayesians, such as Savage and de Finetti, are suspicious of infinite combinations of propositions for more-or-less the same reasons Wittgenstein was. Others, such as Jaynes and Jeffrey, are confident that infinite combinations are allowed because to suppose otherwise would make every outcome depend on how one represented it. Even if one accepts infinite combinations, there also is the (related?) problem of sentences which would take infinitely many observations to justify.
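The lazily evaluated list for 1/4 is easy to sketch (the function name is mine):

```python
from itertools import islice

def digits(num, den):
    """Lazily yield the decimal digits of num/den after the point (0 <= num < den)."""
    while True:
        num *= 10
        yield num // den
        num %= den

print(list(islice(digits(1, 4), 6)))  # [2, 5, 0, 0, 0, 0]
```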


An example of a simple finite sentence that takes infinitely many experiments to test is "A particular agent is Bayesian rational.". This is a monadic sentence that isn't in propositional logic. Unless one tests every possible combination of logical atoms, there's no way of telling the next one is out of place. The non-"Bayes observability" of Bayes rationality has inspired an enormous amount of commentary from people like Quine and Donald Davidson who accept the Wittgensteinian logical behaviorism that Bayesianism is founded upon.


So, is this literature confused or getting at something deeper? I don't know. But I believe one can now see that the reason Dutch Book arguments are hard to knock down isn't that there aren't plenty of criticisms. The real reason is the criticisms don't seem to go to the core of the theory. A criticism of Dutch Book needs to be like a statistical mechanics criticism of thermodynamics - it must explain both what the Dutch Book gets as well as what it misses. There does seem to be something there which will survive.

What You Want Is The Posterior

Okay okay okay, be that as it may, isn't it the case that what we want is the posterior odds? If D is some description of the world and E is some evidence we know to be actual, then we want P(D|E), right? Well, hold your horses. I say there's the probability of a description given evidence and the probability of the truth of a theory, and never the twain shall meet. To see it, let's meet our old uncle Noam.

Chomsky was talking about other people. Who he thought he was talking about isn't important, but he was really talking about Andrey Markov and Claude Shannon. Andrey Markov developed a crude mathematical description of poetry. He could write a machine with two states - "print a vowel" and "print a consonant". He could give a probability for transferring between those states - given that you just printed a vowel/consonant, what is the probability you will print a consonant/vowel? The result will be nonsense, but will look astonishingly like Russian. You can get more and more Russian-like words just by increasing the state space. Our friend Claude Shannon comes in and tells us "Look, in each language there are finitely many words. Otherwise, language would be unlearnable. Grammar is just a grouping of words - nouns, verbs, adjectives, adverbs. By grouping the words and drawing connections between them in the right way we can get astonishingly English looking sentences pretty quickly."
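A sketch of Markov's two-state machine (the alphabet and transition probabilities below are my stand-ins, not Markov's statistics of Russian):

```python
import random

STATES = {"vowel": "aeiou", "consonant": "bdgklmnprstv"}
P_VOWEL = {"vowel": 0.13, "consonant": 0.66}  # chance the *next* state is a vowel

def babble(n, state="consonant"):
    out = []
    for _ in range(n):
        out.append(random.choice(STATES[state]))
        state = "vowel" if random.random() < P_VOWEL[state] else "consonant"
    return "".join(out)

print(babble(40))  # nonsense, but with a vowel/consonant rhythm
```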

Now our "friend" Chomsky. He says "Look, you have English looking strings. They're syntactically correct. But you don't have English! The strings are not semantically constrained. There is no way to write a non-trivial machine that both passes Shakespeare and fails a nonsense sentence like 'Colorless green ideas sleep furiously.'! In order to know that sentence fails the machine needs to know ideas can't sleep, ideas can't be green, green things can't be colorless and one cannot sleep in a furious manner.".

Now, this is a scientific debate about models for language. Can it be put in a Bayesian manner? It cannot. The Markov-Shannon machine - by construction - matches the probabilistic behavior of the strings of the language. But it cannot be completely correct (otherwise, the regular languages would coincide with the Turing-complete languages). Therefore, the statement that "What we want is the posterior odds!" cannot be entirely true.

I have said before that I am in the "Let Ten Thousand Flowers Bloom" school of probability. I think these arguments are interesting and worth considering. But it is also clear that despite books like Nozick's The Nature Of Rationality and Jaynes' similar tome, Bayesian philosophy does not replace the complicated mysteries of probability theories with simple clarities.

There is a further criticism that applying behavioristic analysis to research papers is unwise but I will make that another day.

* Both Ramsey's paper and Gibbs' book assume that the expectation operator exists for every relevant probability distribution. This is a minor flaw that can be removed without difficulty, so I will not mention it again.

Friday, January 12, 2018

Absolute And Comparative Advantage



Adam Smith

Adam Smith imagined two firms with a choice of production of two products*. The two firms' entrepreneurs are Alice and Bob. Alice and Bob have the choice of producing two independent goods - maybe apples and blackboards. The firms have what has become known as "constant technology", the ratio of their outputs and their inputs is a constant independent of said quantities (different for each firm and product to avoid indeterminacy). Unemployment is ignored in both inputs and outputs - all inputs are bought and all output sold (Say's Law). The law of one price holds.

Each entrepreneur is then left with only one decision: how much of each good shall I make?

Leonid Kantorovich


The fastest way to this answer is through the theory of linear programming. Start by denoting \( L_{firm,product}\) the amount of labor Alice or Bob uses to make apples or blackboards. If \(L \) is the amount of labor available in society, then we have as a constraint

\[ L_{Alice,apples} + L_{Bob,apples} + L_{Alice,blackboards} + L_{Bob,blackboards} = L \]

The outputs are denoted \( Y_{firm,product} \) and the prices are \(p_{product}\) so that the total output is

 \(Y = p_{apples} (Y_{Alice,apples}+Y_{Bob,apples})+p_{blackboards} (Y_{Alice,blackboards}+Y_{Bob,blackboards}) \).

Finally, the technical coefficients are \( a_{firm,product} = Y_{firm,product} / L_{firm,product} \). Our goal then is to maximize the above equation. We know from LP theory that only the vertices matter. Why? Well, the short answer is interior point optimization. Look at this picture:


The dark lines are the constraints and the darkened point is a guess at the optimum. The line through that point has a slope equal to the price ratio. It's obvious that one gets more output by moving to a guess in the direction of the arrows that is still feasible. The triangle made by the price ratio and the constraints (and its interior) are the "interior points". If the interior point set is one point, it must be the optimum. It's obvious that unless the price ratio has exactly the same slope as one of the boundaries, the only way to squeeze the interior point set down to one point is to choose one of the vertices.

Okay, so let's cycle through those vertices. The solution where no labor is used is the global minimum (the entire feasible set is the interior point set!). This also doesn't match the full employment constraint, so toss it.

Next, there's the vertex where only one firm is buying labor to produce output (so that three \( L_{i,j}=0\) and one \( L_{i,j}=L\)). We'll call this the \(Y_{one}\) solution, as in "Why one?". Mathematically, it is written

\[ Y_{one} = \max_{i,j}(p_j a_{i,j}L) \]

The next class of solutions is the instructive one. In these vertices, the non-negativity constraint is active on two possibilities - that is to say: labor is being hired for two reasons. But there's a problem. If both entrepreneurs are making apples, then \(Y = p_{apples} (a_{Alice,apples} L_{Alice,apples} + a_{Bob,apples}L_{Bob,apples}) \). But if Alice is more efficient, then this can't be a maximum, because moving some labor from Bob's firm to Alice's would increase output. This knocks out the two competitive vertices and leaves the four non-competitive vertices. Either one firm allocates labor to both products (and the other is dead) or the two firms specialize in one product each.

Read the above paragraph again, slowly. It's important to understand the next part. If you look at the vertices with three or all four labor hiring reasons, then the accounting for \( Y \) always has at least one term like the above. For instance, one of the three term vertices is \(Y = p_{apples} (a_{Alice,apples} L_{Alice,apples} + a_{Bob,apples}L_{Bob,apples}) + p_{blackboards}a_{Bob,blackboards}L_{Bob,blackboards} \). But this can't be a maximum, because of the above paragraph - if Alice is more efficient in apple growing we can get more output by moving some labor from Bob's firm to hers.

There are therefore only three possible classes of extreme vertices: Alice or Bob makes one thing, Alice or Bob makes everything, and finally Alice and Bob specialize.

David Ricardo

Phew! The great economist David Ricardo was uncomfortable with Smith's story. The vertices where one firm did everything and the other was non-existent troubled him.

Why would a non-existent firm trouble someone? Well, Ricardo was exploring an analogy between firms and nations. Nations have different technical coefficients - nobody needs to explain why California produces more wine than Utah or why Kazakhstan produces more uranium than Italy. But the idea that a country could "dominate" and produce everything was very troubling - not to mention that it seemed contrary to the facts. There was a political economy problem as well. Absolute advantage seems to suggest that global and local output could be at cross purposes - global output might be maximized at the minimum of local output.

Ricardo "solved" this problem by ... restricting free trade! Yes, the classic argument for free trade assumes trade restrictions. Ricardo supposed that labor and capital couldn't move over national borders, but that consumption goods can.

John von Neumann

In the language of Linear Programming, Ricardo breaks the one labor constraint in the above problem into two:

\[ L_{Alice,apples} + L_{Alice,blackboards} = L_{Alice} \]

\[ L_{Bob,apples} +L_{Bob,blackboards} = L_{Bob}\]


Now the "one firm produces everything" equilibria are knocked out - they don't satisfy the above constraints. How the firms will specialize depends on the price ratio and technical coefficients, but both must always produce.
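A minimal numerical sketch with scipy (the prices, technical coefficients and labor endowments below are invented for illustration) shows the optimum landing on a specialization vertex:

```python
from scipy.optimize import linprog

p = {"apples": 1.0, "blackboards": 1.5}
a = {("Alice", "apples"): 3.0, ("Alice", "blackboards"): 1.0,
     ("Bob", "apples"): 1.0, ("Bob", "blackboards"): 2.0}
cells = [("Alice", "apples"), ("Alice", "blackboards"),
         ("Bob", "apples"), ("Bob", "blackboards")]

# linprog minimizes, so negate the output coefficients p_j * a_ij
c = [-p[prod] * a[(firm, prod)] for firm, prod in cells]
# Ricardo's constraints: each firm spends exactly its own labor
A_eq = [[1, 1, 0, 0], [0, 0, 1, 1]]
b_eq = [10.0, 10.0]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
for (firm, prod), labor in zip(cells, res.x):
    print(firm, "spends", round(labor, 1), "on", prod)
# Alice puts all her labor into apples, Bob all of his into blackboards.
```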




Comparative advantage is often thought of as a "long run" view - this is the supposed justification for the assumption of full employment and full consumption. But if capital and labor can flow over borders, it is a short run view that ignores unemployment and underconsumption.


Or is there a better interpretation of how comparative advantage works?

* The Adam Smith of my imagination.

Thursday, December 28, 2017

Who Cares About Ergodic Systems?

This is a quick teaching post. This stuff is high school level, but making it formal can push it beyond the research level into high level philosophy.

 
The creator of Dr Pepper, Dr Alderton
 
First, some physical intuition. I pour some Dr Pepper into a cup. What shape does the fluid take on? There are really three fluids - Dr Pepper (largely water, basically incompressible), carbon dioxide (which takes the shape of tiny, interacting bubbles) and ambient air. There are many forces - solid resistance, buoyancy force, skin and interaction forces on the bubbles, gravity and (since the Dr Pepper and carbon dioxide are colder than the ambient temperature) thermal forces. The short run question of what happens to the fluids is complicated and depends on many tiny factors.

Despite this, the long run solution is easy - basic fluid statics tells us that the Dr Pepper will take the form of the cup and basic thermostatics tells us it will be the same temperature as the ambient atmosphere.

 
John von Neumann

How do we capture this intuition that - roughly - in the short run history matters but in the long run only structure matters? For many years, physicists and mathematicians have turned to Ergodic Theory to answer this question. Ergodic theory doesn't exactly have a great reputation.

Many people - including high powered top level experts - think that ergodic theory not only requires the formal manipulation skills of a von Neumann, the geometric insight of a Clerk Maxwell and the engineering experience of a Shannon - it doesn't even solve the problem.

But really ergodic theory is very simple - except for all the parts that are hard. Shannon's paper can be polished off in a couple days, and (with all due respect to Joe Doob) it's not clear that there is more to the theory than that.

You don't want to take a few days? Well, here's the few minutes version.
 

A connected and a disconnected network

We start with the intuitive idea of a network. We call the nodes the states. There are finitely many states and they each have a name. From a given state, there is a rule to transfer to one of the other states to which that node is connected. The rule and the network together are called the system. In theory the rule can be anything - for instance, it might be "always go as far down as possible", where down is defined geometrically or topologically. The rules can be probabilistic.

A system is called "ergodic" if the long run amount of time spent at each node is independent of which node you start at. The idea of state gives us the short run detail dependence and the ergodicity gives us long run structure dependence.

For my deliberately dumb "go as far down as you can" rule on the above connected network, I have six possible runs

Pink, Purple, Purple, Purple...
Brown, Purple, Purple, Purple...
Blue, Black,  Purple, Purple, Purple...
Orange, Purple, Purple, Purple...
Black, Purple, Purple, Purple...
Purple, Purple, Purple...

No matter where I start, the long run relative frequency \(f_{purple} = 1\) and all others are \( 0 \). Therefore, this dumb system is ergodic. If we try the same thing on the disconnected network:

Black, Brown, Pink, Pink, Pink...
Brown, Pink, Pink, Pink...
Purple, Orange, Orange, Orange...
Blue, Orange, Orange, Orange...
Orange, Orange, Orange...

For the first two starting places the relative frequency of pink goes to one, for the second three, the relative frequency of orange goes to one. This dumb system is non-ergodic. But notice it is two ergodic pieces. In general, a non-ergodic system can be severed into ergodic components (in this case, the two connected subnetworks).

The underlying network being connected isn't in general sufficient for being ergodic. On the above left graph, sever the Orange-Purple connection and follow the "go down" rule (question to check if you understand: what are the two ergodic subsystems?). It turns out* that the kinds of rules that are of physical interest are usually of the form "Given that I am on state N, I go down each connection NM with a certain probability \( p_{NM}\neq 0 \)". For such a rule, being connected is sufficient for ergodicity**. So in this informal blog post I'll choose rules and networks such that connectedness and ergodicity are equivalent.
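A quick numerical sketch of the definition (the transition matrices are toy examples of mine - one connected network, one disconnected): estimate the long run frequencies from two different starting states and compare.

```python
import numpy as np

def longrun_frequencies(P, start, steps=100_000, seed=0):
    """Walk the chain and return the fraction of time spent in each state."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(P.shape[0])
    state = start
    for _ in range(steps):
        counts[state] += 1
        state = rng.choice(P.shape[0], p=P[state])
    return counts / steps

connected = np.array([[0.0, 0.5, 0.5],
                      [0.5, 0.0, 0.5],
                      [0.5, 0.5, 0.0]])
disconnected = np.array([[0.0, 1.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0, 0.0],
                         [0.0, 0.0, 0.0, 1.0],
                         [0.0, 0.0, 1.0, 0.0]])

print(longrun_frequencies(connected, 0), longrun_frequencies(connected, 2))
# Same frequencies either way: ergodic.
print(longrun_frequencies(disconnected, 0), longrun_frequencies(disconnected, 2))
# Different frequencies depending on the start: non-ergodic, two ergodic pieces.
```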

The ergodic distribution tells us the long run behavior of the system, but it also teaches us about the medium run behavior. We know that if the frequency at a state is "too low" (compared to the ergodic frequency), then we will see a flow into that state. This is more or less a definition of what a flow is!


This is all well and good, but what does it have to do with physics? A continuous system is ergodic if one can cut up the possible states of the system into a discrete ergodic system. Let's make a pair of networks out of a physical model - a billiards model. I mentally divide a square billiard table into four regions A, B, C and D


Being in a region isn't sufficient to fix the dynamics - I need to know the velocities. I think that it's obvious that velocity digitizes into four chunks based on the number of regions away from the starting region you end up in after a time step. So the states are really:

A0, A1, A2, A3
B0, B1, B2, B3
C0, C1, C2, C3
D0, D1, D2, D3

Each 0 connects only with itself (remember, the billiard isn't necessarily staying still - it could pass through all four blocks in one tick). There's a cycle: A1 connects to B1 connects to C1 connects to D3 connects to C3 connects to B3 connects to A1. There are three cycles of length two: A2 connects to C2 connects to A2, B2 connects to D2 connects to B2 and A3 connects to D1 connects to A3. This is illustrated below
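As a quick check, here is a sketch (the edge list is transcribed from the paragraph above) that computes the connected pieces of this graph:

```python
edges = [("A0", "A0"), ("B0", "B0"), ("C0", "C0"), ("D0", "D0"),
         ("A1", "B1"), ("B1", "C1"), ("C1", "D3"), ("D3", "C3"),
         ("C3", "B3"), ("B3", "A1"),
         ("A2", "C2"), ("B2", "D2"), ("A3", "D1")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

seen, pieces = set(), []
for node in adj:
    if node in seen:
        continue
    stack, piece = [node], set()
    while stack:
        x = stack.pop()
        if x not in piece:
            piece.add(x)
            stack.extend(adj[x])
    seen |= piece
    pieces.append(sorted(piece))
print(pieces)  # eight disjoint pieces, so this digitization can't be ergodic
```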



This particular digitization of the underlying continuous system isn't ergodic. If you start off with A0, then \( f_{A0} =1 \); if you start off with a non-zero velocity state, then \( f_{A0} =0 \). That's enough to show that this isn't an ergodic system.

It turns out that there is no nontrivial digitization of this system that is ergodic. That's because this system is exactly solvable... and I won't tell you why that's connected to ergodicity***.


Let's put a circular block in the middle of the square. Now the graph isn't disconnected. A particle that started at four blocks per tick can have its angle of attack changed by hitting the circular block so that it is now 3 blocks per tick (that is, it may be turned around, because 3 = -1). I don't know if this particular graph is really ergodic and I'm not going to check. In a tour de force, Yakov Sinai proved that this system has an ergodic digitization. This shows that the system is itself ergodic.

That means if the particle isn't in, say, C enough (compared to the ergodic distribution) we will see a flow towards C, just as in the discrete case. This is how ergodicity connects to physical quantities.

Finally: wasn't that Black Thought freestyle great?

*By the magic of symbolic dynamics
** By the magic of Markov Chains
*** I'll let wikipedia do it

Thursday, December 21, 2017

What We Talk About When We Talk About Food

The vast majority of Discourse about health is lies. Go to a supermarket and look at the "health shakes". Beyond the outright lies that are the overwhelmingly greater part, there are the tentative part-truths - usually represented as absolute certainties. Supposedly there could logically be definite truths. I have never seen them.

How do we talk about food? In episode 2 of Frasier, Frasier Crane discusses his preferred breakfast with his father: a "low fat, high fiber" breakfast with (terribly expensive) plain black coffee. There's a lot to be said about the implicit social views of eating in this scene. But instead imagine this: out of the woodwork I wander onto the show and say "Actually, Dr Crane, you would be better off substituting bacon for that bran muffin."

What does that mean? A calorie neutral substitution? A mass neutral substitution? A subjective substitution? The second is not a joke - it's clear (is it?) that a person who is fed via IV would "feel hungry" and attempt to eat whether the IV was sugar or oil. In the language of economics, such a person would hit their first order conditions for diet optimality (i.e. calories would be right) but not their second order conditions. They're at a minimum of utility. This cannot last. Believing in diet advice unstable to perturbations is unscientific - completely methodologically unsound. The third option is not necessarily unscientific - subjective feelings of fullness are related to the health relevant properties of food. Attempting to lose weight via a simplistic, objective calorie/mass accounting system may again put you at an unstable equilibrium. These kinds of yo-yo unstable diets aren't obviously healthy.

The market for food is broken, possibly irrevocably broken. There has never been, in all the history of mankind, a society where farm labor is economically valued. As a result, all industrial societies prop up production - often in highly distortionary ways. It is obvious that, for instance, the US overproduces corn carbohydrates. This is a Bad Thing.

On the consumer side, it must be admitted by any person who desires to be taken seriously that branding and monopolistic competition more generally are real. Government intervention has been ineffective at policing this, even when it has been pointed in the right direction. This is unsurprising - Coase on the right and Stiglitz on the left are always fond of pointing out that the conditions in which governments/markets can fail are the exact conditions markets/governments can fail.

This brings us back to the first point. How do we talk about food? There is an enormous signaling problem here. Just as every prospective worker has an incentive to appear to be valuable to a prospective employer, so every prospective meal has an incentive to appear to be healthy to a prospective eater. (Healthiness & productivity of course defined variationally) Eaters therefore statistically discriminate, choosing foods with a few easily observed outward signs of "healthiness". Sugary cereals put photos of nutritious meals on the box. Having blueberries and a green container turns a malted into a diet food. More informative statistics - carbohydrate and protein and fat measures, total calories, ingredients, etc. - are buried in confusing, neutrally colored, small type statistical abstracts.

Adding exercise to a given diet is generally good, since exercise to a large extent determines the distribution of variable masses for a person of a given mass. We may not be indifferent between being Akebono and Bob Sapp. As Kimball notes above & every bodybuilder knows - total weight balances the mass of food that comes in and goes out. (Also, cardio health is good, even though the mass of the cardio system isn't particularly variable in mass) But there are huge difficulties here. First, it isn't methodologically sound to assume a person can vary exercise and not vary their diet (it assumes that their old equilibrium was unstable, which is exactly what it is not for an obese person who can't lose weight). The market for exercises is not obviously healthy. Like with food, every prospective exercise plan has an incentive to seem healthy & sustainable even if it is not. Survivor bias is endemic here - everyone who keeps up My Super Special Program long enough achieves their weight & weight distribution targets and everyone who doesn't leaves.

You can't sit around thinking about your diet all day. Simplistic accounting techniques can beat advanced techniques just because they're easier to understand. It may not be easy to account for carbohydrate, protein or fat intake simply because there are so many kinds of carbohydrates, proteins and fats that eating the "wrong" kind may not give the eater noticeable feedback. Fasting can then empirically outperform theoretically superior modes because it's easy to notice when you cheat.

These are only the simplest economic metaphors. Beyond this there are cultural and even political factors. But that'll have to wait for another post.

Saturday, December 16, 2017

Nozick On ... Inequality?

Robert Nozick was a Harvard philosopher, a political philosopher among other things. He was an odd duck with an interesting sense of humor - speculating that autofellatio played a role in classical Hindu yoga was typical of his crass jokes. But he was very serious about one thing - Nozick sincerely believed in a philosophical theory of social desert - that one should be allowed to own all and only the goods to which one is entitled. Nozick traced this theory to the Lockean theory of production & distribution. Locke believed that one owned those goods which one mixed with one's labour (if you were European anyway).

Nozick brought his beliefs to so-called "libertarian" ends. If you are entitled to those goods to the extent which you mixed your labor into them, then it is not clear that you are entitled to any public goods at all. Nozick had an argument - the Utility Monster argument - that utilitarianism (the philosophical position that public policy should aim at some measure of aggregate happiness) could not be a priori true. Consider a society consisting of two consumer classes, one with decreasing returns to consumption (normal people) and one with increasing returns to consumption (utility monsters). Holding a nation's output constant, the utilitarian political advisor says it is always worth it to tax the normal people and subsidize the utility monsters, which seems unjust a priori. Nozick says, shortly, that utilitarianism is false because it can excuse income inequality.

But here Nozick reaches an impasse. You see, it's not clear that an entitlement theory of desert avoids income inequality. In fact, Nozick argues that entitlement is true despite the fact that it can justify income inequality. His argument is not complex. First of all, he assumes that the set of just outcomes is closed - any outcome that is reached by individually just actions is just. Next he constructs a possible world where income inequality seems justified. In this imaginary Pittsburgh, everyone starts with $1. But one person is special - he's Wilt Chamberlain. If even one person pays 1¢ to see an exhibition from Wilt The Stilt then he becomes - through no fault of his own - the richest man in Pittsburgh. Why is this unjust?

There are several responses to this. One is tu quoque - why is this income inequality obviously good but utilitarian income inequality obviously bad? But there are sharper critiques. Despite the appearance of dollars and cents, Nozick's example is not economic. The people of imaginary Pittsburgh are not given alternate uses for their money. Why would they hold cash? Why does Chamberlain hold cash? This points us to the deeper problem arising from Nozick's economic ignorance - income is a flow but he treats it as a stock. It is of interest to Nozick that Chamberlain's capital receives dividends, but it's clear he hasn't placed those dividends in a society - his "possible world" is unworthy of the name.

The choice of Wilt Chamberlain is careful rhetoric. Most capital that pays dividends as in the Wilt Chamberlain case can be transferred - in the case of extreme regimes, land can be nationalized, factories can be seized, etc. But Nozick clearly believes Chamberlain's God given talents are just that - inborn natural talent (that Nozick thinks this about Chamberlain brings up the issue of race in ways I won't address) that can't be transferred. There might be a similar point about transferable utility in the utility monster case. The only way to redistribute the returns on Chamberlain-capital is to tax the income - right?

Well, maybe, maybe not. Most modern theorists on inequality concentrate on wealth inequality rather than income inequality. Unlike income inequality, wealth inequality doesn't correlate with Wilt Chamberlain-like capital - not with things that a priori seem justified. Not IQ, for instance. This is a positive, not an a priori case, but it is still a hole in Nozick's point. Even if one believes that Chamberlain deserves the income he receives from his non-transferable capital, that doesn't mean that one believes he deserves his stock of wealth. So Nozick's argument is all a little bit old fashioned.

In sum, I think that Nozick's argument is unpersuasive for two reasons. It isn't obvious that he solves what he thinks are problems with other theories and even the internal coherence of his story is questionable. Still, Invariances is a good book.