Sunday, August 29, 2010

What is a gate? (Part 1)

It's been a bit since my last "what is" post, but I'd like to return to talking about science by taking a pause from my build-up to quantum states and quantum computation to instead discuss something more classical: the notion of a logic gate.

One way of modeling classical computation is as a sequence of operations performed on some data. We can then consider each operation independently. Just as we can build up complicated equations from simple arithmetic operations, these computational operations, typically called gates, can be used to build up arbitrarily complicated computations.

Take a specific example, the NOT gate, also written ¬. This gate takes a bit and produces a bit with the opposite value. Since each bit can only have one of two possible values (either 0 or 1), we can completely specify the behavior of the NOT gate by listing what it does to each of these inputs. That is, if I tell you that ¬ 0 = 1 and that ¬ 1 = 0, then in principle, I have told you everything that there is to know about the NOT gate. If this reminds you of a basis, then your intuition serves you well— we will explore that connection more in due time.

For now, though, I would like to discuss a few more examples of gates: the AND and OR gates, often written as ∧ and ∨, respectively (if these symbols seem arcane, it may help to think of them in terms of set intersections and unions). Each of these gates takes two bits as inputs and produces one output. AND produces 1 if and only if both its inputs are 1 (1 ∧ 1 = 1, 0 ∧ 0 = 0 ∧ 1 = 1 ∧ 0 = 0), while OR produces 1 if and only if at least one input is 1 (0 ∨ 0 = 0, 0 ∨ 1 = 1 ∨ 0 = 1 ∨ 1 = 1). Finally, the XOR gate (short for exclusive or, written ⊕) returns 1 if and only if exactly one input is 1 (0 ⊕ 0 = 1 ⊕ 1 = 0, 0 ⊕ 1 = 1 ⊕ 0 = 1).
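These truth tables are small enough to check by brute force. As a minimal sketch (the upper-case function names are my own, not anything from the post), each gate becomes a one-line Python function on bits, using Python's bitwise operators:

```python
# The four gates as one-line functions on bits (0 or 1). The upper-case
# names are illustrative choices, not standard library functions.
def NOT(a): return 1 - a       # ¬a
def AND(a, b): return a & b    # a ∧ b
def OR(a, b): return a | b     # a ∨ b
def XOR(a, b): return a ^ b    # a ⊕ b

# Listing what each gate does to every possible input completely
# specifies its behavior, just as described in the text.
print([NOT(a) for a in (0, 1)])                     # → [1, 0]
print([AND(a, b) for a in (0, 1) for b in (0, 1)])  # → [0, 0, 0, 1]
print([OR(a, b) for a in (0, 1) for b in (0, 1)])   # → [0, 1, 1, 1]
print([XOR(a, b) for a in (0, 1) for b in (0, 1)])  # → [0, 1, 1, 0]
```

Since a gate on n bits has only 2ⁿ possible inputs, this kind of exhaustive listing is always an option classically.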

With these four gates, we can build up any arbitrarily complicated Boolean function; that is, a function from strings of bits to a single bit. Functions returning multiple bits can in turn be built up by representing each output bit as a Boolean function. We could actually make do with fewer kinds of gates, but that's beside the point. Rather, the point is that together, NOT, AND, OR and XOR are universal for classical computation.
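To make the universality claim concrete, consider one Boolean function built entirely from these gates. The majority-of-three function (my example, not from the post) returns 1 exactly when at least two of its three input bits are 1, and it can be written using only ANDs and ORs:

```python
# Majority-of-three, assembled from AND (&) and OR (|) gates only.
# This is one illustration of building a Boolean function from gates;
# the function name is an assumption of this sketch.
def majority(a, b, c):
    return (a & b) | (b & c) | (a & c)

# Verify against a direct count over all eight possible inputs.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert majority(a, b, c) == (1 if a + b + c >= 2 else 0)
```

Any Boolean function can be decomposed this way, gate by gate, which is what universality means in practice.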

It takes some effort to prove this, but an example helps to make things concrete. The full adder circuit in particular adds two one-bit numbers together with a carry bit, and is built up entirely from two XOR gates, two AND gates and one OR gate, as shown in this circuit diagram from Wikipedia. These full adders in turn can be combined to add arbitrarily long integers. From addition, one can get to subtraction and multiplication, demonstrating the usefulness of the gate model in capturing arithmetic.
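The two-XOR, two-AND, one-OR construction can be sketched directly. This is a plain-Python rendering of the standard full adder (the function name and variable names are mine); the final check confirms that the gate-level circuit really does compute binary addition:

```python
# A full adder built only from XOR (^), AND (&) and OR (|) gates.
# Inputs: bits a, b and a carry-in c. Outputs: (sum bit, carry-out).
def full_adder(a, b, c):
    s1 = a ^ b                   # first XOR
    total = s1 ^ c               # second XOR yields the sum bit
    carry = (a & b) | (s1 & c)   # two ANDs feeding one OR yield the carry
    return total, carry

# The circuit agrees with ordinary arithmetic on every possible input:
# the carry bit is worth 2, so 2·carry + sum must equal a + b + c.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert 2 * cout + s == a + b + c
```

Chaining these, with each stage's carry-out feeding the next stage's carry-in, adds integers of any length.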

Even more compellingly, we can efficiently simulate Turing machines with these few gates, meaning that NOT, AND, OR and XOR are at least as expressive as Turing machines. Thinking about gates, then, is a powerful way of thinking about classical computation. As we shall see, this power carries over very nicely to the quantum case as well.

Saturday, August 28, 2010

On loyalty of a peculiar kind.

The Scientific American podcast today highlighted research showing that members of "Generation X" (why is that Godawful name still around?) are on average more loyal to religion than are members of their parents' generation. Setting aside the question of how reliable this report is, since there are very few details given, let us instead treat the article as a launching point for discussion.

Indeed, it is uncontroversial that loyalty to religion exists in some sense. What does such a loyalty mean, however? What courses of action are demanded by such a loyalty? This is at best a problematic question to answer, as a religion may be taken to consist of belief in a set of material claims. Under this view, religions are not adopted as matters of principle, but rather as a matter of that belief. One may as well ask what a loyalty to The Lord of the Rings implies.

Not to put too fine a point on it, but it is a bizarre notion that one can be loyal or disloyal to a set of claims about material reality. Such claims are ideally decided by consulting that self-same material reality, rather than by loyalty to one or another set of claims. One who is loyal to a set of religious claims is then someone who is compelled by this loyalty to assert the primacy of their claims over evidence. Such loyalty is distinct from loyalty to a person, ideal or value in that it cannot be a matter of principle without falling into the trap of argument by appeal to consequences.

This sort of religious loyalty is, on the other hand, one onto which principles can be grafted. If some of the material claims of a religion are that certain modes of conduct are inherently morally superior to others by divine edict, then adoption of principles reinforcing those modes of conduct is a consequence of loyalty to those claims. Since correspondence to reality is not demanded from these religious claims about reality, such claims may be manipulated so as to imply any of a wide range of mutually contradictory principles. That is, religious loyalty is not a matter of principle so much as a vessel into which principle can be poured.

Witness, for instance, the latest absurdity from Glenn Beck, his I Have A Scheme speech. As others have noted, the attendees were indeed fiercely loyal, but to no particular principle. Rather, the Tea Party, in buying into and expressing loyalty towards material claims that are demonstrably false, has made themselves into so many empty vessels into which the hateful principles of the GOP may be poured.

Just as with religious loyalty, this political loyalty neither demands nor exhibits correspondence with reality, neither demands nor exhibits principle. As such, it is difficult if not impossible to apply reason to these views. This sort of blind loyalty, made without deference to what actually is, must be seen as a problem if we are to progress in our moral thinking as a society.

All that is sacred.

(adj) sacred (worthy of respect or dedication) "saw motherhood as woman's sacred calling" [WordNet]
Leaving aside the implicit misogyny of the example given, the citation from WordNet for the word "sacred" demonstrates something very important: we can divorce what it means to be sacred from any sort of religious sentiment. Indeed, if we are to leave irrationality behind us, I assert that we must do so. Thus, I'd like to talk a bit about what is sacred to me. That is, what I find to be inherently worthy of respect or dedication.

In that spirit, then, knowledge is sacred to me in ways that nothing else is. Were I to be asked to identify the most quintessentially defining aspect of all that is good about humanity, I would likely respond that our ability to accumulate and record knowledge is what allows us to transcend not only the ignorance into which we are all born, but also the limits of our physical brains. All other human achievements are enabled by our accrual of knowledge in ways that outlast any individual human. At once, the acquisition of knowledge is a highly individual and highly collective pursuit, epitomizing what it means to achieve something of permanence. Towers crumble, words remain.

It is difficult for me to adequately justify my valuation of knowledge as uniquely sacred, as it is fundamental to the person that I've become. As someone that values rationality, however, I must work to increasingly do just that. If this valuation cannot be supported on its own merits, then it is no better than faith and other such anti-virtues. That said, I am in the awkward position that every rational person eventually finds themselves in of not knowing all the answers. Like everything else in my life, this valuation must be amenable to rational analysis, and yet I must have some notions which guide my actions in the interim. Put differently, I must employ some set of heuristics that I use to evaluate both my own choices and those of the society around me. These heuristics must then be refined by learning additional facts and must be discarded to the extent that they contradict reality.

On a more tangential note, I tend to suspect that it is these heuristics that often get confused with religious-minded beliefs, driving the "science is faith" fallacy that I find so detestable. The key difference is that the heuristics adopted by someone that values rationality are recognized as being mere approximations, and thus are malleable to the extent that the underlying reality is not known. Thus, while such heuristics superficially resemble beliefs, they are quite different in practice.

Of course, it's very easy to simply say that something is sacred; a more pressing question for someone dedicated to rationality and materialism is what this assertion implies. A heuristic which does not either directly imply action or imply other values and heuristics which in turn imply action is by hypothesis a vacuous and useless heuristic. Exploring this notion, then, consider what a heuristic of sacred knowledge leads me to aspire to.

Though it is somewhat circular, the first and perhaps most important consequence of this heuristic is the valuation of science, the formalization of the pursuit of knowledge. Disentangling this apparent bit of circular reasoning would take me still further afield, so I will be content to leave it for now with the claim that the sacred-knowledge heuristic and the scientific method are synergistic rather than truly circularly dependent.

Another important consequence of this heuristic is the additional heuristic that knowledge should be shared-- after all, knowledge locked away is knowledge that cannot help in the further pursuit of other knowledge. This is a large part of why the open access and open source movements excite me so, and why I oppose the locking away of human knowledge behind paywalls, military secrecy or other such artificial barriers. Additionally, knowledge kept secret is knowledge that is much more difficult to preserve.

With new approaches to information storage, computation and communication, we are blessed (if you'll forgive the pun and not read too much into it) with new opportunities to safeguard our knowledge against the relentless march of time. To exercise these opportunities, however, archivists must be recognized as a critical part of our societal infrastructure and knowledge must be accessible for preservation.

While I could continue in this vein, I think that this short exploration of the consequences of my sacred-knowledge heuristic is sufficient to demonstrate an essential point: rationality requires rather than precludes the adoption of strong principles to be applied to the world around us, insofar as these principles are derived from sources amenable to rational analysis. We cannot afford for religion to maintain a cultural monopoly on the respect and dedication that underlie the word "sacred," but rather must build our own sacredness in a rational way. All that is sacred, in short, must still lie within the realm of that which can be reasoned about if we are to maintain the primacy of rationality.

Thursday, August 26, 2010

Poisoned ethics and used video games.

Thanks to @saverqueen (blog) for inspiring this discussion.
Recently, I bought some used video games. I love buying used, as it saves me money and keeps perfectly fine games out of landfills, not to mention preventing more from having to be printed in the first place. These days, at least half of the video games I buy are used. The same goes for movies.

With this particular purchase, however, some mixed feelings were brought forth. You see, these games were purchased as a gift. It seems almost instinctual that one doesn't give used games, movies, books, etc. as gifts. To do so is almost as bad a sin as playing with a toy before gifting it, or wearing clothes intended for someone else. At least, that's what the societal norm seems to be. Love is buying new things, goes the chorus.

I think we must, however, take a step back and ask if that is really the kind of ethical norm that we wish to adopt as our own. Why should our love for one another be expressed by continuing a destructive consumeristic cycle, where newness is its own reward? It is not even consumerism itself that I find so objectionable as the pointlessness of making consumerism the goal rather than the means. The philosophy of buying new for its own sake seems dangerously close to the vapid philosophy once espoused by a classmate of mine: "the meaning of life is to have kids!"

This is why my family has made a decision: used gifts are just fine with us! That decision affords me new opportunities to find unexpected gifts, such as classic video games for my brother that he wouldn't have found on his own, or out-of-print novels for my parents. Occasionally, yes, I do buy new things as gifts, even within the family, but when I do, I'd like to think it's because it's my decision to and not because I have let my sense of ethics become poisoned by the obsession with a growth-based economy. I give gifts to loved ones to bring them happiness; isn't that enough?

Sunday, August 22, 2010

Accommodationism: A vexing asymmetry.

In my last argumentative post, I slipped in a bit of a sarcastic point at the end that I feel is worth treating more seriously. In that post, I said that:
It is truly unfortunate, however, that [his] approach to arguing for this controversial claim is to build such silly and distorted strawmen of atheists who might otherwise be more inclined to ally themselves with him in fighting the woo that he so rightfully expresses a passion to fight.
When I originally put those words to pixels, I intended only a cheap laugh at the thesis that atheists should keep quiet so as to not scare off the religious from the goal of (for instance) science education. This thesis, broadly called accommodationism by its detractors, including myself, has been quite pervasive as of late (making it all the way to the AAAS, for instance), and has been the center of much discussion.

What bothers me most about accommodationism, however, is something that is too seldom remarked upon: its strange and vexing asymmetry. While it is often claimed that anti-religious sentiment scares off the religious from worthwhile causes, regardless of how well or poorly that claim is supported by rational argument, I have never heard it argued that people need to be more accepting of atheists for fear of scaring us away from these same worthwhile causes. Does it not cause accommodationists consternation that referring to "fundamentalist atheists" may be the precise kind of incivility that they fear poisons communities? Anecdotally, at least, I can confidently state that I have a harder time partaking in communities where my atheism is rejected out of hand and treated with derision rather than argued against.

Don't get me wrong, however, as I wouldn't dream of asking for special privilege and exemption from criticism. Criticism, when delivered in an honest and clear manner, is the lifeblood of an intellectual community. Rather, I find disturbing the comparative lack of concern at the derision pointed at atheists that one would expect from an intellectually consistent position. Is it the case, then, that atheists are seen as less desirable by such accommodationists than are the religious? Is it that atheists are seen as a more direct threat to the goals of promoting science in society than are the true fundamentalists?

There is another possibility that seems much more palatable to me. Atheists are seen as mature enough to take such derision in stride along with the criticism. For obvious and self-centered reasons, I should like to think that this is the case. Why, then, is the assumption that people of faith are less able to deal with both legitimate criticism and the sort of derision that comes with any emotional issue? Such an assumption seems to me to be more insulting than any of the derision thrown about by the atheists.

All this is a long-winded way of saying that I think we should not let the valid and laudable pursuit of civility and mutual respect lead us into the sort of asymmetric mire that is accommodationism.

What is a matrix? (Part 2)

Now we have a new kind of mathematical toy to play with, the matrix. As I said in the previous post, the easiest way to get a sense of what matrices do is to use them for a while. In this post, then, I just want to go over a couple of useful examples.
Suppose you wish to make all vectors in ℝ² longer or shorter by some factor s ≠ 0. You can represent this by a function f(v) = sv. With a moment's work, we can verify that this is a linear function because of the distributive law. Thus, we can represent f by a matrix. To do so, remember that we calculate f for each element of a basis. For simplicity, we will use the elementary basis {x, y}. Then, f(x) = sx and f(y) = sy. By using coordinates, we can write this as f([1; 0]) = [s; 0] and f([0; 1]) = [0; s]. The matrix representation of f then becomes F = [s 0; 0 s].
Note that if s = 1, the function f doesn't do anything. Representing f(v) = v as a matrix, we get the very special matrix called the identity matrix, written as I, 𝟙 or 𝕀: 𝟙 = [1 0; 0 1].
The identity matrix has the property that for any matrix M, M𝟙 = 𝟙M = M, much like the number 1 acts.

Of course, there's no requirement that we stretch x and y by the same amount. The matrix [a 0; 0 b], for instance, stretches x by a and y by b. If one or both of a and b is negative, then we flip the direction of x or y, respectively, since -v is the vector of the same length as v but pointing in the opposite direction.
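These diagonal examples are easy to play with numerically. Here is a minimal sketch in plain Python (no libraries; matrices are lists of rows and all names are my own), applying a 2×2 matrix to a vector:

```python
# Apply a 2×2 matrix M (a list of two rows) to a vector v = [v0, v1].
def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

I = [[1, 0], [0, 1]]    # the identity matrix: does nothing
D = [[2, 0], [0, -3]]   # stretches x by 2; flips y and stretches it by 3

v = [5, 7]
print(matvec(I, v))     # → [5, 7]: the identity leaves v alone
print(matvec(D, v))     # → [10, -21]: each coordinate scaled independently
```

The negative entry flips the y component's direction, exactly as described above.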

A more complicated example shows how matrices can "mix up" the different parts of a vector by rotating one into the other. Consider, for instance, a rotation of the 2D plane by some angle θ (counterclockwise, of course). This is more difficult to write down as a function, and so a picture may be useful:

By referencing this picture, we see that f(x) = cos θ x + sin θ y, while f(y) = - sin θ x + cos θ y. Thus, we can obtain the famous rotation matrix Rθ = [cos θ  −sin θ; sin θ  cos θ].
As a sanity check, note that if θ = 0, then Rθ = 𝟙, as we would expect for a matrix that "does nothing."
One very important note that needs to be made about matrices is that multiplication of matrices is not always (or even often) commutative. To see this, we let the matrix S swap the roles of x and y; that is, S = [0 1; 1 0]. Then, consider A = SRθ and B = S. Since applying S twice does nothing (that is, S² = 𝟙), we have that BA = Rθ. On the other hand, if we calculate AB = SRθS, we find that AB = R−θ = [cos θ  sin θ; −sin θ  cos θ], the rotation by −θ.
We conclude that AB ≠ BA unless sin θ = 0, neatly demonstrating that not all the typical rules of multiplication carry over to matrices.
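This non-commutativity is easy to confirm numerically. The sketch below (pure Python lists; function names are mine) builds Rθ and S, forms both products, and checks that BA is the rotation by θ while AB is the rotation by −θ:

```python
import math

# Matrix product of 2×2 matrices: (AB)ij is the dot product of
# the ith row of A with the jth column of B.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# The rotation matrix Rθ from the post.
def rotation(theta):
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

S = [[0, 1], [1, 0]]                  # swaps the roles of x and y; S² = 𝟙
A = matmul(S, rotation(math.pi / 6))  # A = S·Rθ, here with θ = 30°
B = S

BA = matmul(B, A)   # S·S·Rθ = Rθ
AB = matmul(A, B)   # S·Rθ·S = R−θ, a rotation in the opposite direction

print(BA != AB)     # → True: matrix multiplication is not commutative
```

For θ with sin θ = 0 (such as θ = 0 or π), the two products would coincide, matching the sin θ = 0 exception noted above.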

I'll leave it here for now, but hopefully seeing a few useful matrices makes them seem less mysterious. Until next time!

Saturday, August 21, 2010

What is a matrix? (Part 1)

Functions are an important tool in mathematics, and are used to represent many different kinds of processes in nature. Like so many mathematical objects, however, functions can be difficult to use without making some simplifying assumptions. One particularly nice assumption that we will often make is that a function is linear in its arguments: f(au + bv) = a f(u) + b f(v) for all vectors u and v and scalars a and b.
One can think of a linear function as one that leaves addition and scalar multiplication alone. To see where the name comes from, let's look at a few properties of a linear function f. First, note that f(0) = f(0 + 0) = f(0) + f(0), which implies that f(0) = 0 for any linear function. Next, suppose that f(x) = 1 for some x. Then f(cx) = c f(x) = c for any scalar c. This means that f represents a line passing through 0 having slope m = 1 / x.

So what does all this have to do with matrices? Suppose we have a linear function which takes vectors as inputs. (To avoid formatting problems, I'll write vectors as lowercase letters that are italicized and underlined when they appear in text, such as v.) In particular, let's consider a vector v in ℝ². If we use the {x, y} basis discussed last time, then we can write v = ax + by. Now, suppose we have a linear function f : ℝ² → ℝ² (that means it takes ℝ² vectors as input and produces ℝ² vectors as output). We can use the linear property to specify how f acts on any arbitrary vector by just specifying a few values: f(v) = f(ax + by) = a f(x) + b f(y).
This makes it plain that f(x) and f(y) contain all of the necessary information to describe f. Since each of these may itself be written in the {x, y} basis, we may as well just keep the coefficients of f(x) and f(y) in that basis: the coefficients of f(x) form one column of numbers, and the coefficients of f(y) form another.
We call the object F made up of the coefficients of f(x) and f(y) a matrix, and say that it has four elements. The element in the ith row and jth column is often written Fij. Application of the function f to a vector v can now be written as the matrix F multiplied by the column vector representation of v: the ith entry of Fv is Fi1 a + Fi2 b.
We can take this as defining how a matrix gets multiplied by a vector, in fact. This approach gives us a lot of power. For instance, if we have a second linear function g : ℝ² → ℝ², then we can write out the composition (gf)(v) = g(f(v)) in the same way.

That means that we can find a matrix for gf from the matrices for g and f. The process for doing so is what we call matrix multiplication. Concretely, if we want to find (AB)ij, the element in the ith row and jth column of the product AB, we take the dot product of the ith row of A and the jth column of B, where the dot product of two lists of numbers is the sum of their products: (AB)ij = Ai1 B1j + Ai2 B2j.

To find the dot product of any two vectors, we write them each out in the same basis and use this formula. It can be shown that which basis you use doesn't change the answer.
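The central claim here, that the matrix of a composition is the product of the matrices, can be checked directly. In this sketch (plain Python; the matrices F and G and all names are arbitrary choices of mine), applying f then g to a vector gives the same answer as applying the single matrix G·F:

```python
# Matrix-vector product: the ith entry is the dot product of
# row i of M with v.
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

# Matrix-matrix product: (AB)ij is the dot product of row i of A
# with column j of B.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

F = [[1, 2], [3, 4]]   # the matrix of some linear function f
G = [[0, 1], [1, 1]]   # the matrix of some linear function g

v = [5, -2]
via_functions = matvec(G, matvec(F, v))   # apply f, then g
via_product = matvec(matmul(G, F), v)     # apply the single matrix G·F
print(via_functions == via_product)       # → True
```

Note the order: composing "f first, then g" corresponds to the product GF, with the first-applied matrix on the right.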

If this all seems arcane, then try reading through it a few times, but rest assured, it makes a lot of sense with some more practice. Next time, we'll look at some particular matrices that have some very useful applications.

Rebuttal: The Difference Between Religion and Woo

I have tried to resist writing about science and religion for a while; or at least, to dial back the frequency a bit. My ideas are not hidden, but they're also not terribly original. Much of the time, I suspect my voice only marginally adds to the conversation, if at all.

All this aside, there are times when I find it extremely difficult to resist. It is particularly hard for me to let something lie when someone else makes an issue of it. This is precisely the case of Rob Knop's latest post, in which he attempts to insert a wedge between religion and woo while still maintaining the validity and importance of science. His post is such a quintessential example of that protected status for religion that I find so harmful to our society that I find myself drawn into yet another Web-delivered argument. I don't write this post with the hopes that my argument with Knop will go any better than last time, but rather because it is important to me that I try.

Without further ado, then, let us look at what Knop has to say. It's a long post, so by necessity I will pick out the bits I feel most deserving of response-- go read it for the full context of his remarks.
Why do I mention this? Because I see a lot of those who call themselves skeptics making exactly the same mistake— judging another field of intellectual inquiry on what they believe to be the one true way of reason. They dismiss things as trivial or childish based on criteria that fail to be relevant to the field of human intellectual activity they’re trivializing. Specifically, there are a lot of people out there who will imply, or state, that the only form of knowledge that really can be called knowledge is scientific knowledge; that if it is not knowledge gained through the scientific method, it’s ultimately all crap.
At this point, Knop has made it clear that he intends to revisit his false equivocation between religious fundamentalists and "fundamentalist atheists" (full disclosure: Knop apparently considers me to be a member of this group). By using phrases like "one true way of reason," Knop conveniently ignores that skepticism, atheism and rationality have no central dogma beyond a sort of pragmatic honesty: if you are going to claim that your methodology (or way of reason, in Knop's vernacular) works, then it had damn well better work. As a part of that, yes, you must be able to verify that your "way of knowing" produces useful results, or else you cannot legitimately say that your methodology is a valid one.

The scientific method, then, which Knop elevates to the level of dogmatism in order to build his straw man, is not a dogma at all but a formalization of those ways of learning that have been shown to work. Far from being immutable or the "one true" way, science is adaptive and self-correcting. Already, then, Knop's equivocation fails on the basis that he's not describing skepticism as is espoused by the atheists he is so reviled by, but rather his own funhouse mirror version. We've got a lot of post left to cover, though, so let's press on:
What makes Robert Frost so much more important to human culture than the stories I wrote when I was 7? It’s not a scientific question, but it is a question that is trivially obvious to those who study literature, culture, and history. And, yet, using my 7-year-old story to dismiss all of literature as crap makes as much sense as using the notion of believing in a teapot between Earth and Mars as a means of dismissing all of religion.
If there is one sure way of pissing me off, it's to tell me that something "isn't a scientific question." Given that science is the methodology of pragmatism, such claims are no more than a way of giving up reasoned analysis. As someone who has made a career out of cultivating and exploring his own curiosity, few things are more offensive to me than someone putting such ultimate limits along my path. I don't expect that Knop refrain from doing things that offend me, however, as that would make the world a much more boring place--- rather, I would hope that as a fellow scientist, Knop would feel the same curiosity and lust for knowledge that renders such a claim so offensive to me.

The burden, however, of demonstrating that analyzing Frost versus the 7-year-old writings of Knop lies within the realm of science is one that I shall have to take on to truly make my point. In that vein, then, note that in addition to the "hard" sciences such as physics and chemistry, we have a full array of social sciences that are dedicated to applying scientific (that is, useful) methods to social questions. Such questions inevitably deal with the behaviors of entities each composed of many more than 10²³ particles, so that the "hard" sciences are completely overwhelmed by the sheer scale of the questions. Thus, we have found it useful to develop alternate methodologies that sacrifice some degree of exactness and objectivity in exchange for an enhanced ability to cope with such overwhelming questions as that proposed by Knop.

Ultimately, though, we must expect that the methods of analyzing Frost must lie within science for one simple reason: Frost existed within this reality, was a physical being and produced tangible objects that are amenable to study. Frost was, just like you or I, a citizen of the physical universe. Even if one posits the existence of a soul to try and escape this fact, the soul then influences the physical world by some mechanism that is not completely random, and thus can be examined. Knop's question is as scientific a question as any that could be asked, in that it is a question that concerns physical objects and that can be answered using useful and robust methodologies.

Of course, this is all a distraction from Knop's apparent point to mentioning Frost and his younger self. Rather, Knop accuses those who make reference to Russell's teapot of being akin to those strawmen that would discard Frost as useless due to the apparent uselessness of stories written by seven-year-old children. Indeed, Knop makes this accusation quite clear:
If you cannot see the difference between Russell’s teapot and the great world religions, then you’re no more qualified to talk about religion than the fellow who thinks that cultural bias is the only reason any of us believe in the Big Bang is qualified to talk about cosmology.
Pray tell, then, what is the difference between Russell's teapot and, just to make the discussion concrete, Christianity? Besides, of course, that the teapot is a Gedankenexperiment intended to provide an easy example of the kinds of arguments that can and should be made against religion. All of Knop's strawmen aside, I have never heard of anyone claiming that Russell's teapot invalidates all of the world's religions, but rather that the Gedankenexperiment explains why we should insist upon claims being testable. Religion is, in actuality, a complex and multi-faceted thing which many atheists and skeptics take a great deal of effort to understand. That along the way we find such examples as Russell's teapot useful is far from using the teapot as "a means of dismissing all of religion."

If Knop is interested in dragging atheists through the mud for overly reductionist arguments, then perhaps he should start by not reducing us to such a caricature of our actual arguments. That would include, for instance, not saying things like this:
There are quite a number of skeptics who openly say that they cannot see the difference between religion and belief in UFOs, Homeopathy, or any of the rest of the laundry list of woo that exists in modern culture.
There is of course a difference between religion and homeopathy: there's a hell of a lot more religious people in the world. Mind you, that's not the only difference, but the most immediately important one. As a consequence, religion alone has earned itself a special status in our society as immune to rational analysis and criticism. The point that I and others that agree with me tend to make isn't that religion and woo are the same, but rather that there is an important commonality to be found in their mutual rejection of rationality. This hypothetical reductionist that is blind to anything but that commonality, important as it is, is no more representative of actual atheists than any other strawman presented thus far.

On the other hand, Knop pretty much nails it with his next claim:
The assertion is that being religious is a sign of a deep intellectual flaw, that these people are not thinking rationally, not applying reason.
Yes, that is precisely what I have said here and in many other venues, though presented in much more judgmental terms than I find are appropriate to the assertion being made. Rather, I would put it differently by asserting that religion is not philosophically compatible or logically consistent with rationality.

Of course, the part of this assertion that people repeat far less often is that religion is not unique in that regard. There are many other intellectual "flaws," a great many of which I will admit that I am afflicted by. Why I focus on religion, then, is that it is relatively unique in being celebrated and enshrined despite being defunct as a means of learning-- of accumulating accurate knowledge.

I could go into much more detail on this point, but let me leave it for now, as I would like to get on to Knop's next point:
It’s fine to believe [that religion is a sign of a deep intellectual flaw], just as it’s fine to believe that the Big Bang theory is a self-delusional social construction of a Judeo-Christian culture. But it’s also wrong.
Read that again, please. Knop is saying that it is fine to believe something that is wrong, and it is with that assertion that I most passionately disagree with him. In my life, I strive to ensure that I believe only things which are true, and so I will admit that I have very little basis for understanding Knop's assertion here. Even more so when Knop continues thus:
Yes, there is absolutely no scientific reason to believe in a God or in anything spiritual beyond the real world that we can see and measure with science.
This is a statement which is not new to me, but which I have made no recent progress towards understanding. I doubt that Knop intends to say that his god is impotent in that it is incapable of affecting the material world, and so I presume that Knop is asserting the existence of an untestable and yet still physical phenomenon. As I said before, however, this is where I must take earnest and profound offense: learning does not stop where it is convenient for the religious, and so we should not impose a priori limits on understanding the world just because of someone's god. Either Knop's god is impotent or it is material in the sense that it affects the material world; if we insist upon the latter, then the methods of science (sometimes called "methodological naturalism" in this context) must be able to study the patterns by which his god affects the world.

It is in the context of this assertion that I find Knop's closing comments so difficult to agree with:
But that does not mean that those who do believe in some of those things can’t be every bit as much a skeptic who wants people to understand solid scientific reasoning as a card-carrying atheist.
Knop has admitted in his post that there are a priori and impregnable limits to rationality, something which I do not admit or agree with. There is thus at least one "bit" in which I am more willing to be a skeptic than is Knop. While overall, Knop may be more or less skeptical than I am (I really don't know which is the case), I cannot agree with the claim that his endorsement of religion is compatible with the skepticism he practices elsewhere in his life.

It is truly unfortunate, however, that Knop's approach to arguing for this controversial claim is to build such silly and distorted strawmen of atheists who might otherwise be more inclined to ally themselves with him in fighting the woo that he so rightfully expresses a passion to fight.

Note: Rob Knop said a great many things in his post I did not address, in the interests of brevity (believe it or not). Please don't take this posting as being a fair summary of the entirety of his argument, as it is intended only as a response to those points I found most objectionable.

Thursday, August 19, 2010

Role for initiative.

Since I can't resist doing more cultural and philosophical blogging, I figure I should chime in on one more of the recent topics to set the Internet abuzz. In particular, Greta Christina wrote a pair of excellent articles on ten unfair and sexist things expected of men (part 1, part 2). As is her custom, Greta Christina nails beautifully the points that she attempted to hit in her columns. For my part, then, I'd like to expound just a bit, and to be presumptuous enough to add a stupid thing of my own to the list of stupid things society expects of men.
Preemptive Apology: The rest of this post will be unintentionally heteronormative, as that is the set of experiences from which I can best draw. As such, I apologize to those outside the narrow range of sexual identities discussed here for neglecting their experiences.

To put it bluntly, modern society still seems (anecdotally, anyway) to maintain the antiquated expectation that men take the initiative in forming relationships. It is common that we put pressure on men to initiate relationships ("have you asked her out yet?"), and that we encourage women to wait. While this standard is manifestly unfair to women, as it strips them of yet another mode of personal decision making, I posit that it is just as manifestly unfair to men. Taking the first step is bloody hard, after all. You must be willing to put your feelings on the line, to be honest in the face of intimidating awkwardness, and perhaps most frighteningly, to be wrong about your feelings.

All this is leaving aside, of course, the mire of ambiguities and potential misinterpretations built up from the cultural expectations of a male-privileged society. In any action, one must take into account the cultural context of that action in order to respect the humanity of those around them. It is no different in the case of romantic initiative, save for that the context is that much more overwhelming.

Thankfully, we see the signs of this unfair standard starting to break down, as both men and women alike are encouraged to seek the pleasure of another person's company. In time, then, and with the introspection granted by such discussions as that started by Greta Christina, perhaps we can decouple the role from the initiative.

Looking back at what is.

Over the past few weeks, I've tried to tunnel through the potential barriers to actual science blogging, with mixed success. One of my bigger oversights thus far has been to omit any sort of a "big picture" from my posts, leaving them as little islands in a vast sea of scientific ideas. Today, I'd like to correct that.

My most immediate goal in science blogging has been to explain how physical states function in the beautiful formalism of quantum mechanics. The language of quantum mechanics, however, is one of probabilities, of complex numbers and of linear algebra. Arguably the most fundamental part of the language of quantum mechanics, linear algebra may be roughly thought of as the study of vectors, and how they transform. As I shall discuss in a future post, by using the idea of a basis, we can represent a special kind of vector transformation by an object called a matrix (more generally, an operator). Thus, what we have discussed thus far is not a set of disparate islands so much as a set of stepping stones. If you prefer a more concrete metaphor, we have poured a foundation for future discussions, including a discussion of the quantum state itself.

Once we have the idea of a quantum state, the horizon opens wide for exploration. The quantum state gives us a language in which we can understand seemingly arcane consequences of a world described by quantum mechanics, such as entanglement or superposition. With the formal tools of mathematics at our disposal, we can overcome the limitations of our intuition, so that we can understand even such tricky concepts as these.

One downside to my stepping-stone approach, however, is to seemingly put the concept of a quantum state on a pedestal, inaccessible without a high degree of mathematical maturity. Little could be further from the case. Indeed, the mathematics with which we understand quantum states are not so difficult as they are esoteric. It is my own opinion that these areas of math need not be esoteric, save for that it has been arbitrarily decided upon (at least in my home, the United States) that Math Is Hard, and that concepts such as those discussed here Should Be Left To the Professionals. Bollocks. We live in a probabilistic world, and one in which statistics guide nearly every aspect of society, so why should understanding probability be so inaccessible? While complex numbers are not so manifestly real, even to the point that i is called the imaginary unit, it takes but a small amount of study to see that the complex numbers form an integral part of how we describe reality. Similarly, the concept of a vector may seem too far removed from reality for the layman to pursue, but in many ways, vectors formalize and encode much of our intuition about geometry, and are just as accessible as the sort of geometry that is taught in many grade schools.

No, quantum states are there for those who want them. My goal is to bring the concepts just a little bit closer, and to let the mathematical beauty underlying them shine through just a little bit brighter. In doing so, I won't always go from point to point in the most straightforward way, but I ask your patience, for I am going somewhere. With a bit of looking back at what is, I hope you'll agree that we're going somewhere interesting.

Tuesday, August 17, 2010

What is a basis?

Consider a vector. Just to make things concrete, consider a vector on the 2-D plane. In fact, let's consider this one (call it v⃑):
It's a vector, to be sure, but it's hardly clear how one is supposed to work with it. It doesn't make sense to pull out a ruler and pencil every time we want to add our vector to something; mathematics is supposed to be a model of the world, and thus we should be able to understand things about that model without recourse to physical measurements. To solve this problem for vectors on the plane, we can introduce two new vectors, x̂ and ŷ, then use vector addition to write v⃑ as a sum:
Now we can write v⃑ = ax̂ + bŷ, which doesn't at first seem to buy us much. Note, however, that we can write any vector on the 2D plane as a sum of these two new vectors in various linear combinations. Mathematically, we write this as ℝ² = span {x̂, ŷ}. Whenever a space can be written this way for some set of vectors B, we say that B is a basis for the space.

Once we have a basis picked out, we can work with the coefficients (a and b in our example) instead of the vector itself, as they completely characterize the vector. For example, adding vectors becomes a matter of adding their respective coefficients.
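As a quick sketch of this point (the function name here is my own, not from the post), once a basis is fixed, adding vectors really does reduce to adding their coefficient pairs:

```python
# Represent a vector in a fixed basis {x̂, ŷ} by its coefficient pair (a, b).
def add_vectors(u, v):
    """Add two vectors given as (a, b) coefficient tuples in the same basis."""
    return (u[0] + v[0], u[1] + v[1])

# v⃑ = 2x̂ + 3ŷ plus w⃑ = 1x̂ - 1ŷ gives 3x̂ + 2ŷ.
print(add_vectors((2, 3), (1, -1)))  # (3, 2)
```

Note that the basis vectors themselves never appear in the computation; only the coefficients do, which is exactly the point.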

In spaces other than the 2-D plane, we can also apply the same idea to find bases for representing vectors. Consider, for instance, the space of column vectors such as [a; b] (pretend they're stacked in a column, OK?). Then, a perfectly fine basis would be the set {[1; 0], [0; 1]}.
It's easy to see that we can write any other 2-dimensional column vector as a sum of the form a[1; 0] + b[0; 1] = [a; 0] + [0; b] = [a; b].

A point that can get lost in this kind of discussion, however, is that there's absolutely nothing special about the bases I've given here as examples. We could just as well have used [1; 1] and [1; -1] as a basis for column vectors, or a different pair of vectors in the plane:
Put differently, a basis is a largely arbitrary choice that you make when working with vectors. The relevant operations work regardless of what basis you use, since each of the vectors in a basis can itself be expanded. For example, [1; 0] = ½([1;1] + [1; -1]) and [0; 1] = ½([1; 1] - [1; -1]), so that we have a way of converting from a representation in the {[1; 0], [0; 1]} basis to the {[1; 1], [1; -1]} basis.
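To make the conversion arithmetic concrete, here is a small Python sketch (the function names are my own invention) that converts a vector's coefficients from the standard basis {[1; 0], [0; 1]} into the {[1; 1], [1; -1]} basis and back, using exactly the expansions [1; 0] = ½([1; 1] + [1; -1]) and [0; 1] = ½([1; 1] - [1; -1]):

```python
def to_new_basis(a, b):
    """Convert coefficients (a, b) in the standard basis {[1;0], [0;1]}
    to coefficients (c, d) in the basis {[1;1], [1;-1]}."""
    return ((a + b) / 2, (a - b) / 2)

def from_new_basis(c, d):
    """Invert the conversion: recover (a, b) in the standard basis
    from c[1;1] + d[1;-1]."""
    return (c + d, c - d)

# The vector [3; 1] becomes (2, 1) in the new basis,
# since [3; 1] = 2[1; 1] + 1[1; -1].
c, d = to_new_basis(3, 1)
print((c, d))               # (2.0, 1.0)
print(from_new_basis(c, d))  # (3.0, 1.0)
```

The round trip returning the original coefficients is the sense in which the choice of basis is arbitrary: no information is gained or lost by switching.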

While there is much, much more to be said on the topic of bases for vectorspaces, I'm happy to have said a few words about them here. As we shall see when we get into discussing linear operations, the existence of bases for vectorspaces is a large part of what gives us so much power in linear algebra. We shall need this power in the quantum realm, as linear algebra may well be said to be the language of quantum mechanics. Hopefully I'll get a few more words in on the subject before my vacation!

Sunday, August 15, 2010

What are vectors?

As I've said before, science is social-- oops. Wrong mantra. What I meant to say is that vectors are an abstract way of describing a pattern. Specifically, the vectorspace axioms formally describe a kind of mathematical object, the vector, that encapsulates the geometric and algebraic properties of a large class of seemingly disparate objects. By using the vectorspace axioms, we will be able to see that lists of numbers are vectors, as are arrows on the 2D plane.

Rather than describe how to do so myself, though, I will try something different. Vectors are important in much of physics, and so lots of people have already written much about them. Thus, for the bulk of the work in describing vectors, I will defer to these other writings. A very physics-oriented approach can be found over at Dot Physics, starting with a trig-based introduction to vectors, followed by a discussion of how to represent vectors. An alternate physics-motivated discussion of vectors can be found at HyperPhysics.

For the more mathematically motivated amongst us, Wikipedia has a good page describing a very special family of vector spaces called ℝⁿ that is used to describe points in Euclidean space. MathWorld has a few good articles on vectors, including a technical definition and listing of properties and a more concise listing of the vectorspace axioms. Finally, the Unapologetic Mathematician derives vectorspaces from a more general construction called a module (warning: not for the faint of math).

To understand why we care about vectors in quantum information and computation, however, takes one more observation. A quantum state can be written as a linear combination of some set of basis states. For example, an arbitrary qubit state can be written as α|0⟩ + β|1⟩. This important property means that quantum states are a kind of vector in what we call a Hilbert space. This has some profound implications for how we think of and manipulate quantum states, as we shall explore in forthcoming posts.
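As a minimal illustrative sketch (my own, not from the original post), a qubit state can be modeled as a pair of complex coefficients whose squared magnitudes sum to one:

```python
import math

# A qubit state α|0⟩ + β|1⟩, modeled as its coefficient pair (α, β).
alpha, beta = complex(3 / 5), complex(0, 4 / 5)  # α = 3/5, β = 4i/5

# A valid quantum state is normalized: |α|² + |β|² = 1.
norm_sq = abs(alpha) ** 2 + abs(beta) ** 2
print(math.isclose(norm_sq, 1.0))  # True

# Because states are just vectors of coefficients, manipulating them
# is linear algebra on complex numbers -- the observation above.
```

This is of course only the coefficient bookkeeping; the physical interpretation of α and β is a story for the posts to come.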

Joining the fray, despite better judgment.

After reading Jerry Coyne's take on the NYC mosque project known as Cordoba House, I feel compelled to contribute my own unsolicited views. Let me be perfectly (and perhaps painfully) clear on this: I do not support the building of this mosque any more than I support the building of any other mosque, church, temple or other monument to that source of perpetual irrationality that is religion. Of course, what's brilliant about living in a country that respects human rights is that they do not need my support to build their mosque any more than, say, a Catholic group needs my support to continue expanding that particular institution.

There is an inherent and celebrated right to express viewpoints, including those that I find deplorable and unconscionable, and I am not about to stop celebrating this right simply because of some contrived notion of "ground zero" as being somehow sacred, as is currently being pushed by the right-wingers in my home country. My personal revulsion at a religious institution that offers no better than a compromise with the hateful and inhumane extremes of the worst and most fundamentalist strains of religious practice does not figure into the equation at all. I cannot, by the same reasoning, let the hatred expressed as an article of faith by the Mormon church in the recent California Proposition 8 debacle lead me to reject the inherent rights of Mormons to practice their faith.

Rather, as I have said before and will continue to say, such institutions are made irrelevant through eroding the cultural support for irrationality. The law has no place in deciding questions of this kind of cultural imperative-- not even zoning laws. To throw away the recognition of fundamental human rights in a case like this is nothing other than embracing the very irrationality that I argue is so damaging.

Note: Thank you to the person who helped me perfect the writing on this one. Left anonymous for search engine reasons.

Friday, August 13, 2010

Intuition, mathematics and the conspicuous absence of "vs."

I am now at the point in my academic career where some people see fit to ask me for advice. As shocking as this change is to me, I have found that when people ask me for advice, they mean it sincerely and take my advice seriously. It is thus incumbent upon me to be just as serious and sincere in what advice I give. Tonight, I'd like to ponder one particular kind of advice I have found myself giving in recent weeks.

In physics, we use mathematics. A lot. Math is the language of physics and the method by which our science is performed. How, then, can physics ever advance in directions unanticipated by math? Perhaps more to the point, since mathematical thinking is not inherent but a skill, how can we check ourselves against mistakes in our mathematics? In physics, we often find that intuition is the answer to these problems. A good physicist will have developed a keen intuition that guides them to new and novel discoveries, checks them against mistakes and helps with the interpretation of very abstract concepts.

This answer, as correct as it may be, is nonetheless incomplete, however. Our intuitions are ill-equipped to deal with phenomena outside the "middle world" in which nothing is too big, too small or too fast. Thus, when we first encounter quantum mechanics, for instance, our intuitions often betray us, leading us to reject such beautiful facets of the theory as entanglement. As we learn, we must fight our intuitions as much as we utilize them.

It is common to paint this quandary as being about intuition versus mathematical truth, but herein lies my perhaps not-so-humble advice: trust your intuition, but trust the mathematics more. Our intuitions are wonderful tools for doing science, but at the end of the day, it is mathematics upon which our theories must be founded. When we encounter something unintuitive, it does indeed behoove us to exercise extra caution and skepticism, but these must ultimately give way to experiment and to theory.

As we learn to put our trust (not faith) in the explanatory power of mathematics and the veracity of our experiments, we can build up a new intuition that serves us more faithfully. In short, by trusting in mathematics, we extend to our intuitions that most admirable of human qualities: the capacity for self-improvement. Rather than thinking of intuition and mathematical truth as being at odds, we can see conflicts between these as opportunities to learn and to improve. The two tools, when used to their fullest, work together in a virtuous cycle that results in the expansion of knowledge.

Thursday, August 12, 2010

All our relations laid bare.

Update: Thank you to Diandra for the kind words about this post!
The Internet is abuzz recently with a depressing bit of news about the state of math education in the United States: the vast majority of adults in the US do not understand what the equals sign means. For a particularly good take on this, please see the excellent article on Cocktail Party Physics. As for my part, I'd like to take this as an opportunity to expand on a point that the author, Diandra, made in the Cocktail Party post. Specifically, I want to elaborate on the use of the equals sign to indicate a relation.

Fundamental to mathematics is the idea of a relation, which is a formal way of stating that two objects are related in some specific way. For instance, the object "2 + 3" is related to the object "5" by the equality. This notion, however, can hide that something very important has occurred. We have taken a conceptual process, addition, and restated it in terms of a statement about static relations. No matter what I do, I cannot break the relation "2 + 3 = 5." By contrast, if I constrain myself to thinking about the addition process, then it is harder to separate that statement about Platonic ideals from the perhaps imperfect implementation of the addition process. The equality relation, then, tells us about what is.

To take a tangent for a moment, mathematics can be thought of as the process of identifying and abstracting patterns. The concept of addition, for instance, is an abstract way of discussing and modeling a very common pattern in the natural world. We need not specify whether we are adding apples or planets; the pattern is the same. Thus, taking the relational view is a natural step in this methodology of abstraction, as relations tell us about the patterns that we can identify in other patterns. We recognize that the pattern "three objects" is indistinguishable from the pattern "one object and two objects added together," and so we say that these two patterns are equal to each other. In doing so, we make no statement about which pattern precedes which in a process, meaning that we can represent the "three objects" pattern as the "1 + 2 objects" pattern should we find the latter more convenient.

Of course, processes exist, and so mathematics would be much less useful were it not able to describe them. One may well point out, for instance, that the concept of a function has a very clean intuitive description as a mathematical formalization of a process. That is, the expression "f(x)" means "take x and do f to it." Notice, however, that we ultimately rely on the idea of a relation to make sense of functions. We say things like "y = f(x)," meaning that the result of f acting upon x is related to y such that the two objects are indistinguishable. We can thus remove any notion of dynamics from our description, focusing on the pattern that our process introduces. One can even go as far as to think of a function f as a kind of relation between other objects, so that "x f y" means that the objects x and y are related by the action of f.
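To make the "function as relation" view concrete, here is a small Python sketch (my own illustration, not part of the original post) that represents a function not as a process but as a static set of (input, output) pairs:

```python
# The squaring function on {0, 1, 2, 3}, written as a relation:
# the set of all pairs (x, y) such that y is x squared.
square = {(x, x * x) for x in range(4)}

# The statement "y = f(x)" becomes a membership question about the relation.
print((3, 9) in square)  # True
print((3, 6) in square)  # False

# Such a relation is a function precisely when each input
# is related to exactly one output.
inputs = [x for (x, _) in square]
print(len(inputs) == len(set(inputs)))  # True
```

Nothing here "does" any squaring when we ask the question; we merely check whether two objects stand in the relation, which is exactly the removal of dynamics described above.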

This shift to relational thinking is very powerful, and underlies not only much of mathematics, but also much of our language. When I say "I am Chris," there is no naming process implied, but only the statement that the concepts "I" and "Chris" are related by the verb "to be." That is, in announcing my name, I am relating the concepts of self and name. This recognition has been a boon to the Semantic Web, where it is used to express concepts in terms of abstract relations, such as is done in Notation3. The entire concept of the World Wide Web comes back to a very special relation known as the hyperlink.

Understanding the notion of a relation can be seen to be critical to understanding the means by which we understand and model the world around us. A key part of being human is our capacity to identity patterns and relationships between objects around us, and it is precisely this capacity that we bring to bear in mathematics.

PS: This is why ":=" is often preferred over "=" for indicating "set equal to" by mathematicians.

Wednesday, August 11, 2010

The Power of Conversation

Today, I had a conversation with a friend of mine. One of those wonderful, roaming conversations that managed to touch both on the merits of various tabletop roleplaying game systems and on the utility of deriving quantum mechanics from quantum field theory. You know, the kinds of conversations that you can have with awesome people. Along the way, we spent a while discussing a topic near and dear to my blogging interests: science and rationality in society.

Perhaps unsurprisingly, we agreed on most of what we talked about with respect to this important topic. Looking back has gotten me thinking about the pragmatic value of keeping the rationality conversation alive, even with like-minded people. There are many reasons that often come up in such meta-discussions, of course. Conversing with others can help you to realize that you aren't alone in everything, just as Darwin fish can serve to raise consciousness about the existence of freethinkers. Such conversations can help one to hone their arguments and develop confidence in their conclusions. Through conversation, we can also expose ourselves to novel motivations and arguments, even for positions we may already take. For the emotional reasons alone, it's worthwhile to keep talking about rationality and science, to say nothing of these kinds of pragmatic benefits.

That said, there's one purpose that conversation can serve that I feel deserves special emphasis. Through conversing with other people, whether like-minded or diverse, we can learn quite a bit about when we are wrong. We are all human (I presume, anyway), and are fallible in our applications of logic. As such, it is a wonderful opportunity for self-improvement to be able to hash through our views with those honestly seeking truth. Indeed, just as science is inherently a self-correcting enterprise that must deal with the limitations of human minds, so too must rationality in general co-exist with a self-awareness of our capacity for mistakes. In conversations with others, we are often called upon to defend, explain or otherwise make explicit those private ideas which shape how we understand the world. When we are in the wrong on an issue, this process can be a tremendously helpful mechanism for self-improvement. Even with like-minded people, one's justifications can be flawed in ways discoverable through open and honest debate.

Conversation, then, is an essential tool in the rationalist's toolbox. Without healthy conversation, we are each isolated and limited to the power of our own brains. This is a large part of why I appreciate having a healthy and vibrant discussion take place on articles that I post here, such as that attached to Context Isn't Everything. Through the time that you, dear readers, invest in this nascent community, I am given a precious opportunity to discover the depths of my own wrongness, as well as to find points of agreement with my intelligent, rational and compassionate peers. The rationality conversation is everywhere, and I'm glad for what parts of it find their way here, conveyed by vibrant minds.

Sunday, August 08, 2010

P ≠ NP?

I don't have much to say on this one yet, but it's too important to stay silent on. Someone is claiming to have proved that P ≠ NP, which wouldn't be new but for the fact that the claimant is actually a credible computer scientist-- so credible that Stephen Cook is saying it may be a correct proof. Since not everyone here is a complexity-head, let me spell out the significance of that: Cook was one of the people behind the Cook-Levin Theorem, which established the existence of NP-complete problems. Without Cook, then, the P vs. NP problem would be much less relevant, so his opinion on the matter should carry no small amount of weight. Of course, this is a science, so that can't be the last word-- hence the question mark in the title. It'll be interesting to see what light the morning brings to this one.

Many others have written on the issue, including: R. J. Lipton, Dave Bacon and Joe Fitzsimmons.

Deciding What Science Is

Recently, someone at a major newspaper wrote an article about science blogging that is perhaps best described as trolling. This article, penned by Virginia Heffernan, has already led to many pixels being spilt over its careless dismissal of the state of science blogging. See, for instance, PZ Myers' post on the matter wherein he dispatches her arguments quite handily. I don't wish to add to this issue, as too much has already been said about her article.

Rather, I want to address one of the more poisonous and wrong attitudes upon which her article seems to have been predicated. This attitude is far from limited to Heffernan's flamebaiting, and has been reflected in much of the writing that has followed. For instance, one blogger writing on "Christian faith, society, science and culture" had this to say:
One of my chief complaints about many science blogs is that there really isn’t much science to be found there. Many of the most popular consist primarily of diatribes about various political issues (gay marriage, immigration, the tea party, etc) and the personal religious beliefs of the blogger (who is more often than not an evangelistic New Atheist). What one will find very little discussion of science, as in information about current research, particular papers, or the state of various scientific fields. [emphasis in original]
The author, Jack Hudson, has implicitly made the same kind of error that so many face when discussing science: he has decided a priori what is and is not within the realm of what can be discussed as science. Let's take gay marriage, for instance, since Hudson seems to think that this issue has no place on a blog about science. It also serves as a very timely example to pick at, seeing as how the anti-gay marriage amendment Proposition 8 was just found to be unconstitutional in the decision Perry v. Schwarzenegger. Much of the decision came down to deciding if the law had any rational basis, and thus involved a substantive discussion of material claims made by anti-gay advocates. These claims were found to be wanting, as they were not grounded in scientific fact. Thus, we see that the final decision in the case came down to a scientific question: is gay marriage at all quantitatively different from heterosexual marriage?

Gay marriage, then, is a perfect example of something within the purview of science, as in deciding it, we must apply the ideals of rationality upon which science ultimately rests. Advocating against gay marriage is simply not rational from any standpoint we should find acceptable in society, and so any blogger advocating for the public role of science has not only the right, but the responsibility to speak out against anti-gay actions like Prop 8. If science bloggers had by and large stayed out of the issue, then they would have been complicit in the further separation of science and society.

The trouble that Heffernan, Hudson and other writers run into, then, is that they don't get to set aside some issues as being outside of science. Deciding what science is and is not is not so trivial as to be settled by consulting one's political positions. We cannot appease the Hudsons of the world by shying away from "whining about creationists"; doing so will only further weaken how science is seen and accepted in society at large.

P. S.: I have to point out that Hudson filed his article under the "Atheist Contradictions" category, begging the question of what exactly is so contradictory here.

Saturday, August 07, 2010

Test: a new way to math.

Note: If this test does not work for you in Firefox 3.5 or later, please use "about:config" to change "html5.enable" to true. (details)
Trying something out here, so please humor me... x = (-b ± √(b² - 4ac)) / 2a

Two Book Recommendations for QI/QC

Uncharacteristically, this shall be a short entry. Instead of talking about my thoughts on politics or science or anything at all, I'd like to point my readers at a pair of wonderful books that serve as a good introduction to quantum information and computation. I don't claim to be such an expert in these fields as to be able to write a complete book's worth of introductory materials, and so my own writings will be, by comparison, incomplete. If you have an interest in these subjects, then, you will almost certainly want to find a better introduction than that which I have been working on here.

The seminal textbook of the QI/QC research community is undoubtedly Nielsen and Chuang (az, goog), due to its incredible completeness, depth of content and clear writing. As such, I cannot help but recommend it, as it truly deserves its place as the bible of quantum computation. That said, it can be daunting to read through as a first book on the subject, and so I would also recommend Kaye, Laflamme and Mosca's excellent introductory volume (az, goog). While KLM do not delve nearly so deeply into each topic, their text does a wonderful job of making QI/QC not only concrete, but accessible.

Starting out in a field can be hard, as it is difficult to even know what a first step towards self-education might be. Thus it is that I hope that these recommendations help lower that potential barrier, even if only slightly.

Notes: There are other excellent books, I'm sure. I chose two with which I am sufficiently familiar to make a personal recommendation, but that is not to say that these two books are the only books worth considering. Also, I get a kickback on books purchased via the links above.

Thursday, August 05, 2010

Open Source and Science: A match made in pragmatism.

Note: This is an issue I care a lot about, and so I've written about it before. This is kind of a "take two," where I hope to expand on previous writings.
I am an idealist of a peculiar kind, in that one of my highest ideals is that of pragmatism. For instance, I posit that society should not spend immense amounts of resources in efforts that evidence has shown to be futile. As I wrote about in my post on science and faith, this is one of the quintessential features of science. So much so, in fact, that one may define science as the set of those means of learning which expand human knowledge.

This pragmatic ideal is what determines much of the way that we do science. We have found that, as the sphere of human knowledge grows, it has quickly transcended the capacities of any one human brain. Thus, in order to continue to do science, we have recognized that science must be a social enterprise. This then requires that we have some means for accepting that what another scientist tells us accurately reflects reality.

It is this point at which many will claim that science must base itself on faith; specifically, faith in the goodwill and honesty not simply of our peers, but of all who came before us. Through pragmatism, however, we see that this is not the case. Rather, necessity has driven a complicated system of social protocols for communicating science by which everything is reproduced and verified such that errors due to misplaced trust are minimized. A key aspect of this social system is that science is done in the open. While truly taking seriously the ramifications of such a principle is an effort still in its earliest stages, we have long recognized that science cannot go beyond that which is communicated (put differently, science cannot exist in a vacuum). Thus, secrecy has no place in the development of scientific knowledge. In order to truly succeed in the sciences, we must wholly embrace the social and open nature of science.

Of course, science in the abstract is not the only place that we find such concerns. Consider, for instance, a computer. It was not too terribly long ago that a single person could in principle understand every aspect-- perhaps even every circuit component-- of a computer. Despite their immense physical size, computers of this age were small in the sense that they fit into the human mind. Now, however, there has been so much technological progress that it is ludicrous to think that a single person could design a modern computer from first principles. Rather, the development and manufacture of computers is a social enterprise, and not just to the extent that it overlaps with science as we have discussed it so far.

To make the discussion still more concrete, we can consider the enormity of a modern operating system. The Linux kernel alone has grown from 10,000 lines of code to about 13,000,000 lines, representing far more work than any one individual can master. Such a task is undertaken with open collaboration and communication, whereby each contributor can focus on some subset of the immense whole that is the Linux kernel. This is to say nothing of all of the other parts required to make up an operating system, such as a desktop environment and low-level userspace utilities. The modern operating system must be a true community effort, if only due to the sheer scale of the task.

In order, then, to develop software commercially, one must create within their company a microcosm of this sort of community. Undoubtedly, this can be done, as the evidence exists in the form of closed and proprietary software. Science, however, serves in this instance to show us the value of an open flow of ideas. We spend an immense amount of effort in the sciences on facilitating communication, utilizing everything from conferences to telecommunications as tools to do so. It is, in essence, an openness born primarily of a pragmatic ideal which can be readily seen to apply in society more generally.

The story of openness, however, is far from being a constant push from academia to the rest of society. Indeed, much of the current open science movement relies on the open source movement for its inspiration. There is, in fact, a healthy community on the boundary of the open science and open source movements. This community is a wonderful example of the more general realization that pragmatism can and should drive forward the open exchange of ideas.

I would be remiss if I did not take the opportunity to share at least a few good links on the subject. In particular, I have provided below links to the work of a small sampling of the people who comprise much of my view of the open science movement.

Tuesday, August 03, 2010

Context Isn't Everything, But It's Quite A Lot

Context. We can think of it as what separates us from the current generation of machine intelligences, floundering around with no memory. Context is the difference between a definition and a connotation, between an innocuous statement and a sly innuendo.

Part of being human is that we have a shared culture, which serves as a context for all that we do. Thus, a statement which is delivered with only good intentions can, in the context of culture, communicate bigotry instead. Recently, in the particular blogging circles in which I run, this effect has reared its ugly head quite a disturbing number of times.

This most recent run seems to have been set off by a list of "sexy scientists" published with good intentions. Following this list, I saw some threads on the subject that quickly filled with controversy. One particular thread reached almost 700 comments, a good indication of the original list having struck a nerve.

Before going any further, I'd like to stop for a moment and point out something: despite not ever having named the gender of the scientists on the "sexy" list, you probably know the answer. Indeed, women were the subject of that post. In our cultural context, it is predominantly women who get the label of sexy, and so context lets you fill in that missing information. This is precisely where I think that Luke went wrong in posting the list.

Even though there is nothing inherently wrong with noticing the physical attractiveness of those around us, or even commenting on it, in the context of a society where women are unfairly disadvantaged as a consequence of their gender, Luke's list takes on a different meaning. Were women not already judged more on their physical attractiveness, then the intended celebration of beauty may not have been perverted into just another aspect of life in a patriarchal society.

Similar problems have been occurring in discussions all over the Internet, though, and so I don't want to home in on what is, in many ways, a done deal. Just look at what happened in the comments following another of PZ Myers' posts on feminism. Here, commenters that I can only assume were well-meaning tried to point out ways in which men are hurt by sexism, but in doing so neglected the context of a discussion of male privilege. In turning a thread on male privilege into a discussion of how men suffer, these commenters perpetuated, however inadvertently, the cultural norm that men's problems are somehow more pressing than those of women. Thus, the context turns a well-meaning discussion of sexism into yet another mechanism to perpetuate sexism.

Cultural context can be a powerful thing, twisting our words and actions. By necessity, this introduces a double-standard, where the same kinds of jokes and statements that are acceptable to make about men turn poisonous when placed against a backdrop colored by sexism of the most vile kinds. Without the cultural context of religious oppression, a veil would be just another cloth. Without the context of a society in which many women live in constant fear of sexual assault, a flirtatious compliment could be seen as innocuous. Without a context of a society that fetishizes youth, a pole-dancing class for children would be just another dance class divorced from its sexual origins (after all, it's not as if ballet or tango have "innocent" origins).

If we want this to change, then we must all-- men and women alike-- be more inviting and inclusive. We must learn to not play into the problems of our culture. We must recognize that there are limits to how much we can make note of a woman's attractiveness before our message becomes one of objectification. For instance, we can't use phrases like "cry into your underwear with nerdlust" when referring to our colleagues and our peers if we want to change this poisonous cultural context.

Likewise, the men among us must be involved in the conversation in order for change to really set in. Here, I'll admit that the story gets much more personal for me than I'd like, so please forgive me if I spend a bit longer on this point than is really appropriate. I'm not always perfect at how I express myself, or always the best at communicating about feminism. It's hard for me, as a man, to truly understand what women go through sometimes. Despite this, I do try, not out of expectation of reward, but because I feel downright compelled. It makes it hard to try, however, when speaking out means that vile accusations like this get leveled:
As I read more of your posts about this girl, I begin to see what your motivation is. You're the overprotective geek friend/wannabe lover who thinks by defending her honor on some random geek message board, you will curry favor with her and this will somehow lead to her fucking you. I'm sad to inform you this will never happen.
As long as it is so inconceivable that a man may speak up to try to improve his own community, rather than in a single-minded pursuit of sex, sexism will persist. We must all, men and women alike, understand the context in which we exist if we seek to change it so as to respect each other.

Monday, August 02, 2010

What are the complex numbers?

Now that we have ideas about states and probabilities under our belt, we're ready to build up yet another step towards understanding what a quantum state is: the complex numbers.

First, as is typical for me, I'd like to go off on a tangent. In mathematics, we often consider sets of some kind of object, such as the integers (which we write as ℤ) or the real numbers, written as ℝ. We can then add operations on these sets, such as addition and multiplication. A very useful property for a set to have with respect to an operation is that of closure, by which we mean that the operation doesn't take you outside of the set. For example, if you add any two integers, you get another integer, and so ℤ is said to be closed under addition. Similarly, ℤ is closed under multiplication and subtraction. Where it breaks down, however, is when we consider division; the specific counter-example of ⅔ shows that not all integers can be divided to produce another integer. For that, we must take a step back to the rational numbers, written ℚ. The rational numbers can be taken as the set of numbers produced by dividing integers by one another (except by zero, for which we must always make an exception). One can then show by direct calculation that if you divide two rational numbers (that is, two fractions), you get another rational number, and so ℚ is closed under division.
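These closure properties are easy to demonstrate; here is a minimal sketch in Python (the variable names are my own), using the standard library's Fraction type to model ℚ:

```python
from fractions import Fraction

# The integers are closed under addition, multiplication and subtraction:
a, b = 3, 7
assert isinstance(a + b, int) and isinstance(a * b, int) and isinstance(a - b, int)

# ...but not under division: 3/7 is not an integer (Python gives a float).
assert not isinstance(a / b, int)

# The rationals, modeled here by the standard-library Fraction type,
# are closed under division (so long as we never divide by zero).
p, q = Fraction(2, 3), Fraction(5, 7)
assert p / q == Fraction(14, 15)   # (2/3) / (5/7) = 14/15, still rational
assert isinstance(p / q, Fraction)
```

Because Fraction stores an exact numerator and denominator, the last assertion really is the closure property: dividing two fractions always yields another fraction.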

Where, then, do numbers like √2 come into the picture? One can prove that √2 is irrational (that is, that there is no way of writing √2 as a fraction), and yet the number comes up in a very natural way from looking at polynomial functions of integers, which we write as ℤ[x]. When we study such functions, we are very often interested in the roots of polynomials, since they tell us quite a lot about how such functions behave. For instance, consider f(x) = x² - 1. We can obviously factor this as f(x) = (x - 1)(x + 1), which gives us by the zero factor theorem that f(x) = 0 has two solutions at x = ±1. Notice that if we have n factors of the form (x - a), we obtain a term like xⁿ from multiplying all of the xs together, and so we should expect that a polynomial of degree n (that is, whose largest power of x is xⁿ) will have n roots. This fails if we restrict ourselves, however, to ℤ, since the function g(x) = x² - 2 has roots ±√2, which are not integers or even rationals. The solution, then, is to broaden our perspective to all real numbers, written ℝ.
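We can check this numerically (a sketch; the function name and tolerance are my own choices) by evaluating g at ±√2 and seeing it vanish, up to floating-point rounding:

```python
import math

def g(x):
    """The polynomial g(x) = x**2 - 2, whose roots are not rational."""
    return x**2 - 2

root = math.sqrt(2)   # an irrational real number

# g vanishes at +sqrt(2) and -sqrt(2), up to floating-point rounding,
# giving the two roots we expect from a degree-2 polynomial.
for x in (root, -root):
    assert abs(g(x)) < 1e-12
```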

This idea of including roots of polynomials is related to, but not precisely the same as, the concept of closure. It is often useful to consider sets of numbers such that all polynomials must have roots from within that set. We still, however, cannot say that ℝ has this property. Consider the polynomial x² + 1 as a counter-example. Obviously, any root must satisfy x² = -1, so that x = ±√(-1), but what does √(-1) mean? As is customary in mathematics, we can generalize our notion of a square root by defining a new number i such that i² = -1. We shall call this number the imaginary unit, as it has some surprising properties that we may not expect out of real numbers. The set of all complex numbers, that is, numbers that are a sum of a real and an imaginary part, such as z = a + ib, is written as ℂ. It turns out that this set does in fact include all of its polynomial roots, while still remaining closed under all the typical operations, indicating that in some sense, ℂ is large enough to encapsulate all of our typical arithmetic. Any set smaller than ℂ will not be expressive enough to capture all of the arithmetic operations we might wish to perform in our study of quantum states.
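As a quick sketch, Python builds complex numbers into the language (writing the imaginary unit as 1j rather than i), so we can verify the defining property i² = -1 and see that x² + 1 regains its two roots over ℂ:

```python
i = 1j   # Python writes the imaginary unit as 1j

# The defining property of the imaginary unit.
assert i**2 == -1

def f(x):
    """The polynomial x**2 + 1, which has no real roots."""
    return x**2 + 1

# Over the complex numbers, this degree-2 polynomial has its two roots back.
assert f(i) == 0 and f(-i) == 0

# A general complex number z = a + ib, with |z|**2 = a**2 + b**2.
z = 3 + 4j
assert z.real == 3.0 and z.imag == 4.0
assert abs(z) == 5.0
```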

While a post on all the wondrous properties of ℂ would be far beyond my modest goal for the afternoon, one property in particular is too wonderful to go unmentioned. When we define the imaginary unit i, we also define how our typical arithmetic carries over, so that (a + ib) + (c + id) = (a + c) + i(b + d) doesn't surprise us. This does not, however, let us immediately make sense of expressions like e^(iθ) for some real number θ. For that, we must use a mathematical tool such as power series to extend our typical definition of exponentiation to include complex exponents. When we do so, we obtain a beautiful formula, due to Euler, which serves to define what e^(iθ) means:

e^(iθ) = cos θ + i sin θ

Notice that this allows us to relate complex numbers to angles in a simple and straightforward way. One immediate consequence of this definition is that complex numbers tell us about rotations, whereas real numbers tell us about scales. Since cos² θ + sin² θ = 1, the "size" of e^(iθ) (which we can define formally via |a + ib|² = a² + b²) is always one. This property shall be very useful to us in considering quantum mechanics and information, where we shall interpret complex rotations as a phase between two wavefunctions.
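Euler's formula and the unit size of e^(iθ) are easy to check numerically with Python's standard cmath module (a sketch; the particular angle and tolerance are arbitrary choices of mine):

```python
import cmath
import math

theta = 0.7   # any real angle will do

# Euler's formula: e^(i*theta) = cos(theta) + i*sin(theta).
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
assert abs(lhs - rhs) < 1e-12

# The "size" |e^(i*theta)| is always one: a pure rotation, no scaling.
assert abs(abs(lhs) - 1.0) < 1e-12

# Multiplying by e^(i*theta) rotates a complex number without rescaling it.
z = 3 + 4j
w = lhs * z
assert abs(abs(w) - abs(z)) < 1e-12   # |w| == |z| == 5
```

The final assertion is exactly the "rotation, not scale" intuition: multiplying by e^(iθ) changes a complex number's angle but leaves its size untouched.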

For now, though, I shall leave off this brief introduction to complex numbers, having (hopefully) demonstrated both a bit of their utility and of their beauty. Until next time!