Sunday, December 03, 2006

Commencing the Metashell Project

This entry was cross-posted to cgranade::social.


Well, folks. It's time. The open-source Metashell project (hosted at Google Code, read the blog) is underway. Sure, it started as a class project, but it is time to try to make something wonderful out of it. I strongly encourage anyone who's interested to go and read about it, and to give it a try. There isn't a snapshot up yet, but the Linux users among you can download MonoDevelop and use it to compile a copy. It can't do too much yet, but it does have enough there to be interesting and to be fun to mess with.

That said, to say that Metashell is rough around the edges is a significant understatement, and it serves to drive the point home that I'm not good at everything. Thus, I do want help. I want someone to write a nice installer and to get it to compile under Windows, for starters. More than anything, though, I want this project to get attention. I think that, even if the project itself isn't that well written, the ideas behind it are sound, and I hope that developers writing all kinds of software can glean some inspiration from it. If it sounds conceited to write that, it is. I'll admit that much. I can't justify my conceit, but I hope that at least it isn't too offensive to anyone, and that you give my work a fair chance.
In short, if you want, it's there to download and play with. The Metashell project has commenced. Spread the word.


Monday, November 13, 2006

The most basic premise.

Underlying all of science is one basic axiom that has been implicitly invoked time and time again throughout the ages. So basic is this premise, and so drastic the consequences of its falsehood, that it routinely evades comment. It is the reason that we can even write down the laws of physics, why we can establish principles, and why we are able to make predictions about the natural world. Those who attack science often do so without full cognizance of this foundation, and thus are apt to offer only the most superficial of arguments.
What is this assertion, then? What one statement could all of science be built upon, and yet be seen as too trivial to deserve mention?

The Scientific Axiom: The world is logically consistent, and is describable by a set of logical constructions.

If this is not true, then all of science is meaningless, as all events are merely coincidences, with no correlation or structure. It is even worse when we consider that the statement that all things happen randomly is itself a perfectly logical description, and thus would not violate the Axiom in any concrete way. Rather, a world which violates the Axiom is even more perverse and more impenetrable to understanding than a completely chaotic world. We could not distinguish an unscientific world from a scientific world based only on historical observations, thus rendering all reflection and introspection useless and meaningless.
Why, then, do we assume so strong an axiom? Simply stated, we would very much like it to be true, as there is no humanity, no experience, in an unscientific world. Art, philosophy, culture and even thought itself depend on the Axiom for their existence. We assume the Axiom because, if we did not, there would be no point to science itself. Perhaps the Axiom is false. If so, then whatever we do is utterly devoid of meaning anyway, and we lose nothing by spending our lives describing that which is, by definition, undefinable.
From this, we see that, despite protestations to the contrary, science is the foundation for a great many other things, and that a world in which science carries no weight cannot support any other human pursuit either. So many aspects of human experience are intertwined thusly: we cannot practice science in an artless world (a basic result of complexity theory), and we cannot practice art in an unscientific world. Any attempt to completely isolate and detangle these aspects of human existence is futile, and endangers humanity itself. At the same time, we must separate the spheres of our lives to at least some degree, as they are in fact different, and must be treated in different manners.
It is patently absurd to approach art with the same axiomatic rigor as is applied to science, just as it is to apply the subjectivity and flexibility of art to scientific pursuits.
Rather, the Scientific Axiom tells us that no matter in what open field we find ourselves, we can lay some ground beneath our feet to stand upon. We needn't fall into an abyss of despair, for we may always trust logic to at least some degree. And in the cases in which we can't, we aren't really "wrong," as correctness is a logical construct anyway.

A small footnote.

The astute among you will note that my statement of the Scientific Axiom is, in fact, not accurate, as it does not take into account Gödel's Incompleteness Theorem, nor any other aspect of complexity theory. I must obviously hold some faith in the validity of complexity theory, though, as I so flippantly invoke it elsewhere in my diatribe. Rather, I realize my omission, and made it intentionally, to simplify the statement of the Axiom and to render it accessible to a wider audience. Technically, I should have said that the world can be described as the asymptotic limit of increasingly accurate logical constructions. There will, of course, always be lapses and differences, but these are minimized by refining our constructions.




Friday, November 10, 2006

What is important?

With the liquid ban still mostly in effect, months after any pretend or real threat has passed, the time has come for any intelligent citizen to ask one resounding question of the TSA: what is important? With the boarding pass flaw still unaddressed, the question becomes even more pressing.

Let us, in asking this question, take the view of a passenger going through the "security checkpoint." A passenger, most often also a citizen of the United States of America, is made to remove shoes, watches, coats, purses, backpacks, belts, rings, wallets, key chains, cell phones, and any number of other arbitrarily chosen objects. Meanwhile, the line behind them grows. If we accept the premise upon which the TSA supposedly operates, that the world before the checkpoint is dangerous and that the world after the checkpoint is safe, then we realize that the TSA is placing American citizens in danger: it holds them, crowded together, on the dangerous side.

Indeed, I claim that with every arbitrary inconvenience, with every invasion of privacy, with every bureaucratic hiccup, with every police-state-style intimidation, the TSA has every reason to believe that it is putting American citizens into mortal danger. Citizens are being thrust into a situation where they are crammed into a tight space, with no substantive security measures protecting them. If a rogue suicide bomber ever wanted a nice target, the TSA has saved him the walk to the gate.

What, pray tell, is being protected by the TSA? What does the organization feel is important enough to protect at the direct cost of the safety of the American populace? For what is our safety spent with not even a receipt to show for it? Is it the planes, the private property of already heavily subsidized corporations? Is it the Commander-In-Chief's approval ratings? Is it the jobs of the agents? Their place in the federal government at all? What is important to the TSA?

Tuesday, August 15, 2006

At What Cost: A rational approach to security.

"No such thing as a free lunch."

Security always comes at the cost of something else. We must ask ourselves what we are willing to sacrifice in the name of security, and what we are willing to accept in return. There are certainly those who show no such introspection, as is evidenced by the article titled News Hounds: Fox News Airs Call for 'Muslim-Only' Line. People like those described seem to think that there is no trade-off: that those targeted by such profiling are unworthy of consideration. This could hardly be further from the truth.

Making a sacrifice.

Despite warnings from those such as Benjamin Franklin, there are those among us who are willing to make an exchange of liberty for security. This exchange, however, is rarely thought through to its logical extremities. If one is to make this choice, then a firm line must be drawn, or else we end up with an entire state of matter being forbidden. Worse, we could just as easily find ourselves where the British do now: forbidden from even carrying their own personal effects. Are we really so far from the old gag about flying naked?

If sacrifices are to be made, they must be made in the context of a strong system of legal oversight to ensure that the limits imposed on the extent of the sacrifice are upheld. Of course, by definition, sacrificing liberty means sacrificing one's ability to redress grievances if this accountability is not observed.

Fair exchange.

It would be ridiculous to make these kinds of sacrifices without even having any security to show for it, but that is exactly where we find ourselves. Air travel is no more secure for our having made these sacrifices, and so we might consider that, as a populace, we have been cheated. To support this claim, one need only consider the massive problems with the revised security procedures advanced by the TSA.

Sadly, these problems are only part of a larger progression of ever worse security, being brought to us in exchange for our liberties. Indeed, non-solutions such as racial profiling distract us from the real problems, as do such statistical nightmares as our generation's polygraph. Our fear is being co-opted, and we are being swindled by power-hungry fiends.

Remember, a public system cannot be perfectly secure. Especially not one as trafficked as the airline system. There will always be ways around security, whether it be through body cavities or through sneaking in modified fast food ingredient shipments. Besides, security goes beyond the airports, and as we tighten the airline system, we lose sight of the general problem.

A solution.

We don't have to make these choices. We don't have to choose between liberty and security. As we have seen, blindly ignoring the costs of security makes us both less secure and less free. Instead, let us pursue diplomatic and humanitarian means of resolving the underlying problems of which terrorism is a symptom. It is hard work, and comes at the cost of many years of diplomatic endeavors, but it leaves us free, secure and respected in the world. This is not to say that security should be eschewed altogether, but the procedures in place today should not be relied upon; they should be secondary to fixing the underlying causes of violence.

Surely, our liberty is worth a bit of patience, and a fair spot of work? At the very least, a diplomatic nation is a secure nation.


Celebrating a Failure of Civic Duty

In a post entitled Hoystory » What they knew and when they knew it, Matthew Hoy berates the New York Times for not revealing the illegal NSA wiretapping program (which Hoy describes as a "terrorist surveillance program," despite the fact that it targets law-abiding American citizens) as soon as they knew about it. The reason he gives, however, is as fine an absurdity as you'll likely find on the Internet:

Frankly, the second-best choice (the best choice being not revealing the program at all) would have been for the Times to reveal it when it first discovered it. Democrats would’ve been forced to take a responsible position — not the politically convenient one — and endorse the program and trash the Times. The year-plus delay served to give the paper, and Democrats, some cover.

So, basically, Hoy seems to wish that both the Democrats and the Times would abandon the American people to be victims of this administration's war on the Constitution. Not revealing an illegal program that you have knowledge of can hardly be considered anything but a dereliction of duty, and makes one an accessory to that crime. This unethical and illegal program stripped American citizens of their Fourth Amendment rights, as well as any right or privilege to privacy. Furthermore, the program could hardly be considered effective: even before it commenced, we already knew that many of the methods by which terrorists choose to communicate, such as the shipping of prepaid cell phones to other countries, are invisible to it.

Whenever a program strips citizens of their rights and lets terrorists go unchecked, while at the same time violating the law, I would hope that all citizens would at the very least feel transgressed against, and not celebrate any derelictions of duty which occur.

While I may respect that others have different viewpoints on how to combat terrorism, broad and untargeted wiretapping is unethical, ineffective and illegal. Targeted wiretapping, with warrants obtained through open or secret courts, against those strongly suspected of terrorism does work. In fact, this is what Britain used to foil the most recent airplane-related terror plot. Remind me again how it would have been a good thing for the Times to fail in their duty as a journalistic enterprise? You know, the watchdogs of democracy?


Monday, August 14, 2006

Pacifism Meets Godwin's Law: Debunking a strawman argument.

Whenever I hear about some kind of "do no harm" attitude I always want to ask "does it pass the WW2 test?" What I mean is, would you really have preferred to have sat by and watched the Holocaust happen rather than fight? If so, then I consider the concept morally bankrupt.
-- theStorminMormon (883615)
It is truly unfortunate to see pacifism treated in such a disrespectful manner. This argument, if one could call it that, is a straw man. It discounts entirely that, by the time the Holocaust had begun, the moment for pacifism and diplomacy had already passed: the choice to engage in violence had been made through inaction. Were pacifism applied to the events of WW2, more efforts would have been made to preserve peace before Hitler took power. To say that WW2 is an argument against pacifism is to substitute blatant emotional appeal for rational discourse, and is, in effect, a distortion of the claims made by pacifists so as to paint them with the same brush as the Nazis.


Monday, April 17, 2006

A Rare Moment of Unadulterated Hatred: Shut the hell up.

I get frustrated just like any other person. Anyone who knows me in person can vouch that this is indeed the case. I try to keep it in check, but right now, I have but a few words to say to a great many persons:

Shut the hell up. If you can't think of anything intelligent to say, or at least something that doesn't jeopardize the future of everything you might reasonably care about, then just go into your little corner of reality and don't say anything at all. You are now, and always will be, a nuisance, a distraction and a danger. While in general I don't advocate "censorship," I must at this point admit that I need some isolation from the overwhelming idiocy that pervades what passes for political discussion in this country, and the only ways to get it are for the debate to become intelligent, or for me to leave my political awareness behind. Of the two, I prefer that the dangerous idiots leave.

Having said this, I suppose I should make clear whom I am aiming at, since we as a society have been damn near brainwashed into thinking that we actually have meaningful discourse on political, social, moral and scientific issues. I am referring to the overwhelming and completely unfounded attack on science that seems to have manifested itself this week in the form of "articles" denying the existence of global warming. By the way, don't even dare accuse me of failing to make an argument. I shouldn't have to. Many others have done so, and I don't intend to waste any effort in convincing people so utterly disconnected from reality. At this point, I wish only to encourage anti-global-warming dittoheads to simply lay off the issue until they learn what it means to have a brain, and to apply that learning in practice.

Let me describe some examples of the sort of thing that prompts this Unadulterated Hatred. Attempting to read the opening comments of this recent Slashdot article on a related issue sickens me to the point of physical nausea. Reading pseudo-arguments like the following examples makes me despair for humanity.

We already have droughts, floods, powerful storms, varying jet streams, famines, and lots of other weather. Why should we expect next century's droughts to be drier than last century's? When was the time when the weather was perfect for everyone? What makes you think that you can have the weather you want?
-- Kohath, comment #15145483.
Marked as 3, Insightful. I guess noting that storms exist without making any sort of actual argument passes as Insightful these days.
Face it. Most people in the US are bored. They on average spend 4 hours a day in front of the tv, 8 hours working, 8 hours sleeping, and 4 hours unexplained.

From what I hear, New Orleans is a blessing since the hurricane. Crime is almost non-existant, and people are focused on rebuilding the city, working, and being nice to each other.

Maybe a shifting environment and real estate changes will be good for us.
-- hackstraw, comment #15145816.
Marked as 4, Interesting. I suppose that I can't deny that it's interesting. Then again, isn't Hitler interesting, too, or did I just lose the debate via Godwin's Law?
Does it bother you that hurricane researchers have said repeatedly that global warming had little or nothing to do with it, and that there was an expected upswell of activity due starting last year, give or take? Or that the US coastline had been dodging the averages for the better part of 20 years, with a far smaller fraction of hurricane strikes than the historic record would otherwise suggest? What will you be saying if the next hurricane season shows lower activity than the last?
-- Martin Blank, comment #15145268.
Marked as 5, Informative. So that means if I make a bunch of baseless and uncited claims that have nothing to do with the argument at hand, it's not only perfectly on topic, but an informative contribution to an intelligent debate?
Let's make a deal:

Global warming caused last year's record number of hurricanes. So this year, when the number of hurricanes is fewer, we'll know it's because global warming has peaked and is no longer a problem. Do we have a deal?
-- Kohath (again), comment #15145883.

Marked as 4, Insightful. I give up on Slashdot moderation for now. I suppose I have to find a new target, like maybe... Digg? At least on Slashdot, the article itself is fine. The comments are what are so scary. Even then, there are some good commenters mixed in, but they spend all their time responding to idiocy like what I just referenced. On Digg, however... well, let's look at the very headlines of some recent front page articles:

"Global-warming alarmists intimidate dissenting scientists into silence."

"Global Warming Reportedly Stopped in 1998."

"Remember Global Cooling?"

Sick. To be fair, some of these have been marked as inaccurate, but again, they never should have made it to the front page. There is no substance to such articles; no arguments, no compelling presentation of new perspectives, no attempt at intelligent thought. To repeat myself, then: please, for the love of whatever you hold dear, shut any orifice from which words may issue, cease to utilize any appendage capable of recording written words, and go take a middle-school science class.


Wednesday, April 12, 2006

Semantics and Software: What the hell is a beta?

If this blog has any recurring theme, it is that language doesn't mean the same thing to everyone. Oftentimes, this subjectivity can lead to great amounts of confusion and general miscommunication. Should we not expect, then, that there exist those willing to exploit such difficulties? Of course we should. This effect is seen everywhere, and we are not surprised to see it again in issues relating to software quality.

Chances are, if you haven't been asleep for the past fifteen years, you've heard the word "beta" used in a sense other than radioactivity or the Greek alphabet. In particular, you've probably heard it in reference to the development status of software. Historically, the progress of software development has been described as a progression through five phases: planning, feature-incomplete implementation, feature-complete implementation, tested implementation and maintained implementation. Though the precise terms used to describe these phases depend heavily upon whom you ask, there is little disagreement on the five-phase system. Some schemes may split or merge phases, but these five seem to be common across categorization systems. One of the more common codification schemes refers to the second and third phases as "alpha" and "beta," respectively. Thus, "beta" is quite often intended to refer explicitly to software which does everything it should, but is likely to be buggy and in need of thorough testing.

At this point, many readers would do well to stop and think of what they've seen the term "beta" applied to. Gmail, which has gained no small number of features, has been in "beta" since its inception. Internet Explorer 7 Beta 2 is quite obviously not feature-complete, nor is the documentation even close to adequate. Thus, we see that in both cases, the term "beta" has been shifted from meaning "feature-complete but buggy" to "people won't use it if it's called alpha" in the first case, and to "we don't care enough about quality to have a bugfixing phase" in the second. Of course, the idea of distinct phases gets a bit blurry with a feature-incomplete production application: Google can't afford to have major security flaws in a production application, whether they call it beta or not. Thus, I would propose that Google fix the problem by simply choosing a word other than "beta." For Microsoft's part, calling Internet Explorer 7 "Beta 2" is nothing short of a bald-faced attempt at deception. The documentation and visual styles are far from complete, and many dialogs are styled completely differently from the others, leading to the realization that Microsoft is not really even trying to make the beta phase a quality-control phase.

In summary, please be aware of what the word "beta" means, and don't let it sucker you into ignoring large and important faults of a product.


Friday, April 07, 2006

More Trust Models: To Trust Telcos and Governments.

As discussed in the article "AT&T Forwarding All Internet Traffic to NSA?," the EFF alleges that AT&T has been forwarding the traffic which passes over its lines to the NSA. In keeping with my recent obsession with trust models, I shall raise an important question: to what degree should one's telecommunications provider and one's government be trusted? The most obvious answer seems to be to trust neither at all.

Dealing with each in turn, let us consider the role of a telco in a trust model. A telco sells a very specific service: connecting you to the Internet. Nowhere in this is the guarantee that they have the human decency to keep the data which you entrust to their networks reasonably secure or private. Though some telcos may extend you this decency, there is no compelling reason to assume that they will prevent unauthorized access to your data. Rather, the very people you least desire to have access to your data will seek to insinuate themselves into a telco, just as a pedophile might find access to victims through a position in a police organization (as seen in the recent Department of Homeland Security child sex scandal). Thus, in one of those many ironies which permeate information security, a telco should be distrusted by default. How do you deal, then, with securing your data over what is, fundamentally, an untrusted network? For that, cryptography again comes to the rescue. A trust model which assumes a base distrust of the network itself will promote the use of end-to-end encryption. Oh, would that this were the case in practice.
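To make that last point concrete, here is a minimal sketch of end-to-end encryption over an untrusted carrier, written against the Python "cryptography" package. The key, the message and the names are all invented for illustration; a real system would add key exchange and authentication on top.

# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # known only to the two endpoints
channel = Fernet(key)

wire_data = channel.encrypt(b"meet at noon")   # all the telco ever sees
print(wire_data)                               # opaque ciphertext
print(channel.decrypt(wire_data))              # recovered at the far end

The point of the sketch is simply that, once the endpoints handle the cryptography themselves, the carrier's trustworthiness drops out of the model entirely.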

Moving on to trusting a government, let us reflect upon words of wisdom from the Federalist Papers, No. 51, generally attributed to James Madison:

But what is government itself, but the greatest of all reflections on human nature? If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself. A dependence on the people is, no doubt, the primary control on the government; but experience has taught mankind the necessity of auxiliary precautions.

A careful examination of these words reminds us that, at its most fundamental, a government is a response to imperfections in the human condition. Unfortunately, however, that response is itself forged from the same flawed humanity. At a practical level, we are again reminded of a very basic axiom of trust:

The positions most requiring of trustworthiness are sought out by those most apt to abuse that trust.

Put differently, the positions that we create to deal with issues of trust and crime are the most desirable to those intent on violating that trust. As I have already mentioned, a position in a police organization is highly desirable for a criminal, so is not a position in lawmaking most desirable for a lawbreaker? How, then, can we ever trust our own government to be responsible with our data? We cannot if we wish to have any expectation of security. Government can secure us from each other, but it can never secure us from itself.

It is thus seen that the recent allegations by the EFF represent yet another failure to apply sane trust models to every aspect of our lives. Instead of harboring a base distrust of our communications providers and our governments, we explicitly place large amounts of trust in them. Though this no more excuses the alleged crimes than leaving an expensive car unlocked excuses its subsequent theft, we should likewise not be at all surprised that, when we are so naive as to trust our governments and telcos, our trust will be violated in the most profound sense.



Sunday, March 19, 2006

The Fatal Flaw With Credit Cards.

Why is it that credit and debit cards are so continually under attack? Is it that those in charge of securing the credit card system are so incompetent, or is there a fundamental flaw with the model that they are charged with making secure? If the latter case holds, then no matter how intelligent the security teams are, the system will remain insecure due to these flawed fundamental assumptions. In this essay, then, I attempt to make the case that the model underlying modern credit cards is fundamentally insecure, and must be replaced if we are to expect any sort of security from the system. Let us consider how this might be the case by examining a typical transaction.

John Q. Public buys a few groceries at the local supermarket. He decides to use a credit card, since it is much more convenient than cash. He then proceeds to swipe the card through the card reader. The card has now been compromised, for he made the decision to trust equipment fundamentally outside of his control with the entirety of his card's data. His web of trust has not even had the chance to start building, as it was so thoroughly violated at the first step. The problem that is immediately seen is that in this kind of a transaction, the validation data is the key! Every time that card is swiped, the exact same data is exchanged, and one need only capture that data once to invalidate the implicit trust in every other instance. How else might this transaction have been completed, then? By the magic of private/public key encryption, also known as keypair encryption.

Before delving into how this would work, we must first develop an understanding of keypair encryption. If you already have such a grasp, please skip this paragraph. Under a keypair encryption model, each user has a private key which they keep secret at all times, and a public key which they distribute as widely as possible. There are four basic operations that can be performed using a keypair system: encryption, decryption, signing and verification. Encryption takes a set of data and encrypts it against a public key so that only the matching private key can decrypt it. Decryption, then, is the act of taking such encrypted data and recovering the original data by applying the private key. Signing is the act of attaching an extra block of data, called a signature, to a message, and requires a private key. Verification takes a public key, a signature and a message and checks to see if the signature was generated from the matching message and private key. For instance, I can sign an e-mail message and attach the signature. If someone malicious intercepts my e-mail and changes it, then the signature will no longer verify and the message should not be trusted. These operations can be combined, too. If I have your public key, I can encrypt a message so that only you can decrypt it, and then sign the encrypted message with my private key. Upon receiving the message, you can verify that it is from me, and be assured that only you will ever see the message.
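For the programmers in the audience, here is a minimal sketch of those four operations using RSA keys and the Python "cryptography" package. The message text is made up, and real protocols (GPG included) layer considerably more machinery on top of these raw primitives.

# pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # this half is published widely

message = b"Pay Local Supermarket $42.17"
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Encryption with the public key; decryption with the private key.
ciphertext = public_key.encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message

# Signing with the private key; verification with the public key.
# verify() raises InvalidSignature if message or signature was altered.
signature = private_key.sign(message, pss, hashes.SHA256())
public_key.verify(signature, message, pss, hashes.SHA256())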

Let's revisit the credit card purchase scenario again, and use keypair encryption this time. Before arriving at the grocery store, let's say that John Q. Public created a GPG keypair (which can be done using free software available for most any OS) and sent the public key to his bank. He then goes to the bank's branch office, reads to a teller the fingerprint of the key (a string of data that is unique to each public key), and verifies that the public key they received was the same as the one he intended to transmit. Having done that, the bank now trusts that key. John now goes to the store, selects his purchases and goes to the counter to pay. Instead of swiping a card, he takes out his PDA.

The register now generates a message to send to John's PDA. It doesn't know who John is, and so it can't encrypt anything to his PDA yet. There would be some serious privacy concerns if the register were to send a message with his purchase details over the air, and so it first attempts to confirm his identity by generating a block of random data (a challenge, often called a "nonce") and sending it to John's PDA. The PDA then responds by signing it with the private key stored on board. The register now knows which key to use for the transaction, and verifies the signature with the public key from the bank.
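A sketch of that challenge-response step, in the same Python package as above; the key generation here stands in for John's enrollment with the bank, and all the variable names are invented.

# pip install cryptography
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Enrollment: John generates a keypair; the bank vouches for the public half.
john_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
john_public = john_private.public_key()

# Register side: a fresh random challenge, never reused.
challenge = os.urandom(32)

# PDA side: prove possession of the private key by signing the challenge.
response = john_private.sign(challenge, pss, hashes.SHA256())

# Register side: check the response against the bank-supplied public key.
try:
    john_public.verify(response, challenge, pss, hashes.SHA256())
    print("identity confirmed")
except InvalidSignature:
    print("refuse the transaction")

Because the challenge is freshly random each time, a captured response is useless for any later transaction, which is exactly the property the swiped card lacks.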

Next, the register needs to confirm the transaction details. The register now writes a message consisting of the current time, the name of the company receiving payment, and the transaction amount. The message is then encrypted against the public key it received from the bank, as well as the store's public key and the bank's public key. (That means that any of John, the store and the bank can decrypt the message.) John's PDA receives and decrypts the message and shows a dialog on the screen asking if the transaction should be completed. If he clicks yes, then the PDA signs the encrypted message and sends it back to the register. The signed and encrypted message can serve as legal verification of the transaction in case of dispute.
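Encrypting one message to three recipients is usually done hybrid-style: encrypt the body once under a symmetric session key, then encrypt that key separately to each public key (this is essentially what GPG does internally). A sketch under those assumptions, with freshly generated keypairs standing in for the real parties:

# pip install cryptography
import json, time
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def new_keypair():
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)

john, store, bank = new_keypair(), new_keypair(), new_keypair()

details = json.dumps({"time": time.time(),
                      "payee": "Local Supermarket",
                      "amount": "42.17"}).encode()

# Encrypt the body once under a fresh symmetric session key.
session_key = Fernet.generate_key()
body = Fernet(session_key).encrypt(details)

# Wrap the session key once per recipient.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = {name: kp.public_key().encrypt(session_key, oaep)
           for name, kp in (("john", john), ("store", store), ("bank", bank))}

# John's PDA unwraps its copy, shows the details and, on approval,
# signs the encrypted body; the signature is the receipt.
assert Fernet(john.decrypt(wrapped["john"], oaep)).decrypt(body) == details
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
receipt = john.sign(body, pss, hashes.SHA256())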

Note that nowhere in this process does John's private key leave his PDA. Thus, in order for this transaction to work, John needs to trust only the following three things:

  • The bank's public key actually belongs to the bank.
  • The store's public key actually belongs to the store.
  • The PDA has not been compromised, and will only sign the messages that John explicitly agrees to, and will protect his private key.

The first two can be solved rather easily; if the bank's public key is trusted by a large number of other people, then they can sign a message containing the bank's public key's fingerprint and a statement that the key referred to actually belongs to the bank. Software exists to collect such messages and assemble a web of trust. If John knows any of these people personally, then the trust link will be stronger. The same process can be used to verify that the store's public key really belongs to that store. As for the PDA, that is a trust issue that John has explicit control over. If he does not trust the code on the machine, it can be replaced with code that he does trust. Of course, such trust has its limits, as it may be that the hardware itself was compromised, but such issues remain in any system, regardless of the model. Rather, a system of trust such as the one described here minimizes the risk by decentralizing points of attack. In order to compromise John's transactions, one must either compromise his personal property, or form a conspiracy of the bank, the store and many customers of each to poison the first two bullet points above. To form such a large conspiracy without John's knowledge is difficult at best and counter to human nature at worst.

The system can be further protected by creating John's private key such that it only works if a passphrase is entered at every use of the key. This, however, still requires that the PDA's hardware be trusted, and simply protects against physical theft of the PDA. These arguments are not meant to indicate that such a system is secure against all attack, but rather that it is an improvement over the static data exchange of modern credit systems.


Tuesday, March 14, 2006

Related or not? A case for more metadata.

Recently, I tried to search for a PSP port of the PC game Cave Story using Google, using the search string "cave story psp," and was completely frustrated by the preponderance of articles about the game on sites with PSP sections. A human can quickly see that these articles have nothing to do with the PSP, and that the links are part of the site's chrome. Google and other search engines, however, have no means of discerning this separation. Thus, I propose that in order to give search engines the help they need, site designers should label navigation links with rel="nav".
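A sketch of how an indexer might use such a label, written in Python with BeautifulSoup; the sample markup is invented, and rel="nav" is, of course, my proposal rather than any existing standard.

# pip install beautifulsoup4
from bs4 import BeautifulSoup

html = """
<a href="/psp/" rel="nav">PSP</a>
<a href="/reviews/cave-story.html">Cave Story review</a>
"""

soup = BeautifulSoup(html, "html.parser")
for a in soup.find_all("a"):
    # BeautifulSoup treats rel as a multi-valued attribute, hence the list.
    if "nav" in (a.get("rel") or []):
        continue  # site chrome: don't let it count toward relevance
    print("index:", a["href"], "-", a.get_text(strip=True))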

A more complete, though more involved, solution would be to include an attribute for the div element from another XML namespace (say "uri:seo-metadata") that would allow you to specify information like the rel attribute of the a element. For example, a div element could take the following form:

<div seomd:tags="chrome nav"><!-- lots of navigation links --></div>

The contents of the div element would then be marked as being part of the site's chrome and not directly related to the content, and would also be marked as being part of the navigation structure. Such an approach would also be extensible so as to include a mechanism to describe other aspects of a resource for the benefit of search engines. Of course, such metadata would be useful to applications outside of SEO, and so it would be more appropriate to refer to it as a generic metadata structure that allows you to attach metadata to any arbitrary element. That, however, is a topic for another day.
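For completeness, a sketch of how a crawler could honor such an attribute, using Python's lxml; the "uri:seo-metadata" namespace and the seomd:tags attribute are my invention, so the whole example is hypothetical.

# pip install lxml
from lxml import etree

SEOMD = "uri:seo-metadata"
page = """<html xmlns:seomd="uri:seo-metadata"><body>
<div seomd:tags="chrome nav"><a href="/psp/">PSP section</a></div>
<div><p>Actual article text about Cave Story.</p></div>
</body></html>"""

tree = etree.fromstring(page)
for div in list(tree.iter("div")):
    tags = (div.get("{%s}tags" % SEOMD) or "").split()
    if "chrome" in tags:
        div.getparent().remove(div)  # drop site chrome before indexing

print(etree.tostring(tree, pretty_print=True).decode())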


Tuesday, February 28, 2006

Language, Cause, Evolution and Effect.

Recently, I found myself greatly amused by a particular Doonesbury strip illustrating the problems with creationist assertions. Deciding to share my amusement with those who cohabit the dormitory in which I spend my sleeping hours, I printed a copy and posted it on my door. A few days later, I found that my posting had been modified to include a strange response:

"Evolution is a complete change of species: fish to bird. [The adaptation of pathogens to drugs] is called natural selection. Get the facts straight, stud."

This statement is simply false. Evolution can include complete changes of species, but not in any sort of sudden sense. In order for a complete change of species to occur by evolution, then, there must be intermediate steps that are incremental in nature. These incremental steps, too, should be considered part of evolution, as any theory of evolution predicts their existence.

Interestingly, this response did not actually "debate" anything, but seems to have sought to distract other readers of my door with a semantic sleight of hand. I have posted about this tactic on other forums, but I feel strongly enough to do so again in this one. Here, our friendly neighborhood linguistic charlatan, whether consciously or not, has acted to confuse the method with the effect. Evolution can happen by many different methods, of which natural selection is but one. We can exact a change in a species through other methods, such as the artificial selection practiced in the breeding of domesticated animals, resulting in an evolution of that species. At its most basic, to say that evolution exists is to say no more than that the world's zoology is not constant with respect to time and space. This notion can be experimentally shown by an examination of any number of datasets, including fossil records showing a set of species not found on modern-day Earth.

The more controversial notion is that of natural selection, the proposed method by which species evolve. (It should be noted that this is technically not correct: species do not evolve as a direct result of natural selection; rather, genetic patterns evolve with species as hosts. For our purposes here, however, the two models are in close enough agreement that we need not belabor the point further than to point an interested reader to Richard Dawkins.) Natural selection can be observed directly on a microscopic scale, but the allegation of the creationist apologists is that the lack of direct observation of macroscopic natural selection precludes the proposal that macroscopic natural selection is responsible for evolution from being useful or worthy of consideration. Of course, by similar criteria we are left with no viable explanation of the observations alluded to above. Creationism is no more able to produce evidence from the depths of time long past, and in fact lacks any analogue to the data of microscopic natural selection, which serve to suggest that the principles are sound and should hold at other scales. Thus, the whole trickery of recasting the debate in terms of an emotionally charged word and then seeking to redefine that word in a straw man argument is seen to be a callous attempt to change the universe by changing popular opinion.

What would be the effect if our delightful correspondent had succeeded in convincing all of his readers that evolution is a sham? Would the universe suddenly stop evolving? Would the TB drugs of yesteryear suddenly become effective again? No. The universe would not even slow down in its continual process of change, despite our kicking and screaming, and our denial of overwhelming evidence. Such intellectual dishonesty gets us nowhere, as it ultimately divorces us from the world around us and impairs our ability to make rational judgments.


Tuesday, February 21, 2006

Not Safe For Work: A case for new metadata.

You've no doubt seen it before: those wonderful four letters that alert us that a certain link should be treated differently than all others around it. Depending on the person, one may either anticipate such a link with glee akin to that of eating from a forbidden tree, or avoid it with the same anxiety as one might a rabid animal. Yes, NSFW is one of the more useful acronyms to have been developed by the collective patrons of the Internet.

All the same, however, one does occasionally fail to notice these instructive glyphs, resulting in shock, amusement and pain. If only there were a way to flag these links at the metadata level, so that the browser wouldn't let you follow them inadvertently. Luckily for us, the rel attribute of the a element in (X)HTML is designed for exactly such things. If we simply mark links as being NSFW by adding the attribute rel="nsfw" to the anchor tags, then it becomes possible to write a Greasemonkey user script to prevent such links from being followed. The measure would be easy to deactivate, thus making it a conscious decision to follow NSFW links. In fact, I am working on just such a script right now, and I'll post it to this space when it's done. In any case, please consider marking your NSFW links with this attribute, and perhaps it will catch on.
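The real user script would, of course, be JavaScript running in the browser; the following Python sketch over a parsed page only illustrates the intended logic, and the sample markup is invented.

# pip install beautifulsoup4
from bs4 import BeautifulSoup

html = ('<a href="http://example.com/kittens">kittens</a>'
        '<a href="http://example.com/eek" rel="nsfw">eek</a>')

soup = BeautifulSoup(html, "html.parser")
for a in soup.find_all("a"):
    if "nsfw" in (a.get("rel") or []):
        # A user script would intercept the click here and demand an
        # explicit confirmation before letting the browser follow it.
        print("held for confirmation:", a["href"])
    else:
        print("safe to follow:", a["href"])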

Update: Jeremy Dunck on the Greasemonkey mailing list was kind enough to write up a script that does exactly what I wanted.
