randomwalker

Ubuntu continues to be a joke

Two and a half years ago I wrote that Ubuntu was a joke. After upgrading to 11.04 recently, I'm here to report that unsurprisingly, Ubuntu remains a joke.

First of all, the new tablet-inspired interface, Unity, is simply not ready to be taken seriously yet. Maybe it will mature in a year or two, but right now it's more-or-less unusable for a work machine.

But hey, a six-monthly release cycle has its constraints, and if I want polish I should stick with the Long Term Support releases, right? Actually that strategy only works in theory, because of the amount of third-party software that depends on the latest libraries. So I went for the next best option, which is to disable Unity and go back to the Classic interface.

And the first thing I noticed was Fitts's law violations of a particularly horrific kind.

When a window is maximized, the top right corner doesn't activate the close button — there's a one-pixel gap. If you haven't experienced this, it's an egregious sort of user-interface bug that's been referred to as snatching defeat from the jaws of victory. "Throwing" the mouse to the top right corner to close the window is deeply ingrained muscle memory for most experienced computer users, and breaking these expectations tends to make us rather acutely uncomfortable.

This is a regression. The bug wasn't in previous versions. If I remember correctly, some themes had it, but you could change the theme to get rid of the problem. That doesn't work anymore.

It gets worse. It's not that the top right corner does nothing — it passes the click through to the window underneath. This is just bizarre and results in a tragicomedy of errors. I always have a maximized Chrome window open, and since Chrome, bless its soul, uses its own titlebar, it doesn't suffer from the bug.[1] So whenever I'm working on a maximized app that I try to close, Chrome receives and processes the click, and I end up unexpectedly closing an average of 23.7 tabs.

If that's not infuriating I don't know what is.

There are other "features" that laugh in the face of Fitts's law, such as "overlay scrollbars" that you have to squint to even see. As a bonus, these scrollbars make it impossible to throw the mouse to the right edge of the screen to drag them, breaking another hardwired habit.[2] There are many more UI problems, but this should give you a sense of the I-can't-believe-they-pushed-this-out-without-testing feeling that you get with every release of every Linux distro, ever.

There's a common thread in the annoyances I described, and it's a sign of a deeper problem. Ubuntu is trying to adapt itself to tablets — hence Unity; hence overlay scrollbars — which is great, but they're going about it all wrong. Touchscreen devices are fundamentally different from mouse-based ones, and an OS that targets them needs to be rethought from the ground up. There's a nice article on Ars Technica that discusses how Windows 8 is trying to do this.

I believe that the only tenable course is to fork the interface. Instead, Canonical seems to be mutating Ubuntu into a platypus, attempting to create something that will work on both types of devices. It seems to me that this approach is prone to disaster and the problems I described will inevitably get worse in future releases. [3]

Finally, since my rants on the subject seem to attract numerous random Linux apologists, I feel the need to declare that I use Ubuntu as my main system because programmability, security, and open-sourceness (is that a word?) easily trump the usability drawbacks for me. The point of this post is that the "Linux for human beings" ideal doesn't seem to be getting any closer to reality.

[1] Note the irony: Chrome is the only app I never actually try to close.
[2] Fitts's law in Gnome is something I've been writing about since 2004.
[3] I don't follow Planet Ubuntu or other development channels any more; maybe they know what they're doing. This is just my impression as a user.


The New Scientific Revolution

I thoroughly enjoyed this lengthy but brilliant article on criminal justice by star neuroscientist David Eagleman, which serves as a case study of the impact of science on moral questions. Eagleman starts from the examples of University of Texas shooter Charles Whitman and another accused man, both of whom were later found to have brain tumors that completely explained their criminal behavior. Next, he rejects the tendency to sweep these under the rug as extreme or atypical cases, drawing upon the example of a Parkinson's drug that can produce equally aberrant behavior in much larger classes of people.

Eagleman argues forcefully that there is no categorical difference between these examples and everyday criminal behavior. The distinction comes down to the simplicity of explanation: a single drug or tumor versus a complex of genetic and environmental factors that affect neural circuitry and chemistry. But it doesn't matter—either way, the invocation of "free will" as a causal agent for actions is equally wrong.

Philosophers bicker endlessly about free will in the abstract, but Eagleman's ends are eminently practical. His point is that neuroscience exposes blameworthiness as a meaningless concept, and hence it should be irrelevant as a legal principle. Of course, he is not advocating letting criminals walk free; instead, he shows that the foundational thinking behind retributive justice is fundamentally at odds with science, and that a theory of justice that focuses on future behavior of the accused and on rehabilitation is the only ethical course.

While this might surprise or shock some of you, it is not the reason for this post. What I think is remarkable about Eagleman's article is that it illustrates, blow by blow, how science can essentially settle a question that has traditionally been considered to lie almost entirely in the realm of philosophy. I will admit that philosophy still has a role, but a ruthlessly empirical kind rather than the traditional one.
 
Pardon my excitement, but the importance of the broader movement cannot be overstated. Neurolaw is looking to effect no less than a revolution in the legal system; elsewhere, the New Atheists are arguing, increasingly successfully, that science can and should take on religion, and that the two are fundamentally incompatible. Sam Harris brought a lot of this together in his TED Talk "Science can answer moral questions".

Buttressed in part by the recent meteoric increase in the power of data to address human questions, we're seeing a spectacular land-grab by science and empiricism from other domains of knowledge and understanding, and the rejection of the idea of separate spheres of applicability. Clearly this movement is in its early stages and faces a long road ahead, but I believe that the wheels have been set in motion and it is only a matter of time until we witness the final ascendancy of science as the ultimate arbiter of human actions—the culmination of the scientific revolution.

StumbleUpon Considered Harmful

About a week ago I noticed a large, anomalous traffic spike on one of my articles over at 33bits.org. These visitors seemed to bounce immediately, not viewing any other pages, and were much less likely to engage with the page in any way, such as commenting. Numerically, this traffic source contributed about 75% of the total for that article, but only 2 of the 64 tweets came during the time window of the spike: 2 tweets from three-quarters of the traffic versus 62 tweets from the remaining quarter means these visitors were roughly 100 times less likely to engage with the article than others. An admittedly crude measurement, but even if it's only accurate to within an order of magnitude, this is a "poor quality traffic source" in SEO parlance—an extraordinarily poor one.

Then I glanced at the referer chart and noticed that the source was stumbleupon.com. At once it all made sense.

Let me explain. The average StumbleUpon user turns to the service when they're bored, so bored they can't even go to the trouble of endlessly clicking on links on web pages like most of us do. Instead they click repeatedly on the "Stumble" button which takes them to random web pages supposedly somewhat tailored to their interests. They're not in it to read the articles (any more than someone who's flipping through the pages of Playboy is in it to read the articles). Instead they're in it for the tiny dopamine spike that they get each time they land on a new page.[1]

Nine times out of ten, such a user will bounce immediately after looking at the title of your article, deciding that it's not something they're interested in. If they do start reading, a further nine times out of ten they'll bounce somewhere into the second paragraph. If you don't believe me, try using the product, and see how quickly you find yourself doing the same thing.

Before I go on to make my point, I should say that this is nothing more than a minor annoyance to me personally. I'm an academic; I'm not trying to monetize my site. And 33bits is a wordpress.com blog, so I don't pay hosting costs. The only reason I'm annoyed is that when I look at my stats page to see what sorts of articles my readers are most interested in, I have to mentally discount the articles that got StumbleUpon traffic. But anyone who pays hosting costs for their blog and is trying to make money (or spread an idea, or whatever) might want to take note of the following.
  • The architecture of StumbleUpon is fundamentally exploitative of the quid-pro-quo nature of free websites. A pageview from a StumbleUpon visitor costs just as much in bandwidth, but is a couple of orders of magnitude less likely to result in any sort of engagement. Your website wasn't meant to be viewed in a frame, so don't let it.
  • Even though StumbleUpon has only 10 million users, this is a bigger problem than it might seem at first sight. The recommendations that the system makes are voting-based, so the mechanics of popularity and the resulting traffic patterns are essentially the same as with Digg and Reddit, although the engagement numbers are very, very different. This means that most days you'll probably see no StumbleUpon traffic, but one day you'll get unlucky and the resulting spike will dominate your traffic and costs for that month, with nothing to show for it.

I would recommend some framebusting and User-Agent sniffing code to politely tell StumbleUpon users to go somewhere else, but whatever you do, don't put a Stumble button on your pages!
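
The sniffing half of that is only a few lines of server-side code. Here's a minimal WSGI sketch (Python, my own illustration, not battle-tested; it assumes StumbleUpon visits announce themselves with a stumbleupon.com Referer, so check your logs for the actual signal). The framebusting half is a few lines of client-side JavaScript, which I'll leave as an exercise.

    # A minimal WSGI sketch: politely turn away visitors who arrive by stumbling.
    # Assumption: StumbleUpon visits carry a stumbleupon.com Referer.
    def deflect_stumbleupon(app):
        def wrapper(environ, start_response):
            referer = environ.get('HTTP_REFERER', '')
            if 'stumbleupon.com' in referer:
                body = b'Nothing to stumble on here. Happy surfing elsewhere!'
                start_response('200 OK', [('Content-Type', 'text/plain'),
                                          ('Content-Length', str(len(body)))])
                return [body]
            return app(environ, start_response)
        return wrapper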

[1] I'm sure there's a sizeable fraction of users for whom the collaborative filtering aspect works well, and who consequently actually read the articles and engage with the sites. But even if half the users fall into this group (although I doubt it's anywhere near that high), most of the traffic generated by StumbleUpon users to any given site is going to be low quality because the dopamine junkies make 100x more clicks.

Tupper’s Self-Referential Formula Debunked

Tupper’s so-called “self-referential” formula is:

1/2 < floor(mod(floor(y/17) * 2^(-17*floor(x) - mod(floor(y), 17)), 2))

When plotted over a 106x17 rectangle starting at (0, n), where n is a certain 543-digit number, the set of points satisfying the inequality is an image of the formula’s own text; i.e., it “plots itself”.

In this post I will explain:

  • This formula is not self-referential, any more than a program that reads its source code from disk and prints it is a quine.
  • The value of n encodes the bitmap of the image, and the formula acts as a print statement: it turns pixels on or off according to the binary representation of n.
  • The formula is deceptively impressive; I will show you how to construct a much simpler one that does the same thing.
  • Even though the claim of self-referentiality is bogus, the formula does something kinda cool.

If we want to construct a formula that prints itself, the first thing to realize is that there’s no possible way that the mathematical content of the equation has enough “entropy” to encode the information content of the image, so a true self-referential formula is impossible. I will return to this point at the end, but let’s take it as a given for now.

So let’s instead “cheat” and encode the image elsewhere: say in the value of the co-ordinates at which the formula prints itself. We could pick either x or y, but let’s arbitrarily pick the y co-ordinate. How exactly do we encode the bitmap? Let’s pick the simplest possible encoding: the bits of n, which is the starting value of the rectangle’s y co-ordinate, will represent individual pixels.

Now all that the formula needs to do is: for each value of (x,y), output a specific bit of n. A slight catch is that the formula doesn’t have access to n, but it has access to y which is approximately equal to n everywhere within the rectangle (remember that n is a several hundred bit number). So if we ignore the right-most few bits of y, the rest of the number should be the same as n.

How do we output a given bit of n (or y) at the corresponding pixel? The first step is to turn the pixel coordinates into an array index. If h is the height of the image (we don’t know what the width and height are going to be yet), we can turn pixels into consecutive array indices like this:

hx + y

That would work nicely if the image started at the origin, but since it doesn’t, the array indices from the above formula won’t start at zero. We can fix this by instead using

hx + y%h

This ensures that we only look at the lower-order bits of y. We’ll need to make sure later that n is divisible by the height h, so that y%h has an initial value of 0. This will be easy since we’re not using the lower-order bits of n to encode the image, so we’re free to play with those bits to make it divisible by h. Note that y has two functions: it acts as a bitmapped array, but the last few bits, together with x, act as an index into this array.

Now, given an array index hx + y%h, we can extract the corresponding bit of y (treated as an array) as follows:

y >> (hx + y%h) & 1

That is, first we right-shift y by the given number of bits, then look at the last bit (standard C operators and precedence).

That’s it! That’s our formula. Happily, it fits neatly in one line, which means that we can substitute 10 in place of h if we’re going to use a 10-point font.

One little gotcha: this prints all the bits of y including the right-most, but we need to avoid printing the few right-most bits since we don’t control them. We could modify the formula to add some extra right shift, or we could declare that we’re going to start the rectangle at x=1 :-) This means the minimum value of the index is going to be 10, which lets us ignore the last 10 bits. Behold, the formula:

y >> (10x + y%10) & 1

prints itself at the rectangle bounded by 1 <= x <= 100, n <= y < n+10, where n is

11015155530099148746084767279873874328940939149851989606372628810148273717947674060433973614379127881410170934305801904435789783335689003112825326746673689719230041274521315982432509769292233761710401623099301073813173483881337369878043721541780302407745436861736032053101022122199579350872597421046628360

I calculated n by encoding the image in binary, then left-shifting by 10 bits, and adjusting the last few bits so that it is divisible by 10.
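
To make the construction concrete, here’s a minimal sketch in Python (my choice of language; the construction is the one described above, but the bitmap format and names are just for illustration) that packs a bitmap into n and then evaluates the formula over the rectangle:

    def encode(bitmap, h=10):
        # Pack a bitmap (equal-length strings, top row first, '#' = on) into n:
        # bit h*c + r of the payload is the pixel at column c, row r, with rows
        # counted from the bottom. Then shift left by h (we start plotting at
        # x=1, so the low h bits are free) and bump those free bits so n % h == 0.
        m = 0
        for r, row in enumerate(reversed(bitmap)):     # r = 0: bottom pixel row
            for c, px in enumerate(row):
                if px == '#':
                    m |= 1 << (h * c + r)
        n = m << h
        return n + (-n) % h      # make n divisible by h using the free low bits

    def render(n, width, h=10):
        # Evaluate y >> (h*x + y%h) & 1 over 1 <= x <= width, n <= y < n+h.
        return '\n'.join(
            ''.join('#' if (y >> (h * x + y % h)) & 1 else ' '
                    for x in range(1, width + 1))
            for y in range(n + h - 1, n - 1, -1))      # top pixel row first

    art = ['#   #  ###',
           '#   #   # ',
           '#####   # ',
           '#   #   # ',
           '#   #  ###']
    n = encode(art, h=5)                        # the real thing uses h = 10
    assert render(n, width=len(art[0]), h=5) == '\n'.join(art)

Encoding a bitmap of the formula’s own glyphs, rendered in a 10-pixel-tall font, is precisely the calculation of n described above (modulo pixel-ordering conventions, so don’t expect bit-for-bit agreement with my n).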

This is precisely what Tupper’s formula does as well, except that (1) it assumes real-valued x and y instead of integer-valued, necessitating floor signs; (2) it uses the ‘mod’ function instead of the % operator; (3) it uses negative exponentials of 2 instead of right shifts; and (4) because of the exponentials, the height of the image is bigger: 17. All of these make the formula look more profound, especially the appearance of ‘17’. I, too, at first thought the numerical properties of 17 had something to do with it, before I realized it was simply the height of the image!
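
The correspondence is easy to check. For non-negative integer x and y, the floors and the real-valued exponential collapse into a shift-and-mask (a sketch):

    def tupper_pixel(x, y):
        # 1/2 < floor(mod(floor(y/17) * 2**(-17*x - y%17), 2)): writing
        # m = y//17 and s = 17*x + y%17, the real number (m / 2**s) mod 2
        # has floor equal to bit s of m, so Tupper's test is just:
        return (y // 17) >> (17 * x + y % 17) & 1

Note that y//17 plays the role that y itself played in my formula: it’s the bit array, with y%17 picking the row.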

If my formula and Tupper’s aren’t self-referential but merely print statements, shouldn’t they be able to print any input and not just themselves?

Yes!

If there’s anything actually interesting about these two formulas, it’s that their plots over the integer plane contain all possible bitmaps (of height 10 and 17, respectively) if you look in the right area. For example, to get

from my formula, simply look at (1,1393927028945687951208608908854274144862220) whereas

is at n=2839549863835611243770369885760240519880614661852209842047097422873116957197820823170688580239030514282972465171635651597931422679050

and finally,


is at n=10972249630976314209007504532033134949682086639198171874178386489416721212125763915435428634919871166306043061044372149862503608116825035242203771489961723271194035787338485934044510302195667783070693718545221897410050191506550799292424771369239643895773826749855239799677873092167849216646372880455565320

Deep, isn’t it? Just kidding. It’s kind of like the infinite monkey theorem—yes, the graphs of these formulas contain the works of Shakespeare as well—except you also know exactly where in the graph to find them.

Let me close by explaining why a truly self-referential formula, one that prints itself starting at (0,0), is impossible. Such a formula would have no external memory to rely on, so it would have to encode its own image entirely within its mathematical content.

Consider the symbol ‘10’ in the formula I presented: it takes up over a hundred pixels—a hundred bits. However, it only encodes a 4-bit integer! (10 is 1010 in binary.) In other words, there is a huge discrepancy between the information content of the image and the entropy of the mathematical content of the formula. Even if you consider all the nonnumerical symbols that could have been encoded in 100 pixels, the discrepancy persists, and will only (almost) go away if you use something like a 2-bit font, at which size the text is unreadable. You might hope that the formula could somehow contain a ‘compressed’ encoding of itself, but mathematical operations aren’t good enough to do compression.

The only way to get around this limitation is to somehow augment the formula with “memory”. The easiest way to do it is to generalize from “formula” to “source code” in (say) C, which means you can freely assign values to variables. Such self-printing code is nothing but a quine, albeit in a particularly nasty language, and I suspect it’s doable.

But a minimal way of endowing a formula with memory, and one that at least pays lip service to the spirit of the word “formula”, is to allow an assignment statement in addition to the usual mathematical operations, but nothing more. I don’t see an immediate reason why a self-referential formula of this type can’t exist, but at the very least, finding one would be an absolutely Herculean task. [Edit. Turns out there's a simple way to do it if you allow assignment; see comments.]

There's Only One Type of Immigrant

Late one evening many years ago, as a fresh graduate student and recent immigrant, I was sitting in my cubicle working when my colleague Justin walked in to pick something up from his desk. As he was leaving, noticing that I was the only one left, he quipped, “Ah, Arvind, doing the jobs that Americans won’t do.” 

At that moment I was enlightened.

Before I go on, let me address those of you who I’m sure are sitting there offended on my behalf. One of the awesome things about Justin is that he is not hindered by political correctness, and I’m glad that he’s my friend and says things like this.

Everyone “knows” that there are two types of jobs held by immigrants, roughly on opposite ends of the skill/wage spectrum: blue-collar and white-collar, or low-skilled and high-skilled. The former mow lawns, clean homes, and drive taxis; they are able to get those jobs because they accept lower wages. The latter are doctors, scientists and engineers; they are able to get those jobs because their skills are in short supply. If you’re on the Left, you might also believe that this is because the US sucks at math and science education.

Nonsense. There’s only one type of immigrant.

Here’s my theory. In any country that’s under immigration pressure (more people trying to get in than get out), through a combination of market forces and laws, native-born residents are able to pick more desirable jobs on average, where desirability is a composite that incorporates all the relevant factors—how much fun the job is, how much education/training it requires, and of course, pay. The negative correlation between desirability of a job category and the percentage of immigrant workers is strong, but not perfect.

The most important insight from this theory—and the main reason for this post—is that poor math and science education in the US is not causing a lack of native-born workers in tech jobs, but is a consequence of it.*

In other words, educational opportunities in math and science exist aplenty, but students choose not to avail themselves of them (not paying attention in math class, not taking AP classes or math-y majors in college, etc.), since the job prospects aren’t very appetizing, say, compared to a career in law.

Of course, students don’t always consciously make these decisions; the actual processes by which demand influences supply are complex, and may include such factors as stigmatization of certain fields. For example, this recent article describes how the financial success of Silicon Valley is making computer science cool again, leading to increased enrollment. The rejuvenation arguably also improves the quality of undergraduate CS programs (and longer term, via a trickle-down effect, K-12 math and science).

This view also explains many other things—the existence of numerous high-skill professions with negligible immigrant representation, such as airline pilot and astronaut, and the average-to-poor social status of immigrant-dominated high-skill job categories. It also partially explains the lack of American chess players: it’s not that Americans don’t become chess players because they suck at it; they suck at it because they don’t want a career in chess.

The broader principle that demand for jobs determines supply (i.e., educational opportunities) as well as social norms, rather than the other way around, applies equally well to other countries. For example, Indians value tech skills and (some) Indian engineering colleges are exceptionally good because those are the most valuable job sectors in India. Unfortunately, Indians who move to the US usually raise their kids based on the same norms, which is not only economically suboptimal but also inflicts a toll on emotional wellbeing. (See also: Asian parent stereotypes; Tiger parenting.)

* About a year ago I read an article that discussed the supply-determines-demand vs demand-determines-supply debate in the context of math/science education in America. It noted that the latter view is still in the minority, but is winning converts among experts. I’m really bummed that I’m unable to find that article any more. If someone could fish it up for me, I’d be very grateful.

Commenters on the linked post provided convincing explanations of why Americans don’t want a career in chess.

Unhinged

Scott Aaronson once described Less Wrong-style rationalists in typically Aaronsonesque manner, withering but funny enough that it's impossible to be offended. He called us "bullet swallowers," alleging that when faced with a situation where reasonable assumptions lead to conclusions that society considers absurd, a bullet swallower would say:
The entire world should follow the line of reasoning to precisely this extreme, and this is the conclusion, and if a ‘consensus of educated opinion’ finds it disagreeable or absurd, then so much the worse for educated opinion! Those who accept this are intellectual heroes; those who don’t are cowards.
I will happily accept that as a description of how I typically think, although I don't try to impose my views on other people. My own behavior, on the other hand, to the extent that it doesn't impact others, is guided by first-principles logic with total and cultivated disregard for "accepted wisdom." It's easy to fall subconsciously into the "because you're supposed to" trap, and it takes effort to cut that type of reasoning out of your habitual thought process.

The opposite, mainstream behavior—"bullet dodging," in Aaronson's terminology—has never really been an option for me. I was raised by religious fundamentalists, and I shudder to think what my life would have been if I hadn't rejected the culture and belief system that I was raised under once I was old enough to think for myself. (And if you doubt that the act of questioning my received beliefs was extreme and borderline unthinkable, I have news for you.)
 
Bullet dodging is relatively safe—you're unlikely to seriously harm yourself physically, mentally or financially. But if you're going to be a bullet swallower and make it work, you have to (1) keep it to yourself, unless you're OK with being an asshole, and (2) constantly question your axioms, your reasoning and your evidence, and be willing to do an about-face at a moment's notice. This requires, among other things, keeping your identity small.
 
For many years I insisted that I had no need for a 24 hour sleep-wake cycle and that synchronizing my schedule to the Sun was antiquated and inefficient given the existence of lightbulbs, the Internet and 24-hour stores. But defeating my circadian rhythm and external cues was way harder than I'd anticipated, so I had to give up. In fact I now have several hacks to keep my cycle in sync.
 
That said, the potential rewards of bullet swallowing are huge. My questioning the conventional wisdom on caffeine has led to gigantic productivity gains in the last few years. There are many other personal behaviors that I've put into practice with even more of a reward potential, but I won't go into them here.

Back to the inimitable Aaronson: in another post he describes us as people who 

follow their chains of logic straight past the acceptably-quirky into the “childish,” “weird,” and “naïve” without even noticing the “WHAT WILL PEOPLE THINK?” danger-signs
 
That's a great way of putting it. "Not noticing the what-will-people-think danger signs" is a trait with much explanatory power: it is correlated with poor social skills, for example. My strategy has been to learn to let logic roam unchecked inside my own head, while filtering what comes out of my mouth. I believe I've improved my abilities on both these fronts, although there is certainly much more room for improvement on the latter! This online journal is a happy middle-ground: I find that I have a lot more freedom since I don't have to fear instant social alienation, but I do practice a significant degree of self-censorship.
 
There's a word I really like that describes the combination of bullet swallowing and ignoring what other people might think: "unhinged." The "hinges" in question are mental barriers that anchor us to the bounds of social norms. Hinges can be powerful—they help us play it safe, as I mentioned earlier, and keep us from running away with flawed logic that might lead to catastrophic mistakes. I see an analogy to the freeze reflex that evolved as a way to let our emotions take over from logic in dangerous situations. 
 
But hinges can be harmful in the modern world. We are well-adapted to function in the Savannah that we evolved in; our brains are pathetically underprepared for the challenges of technological civilization. Just as the freeze reflex often achieves the opposite of what it's supposed to—such as by preventing us from stepping away from an oncoming car—so do hinges. Cultural norms are smarter than evolution, but only slightly so.
 
I learnt to unhinge myself better, at least inside my own head, a few years ago. It was a beautiful experience. Following the trail of cold logic on a wide range of moral issues—pain, punishment and retributive justice, just to name a small sample—and deriving an internally consistent ethical worldview was quite rewarding as a thought experiment, although practically useless. But the insights that led to improving my personal productivity and what I think is a much-improved ability to predict the path of technological and societal change, and prepare accordingly, have been priceless.

Productivity and performance hacks

In the last few years I've adopted several behavioral 'hacks' that have been life-changing, and a few more that I've learnt about recently and that show strong potential to be life-changing.
Each of these has been tremendously valuable (except the last category which I have yet to practice seriously). In monetary terms: a lifestyle change that increases my productivity by 10%, under the simplifying assumption that it eventually translates to a commensurate increase in income, is worth millions of dollars in lifetime earnings. (I'm just trying to derive a quantitative lower bound on the benefits; a lot of the gain is obviously nonmonetary.)

This suggests that rationally, I should be expending a huge amount of effort trying to learn about more of these behavioral changes. And yet my current expenditure of effort is zero—I've learnt about all of the above serendipitously.

How can I fix this?

Pinboard.in as a Lightweight Database

Summary.

  • A tagged/annotated list of web links with some rudimentary programming is surprisingly useful for many simple content curation and presentation tasks.
  • Pinboard.in is particularly well-suited as a store for such links due to its reliability and nice API.

I noticed some of my academic peers collecting lists of press mentions of their work, and I wanted to make such a list for myself. I realized that the only way I'd maintain such a page, and not eventually abandon it, was if it was dead simple to add or delete a link. The ideal UI would be to tag an article via a bookmarklet and have it automatically show up on the list.

So I quickly whipped up a script to do that; you can see the result here.

Not so delicious. The first version of the script used delicious as the backend, which gave me headaches right away. One of the links wasn't showing up for some mysterious reason, and after banging my head for a while, I realized it was delicious's API that was buggy. Worse, transiently buggy.

That's when I decided it was finally time to quit delicious (for everything, not just this project) and jump ship to the site that all the cool kids were talking about—pinboard.in. Best ten bucks I ever spent. The API is compatible with delicious, so I didn't even need to change my code.
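
For flavor, here's a minimal sketch of what such a script boils down to (not my actual code; it assumes pinboard's delicious-compatible v1 API with format=json, and the tag and auth token are placeholders):

    # Fetch everything tagged 'press' and emit an HTML list, newest first.
    import html
    import json
    import urllib.request

    AUTH_TOKEN = 'username:XXXXXXXXXXXXXXXX'   # from your pinboard settings
    URL = ('https://api.pinboard.in/v1/posts/all'
           '?auth_token=' + AUTH_TOKEN + '&tag=press&format=json')

    with urllib.request.urlopen(URL) as resp:
        posts = json.loads(resp.read().decode('utf-8'))

    # Each post carries at least 'href', 'description' (the title) and 'time'
    # (ISO 8601, so sorting the strings sorts chronologically).
    posts.sort(key=lambda p: p['time'], reverse=True)

    print('<ul>')
    for p in posts:
        print('<li><a href="%s">%s</a></li>'
              % (p['href'], html.escape(p['description'])))
    print('</ul>')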

Now that I had the script in place, I kept finding all sorts of new uses for it. I'd been meaning to collect my writing and software into lists for years; now that I had the right tool, it just happened. I also convinced my collaborator on DoNotTrack.Us to use my script for his bibliography.

Why does this work? The difference in usability in each case was so dramatic that I figured something was going on here that was worth thinking about. This is my best attempt to explain the combination of factors that make this approach so appealing:

  1. Maintaining a list of links is the kind of thing that happens via numerous tiny efforts spread out over time. If I had to log in to edit a document each time I wanted to add a paper to a bibliography, or change a minor detail like published to unpublished, the administrative overhead would overwhelm the actual work. Use of a bookmarklet here is a form of in-place editing.
  2. The bookmarklet automates a lot, including the title and the link itself; the backend script takes care of automatic chronological sorting. The latter, after all, is the trivial feature that led to the explosion of blogging.

Finally, here's a totally different application for which I found pinboard perfectly suited as a "database": making collage posters, a recent hobby.

 

I tag the images on pinboard, and my collage script uses the tags to guide the layout. The alternative would have been way more cumbersome. 

Parting thoughts. First, I'm curious to hear about any other potential use cases. One application I'm already considering is spaced repetition learning—bookmark an article and it will be injected into your RSS reader (or other reading list) at spaced intervals.
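
A rough sketch of the scheduling half of that idea (the intervals are invented, and the feed plumbing is left out):

    import datetime

    # Hypothetical schedule: days after bookmarking at which an article
    # should resurface in the reading list.
    INTERVALS = [1, 3, 7, 30, 90, 365]

    def due_today(saved, today):
        # True if a bookmark saved on date `saved` is due for re-injection.
        return any(saved + datetime.timedelta(days=d) == today
                   for d in INTERVALS)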

Second, I would urge developers of websites and CMS software to consider incorporating this feature. Not necessarily using pinboard, but the broader concept of creating/editing content via a bookmarklet. The primary challenge would probably be in communicating the workflow to users who are not familiar with it.


Unbearable Incongruity

My walks frequently take me past Facebook's offices—they're just a few blocks from where I live. College Terrace is a thoroughly residential neighborhood that I can only describe as a rustic paradise. Standing in the midst of these cute-as-a-button homes and lush vegetation, where time seems to have come to a stop and the only sound is that of crickets chirping, I stare transfixed at the innocuous building that doesn't bear a corporate logo, or any signage at all for that matter, except for the street number "1601." My brain refuses to grasp the fact that lurking inside is a fiercely competitive corporate titan dead set on a path to world domination.

Observations from Chennai

I see snapshots of my hometown, Chennai, once every 2-3 years. The changes are always stunning.

Economy

A sustained period of 8-10% inflation-adjusted GDP growth has extraordinary effects on a country. For one, rapid economic growth has a habit of bulldozing ‘culture’ and ‘tradition’. If you know me you know I consider that to be largely a good thing. More to the point, affluence seems to be eroding what I consider the bad aspects of tradition in Chennai, such as superstition, but not the good, such as Carnatic music.

Living through rapid societal change is weird, even if it is for the better. The generation gap is ridiculous. And it’s a confusing time for everyone. The country is playing catch-up with decades, even centuries of progress in the developed world all at once.

I think several factors have contributed to the pace of Westernization. It seems to me—forgive me for being an armchair economist for a second—that high GDP growth coupled with high inflation makes foreign goods cheaper to buy compared to local ones, and that when people import material products, they will inevitably import a bit of the culture that produced them. Then there is of course the information revolution which is making the world smaller.

In a way, India got the IT revolution before the industrial revolution. Everyone has a mobile phone, but mechanization is still minimal to nonexistent in many areas of life, such as food—whether agriculture or cooking in the kitchen. Many daily activities still involve the use of pre-electric era technology.

On that note, signs of leapfrogging are everywhere. People have laptops but they never had desktops. Mobile phones without land phones. And now I’m hearing of a crazy plan to put high-efficiency solar units with advanced battery storage in villages, eliminating the need for the electric grid altogether. That’s a 100-year leap.

By the way, the 8-10% growth figure I mentioned is only an average; for city dwellers who’ve worked hard, incomes have increased even more than that, perhaps averaging 15%. The improvement in the economic status of my social group compared to two decades ago is just astounding.

Government and the masses

The wealth gap has probably increased—I haven’t checked the stats—but at least in the cities, the trickle-down is in full swing. Cars are replacing motor-bikes and motor-bikes are replacing bicycles. Wages for minimally-skilled and unskilled labor, such as driving and housework, have gone up a lot, although there is still a long, long way to go.

But the progress is encouraging. People can live with dignity, even in the slums. No one is starving. Caste matters less each year, and the crime rate, already low, continues downward. I saw a much lower level of general unrest. I didn’t think this day would come so soon, but I saw a happy people.

Government has had a big role to play. The water shortages and rolling power blackouts are gone. Panhandling is way down; apparently there’s been a rehabilitation drive, and it’s clearly working. There are many areas that need to improve a lot, primarily traffic and pollution/cleanliness, but there are encouraging signs.

The current approach to road traffic congestion seems to be to build flyovers (overpasses) everywhere. This is clearly not sustainable, but the city metro rail (MRTS) is being slowly rolled out, and it remains to be seen if it will make a serious difference.

As for pollution—for those who’ve never been, the city, like most Indian cities, is basically overflowing with garbage, and the air is noxious from vehicle exhaust as well—the near-term outlook is not very rosy. One interesting Government initiative in this area is a big push toward public art. It is easy to pooh-pooh this, but I am wondering if it will make a (small) difference by mending the broken window, thus reducing litter.

One area where Government effort is simply inadequate is the expansion of the city. Cities are the future of the country—there is nothing to friggin’ do in the villages. Agriculture needs to be mechanized and the entire population needs to pour into megacities as quickly as possible.

While Chennai is expanding “rapidly” by first-world standards, it’s not nearly fast enough. There is a huge demand for labor in the cities, and rural people are dying—sometimes literally—to get there, but the bottleneck is infrastructure development and (de-)regulation.

Unfortunately I think this is less a matter of Government inefficiency, although that plays a big role, than of Government policy. Our honchos are still set in their socialist-era ways of thinking. I find that depressing. How anyone with any brains can fail to realize that rapid urbanization is vital is beyond me.

Food

The restaurant scene has improved dramatically. A newly affluent middle-class is demanding an end to the idly monopoly, and the city finally, finally has an adequate number of restaurants serving other cuisines. I would have been miserable if I hadn’t been able to get Chinese food regularly.

Foreign cuisines are heavily Indianized, of course, but so what? Most restaurants in America are Americanized as well. I should note though that the majority of (say) Chinese restaurants in America are run by Chinese immigrants, but that’s not the case here. For this reason, we have far more ‘multi-cuisine’ restaurants than those serving a particular foreign cuisine.

Food remains cheap—an entree at a nice sit-down restaurant is $1-$3—but prices are going up quickly. Road-side food remains dirt cheap, but then you’re likely to get actual dirt with it.

There are a few successful chains, but US names have had trouble making an impact. Amusingly, ghetto US chains like KFC set themselves up as exotic sort-of-upscale dine-ins over here.

In one of the more obvious and predictable signs of affluence, coffee shop culture has arrived with a bang. Again the successful chains are local, Cafe Coffee Day being by far the most prominent. No surprise—it is nearly impossible for a chain to enter India without tweaking the model heavily.

One coffee shop I went to had a designated make-out area. I mean, it’s not marked as such, but everyone understands that that’s what it’s for. It’s brilliant if you consider the prevalent cultural factors, and has become my favorite example of tailoring businesses to local conditions.

Fashion

I will be blunt—it is hard to overstate how badly people dress in Chennai.

The older generations are the worst. For this group there is little distinction between formal and casual wear. For men, it consists of half-sleeve “dress” shirts, slacks, and sandals. I will leave you to picture that abomination for a minute :-)

For women, traditional attire begins and ends with sarees. Ugh, sarees. They are so ridiculously uncomfortable that I can’t help thinking they must have been invented by men to keep women in the kitchen. At any rate, there is no reason to keep wearing them now that textile technology has advanced beyond the ability to make rectangular pieces of cloth.

Things are somewhat better among the youth. Men’s wear is gradually reaching parity with the West (with some oddities: polo shirts are called “t-shirts”; actual t-shirts aren’t worn much). Women’s wear shows no such inclination. Thankfully, though, sarees have been replaced by salwar kameez. Perhaps in one more generation, social norms will have relaxed enough to allow women to be proud of their bodies rather than having to hide them with baggy attire.

My complaint goes beyond the particulars, though. People seem generally oblivious to what they’re wearing and whether it looks good. When I was growing up I never thought about it, because I hadn’t been anywhere else much and because fashion doesn’t matter when everyone is poor. But you’d think affluence would change things. It hasn’t.

Physical culture is also essentially nonexistent. A few young men seem to be taking up weight training, but by and large physical activity is simply not seen as part of everyday life. The vast majority of women, in particular, get no exercise whatsoever (other than walking and housework, if those even count).

Assorted differences from America

The big things are of course all different, but it’s relatively easy to get used to them. It’s the little things that make you pay attention to the differences between Indian and American life and go, “huh.” Here’s a random selection of examples.

Intellectual property laws haven’t caught up, and are poorly enforced. You can find stores carrying internationally recognized brands such as PUMAA and “Converge All Star”. It's not as extreme as China, though. For the record, I'm opposed to most forms of intellectual property in a normative sense; trademark is one that I'm marginally in favor of.

The huge differences in relative costs can result in some pretty bizarre situations. Buying a microwave oven or a chapati maker (tortilla press) can be a major event. Get this—there’s a salesperson who visits you at home to give a demonstration. It only adds a little bit to the cost of the machine, and it seems to be necessary because people are still so wary about mechanizing their kitchen. I bet millions of housewives are silently terrified about becoming unwanted if cooking gets easier.

My haircuts cost Rs. 70, which is 5x what they did a decade ago, but still only about 6% of what I pay in the US. Insane.

Surprisingly, there are still hardly any foreigners around. On the plus side, there are now a large number of immigrants from other states, and they seem to have a monopoly on new businesses in certain sectors, such as restaurants.

There still isn't, and it is doubtful if there will ever be, a notion of privacy or personal boundaries similar to the West. An example: I was shopping for shirts and asked a salesperson for help locating my size. I said I was a 42, to which he responded, “Are you sure? I think you’re a 44.” The guy next to him—remember, stores are teeming with salespeople because labor is absurdly cheap—chimes in: “no, I think he’s right, he’s a 42. Look, he doesn’t have a big belly or anything.” I want to emphasize that this is a perfectly normal conversation to have in India.

OK, I’m done. Is there anything in all these changes that I consider to be a negative? As someone who considers modernity good per se, I have to say no. My only quibble is that some things aren’t changing fast enough. For example, there are still manual rickshaws, although they are very rare. I’m looking forward to seeing what the next decade will bring.