Following the most recent Democratic primary debate, on MSNBC this past October 30, much of the news media reported that a room full of male candidates all seemed to gang up on the sole woman in the field (and front-runner), Hillary Clinton. She was roundly deemed to have been given a bruising. You can see the video of the debate and judge for yourself at the New York Times.

I find it hard to see. What these reports find so blatantly apparent, Hillary stumbling and taking her licks, to me looks more like one person who has become the center of everyone’s interest and who can hold her own against a roomful of formidable opponents. Let me stipulate that I am not especially a Clinton supporter and not particularly motivated to defend her (she is too middle of the road and too much of an establishment candidate for my taste).

Still what I see in the video is a woman who is defiant and has a powerful understanding of the issues, for the most part a greater understanding than the other candidates. Even on the New York State driver’s license question, where the news media almost entirely understood Hillary to have botched the end of the debate, I see her simply taking a complex position, which the rest of the candidates want to falsely and maliciously portray as double-talk.

In the end, to me, what the video shows, but which is not reported, is that Hillary really seems to have become the center of power. She has what everyone else wants. What else could it mean that both the Democratic and even the Republican primaries have come to be almost entirely defined by the question: Who can beat Hillary?

And yet, there remains this curious disjunction between the debate itself and the way it has been generally reported in the media. Why were reporters so eager to latch onto the idea that Hillary had been hurt by a roomful of attacking (male) opponents? It is as if, while appearing to critique the problematic gender dynamics of the debate, or at least the imbalance of the critical remarks made during the debate, and while simultaneously ignoring the power of Clinton’s position, what the reporters were really salivating for was the opportunity to tell the story of a woman taking a hit (whether they were sympathetic to that woman or not).

We all know, from the invention out of thin air of the Howard Dean scream (compare the video shot from the crowd to the audio-enhanced version aired by Fox News and others), that what the media reports can bear a much greater reality than its putative subject matter. But in this case it goes further. Not only does the media seem to have invented the story of Hillary’s bruising after the fact, it also seems to have been trying to stage it in the first place. In the video, you can see that, from start to finish, criticism of Hillary by any candidate was for the most part initiated by Tim Russert, one of the two reporters playing the role of “moderator” for the debate, and to a lesser extent by Brian Williams, the other “moderator.” Candidates criticized Clinton in response to repeated questions from Russert and Williams that followed the formula, “Hillary said…What do you think?”

I don’t know if Russert and Williams have personal vendettas against Hillary or both of the Clintons, but they clearly wanted to stage the bruising that Hillary was subsequently reported as having undergone; even though, again, I think if you actually watch the debate Hillary comes out looking stronger, not bruised. So there you have it, the news media sets the stage and then after the fact reports what they wanted to have happened, as having actually happened, whether it did or not.

But there’s more to this. It doesn’t really stop or start with Clinton and the debate. The spectacle of a blonde woman raised onto a pedestal only to be taken down a notch by the media and popular opinion seems to be in the air these days. It’s like a surefire hit that no one can resist. I’m reminded of the now famous YouTube video of Miss Teen South Carolina making a terrible gaffe in the Miss Teen USA pageant, this past August, and then only a couple weeks later Britney Spears delivering a commandingly lackluster performance at the MTV Video Music Awards. (I also learned, while writing this post, of another similar video that made the rounds of disparagement in the past few months–Merry Miller botching an interview with Holly Hunter for ABC News, this past July.)

In the case of Miss Teen South Carolina, it’s worth dwelling on a few statistics. If you add up the multiple postings of the video of her gaffe on YouTube (there are dozens), it has been viewed over 28 million times in two months. If there were simply a single posting of the video, given these numbers, it would at this point in time be the 9th most viewed video in YouTube’s entire history. The only other videos that have come close to rising this quickly to the top of YouTube are a handful of popular commercial music videos. But even in comparison to the most popular of these, the Miss Teen South Carolina video appears to have had the quickest rise to the very heights of YouTube viewership of any video in the site’s entire history. (This video in fact eclipses the number of views currently garnered by the Filipino prisoners’ “Thriller” video—8 million—which I previously posted about, an arguably far more startling and fascinating clip.)

What accounts for the unprecedented YouTube popularity of the Miss Teen South Carolina video? Can it really be said to be, by some measures, the most interesting and entertaining video ever posted on YouTube? Many pious viewers attempt to explain what draws them to the video by wringing their hands about whether they’re watching out of sympathy for someone letting her nervousness spectacularly get the better of her, or, on the contrary, because the video offers a galling indictment of American ignorance (Miss Teen South Carolina was unable to answer a question about why one fifth of Americans can’t place the U.S. on a world map). Even ostensibly feminist columnists seem confused or torn about what’s actually going on here.

Certainly Miss Teen South Carolina’s gaffe is astounding (and for me painful to watch). Certainly many Americans are ignorant. But is that really enough to explain the overwhelming numbers of viewers in comparison to other engaging videos? I don’t think you have to go far to find an answer. Look at the comments on YouTube itself, or on sites like Digg and Technorati, or do a Google search of blog posts about Miss Teen South Carolina. It’s hard to miss the mean-spiritedness (usually explicit) that completely dominates what people are saying. This video, people seem to think, is the perfect example of a dumb blonde, and that makes it immensely entertaining. Consider even the numerous video responses on YouTube to the Miss Teen South Carolina video, many of which have become popular in their own right. They all take the dumb blonde routine and run with it (or, to a lesser extent, the dumb Southerner routine).

No wonder so many people, after the recent Democratic debate, were dying to tell the story of Hillary stumbling and dropping the ball or at least of her being subject to “withering” rebuke (as the New York Times said), even if it didn’t actually happen. Clinton is a blonde woman, she’s symbolically from the South, due to her association with Bill Clinton and years spent in Arkansas, and it looks like that’s all it takes to fit the bill. Indeed, some bloggers are already making an explicit comparison between Hillary and Miss Teen South Carolina, for the purpose of ridiculing the presidential candidate.

And so, to pretend that the enormous popularity of the Miss Teen South Carolina video has anything to do with the rationalizations of a tiny minority of viewers who claim to watch it for sympathetic or socially critical reasons is to ignore what the vast majority of people are saying and to willfully blind oneself to an essentially misogynistic media phenomenon.

After all, can it be any coincidence that yet another Southern blonde woman, Britney Spears, within a couple weeks of Miss Teen South Carolina, managed to become the subject of overnight fascination for a bungled performance at the MTV Video Music Awards (to say nothing of being the subject of general fascination for her ongoing tabloid decline)? Of course in this case, there were those who followed the prescribed routine and attempted to lay out critical responses to the Video Music Awards by tempering them with a whole host of morally self-justifying palliatives. Spears was purported to represent the mediocrity of popular music, or her experience exposed the exploitive nature of the music industry, or she was served up as an exemplar of bad parenting, or she simply showed us another rich spoiled brat getting her comeuppance. Yet, if you look into the comments on the Video Music Awards on mainstream web sites, what you again find is an enormous majority of people ridiculing Britney’s body and the flaws of her performance. (This despite the fact that, as far as Spears’s body is concerned, it is svelte and lithe and could only be considered undesirable in comparison to the fetishized teenage body that helped her rise to fame–and her performance was at worst dull and unprofessional, not ridiculous.)

The same sort of analysis could be done of the popular video of Merry Miller, which I mention above, in which she botches her interview with Holly Hunter for ABC. And here again, it is a case of a blonde woman from the South subjected to ridicule for a blunder.

With each of these videos, though most prominently in the case of Miss Teen South Carolina, there seems to be a willful refusal by much of the media to distinguish the insights of their social critique from the broad phenomena that determine how and why these videos capture general public attention. There seems to be a general desire to make it all much more complicated than it is. There is a powerful motivation to muster a whole host of real but in each case sideline issues (American ignorance, personal failure, the music industry, etc.) to rationalize and mask the overwhelming mass of public opinion.

And so I return to the recent Democratic debate and Hillary Clinton’s putative bruising. On the one hand, there’s the actual video, which shows Hillary as the focus of the entire election, aside from the war, and therefore as occupying a position of central importance. On the other hand, you have the story told in the news after the fact and the way Tim Russert (mainly) egged the candidates on, seeming to actively want to hold Clinton up for ridicule. It’s as if the desire to see Clinton, the only female candidate, taken down a notch by a roomful of attacking men, is more powerful and more real than anything else.

I can only conclude that in America, even when it’s not true, everyone likes to see a blonde (from the South) fall on her face. And I have a sneaking uneasy suspicion that this could be what comes to drive interest in the entire presidential election. I can only hope, despite my misgivings about her politics, that if Hillary wins the primary, she also wins the presidency and gets the last laugh.


Sweating it out in my un-air-conditioned apartment during an overheated New York summer, I recently received an email from my friend Jo, of a sort that thousands of people around the world must have been receiving at the same time. She was alerting me to the latest YouTube video phenomenon. I mean, of course, the prisoners at the Cebu Provincial Detention and Rehabilitation Center, in the Philippines, restaging Michael Jackson’s “Thriller” video. In twelve days (so far) this video has been viewed over two and a half million times and is on track to become one of the most viewed YouTube videos ever.

The email from Jo also contained a question that probably did not appear in many other emails on this topic. She asked: “What would Michel have made of this?” Jo was referring to the seminal late twentieth-century French philosopher Michel Foucault, who famously wrote about the history of prisons and their place in contemporary societies. (Foucault also wrote groundbreaking work on other topics, such as sexuality, madness and psychiatry, the human sciences, and medical practice.) Since I wrote my doctoral dissertation on Foucault’s work, it certainly made sense that I might have something to say about the video.

My first reaction, however, was an honest sense of being completely dumbfounded. The vision of over a thousand prisoners precisely re-choreographing Jackson’s “Thriller” dance, in their orange jumpsuits and sandals, was just too odd. It seemed more like a mockery of prison decorum or like the accidental coming to life of a musical in the wrong place, than like a purposeful instance of punishment, justice, rehabilitation, or even vengeance.

Two minutes into the video, I got over my amazement and realized that in fact it all made sense, as far as Foucault goes. Thinking back over his theories of disciplinary society, it fit almost too well. Indeed, perhaps I was astonished at first because, as Freud says, what is uncanny is what is most familiar and therefore strangely hard to recognize.

And so the more I thought about the dance and about the appearance of the video on YouTube, the more I saw that it conformed quite exactly to Foucault’s theories about the social function of the prison. It was simply hard to believe that Foucault’s theories might be enacted in such a literal manner. Here we have in the prisoners’ dance, I realized, and in its appearance on YouTube, the model itself, as Foucault proposes in at least part of his work, for how all individuals become proper citizens, workers, students, family members, patients, and all the other roles we play in our contemporary life.

So I had my answer for Jo. What would Foucault make of this? He’s chuckling to himself in his grave and saying I told you so.

What we see in the video of the Filipino prisoners is more than a thousand people in identical clothing being marched around in military-like fashion. Apparently they are dancing, but in fact they are being taught discipline. The security consultant who came up with the idea freely acknowledges the disciplinary goal of the dance. This accords with Foucault’s contention that one of the crucial functions of the prison is to create “docile bodies.”

But more importantly, the dance is videotaped and posted on YouTube. This is done, again, by the security consultant who came up with the idea. The consequence is that, once the prisoners realize how popular the video has become, they feel compelled to perfect their dance, and with it their discipline. Once again, it turns out that this accords with Foucault’s discussion of the impact of prisons on contemporary society. Prisons, according to Foucault, early on became sorts of panopticons (following a type of architecture invented by Jeremy Bentham in the 18th century). The function of a panopticon is to make people feel they are being watched at all times. In turn, this feeling of being watched (for the Filipino prisoners, potentially of the whole world watching) makes people take it upon themselves to better regulate their own conduct (literally, for the Filipino prisoners, to come in line with the dance). And so in the final instance, self-regulation turns out to be the mechanism by which external forces control the body.

This last point is probably the most crucial element of Foucault’s argument in Discipline and Punish, his deeply influential history of prisons. It is also a point more often than not misunderstood. Foucault does not argue that in our disciplinary society people are being controlled by the fact of other people watching (or videotaping, as the case may be). Rather he argues that we control ourselves by making ourselves feel watched. The sense of being watched comes from the individual and is an act of the individual on him or herself. In fact, according to Foucault, it is this very act of self-surveillance which is the means by which we become individuals at all in contemporary society–how we fit into the workplace, home, school, church, etc. Beginning in the 18th century and definitively by the 19th century, when this model of conduct was perfected in prisons and simultaneously expanded beyond their walls, and up to the present, we have been and remain a society of individuals formed in the act of making ourselves feel observed by others.

This is what is most fascinating about the “Thriller” video: the fact that it is placed on YouTube. It brings starkly into relief the underlying social dynamic bound up in the fact that YouTube is precisely a place where people, more than anything else, post videos of themselves. And then they post videos of their friends and families, their most intimate relations. It is where collectively, and more and more, we are all subjecting ourselves to the idea of the disciplining gaze of others. We think we’re sharing, when we’re really conforming.

And once one begins to think about the Internet this way, there’s hardly any reason to stop at YouTube. For example, most forms of blogging constitute even more powerful acts of forming oneself under the possible gaze of unknown others. How is it, after all, that the diary, a sort of text a person conventionally writes only for him or herself, a sort of text once symbolized by the lock that closes it, has come instead to be published happily and willingly in the most public and global of forums ever devised? It is almost a complete inversion for the act of writing a diary to turn into the act of writing a blog.

Dating, blogging, YouTube, Flickr, Facebook, MySpace, craigslist, discussion forums, webcams, more and more we are enacting every aspect of our personal lives under the gaze of potentially any other person in the world.

I’m going to stop now. What I have said is all pretty abstract and rather quickly sketched out. I think this topic is worth subsequent posts, where I will go into many of these points in more depth and less abstraction. I want to try to make plain the disciplinary medium we swim in and form ourselves out of every day. In the meantime, perhaps it is worthwhile returning to and keeping in mind my passing remark about Freud and the uncanny. Beware of what is most familiar. In our globalizing world, perhaps we do not need to be on guard so much against the encroaching interests of others (governments, corporations, terrorists), as we need to be on guard against the way we incorporate these interests as the things we take most for granted about ourselves.

Bugaboo New York

July 8, 2007

For the uninitiated, Bugaboo is a Dutch company specializing in the production of ergonomic baby strollers with modern designs. Their strollers are modular, highly modifiable, made from quality materials, and come in a white lightning streak of fashionable colors (the “denim collection” being the latest addition to the coveted line). You know a Bugaboo when you see one. It screams high concept like no other stroller.

If you want to get your baby into one of these babies, however, it’s going to set you back somewhere between $700 and a cool grand. And then there are the accessories: parasol, foot muff, sun canopy, snow wheels, cup holder. Only the best for little so-and-so.

To spot Bugaboo parents, with their Bugaboo babies, bugabooing about, you need only frequent the gentrified enclaves of America’s cities. The Bugaboo parents are strolling around in Los Angeles and San Francisco, more than a few certainly could be found in Portland, Seattle, Chicago, Miami, and then one or two no doubt down there in Austin or around the Plaza, in Kansas City.

In New York, it all goes to another level, however. In New York the Bugaboo is not merely a stylish possession, to be spied in the hands of the lucky few. It is a status symbol like few others, with mothers clambering over each other, trying to establish their rank in the baby-fashion world order. This clambering was all supposedly started, years back, by the appearance of a Frog model Bugaboo on “Sex and the City.” Subsequent to the rise of the Bugaboo to bourgeois cultural icon, of course, there’s been a bugabacklash. Even buga-highway-robbery.

Still none of this is very surprising for a city awash in money and status like New York. What actually makes me stop and scratch my head is the number of Bugaboos I see coming out of the vast blocks of public housing that cast outward toward the East River from my apartment building. Certainly the desire for status is understandable from anyone, anywhere in the socio-economic hierarchy. But if you’re living off of public assistance, where do you get the money for a Bugaboo? Or are these underground Bugaboos? Craigslist Bugaboos? Miracle knock-offs from lands far away? Perhaps, but I have yet to have anyone try to sell me a Bugaboo on the subway, like a pirated DVD.

Nonetheless in New York, you see it all the time. And not just the latest and greatest stroller, in the hands of someone who is not otherwise playing the part of yuppie, but the coolest phones, the newest and slickest laptops, outrageously fashionable clothes, and every other high concept commercial item you can think of. In New York, yuppie affectations are somehow not always a yuppie thing. Youthful flare is not always a youth thing. And gaudy extravagance is not always a wealth thing.

I myself have developed a taste for shoes that it would never have crossed my mind to purchase when I lived in California. For example, I recently bought a pair of Japanese tennis shoes that aren’t, you know, real tennis shoes, but shoes that just look like tennis shoes, for, you know, when you’re going out. When I got them, it seemed like the most mundane and normal of acquisitions. It was not an attempt to stand out from the crowd. If anything, it was more of a semi-conscious gesture of fitting in. Yet all I have to do is leave New York in these shoes and suddenly I feel like a disco freak dressed for a night on the town, but somehow ending up in church instead. Only in New York could shoes like these seem banal.

So in New York, status symbols and desirable consumer items aren’t really the exclusive domain of a narrow class of people. They flow around more freely and somehow more pervasively. In large part, I suppose this is because fashionable items are so readily available here and often surprisingly cheap.

It helps certainly that Manhattan is in many ways just a giant shopping mall. There is not one Kenneth Cole store, but one for every neighborhood. A “vintage” clothing store doesn’t sell clothing from the 40s or even 70s, it sells last year’s best Chanel dress. There are stores for labels you’ve never heard of, heralding local niche fashion at outrageous prices, and then unloading piles of leftovers for next to nothing in “sample sales”; and I mean hundreds of stores like this. And it’s not unheard of to find a practically new stereo or art deco leather chair, piled up on the sidewalk in the garbage. Indeed when the myth is perpetuated that Manhattan has “everything,” so demonstrably untrue when applied to food or culture, it is nonetheless never more true than when applied to consumer items.

I find myself wanting to conclude from all this that New York probably has the most branded population of any place in the country (if not the world). It is a commercial culture in a profound and exhaustive way. This makes living here both an eye-opening exercise in what really makes the world economy tick and a sort of collective logo-fied delusion. It also shows the capitalist promise of choice for what it is: an inexorable movement towards sameness and expense, casting all people inevitably into the dark maw of the generic, while simultaneously pickpocketing them on the way down.

But don’t get me wrong, I like my shoes.

One afternoon recently, I ducked into the Union Square Barnes and Noble to kill half an hour while waiting for an appointment. I have a weakness for the Starbucks on the third floor, because, although it is a Starbucks in all its corporate ubiquity and dullness, this one occupies a choice space, in a historic building, with towering ceilings, old iron columns, and a string of windows overlooking Union Square.

Visiting this particular Starbucks, however, is often not feasible. As usual there were no free tables, and many people were trolling around hoping to catch one as soon as somebody stood up. So I decided to go up to the fourth floor of the bookstore instead and see which Raymond Chandler novels I hadn’t read yet. Answer: The Little Sister, Playback, and the incomplete Poodle Springs (if you count it).

The fourth floor of the Barnes and Noble on Union Square also houses a large space where the store hosts readings. As it turns out, on this humid mid-week day, Günter Grass would be reading from his new memoir. It was 2pm when I stumbled upon the rows of chairs readied for the event. The reading would be at 7pm. Already thirty-four people had taken up seats. And while I stood there, more people arrived in groups. One can only imagine what it looked like at 7pm. I didn’t hang around for the five hours to find out.

This is a classic New York event and a classic New York moment. Anything that’s cool is filled up or sold out before you even think of it. I mean, who stops to consider that they need to show up five hours early for a reading? Even Günter Grass? But that’s what it was like.

One time, a couple years after I moved to New York, I naively tried to go see David Sedaris read from Dress Your Family in Corduroy and Denim at the same bookstore. I showed up maybe fifteen minutes early, at which point the fourth floor was completely full, the third floor, where you couldn’t actually see Sedaris, was filling up, and store employees were holding people back on the ground floor, while a line snaked up the escalators, advising us would-be literate latecomers that it might not be worth our while trying to enter.

Why are these classic moments? Because New York likes to pride itself on having “everything” (whatever that is), but even if such a claim could be true, it doesn’t matter, because you won’t get in. Bands that I thought of as small, cool, off-the-beaten-track bands when I lived in California, bands that you could see in a bar or a club, sell out shows in New York. Everybody knows about them. Or at least enough do, once The New Yorker, Time Out, New York, The Village Voice, and so on, tip people off.

There’s more to it though. Not only is everyone in New York in the know, one way or another, but also the currency of New York is exclusivity.

And so the dual force of a broad range of people being in the know, while simultaneously coveting exclusivity, pretty much guarantees that anything you want to do is sold out, roped off, and has got a list. Is your name on it? No. Even if your name should happen to appear on the list, once you go inside somehow imperceptibly the scene will lose the full sheen of its glamour. Perhaps this isn’t the right place? Is there an after party? A back room? Where is K when I need a guide to this city?

In contrast, not so long ago, in what I guess must be a complete backwater, Paris, I actually did manage to see David Sedaris read, to about twenty people, in a small bookstore, on the occasion of the translation of Naked into French. We chatted with him afterwards (how quaint!). And if a certain friend had not been uncharacteristically shy, it was pretty obvious we could have taken him out for a drink. He seemed eager and like he didn’t have any plans.

Likewise I’d venture that if Günter Grass had shown up, when I found myself studying a while back at a rather large university out West, careful stratagems would not have been required to see him. I might have gotten there half an hour early, but even if I had shown up on time, I probably could have crammed in the back, standing, and heard the man, as well as seen him, along with everybody else. Someone would have chosen a large enough space and guessed how many people were going to show up. Someone would have thought it would be nice to accommodate everyone. It would have been like I was part of something, like something was actually happening. It would have been like I was together with other people who wanted to be there, and afterwards we would all go out into the sunshine, or the cool night air, and smile, and talk, and need nothing more.

Coming out of the bathroom in the toiletry section (appropriately enough) of Whole Foods, I was informed today that I had violated sanctified “Team Member” space. Customers apparently must walk the football-field length of the store to use the bathroom at the other end. Of course, there were no signs indicating this was not a restroom for customers, and, on a previous occasion, I’d been pointed toward this very bathroom by a kindly “Team Member.” Moreover, what’s the use of informing me on my way out? Needless to say, the person policing me was not your classic über-friendly Whole Foods “Team Member.”

Tacitly, of course, I was not being told that this restroom isn’t for customers, but rather that a restroom frequented by customers is not good enough for “Team Members.” Indeed, there is a striking difference in cleanliness between the two.

What’s noteworthy about this banal incident, though, is its place in the larger class struggle for toilet access. It is probably not the best-known subsection of Das Kapital, but the struggle of the proletariat to use the bathroom, and the clever efforts of the bourgeoisie to keep the bathroom all to itself, constitute a significant dimension of the general battle that subtends and defines the entire socio-economic fabric. Nowhere is this more evident than in New York.

New York is a city where literally millions of people walk the narrow sidewalks every day, often packed shoulder to shoulder. And yet, I can honestly say that in five years of living here I have never seen a public restroom. Not a single one. No doubt they exist, somewhere. I even believe someone published a guidebook to their locations. But come on, if you need a guide (of one sort or another), it means the existing bathrooms are not plentiful enough to be useful.

This paucity of public places, for the tired masses to relieve the pressures of daily life, means that when one does find oneself out in the big city, far from home, feeling the need, one must engage in the usually awkward and often abasing effort to find a sympathetic bathroom patron, who will allow you to traverse his or her private restaurant or bookstore and use the precious facilities sequestered therein. (Or you can just go in, use it, and risk being policed by something like the Whole Foods surveillance apparatus.) From a social perspective, the message is clear. Bathrooms are for people who have money. If you can’t afford to enter a paying place of business (or wouldn’t be admitted to one), then why did you become a biological organism in the first place?

The correlation of money to bathroom access reveals itself most clearly in the corridors of New York’s polished and towering office structures. Whereas, on the street the everyman or woman struggles to find a usually ill-cleaned restroom in a semi-public business that will admit him or her, once one is permitted past the security of an office building lobby, shining porcelain cathedrals abound. Expansive, empty, and more or less spotless restrooms veritably litter New York’s skyline. Of course, the nicer the building and the higher the security, the better the dens of urinary repose.

So while people duke it out to pee at Starbucks, the ethnically cleansed corridors of Corporate Headquarters U.S.A. have more toilets than they know what to do with. I have never been in a bathroom on any floor of a large office building in New York and had it be anywhere close to fully used. Which leads me to the conclusion that (effectively) the corporate rich are hoarding the toilets.

One frequently adopted solution to the lack of public restrooms (at least by men) is to pee in the street. You see it a lot. In broad daylight. Street pee-ers seem to fall into several categories. The very same self-entitled fraternity of corporate crusaders, who have already hoarded all the nicest toilets for themselves at their places of work. Homeless people (understandably). Drunks (more often than not members of the first group). And members of the tired and downtrodden masses who lack much of an option. There also seems to be a significant subset of people who simply enjoy peeing in the street or think that it’s perfectly normal. And a friend of mine instructs me that I should include people with small bladders (who also lack much of an option). Lastly, perhaps there are one or two people who just want to try it.

One particular member of one of these groups occasionally takes it upon himself to pee on my bicycle chain lock (when I’m not using my bike, it lives 24/7 on the street, locked to a lamppost). Since it could not be easier to aim a few inches in any other direction, I can only assume this urination crime is perpetrated on purpose, for reasons that are difficult to fathom. Does my chipped and rusting bike represent something that someone hates? Or is this just another instance of that pervasive American narcissism where, in this instance, someone can’t even be bothered to consider that this bicycle lock probably belongs to another person and is not just part of nature?

Whatever the case, I do believe that the general desultory state of the street-peeing masses (young elite pee-ers notwithstanding) is a result of how the wealthy top ten percent will take anything from the rest of us and keep it all to themselves, even the right to urinate with dignity.

And so, at long last, I find myself making the call to revolution. It is time for people to rise up, overthrow the Wall Street and Midtown overlords, in their skyscraper fortresses, and assert the inalienable right of all people to relieve their bladders in a manner befitting a great democracy.

I call upon you, my reader, to ask yourself: if you do not act, are you free?

What…? Hello…?

April 28, 2007

An ad for the Samsung Helio cell phone, prominently displayed across the street from the Whole Foods second-story cafeteria windows on Houston Street, says, “Don’t Call It a Phone.” I find myself thinking: I won’t.

Presumably Samsung is referring to the multitude of non-calling related features that make the Helio device so much more than a phone. Nokia similarly wants consumers these days to think of its phones as “multimedia computers.” Indeed, the cell phone has truly become the Swiss Army knife of the 21st century. Packed with a surprising array of capabilities (screwdriver, saw, scissors, shoulder-launchable surface-to-air missile), but somehow not so great at any of these functions.

Meanwhile, as far as I can tell, there have been no advancements in the call quality of cell phones in the last ten years and no one seems to care. (I recently discovered that my Ericsson T28z, from 1999, has better call quality than Nokia’s flagship $750 N95 slider cell phone, with the latest in Symbian OS, wifi, and a 5-megapixel autofocus camera.)

The willingness of people to tolerate and, even more, their tendency not to notice poor call quality is especially strange in a place like New York, where we all limit our time at home due to the tiny size of our apartments. A friend of mine here likes to say, “the city is your living room.” No doubt. And from the early 20th century until the 1990s, when cell phones began to be widely adopted, many of us had become accustomed to having phone conversations in our living rooms.

Yet, by any measure I can think of (call clarity, call volume, sensitivity of the microphone), the venerable old analog land line far exceeds the cell phone in its ability to reproduce the speaking human voice in a pleasant and comprehensible manner. Indeed, not only is the call quality on cell phones worse than land lines, and stagnating, but cell phones have introduced new problems (dropped calls, over-amplifying background noise, brain tumors).

Why do we accept this? All the while demanding ever more versatile teeny tiny web browsers and well animated golf games. Could it be that we never really wanted to talk to each other anyway?

I once wrote to the editors of GSMArena, one of the more serious web sites for the cell phone geekerati, asking them to pay more attention in their phone reviews to call quality. Most reviews, on their site and others, contain perhaps one or two sentences, often none at all, about call quality. This in reviews that can go on for twelve pages. A kindly editor at GSMArena wrote back to me, saying that there are so many features in current cell phones they cannot always cover everything.

Let’s think about this for a minute. On a device which is nominally a “phone” (though Samsung and Nokia seem a little anxious about whether the word actually applies), the functionality of the so-called “phone,” as a phone, is not always a relevant “feature.”

I will now engage in my inaugural use of a currently popular phrase: WTF?

In my personal struggle to find a phone which works okay in the noisy city, I have actually found a great deal of variation from model to model and manufacturer to manufacturer and so I consider the “phone” feature of the phone worthy of review.

Phones tend to have two significant problems. First, although they can blast out megaphone style in speakerphone mode, they very often do not have enough volume in the earpiece to be heard while walking down the street or in other noisy places. Second, cell phones these days tend to assume fairly small sizes, frequently placing the microphone a couple inches away from the mouth. This means that the microphone has to be more sensitive to pick up what you’re saying, which in turn means that it amplifies all background noise as loudly as your voice, which in yet another turn tends to drive someone like, say, your mother crazy; she then drives you crazy, asking you over and over to repeat every word you say.

The solution? Not easy. There seems to be no consistency amongst phones for call quality, regardless of price, and not enough consistency with a given manufacturer to count on any particular phone they release working at least decently.

I do have a working hypothesis that flip phones are better for the microphone/background noise issue, because they place the microphone closer to the mouth. Steve, at Steve’s Southern Ontario Cell Phone Page, confirms this, in over a decade of reviews that actually focus on what he calls the “core functionality” of a phone (like, you know, how it works as a phone). But don’t jump to conclusions. Any particular model may well be terrible. For example, the classy Nokia 6133 and 6126 flip phones are utter crap, because the microphone is located, strangely, under the hinge. It must be a whimsical lot of engineers there in Finland; they’ll put a microphone just about anywhere.

I have also found that Nokias in general tend to be the worst phones for picking up every bit of background noise and making it as clear and loud as a bell (I tested a bunch of Nokia phones at their flagship store on 57th Street and have owned a couple). The aforementioned Steve confirms my observation about Nokia. Apparently Sony Ericssons have the best noise-canceling technology and some Motorolas are good.

But really, even when you try, it’s hard to find a cell phone that gets every element of call quality right. So in the end, I guess I have to agree with Samsung and Nokia. I won’t call it a phone.

What is Cynicism?

April 26, 2007

In my last post, I claimed, amongst other things, that when businesses like Crest or Whole Foods employ green brand strategies, they care little whether their products and business practices actually have a positive impact on the environment and people’s bodies. It occurred to me later that many people would be quick to label such an assertion cynical.

This sort of knee-jerk use of the term “cynical” reflects the incredibly broad and general misunderstanding of what cynicism is (and it drives me up the wall). Indeed, it’s a term so popularly misused by journalists, politicians, and people in everyday conversation, that dictionaries now offer the popular incorrect meaning of the word as its primary definition.

Of course, that’s how languages change and shift over time. And dictionaries at best can only be indexes of a language according to its conventional use. But in the process, the actual meaning of cynicism is being lost, so that no word, least of all “cynicism,” remains to name what it more usefully might mean.

Popularly, cynics are understood to be people who tend to ascribe selfish and dishonest motives to the actions of others. In this view, “cynicism” labels not the questionable actions of the party subjected to scrutiny (for some reason this party is always given the benefit of the doubt), but rather the negative attitude of the person who is presumed to imagine disreputableness everywhere.

So, for example, if I were to suggest that the real motive behind the ongoing military debacle in Iraq is the Bush administration’s desire to ensure that either the U.S. profits from Iraqi oil or no one does. And what’s more, I might continue suggesting, Bush, Cheney, et al. want to prevent Iraqi participation in an Iranian oil exchange, which would trade oil in euros rather than dollars. For, they worry, if Iraqi participation in this Iranian oil exchange were to take place, it might upset U.S. dominance of the global economy by undermining the predominant adoption of the U.S. dollar as the world’s reserve currency. If I were to suggest this, as merely an example, I might well be called cynical.

Could Bush, Cheney, et al. really prefer a military debacle in Iraq over pulling out, as at least a second-best choice to an ever-elusive victory? Only someone who thinks the worst of other people would propose such an argument! Cynic!

And so the cynic (wrongly) becomes he or she who perceives nefarious motives on the part of others. What’s lost in this is the possibility that maybe Bush and Cheney are indeed sneaky fellows. Maybe they aren’t even as good as their word. As usual, the person who perceives the problem is called the problem.

In point of fact, there’s a lot of controversy about the dollar/oil argument I just presented. But the strange thing is, I’d be called cynical not because the argument might be incorrect, but rather just for considering it at all. What does that say about the willingness of people to, oh, I don’t know, think?

The Cynics (the original ancient Greek ones), interestingly enough, believed above all in self-discipline and virtue, through actions, rather than ideas. They tended to lead monk-like lives, eschewing ordinary pleasures. So the Cynic, way back when, was far from the bitter and paranoid curmudgeon we imagine today.

I would suggest, then, that those who critique the darker side of contemporary culture and politics are not cynical at all. They are skeptical and analytical. The true cynic, if the term must maintain its contemporary negative connotation (and it likely will), is the person whom the skeptic critiques. The cynic is the politician or business person who has so much contempt for other people that he or she says whatever it takes to accomplish their narrowly interested ends. Moreover, this is the sort of person who usually proclaims virtue while exploiting the good will of others. That’s cynical. Yet strangely, in contemporary culture, it tends to be the person who points out this sort of behavior who gets labeled cynical.

This, of course, is all part of a larger issue. Americans do not like to admit that there could be anything not-nice about our wealth and power. So in the U.S. we tend to label any claim about our less than admirable motives, hidden beneath the surface, as paranoid or a conspiracy theory, or just plain cynical. It is part of our long tradition of anti-intellectualism. Ignorance, it turns out, (willful, happy, Pollyannaish ignorance) is a privilege of power.