Archive for February, 2008

The Doom of Disruptive Change for Incumbents

February 26, 2008

My friend Rags Gupta wrote a post today on how the record labels can save their dying business (published on GigaOm… nice!), a comment on David Hyman’s manifesto on the same topic.

The back story, for the even-less-initiated than me, is that the amazing efficiency of Internet-based digital music distribution has eroded the big four record labels’ main, historic business model—the sale of physical media. It’s the classic story of a disruptive technology coming along and not leaving enough table scraps for the old, fat-cat incumbents. It’s now nearly free to get a piece of produced music from the artist to the consumer. No more expensive media (CDs, tapes, records), no more physical distribution chain, no more brick & mortar retail outlets… all the little way-stations that used to clutter the path between you and, say, Boston ’90s underground sensation Morphine—are gone.

Why the cost isn’t appreciably lower to you is something Steve Jobs still has to answer for. But that’s beside the point right now.

Rags is a smart guy, and so is David Hyman; I wouldn’t presume to analyze their prescriptions for the music business, but the situation does raise the question: what should we expect a big, slow, intractable business to do when faced with a disruptive competing technology / model / business / competitor? They are, after all, big and slow and intractable. Even for private companies, it’s just not realistic that they will do the most rational thing: recognize the coming change (or, in this case, the one that’s already here) and accept the new reality—that they get to keep pennies-on-the-dollar of their old business.

Kodak is a notable example of what’s possible: they were a huge, market-leading company that made a sharp and painful move to follow the consumer & professional trend away from film, toward digital. They dismissed 20-25% of their workforce at one point. They have survived the changeover from film to digital, but as you can see from the chart below, that’s about all they’ve done: survive. Kodak’s stock is virtually flat since 1970, whereas the Dow has increased roughly 15-fold in that same time.


A company like Warner Music Group, notoriously non-progressive with regard to digital distribution, will try its damnedest to preserve the old way, even when faced with logical, inventive alternatives like the ones Rags and David Hyman outlined. They’d much rather fight to the death than surrender to the disruption and adjust their expectations to something more reasonable given the new realities.

But as I say, the question isn’t just what should they do to save their businesses, it’s: what can we reasonably expect from a dinosaur? More to the point, what do investors expect from a public company? I’m not sure how Kodak proceeded, but what happens to the already beleaguered WMG stock if they announce that they’re doing away with DRM, slashing their rates for streaming, or doing anything at all but fighting the uphill battle against change? That’s probably something they can’t just do without being badly punished by Wall Street, which has serious repercussions for their ability to compete.


What amazes me now is how the big four have focused so exclusively on changes to their distribution model, when they’re sitting on marketing gold. They don’t always own the rights to online and/or mobile distribution—my understanding is that these rights are negotiated per artist—but they should endeavor to do so in the future, and modernize their marketing efforts to take full advantage of the rights they do maintain. In theory, they are. But in practice, would-be innovators like my friend Dan Pelson face uphill battles, I assume at all of the big four.

Whether you bend with the winds of change like Kodak or fight against them like Warner, the moral of the story is the same, in business, in politics, or anywhere else: being on the incumbent side of an industry facing major disruption is no good. It’s much more satisfying, lucrative, and inspiring to be on the motive side of change!


Political Alchemy

February 23, 2008

When Michelle Obama said she was proud of the US for the first time in her adult life, I understood what she meant. Of course, she probably knew instantly that she was in trouble, that you just can’t say something like that in America without the non-thinking right coming after you with manufactured outrage. “Not proud of America!? Outrageous!”

The implication of their outrage is, and I guess this is reality for some, that you should always be proud of the US, no matter what we’re doing or what we’ve done. That there are no conditions under which you may be less than proud.

That doesn’t mean I’m not very proud to be American, but the truth is we as a nation have done plenty to be ashamed of, especially since the Bush administration took office, and I like the fact that Michelle Obama sees it that way too. If her husband feels the same way (and may he never say so out loud!), it’s an indication that, should he be elected, there’s hope for correction of the problem. If one has no shame over what we do wrong, then we’ll never bother to correct our wrongs. I know it’s much simpler to just be proud and not bother fixing or correcting or changing anything, but that’s not a formula for improvement.

So her comment was an off-the-cuff remark, which has been taken out of context, and commented upon, and shrieked about from the right. Political alchemy; trying to synthesize outrage from a benign, throw-away remark.

The irony is that the same type of trouble befell Bill O’Reilly as he himself was participating, albeit mildly, in that unwarranted, ridiculous attack on Michelle Obama. I’m no apologist for O’Reilly, who is a giant turd, but he was dumb enough to accidentally drop the word “lynching” into his comments on the story. Even the linguistically dense George Bush knows you can’t do that. It’s obvious to me that it was just as thoughtless as Michelle Obama’s comment, and not intended to be as evil or provocative as the other side characterized it. He clearly didn’t realize what he was saying. But he deserves the flak for participating in the very same kind of shabby, groundless attack, for playing the alchemist against a target like Michelle Obama.

O’Reilly is part of the problem. Going after a candidate’s wife is low. At least this once he was repaid in kind.

Why Google is in the Photoshop Business

February 22, 2008

Slashdot ran an article Wednesday on how Google has hired a team of developers to improve the performance of Photoshop running under emulation on Linux.

Why does Google care how well an Adobe product runs on Linux? Because Google knows that a Google OS, which would almost certainly be based on Linux, must automatically, immediately run the entire panoply of everyday consumer software. Development progress on native, GUI-intensive consumer software for Linux has been—at best—slow but steady. Google knows that the existing library of Linux software certainly won’t cut it for a broad OS release to a non-geek consumer public. And they know that current Windows emulation software (“Wine”… published by the same firm Google hired to do the Photoshop gig) isn’t ready for prime time.

Why Photoshop? Because it’s a processor- and GUI-intensive consumer title which, if not the key individual title needed for a potential Google OS, will certainly cover a lot of ground for other titles which could run under the same emulator.

Why didn’t they use MS Word or Excel instead? Two reasons: first, neither of those titles regularly pushes PC processors toward the edge of their abilities. But also: Google doesn’t care whether their OS runs those titles, because their OS distribution will come equipped with browsers; browsers with which consumers will be able to find their way online, to Google Docs. The whole purpose of the Google OS may be to drive traffic to their online applications.

Is there a Microsoft-like anti-trust argument against Google owning both the OS and the consumer software that runs on it? It’s a weird situation because Google will only be distributing browsers, and those browsers won’t be Google’s; they’ll be Firefox (or whatever). Google’s apps won’t be distributed with the OS. The biggest anti-trust argument that can be made against Google in this scenario (and it’s a valid argument, in my opinion) is that the Firefox which Google will distribute with its OS will be completely tricked-out for easy access to Google apps… plenty of links, shortcuts, maybe even a toolbar. With a much-superior, tricked-out version of Wine available underneath to run other high-end apps (e.g. Photoshop), they’ll have a pretty compelling alternative to Microsoft’s flopped Vista and outdated XP.

When the Clintons Should Quit

February 20, 2008

Especially if she continues her negative campaigning strategy, Hillary should quit if she doesn’t at least win a majority of delegates on March 4. After that date, if she doesn’t begin to turn the delegate count in her favor, her entire campaign must boil down to extreme negativity or hoping to cause a major crisis in the Obama campaign via synthesis of scandal (e.g. the so-called plagiarism issue).

Just winning Texas cannot be considered a basis for continuing her campaign. It must be an actual majority of delegates from all the primaries that day. If she can’t muster that success, she must stand aside. Her only other option at that point would be to go full, ugly negative. And since the McCain campaign’s attacks—already underway—seem identical to the Clinton campaign’s attacks, a continuing Hillary campaign equates to a bolstering of the McCain campaign’s war chest. It would be, effectively, a several-million-dollar donation to the opposing party’s campaign.

So on March 4 we should either see Hillary win a majority of delegates, or we should see a genuine concession speech, with an announcement that she’s dropping out, and an immediate endorsement of Obama.

Clintons’ Latest Attempt to Steal the Nomination

February 19, 2008

It’s official; to steal the nomination from Obama, the Clintons now have strategies for siphoning delegates of all three types: pledged delegates, super-delegates, and the non-delegates from Florida and Michigan.

The Clintons would love to controvert the will of the electorate by exercising their far-reaching influence within the party and convincing as many superdelegates as possible to nominate her rather than Obama. This is neither the purpose nor the spirit of superdelegate votes.

The superdelegates are there to confirm the will of the electorate, except in the extraordinary case that the popularly selected nominee should prove completely unviable; for example, if a huge scandal came to light subsequent to a majority of primary- and caucus-goers having made their selections.

Superdelegates are not representative of population or geography. They are chosen purely by virtue of their influence within the party. Thus, the argument cannot and should not be made that, for example, Kerry and Ted Kennedy should vote Clinton simply because their constituents did. How many superdelegate votes, per capita, are controlled by officials who were elected by voters in Massachusetts and New York? You can bet it’ll be disproportionately high. Why should Democrats in these states have more influence in choosing a nominee than those in, say, Idaho? And what about superdelegates who aren’t now representative of any electorate? Gore, for example… to whom should he be bound?

Unseated Delegates
The Clinton campaign’s attempt to get the DNC to seat the delegates of rogue states Florida and Michigan is a bit ugly. When the campaign season began, the rules were clear to everyone, candidates and state DNC chapters alike: if those states hold their primaries on those days, the delegates will not be seated. And now, after securing a pledge not to campaign in Florida, and happening to win in the meaningless Florida primary, the Clintons are trying to get those delegates seated. This time, instead of fighting against the intent and spirit of the system, her campaign is trying to change the rules altogether.

Pledged Delegates
And completing the trifecta, I refer you to an article from today: Clinton targets pledged delegates. Really? Going after the pledged delegates?

After the Potomac/Chesapeake primaries of last Tuesday I expected the Clinton campaign to go ugly. I expected Bill to be let out of his cage and some negative ads to start appearing. It’s to be expected. This is a fight, after all, and she owes it to herself and her supporters to do everything she can to get the nomination. But I had hoped “everything” would fall short of concerted, coordinated attempts to pervert the rules and distort the intended spirit of the DNC’s system. It’s evidence that in practice, if not in policy, Hillary’s tenure in office may not represent a dramatic shift away from the practices of the current administration. And that is what we need now, more than ever—radical change.

Why The U.S. Health Care System Needs Reform

February 19, 2008

…well, one of the reasons it needs reform.

There was an editorial in today’s NYTimes. It’s about how the numbers which insurers use to calculate “reasonable and customary” rates for health care services are provided by a company that’s wholly owned by UnitedHealth Group. How convenient for them. This is why your 80% coverage of an out-of-network medical service almost universally covers less than 80% of what you spent.

An investigation by the NY State Attorney General’s office implies that major health care companies are rigging the system to shortchange beneficiaries… both patients and doctors.

As I was readying my outrage at this, it struck me that I didn’t really know who was responsible for overseeing the health care companies. What’s the FDA or FAA for health care? Of course I thought first of the American Medical Association, but realized immediately that they’re a private non-profit, not a federal agency. Who has oversight for our nation’s citizens’ health? I dug around a bit on the internet and found the Joint Commission on Accreditation of Healthcare Organizations, but hell… who are they? We have a National Transportation Safety Board but no federal organization to oversee the interests of the citizenry in the face of “Big Healthcare”? Not right.

Eras of Web Evolution – Legible Table

February 16, 2008

Here’s a more legible version of the table from my previous post comparing the “Dot Com Era” with “Web 2.0” with “Semantic Web”.

| | Dot-Com Era | Web 2.0 | Semantic Web |
| Sometimes Called | Web 1.0 | Bubble 2.0 | Web 3.0 |
| Rough Timeframe | Mid-90’s to 2000/2001 | 2003–present | ? |
| Quick Academic Description | Human-readable static web pages, linked. | Human-readable dynamic web pages; flexible interface; social interaction; centralized “mashups” via proprietary APIs | Machine-readable content in standard XML formats; multiple possible interfaces; data-level mashups at the client via standard data |
| Characteristic Use Case | One-to-many website; directory. Think Yahoo | Many-to-many; social network/sharing | All web data/content is interoperable and available to all ’net-connected apps. Think RSS |
| Preferred Business Model | Banner ads | Google AdSense; subscription? | Micropayments? |
| Characteristic Applications | Web mail; directories; online shopping | Social network; wiki | RSS feed-reader; meta search; mashups? |
| Characteristic Properties | Hotmail; Ebay; Yahoo; Geocities | Google; Wikipedia; YouTube; Flickr; Facebook | None yet |
| Interface & Display Technologies | Links; nested tables; HTML | AJAX; CSS; Flash; RSS | RDF; microformats; standardized XML |
| Design Aesthetic | The more the merrier; flashing text; Times; black background; crowded | Less is more; pastels; sans-serif; white background | ? |
| Acronyms and Buzzwords | URL; Web | AJAX; Blog; API; CSS | Metadata; RDF; “Semantic Web” |
| Preferred Browser | Netscape; IE | Firefox; Safari | Type-specific? Browsers not needed? |
| Users Communicate Via | Email; chat | IM; in-network message | ? |
| Self-Expression Via | Personal web page | Blog; social-net profile | Distributed microformats? |
| Gets It | Ebay; Geocities; Netscape | Craigslist; Wikipedia; Digg; Google | ? |
| Doesn’t Get It | AOL | Microsoft; Netscape | ? |
| Vanguard of Next Era | Suggestions? | RSS; Google Maps API | ? |
| Incumbents | Yahoo; Amazon | Google; Wikipedia | ? |
| Unfulfilled Promise | Instant, perfect information | Online apps | Semantic Web itself?? |
| Poster Children | Suggestions? | Kevin Rose; Mark Zuckerberg | None yet |
| Founders’ Exit Strategy | IPO | M&A with/by Web giant | ? |
| Terminus | Dot-com crash; “dot-com bubble”; Sept. 11? | TBD… current recession? Smooth transition to the next big thing? | ? |

Comments/suggestions are welcome. Badly needed, in some cases!

Update: I’ve created a wiki to use for improving & adding to this chart. Visit!

A Semantic Web Overview for the Web-Literate

February 16, 2008

The “What”

Like the Worldwide Web, the Semantic Web is a model for how we use the Internet to share and consume information & services. It isn’t a single project or company or technology. Where our current Worldwide Web is all about humans reading text, the Semantic Web is about software reading data. The buzz-phrase “machine-readable” would cause much less confusion if instead it were “software-readable”… which is more accurate anyway.

For example, if you’re looking at the “About” page on any given website, you the user can personally read and interpret the contact information on that page, but to your browser, and your PC, that contact information is just a string of text, no different from the strings of text on the “Privacy Policy” page. Nothing about its HTML formatting identifies it as information about a person, with an address and phone number and email address. So your browser, or any other software reading that page, can treat it as nothing more than a blob of text.

A Semantic Web approach would be to format contact information in a standard, XML-based “microformat” specifically designed to contain contact information. The stylesheet for the page instructs your browser on how to format this data for a web-browsing experience, but the content itself would also be available—and legible—to any other application which knows the “Contact” microformat. You could point your desktop address book application to the URL and let it scan the page for valid contact information. The program could find the contacts, ignoring everything else, and offer to update your address book by adding each of the contacts it found.
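To make the contrast concrete, here’s a minimal sketch of what that address book scan might look like. The class names follow the real hCard microformat convention (“vcard”, “fn”, “tel”, “email”), but the page content, the person, and the addresses below are all invented for illustration:

```python
# Sketch: extracting contacts from hCard-style microformat markup,
# using only Python's standard library.
from html.parser import HTMLParser

PAGE = """
<div class="vcard">
  <span class="fn">Jane Doe</span>
  <span class="tel">555-0100</span>
  <a class="email" href="mailto:jane@example.com">jane@example.com</a>
</div>
<p>Privacy policy text the address book can safely ignore.</p>
"""

class HCardParser(HTMLParser):
    """Collects the text of elements whose class is fn, tel, or email."""
    FIELDS = {"fn", "tel", "email"}

    def __init__(self):
        super().__init__()
        self.contacts = []   # one dict per vcard block found
        self._field = None   # field currently being read, if any

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if "vcard" in classes:
            self.contacts.append({})
        for c in classes:
            if c in self.FIELDS and self.contacts:
                self._field = c

    def handle_data(self, data):
        if self._field and data.strip():
            self.contacts[-1][self._field] = data.strip()
            self._field = None

parser = HCardParser()
parser.feed(PAGE)
print(parser.contacts)
# [{'fn': 'Jane Doe', 'tel': '555-0100', 'email': 'jane@example.com'}]
```

A real consumer application would fetch the page over HTTP and handle nested or repeated fields, but the point stands: standard class names let the software find the contact while ignoring everything else on the page.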

The “contact” example is popular because it is accessible, but other useful formats exist, and more will emerge.

The “Who”

The Semantic Web isn’t a single project being conducted by a single entity (neither was the Worldwide Web). There are a few pioneers who espouse and employ Semantic Web techniques [Tim Berners-Lee], and there are a handful of companies/projects which could be considered “Semantic Web plays” [Examples to follow].

The “When”

We’ve already witnessed the first major success of the Semantic Web movement: RSS.

RSS is an open standard format for syndicating news stories; multiple applications are able to read, interpret, and act on any RSS document on the Web. RSS newsreaders can be web-based or client-based, and applications can use any piece of any RSS document to accomplish their purposes, which may well extend beyond simply displaying it for users’ consumption.
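As a sketch of what “machine-readable” means in practice, here’s how any RSS-aware program can pull stories out of a feed with nothing but a standard XML parser. The feed below is a made-up, minimal RSS 2.0 document; no knowledge of the publisher is needed:

```python
# Sketch: reading an RSS 2.0 feed with Python's standard XML parser.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>The Doom of Disruptive Change</title>
      <link>http://example.com/doom</link>
    </item>
    <item>
      <title>Political Alchemy</title>
      <link>http://example.com/alchemy</link>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(FEED)
# Every <item> has the same shape, so extraction is one comprehension:
stories = [(item.findtext("title"), item.findtext("link"))
           for item in root.iter("item")]
print(stories)
# [('The Doom of Disruptive Change', 'http://example.com/doom'),
#  ('Political Alchemy', 'http://example.com/alchemy')]
```

A newsreader would display these; another program might archive them, translate them, or mash them up with other feeds. The data, not the display, is the contract.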

The “How”

We’ve already begun using the display technology which will be necessary to integrate the web of today with the Semantic Web: Cascading Style Sheets, which decouple web data from its display instructions.

In “original” HTML, the content itself was encapsulated within the markup information that described how the content should be displayed. For example, on a page listing a number of products, all the product names might be contained within table cells, bolded, and slightly larger than the other text on the page. The browser didn’t need to know that it was the name of the product, it only needed to know how to display it, because only a human user would be reading it.

Today, instead of surrounding content with display cues, we describe it within our well-formatted XHTML content. We might have an internal structure for “product”, which would have properties like “name”, “description”, and “price”. Then we apply stylesheets which tell browsers to treat all product names in one way, prices in another way, etc. It’s more convenient for developers because all product formatting can be changed in one place, with one change, rather than having to be updated in multiple places.
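A quick sketch of why this matters to software and not just to designers: once content carries semantic class names rather than raw display tags, ordinary XML tools can read it. The XHTML fragment and the class names (“product”, “name”, “price”) here are hypothetical, one site’s own convention rather than a standard microformat:

```python
# Sketch: class-structured XHTML is readable with a plain XML parser.
import xml.etree.ElementTree as ET

FRAGMENT = """
<ul>
  <li class="product"><span class="name">Widget</span>
      <span class="price">9.99</span></li>
  <li class="product"><span class="name">Gadget</span>
      <span class="price">24.50</span></li>
</ul>"""

root = ET.fromstring(FRAGMENT)
# Collect {name: price} from every element marked as a product.
products = {
    li.find("span[@class='name']").text:
        float(li.find("span[@class='price']").text)
    for li in root.findall("li[@class='product']")
}
print(products)  # {'Widget': 9.99, 'Gadget': 24.5}
```

A browser sees the same markup and merely styles it; a price-comparison script sees structured data. That dual readability is the stepping stone toward the Semantic Web.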

This recent modernization has been more about design control and simplicity/organization of code than about semantics, but it does get us closer to the Semantic Web. In the Semantic Web model, rather than having proprietary formats for describing our content (each site with its own structure for “product” data), we would apply standard formats (“microformats”). The stylesheet-based display instructions would be moved to a separate document. This is often done today, but in many cases at least some (or maybe all) stylesheet code is contained within the HTML document (though not usually interspersed with the content, as in the old days).

So today, in terms of readiness, we’re about half-way between the original Web and the Semantic Web.

The “Why”

I’ll save the Why for another blog post. The promise and potential of the Semantic Web, like our original WWW, is enormous. And like the original Worldwide Web, for good and bad, the reality will diverge sharply and dramatically from the academic vision. Much fodder for further commentary and discussion!

Phases of Web/Internet Evolution

February 15, 2008

For the last 5+ years I’ve maintained an ongoing personal and professional interest in the so-called “Semantic Web”. Many have identified the Semantic Web as the “next big thing”, and it seems to perpetually hang just over the horizon, always 3-5 years down the road.

Recent chatter correlating the Semantic Web movement with “Web 3.0”, especially in the Buzz Out Loud podcasts from CNet, has inspired me to develop a basic comparison of the various generally-accepted phases of the Web/Internet’s evolution.

Future posts will contain further discussion of the Semantic Web, which (if anything) would likely be a component of the “next big thing” in our industry, rather than encompass it. Using “Semantic Web” and “Web 3.0” interchangeably would be similar to using “Social Network” or “blogging” as a synonym for “Web 2.0”.

In any case, along with a request for comments and/or additions, here is my comparison chart for “Dot-Com Boom”, “Web 2.0”, and “Semantic Web”. The chart excludes the “pre-Web/early-Web” days before widespread distribution and use of web browsers.

[Click for PDF]

These phases seem to be delineated by the US business cycle, but as there have only been two phases so far, we shouldn’t assume that Web 2.0 will suddenly end when the current/looming recession begins, nor that there will be another renaissance when the cycle turns upward again.

As for the Semantic Web… whichever new phase of Internet evolution it may end up belonging to (or not!), it will very likely only vaguely resemble the vision Tim Berners-Lee has outlined for it… just as the first dot-com era only vaguely resembled his vision for the World Wide Web.

Growth and the Balance of Skill vs. Taste

February 8, 2008

How do great writers, great musicians, great artists become great? Let me suggest, from observation and personal experience, that for talent to bloom there must exist a delicate balance between skill and taste. Specifically, development of good taste must not advance faster than the skill of the artist.

We’ve all met the promising writer, the nineteen-year-old (or twenty-nine-year-old!) who writes prolifically and with imagination, but whose taste has evolved beyond her years. She is discouraged by her own work because she has the critical ability to compare it to that of her literary-superstar idols. In discouragement, she often stops writing, though she should continue.

Then there’s the opposite problem… one a little closer to home for me! The guitarist who has pretty good skill for a self-taught kid but whose taste never matures beyond heavy metal (or rockabilly, or whatever crap he grew up with). He usually continues playing, though he should stop… or better yet, mature.

The optimal scenario balances these two forces, and has a young, promising talent whose artistic taste lies always just over the horizon of his or her ability. His taste in music must be such that, as he rocks out to the barre chords of Iron Man or Smoke on the Water, he must be capable of thinking to himself “man, I sound good!” Until he wakes up 12 months later to realize that Pink Floyd is the bomb, and that he must learn to play like David Gilmour before he can get back to that “man, I sound good!” that felt so wonderful. If his taste never quite evolves from there, he becomes static, not to mention annoying.

On the other hand, if a kid starts out loving Django, he’ll be so discouraged at his slow progress that he’ll likely drop the instrument. It’s no accident that kids start really taking to their instruments between the ages of 13 and 17… it’s right in the sweet spot of poor taste (or rather, lower standards), emerging manual dexterity, and brain circuitry capable of quickly climbing steep learning curves.

Everything always seems to come down to a question of balance.