Categories
Internet My Opinion

A case for hosting your photos in the cloud (Flickr, Picasa, etc)

Pictures on the web, much like grown children, live better lives away from home. As a bonus, they don’t eat all your food or use up your hot water. And if they get sick, they won’t infect everyone else. But most importantly, when you move house, move to a different country, get foreclosed on, lose your house to a fire or pass away, they continue their existence and keep interacting with others.

Pictures on the web should be autonomous units that can act and be acted on in their own right independently of what you do.

Though this sounds scary, it is a good principle on which to base your publishing of images on the web.

Categories
My Opinion

Last Friday … In Norway – my op-ed piece in the Vancouver Sun

“Last Friday a terrorist tried to kill my friends. With a bomb placed outside their workplace he voiced his political dissent in the most cowardly of ways: Through violence. In the hours that followed I reached out over the Internet, through email, Facebook and Twitter, to make sure they were OK. And they were. By random chance, the luck of the draw, by the tiniest of margins. One was on holiday. Another had gone home early. The third met a mutual friend in front of the building at 3:16 p.m., only 10 minutes before the bomb went off. They likely walked right past the terrorist. In an email to me later, one of them writes “It’s strange to think how close I was to waiting a bit longer.” The bomb went off as they turned the corner a block away, killing eight and wounding many more. The time was 3:26 p.m.”

Read the whole piece over at the Vancouver Sun.

Related: My reaction on the day of the attacks: Together is our only option and my ongoing Norway Q&A.

Categories
My Opinion

Your Questions Answered: Q&A About Norway

Watching online and international news coverage since the terrorist attacks in Norway on July 22nd it has become abundantly clear to me that people outside of Norway are having a hard time understanding our culture, our history and, most importantly right now, our reaction to what has taken place. I’m not particularly surprised by this – for outsiders, and especially North Americans, Norway must seem like a bizarre country where everything is turned on its head. And in many ways it is. Our culture, our politics and our attitudes towards social and political issues are fundamentally different from those of our fellow people on the other side of the Atlantic.

There is also a serious problem with translation. Norwegian is a notoriously complicated language with many dialects and two official and very different written languages. It is also a language that relies heavily on reference. Many words and sentences taken out of context lose their meaning completely and auto translation solutions like Google Translate often have a hard time making heads or tails of them. In addition there is a cultural translation barrier. Many words, when translated, turn into words with a different reference. And when that happens meaning is lost.

In an effort to help non-Norwegians understand what is happening over there in my home country I will answer questions and find reference materials and links for anyone interested right here on this site. If you have a question, if you are looking for information or if you are confused about something, leave a comment below and I will make every effort to answer you. I’ll post all the questions and answers in this post as a running log, so come back as it gets updated. I’ll also do the same for all questions asked through social media including Twitter (@mor10), Facebook and Google+ (gplus.to/morten). You can also send me a question directly through the contact form.

*Updated from the top*

Q: Do you think he wrote his “manifesto” himself? Is Roid Rage being talked about in the Norwegian press at all?

A: From reports it sounds like the manifesto is a patchwork of different content. In the preface he also says something to that effect. It is heavily littered with quotations from published authors and bloggers, some of it cut and paste, some in edited format. There is also a section, referred to as a “diary” that is clearly his own work. The last entry there is on July 22nd a few hours before the attacks. Some experts have said it is impossible that he could have written it all himself but I think it is the work of one person. As for roid rage there hasn’t been too much talk about it. Though he did take steroids he doesn’t look big enough to have gotten to that stage IMHO.

Q: What kind of camp took place on Utøya exactly?

A: The AUF summer camp is not a camp in the sense that most North Americans think of camps. It is a gathering of the regional members of the AUF (youth branch of the Norwegian Labour Party) to discuss and formulate policies. It is not as some have suggested an indoctrination camp run by the Labour Party to fill young minds with political propaganda. The Utøya camp is run by and for the members of the AUF and the AUF actually owns the island. A point of interest: Many of the policies and opinions held by the AUF and its members do not correspond to those of the parent Labour Party. There are often quarrels between the two and the AUF in general tends to be more radical and left wing than the Labour Party.

Questions by Cord Jefferson in preparation for his excellent article Why the Norway Shooter May End Up Serving a Life Sentence:

Q: As I understand it, Norwegian law says that nobody, regardless of crime, will be sentenced to longer than 21 years in prison

A: “Life in prison” in Norway is 21 years with a possibility of parole after half the sentence is served. However there is a discussion taking place about trying the terrorist for Crimes Against Humanity under paragraph 102, for which the maximum sentence is 30 years in jail. If he receives this maximum sentence he will be released after 30 years unless something changes.

An alternative is to send him into what is called “forvaring” or “containment”. This has a maximum length of 10 years but after this the courts can extend the containment in 5 year increments indefinitely.

The one thing that is a certainty is that there will be no reintroduction of the death penalty. Norwegians don’t consider the death penalty an actual penalty.

Q: Are you comfortable with the idea the perpetrator might only receive 21 years, or would you like to see something more severe? What is justice to you?

A: I am a strong believer in the Norwegian legal and penal system. The system focuses on rehabilitation and restoration, not just punishment and retaliation. Many a murderer has served his or her sentence and is now free to roam and contribute to society. And in all but the most unusual cases these people get on with their lives and are not a continuing problem. In an extreme case like this however I don’t see a future in which the legal system will let the accused out. I imagine they will find some way of keeping him locked up indefinitely under the current legal statute.

Am I satisfied with this? Assuming he is held until the end of his life at age 80, yes. This guy should be made an example of. He should sit in jail, preferably in solitude, and serve as proof that even though he committed the worst crime against the country since World War II, and even though he treated his victims inhumanely, we, the society, will still treat him as a human being. He should be held without visitation rights, without access to news, letters or anything else from the outside. He should be left to spend the next 40 years contemplating the fact that his actions didn’t lead to the outcome he wanted.

I think in all of this the key is that last sentence. We, as a society, have to make sure the acts of this man do not produce the results he was looking for. And to do that we need to treat him as the cowardly criminal he is: with humanity. I pity him for his lack of understanding of the human condition.

Categories
My Opinion

Together is our only option

[Image: Sun rising over smoky water in Norway]

When 600 young minds gathered on an idyllic island to form policies and opinions about the future, their own and that of their country, the last thing on their minds was that that future would hold a rain of bullets, devastation, and death. In a few short hours in the late afternoon of a lazy summer Friday their world, and the world as a whole, changed forever. Lives were lost. Innocence was lost. The very fabric of reality seemed to tear, showing a glimpse of a harder, more brutal existence. One in which we fear our neighbours for what they might do to us. One where communities are built to protect us from “the other”. One in which force and violence are the only solution. The world of Hobbes, of Nietzsche, of the individual, alone in the masses.

Only the tear was permanent. Burned into the facades of buildings by a massive explosion. Ripped into the bodies of the next generation by bullets. Forever imprinted on our retinas as we watched in horrified disbelief.

Is this the world we live in?
Can this really happen?
This cannot happen.
This will not happen.

While the families of the countless victims of the worst terrorist attack in the history of Norway try to cope with their loss it is up to us to take stock. What is this world we live in where people kill? What have we become that makes us capable of such atrocities? What has our society become that the massacre of human lives seems just in the pursuit of an ideological goal?

We have lost our way. Not from God or Allah or Marx or Rand. We have lost our way from humanity. We have forgotten who we are and what we can do. We, the people, the only people, have the capacity for greatness. Yet we resort to petty quarrels over ideology, territory and possession. We have become greedy. Self-righteous. Self-absorbed. We have lost our way.

I am drawing a line in the sand. And I hope you will stand with me. This ends now.

From this day forth I will do my part to make things better, to make us better. I will speak up against violence. I will speak up against oppression. I will speak up against injustice. I will speak up against indifference. And I will speak up against those who use division and antagonism to pit one against the other, who use words like “us” and “them”, who draw the world in black and white. And I will help them see that division makes us half of a whole. That we are all in this together. No situation has a single cause and no cause has a single effect. In all our actions, no matter how small, we play our part. And if we all make that part a positive one, one without prejudice, ideology or personal gain, we will all be better for it.

This is not a political manifesto, not a religious doctrine, not a moral dogma. This is humanity, pure and simple: Race, colour or creed we are all sisters and brothers, born of our mothers. We are in this together and together we must make it work.

Together is our only option.


NB: A memorial gathering is planned for anyone who wants to come together over this event, at the Scandinavian Community Centre in Burnaby on Sunday, July 24th at 12:30pm.

Scandinavian Community Centre
6540 Thomas St
Burnaby, BC
V5B 4P9

Categories
Internet My Opinion

Open Letter to the CRTC

To the Secretary General, CRTC

I am confused about the CRTC’s role in Canadian society. You are said to be a watchdog, but to me it seems the only parties you are watching over are the four big telecommunication companies in Canada and their monopoly on everything from television transmission to internet services and mobile networks. This impression has been with me for a long time, but recent decisions on Usage Based Billing and unlocking of cell phones for a price have made me seriously question whether the CRTC is in place to ensure fairness or whether it is actually just a government-appointed body that protects a monopoly.

Usage Based Billing is not fair for anyone

The debate over Usage Based Billing has been limited to whether or not the big telecoms should be allowed to impose billing practices on their 3rd party resellers. The arguments against this practice largely focus on two points:

1. Limiting bandwidth to users prevents them from using new, more data-heavy applications and stifles innovation.
2. The argument that heavy users should pay for their keep makes little sense seeing as the difference in cost to the supplier of transmitting 1GB vs 100GB is minimal. The cost imposed is grossly exaggerated.

First of all, these arguments apply just as well to the main telecoms as to the resellers, so if the ruling is overturned (as it should be) it calls for a revisiting of the regulations regarding the main telecoms and their capping of services.

More important, however, is an issue not addressed at all: that the big telecoms have a vested interest in capping their services, not to preserve bandwidth but to block out competition and force the public to use only services provided by the big telecoms.

The simplest example is Netflix, but it is far from the only one. With caps on internet traffic users will be hard pressed to use streaming audio, video and imaging services without having to pay huge overages. This forces them to use only services provided by the big telecoms.

Thus it can be argued that the capping of internet services by the big telecoms is actually a move against competitors to push them out of the market, and an unfair one at that because these same telecoms have a monopoly, imposed by the CRTC, on bandwidth in Canada.

Such a policy enacted by a company in any other industry would be considered questionable, and it reeks of activity normally reserved for criminal cartels.

Capping of internet services is bad for communication, bad for investment, bad for the industry and bad for consumers. The only party that benefits from it is the big telecoms. If they are allowed to continue this practice, the CRTC needs to break the monopoly and allow other actors in to create a fair market.

I work in the web industry and we are in the process of developing an application that requires a lot of bandwidth from the users. It’s a free service that will help them get more out of their photos online. With internet caps these types of services are doomed to failure, not because they are too bandwidth heavy but because the big telecoms and the governing bodies that mandate them are not thinking forward but trying to anchor us firmly in the past.

Unlocking of Cell Phones: If I own it I should be able to use it

Yesterday it was announced that the CRTC will require the big telecoms to allow unlocking of all fully paid cell phones so that users can use the network of their choice. This is a practice that has been in place in most other western countries for over 10 years and is only fair. After all, if you own a product outright you should be allowed to use it in any way you want.

The problem is that the CRTC is letting the telecoms charge a fee for unlocking the phones. Reportedly Telus will be charging $50 for the unlocking of a phone. This is tantamount to a ransom and is unacceptable.

When a consumer purchases a full price cell phone or buys out their contract, they are paying full price plus a markup on the cell phone just like they would if they bought a vacuum cleaner, an MP3 player or a car. It is only fair to assume, then, that since the company selling the cell phone has no remaining vested interest in it and is in fact turning a profit, the consumer should be able to use the cell phone in any way they see fit. Until now this has been impossible because the telecoms have asked the cell phone manufacturers to lock the phones so they can only be used on their networks. The lock is a simple software key and it can easily be removed with the right code, but the code has so far been hard to obtain.

Now with this new rule in place, the telecoms have to unlock the phones upon request, but they are allowed to charge for that unlocking. And they are charging $50 which is $30 more than what the same unlocking would cost on eBay.

The problem here is that a) the locking is done at the request of the telecom, b) the unlock procedure costs the telecom nothing and c) when fully paid the phone is the sole property of the consumer and should be fully functional.

Forcing the telecoms to permit unlocking is the only correct thing to do here. Allowing them to charge for this service on the other hand is unacceptable. Just like any car owner is allowed to buy gasoline from the vendor of their choice, so should a cell phone owner be allowed to buy cell phone services from a provider of their choice. This is basic free market theory. What we have at present is closer to cartel or even mafia practices.

What is your role and who protects my consumer rights?

I am left wondering what the role of the CRTC is. Based on these recent decisions and others before them I find it hard to imagine it is protecting anyone but the very telecommunication companies the body is set up to be a watchdog over. If its role is to protect consumers, the body has utterly failed and it is time to revisit its mandate.

But more importantly, who is protecting my rights as a consumer? I am from Norway, a country where consumer rights are valued. What I see happening in the telecom industry in Canada could never happen in my home country because it is unfair and puts the consumer at a permanent disadvantage. To put it plainly, if not the CRTC then who is protecting Canadians from being screwed over?

I would very much like to hear your thoughts on this because as of right now I see neither rhyme nor reason in the decisions made by the CRTC.

Yours truly,

Morten Rand-Hendriksen

Categories
Internet My Opinion News

Capping the Net – You Don’t Know What You’ve Got ‘Till It’s Gone

If you don’t want to read all my ramblings, here is what I want you to do to help protect and preserve the free and clear open web:

  1. Go to http://stopthemeter.ca and sign the petition
  2. Send all your friends, family, frenemies, school acquaintances and your neighbour’s cat to the same site and get them to sign the petition (well, maybe not the cat)
  3. Share the link on Facebook, Twitter and everywhere else you think someone may see it
  4. Go to OpenMedia.ca and educate yourself on this very important issue.
  5. Contact your local and government representatives and demand that the CRTC start protecting the rights of consumers, not just the rights of corporations
  6. Call your Internet Service Provider and tell them point blank you are not happy with what they are doing and that you want your internet to remain free, clear and uncapped
  7. Tell your friends about this issue and get them involved

And here’s why:

You may have heard some of your geeky friends talk about the major internet service providers in Canada pushing for new legislation to allow them to cap internet use and demand payment for “overages”. And you may have heard the CRTC – the decision making body put in place to ensure fair trade and practice in the communications space – has made some decisions in this regard that in no way favour consumers. What you may not know is that this move is the first step in what could become a stifling of the internet, a blockage of services and you ending up with a web that just isn’t what it used to be.

Why it matters to you

The crux of the situation is this: Up until the last few weeks your cable internet connection has been open, meaning you pay the same whether you download 5 KB or 300 GB per month. The Internet Service Providers (Bell, Rogers, Telus and Shaw) don’t like this. They want to charge you a base fee for a capped service (say 20GB per month) and then charge you overages (say $1 per GB) when you exceed that cap. That may sound fair but in reality it’s not. And what’s worse, it may just be the first step in an attempt to stifle the web and force you to use paid services rather than the free ones that are currently available.

Although it might not seem like such a big deal right now, capping the web will become a very big deal very soon. New services like Netflix and other streaming media are popping up everywhere, and with them come new ways of using the web. No longer can you only surf web sites. You can download or stream movies and TV when you want where you want, you can use Skype to have video conversations with multiple people at the same time, you can stream music from a myriad of services. And as quality and compression improve these services put more and more load on your connection. As a result, whereas right now you may only use 5GB per month and get your movies at the local video rental shop, a year from now you may use 60GB per month and watch your favourite TV shows and movies from a streaming service like Netflix, XBOX Live or iTunes. And if you do, your Internet Service Provider will stuff its big hands deep into your pockets and pull out all your cash.

Here’s Strombo explaining it:

But isn’t that fair? Shouldn’t we pay for what we use?

This may sound fair, but in reality it’s not. As Netflix points out, the actual cost of a GB of data transfer over wired lines is about 1 cent, not $1 like they want to charge. And there is no real reason to cap downloads because the capacity is there. This is just a good old fashioned money grab. But there may also be a more sinister reason behind it, and it relates to the Net Neutrality debate that has been raging in the US.

The Internet Service Providers have a not-so-hidden agenda – to force you to keep using their services. It’s simple really: All the major Canadian ISPs also offer TV and video-on-demand services through their cable boxes. But now companies like Netflix are encroaching on this market. Why watch a pay-per-view movie on Shaw for $3.99 when you can watch all the movies you want on Netflix for $8.99 per month? The trick here is to make Netflix unavailable, or too expensive, so that people are forced to stick with the old content providers. It’s as simple as that.

Net Neutrality at risk

But there’s more to it than simply trying to force people to stick with their old cable plan. This move may be the first step in an all out attack on Net Neutrality. And that’s worrisome to say the least. Net Neutrality simply means that you pay the same price regardless of what type of content you download. So reading your email, checking updates on Facebook, downloading documents from work and watching videos on YouTube and Netflix are all bundled into your internet package. In short you pay for the use of the web, not its services. In the world the ISPs want, you pay based on what services you use. So if you want to use just email and Facebook you pay one fee, but if you want to watch streaming video on YouTube or use your internet connection for gaming you have to pay an extra fee. And when it comes to music, TV and video the many services out there are simply blocked and you are forced to use the services authorized by the cable providers.

Sounds insane, right? Well, it’s exactly what the ISPs in the US tried to do. And it’s exactly what the ISPs here in Canada will try to do if they get the chance. The bottom line is they want to make money, and the free and open internet is preventing them from doing so, so they want to shut it down. Disturbing, right? Well, it gets worse!

(To see a great explanation of Net Neutrality go to www.theopeninter.net)

The CRTC is not here to help you (!?!?)

Last year I reported Shaw Cablesystems to the CRTC for willfully crippling HD broadcasts on their regular cable. My argument was simple: You can get CBC, CTV, Global, CityTV and Omni in HD for free if you attach a clothes hanger to a cable and hang it out your window. But if you have Shaw cable you get a cropped SD version of these same channels and you have to pay for an expensive HD box to get access to the free HD signal. Furthermore this was around the same time the cable companies were trying to force these same over-the-air channels to pay for the privilege of being broadcast on the cable systems. You may remember it as the “Save Local” campaign and it was one ugly piece of corporate greed, willful misinformation and outright lies on both sides.

Anyway, I contacted the CRTC and after a lot of back and forth I got one of their representatives on the phone. What he told me was truly mindboggling: When I asked him why the CRTC was not acting in the best interest of the consumers he told me point blank “That’s not our job.” He went on to tell me, and I’m paraphrasing here, that the job of the CRTC is to ensure that the cable providers follow Canadian law and act in a fair way in the market. In other words that they don’t enter into price gouging and undercutting against each other. “So you’re saying if they all just agree to raise prices to an insane level, stifle service and generally screw over the consumers, the CRTC is OK with that?” I asked. And his reply? “Yes”.

The reality is that unless I was misinformed by this CRTC employee and I’m unaware of some other government entity that has oversight over this, the Canadian consumers are not being protected from price fixing by four companies who are basically allowed to run the show on their own. It’s kind of like the mafia really. And taking this into account things really start to make sense: Why our cell phone services are crappy and more expensive than anywhere else on the planet, why we pay more for cable than our neighbours to the south, why we can’t get Netflix, Zune Marketplace, Hulu and a whole pile of other services in Canada and why we, the consumers, are being screwed over again and again without anyone standing up and saying something about it.

Time for action

Not to be blunt or anything, but this bullshit has got to stop. Canadians are far too polite when it comes to issues like this, and the big corporations take advantage of that compliance. This is one of those cases where unless you stand up, let your voice be heard and tell your elected officials they are screwing things up for everyone, we are all going to pay for it down the road. Unfortunately I’m a mere resident of this country and I have no right to vote so I’m at the mercy of those with the power of citizenship in the matter. So here’s what you should do, right now:

  1. Go to http://stopthemeter.ca and sign the petition
  2. Send all your friends, family, frenemies, school acquaintances and your neighbour’s cat to the same site and get them to sign the petition (well, maybe not the cat)
  3. Share the link on Facebook, Twitter and everywhere else you think someone may see it
  4. Go to OpenMedia.ca and educate yourself on this very important issue.
  5. Contact your local and government representatives and demand that the CRTC start protecting the rights of consumers, not just the rights of corporations
  6. Call your Internet Service Provider and tell them point blank you are not happy with what they are doing and that you want your internet to remain free, clear and uncapped
  7. Tell your friends about this issue and get them involved

We are at a turning point in time. Up until now the internet has been free, clear and uncapped and as a result we have seen a massive emergence of new companies, new services and new ways of communicating, sharing and enjoying content. If the ISPs get their way, those days will soon be over and we’ll be moving backwards. That’s not acceptable. Stand up for your rights and take action!

Categories
My Book My Opinion Publishing

The Future of Book Publishing, Part 2: The Perils of an all-digital world

In this second part of The Future of Book Publishing series (read part one, The 10 Steps From Idea to Printed Book here) let’s take a closer look at the future; more specifically digital book publishing. We are at a crossroads in time right now. Whereas before, book, magazine and newspaper publishing was a secluded realm of large corporations with massive printing facilities and distribution networks, now the internet and its myriad of connected devices have cut a big hole in that impenetrable wall and made it accessible to anyone with the ability to type. And we’re only getting started. The e-reader, in its many manifestations, has begun to make inroads into our homes and our bags and with it the written word suddenly bypasses the entire printing and publishing process that previously took so much time and money. But what does that mean for the future of book publishing, and more importantly democratic access to information?

The problem begins with content control

It may seem like the publishers have been asleep at the wheel where the whole ebook phenomenon is concerned. Nothing could be further from the truth. Publishers have not only been aware of ebooks as an emerging technology; in many cases they have been driving it. In spite of appearances, cutting out the middle man and getting a book from the author to the reader in a couple of weeks rather than a couple of months is something that would benefit the publisher as well. That is, if they could control the content.

The inherent problem with ebooks and digital publishing in general is that the second the work exists in a digital format it is ripe for illegal duplication and distribution. And while music and movies have been fairly easy to duplicate ever since they started appearing on CDs and DVDs, books have, by nature, been well shielded from this problem: Scanning hundreds or even thousands of pages manually is just too much work. Not so with the ebook: Since it is by nature a text document it is very easy to copy and distribute.

To curb this problem before it becomes a problem, publishers, distributors and 3rd parties are all working furiously to come up with the perfect copy protection method. Unfortunately this has led to yet another format war with two main rivals.

ePub vs. Kindle — yet another idiotic format war

You can join the ebook revolution right now by buying your very own e-reader or e-reader app. Just be warned: Whether you choose ePub or Kindle as your preferred technology it may end up like Betamax or HD-DVD. You see, behind the scenes in the ebook universe there is a fierce battle raging — one that is hard to spot on the surface. In the western trenches you have Amazon and its Kindle. In the eastern trenches you have the open ePub format supported by the US Nook (Barnes & Noble), Canadian Kobo (Chapters / Indigo), Sony Reader, North American public libraries and most European book publishers.

Based on the description one would think the Kindle was already drowning in mud. But it isn’t, because Amazon is too big (in North America at least). Amazon’s market share and enormous sales volume mean publishers can’t ignore the Kindle. So even though they may support the ePub format, they will also make a Kindle version of their books to reach the Amazon customers. As a result Amazon has a huge advantage. In truth, if it wasn’t for the growing library of free Public Domain ePub material and the fact that library ebooks can’t be read on the Kindle I don’t think there would be a format war at all — Kindle would already have won.

As it stands North American consumers looking to buy an e-reader currently have to make a choice: Do you want access to Amazon’s seemingly limitless ebooks library and buy exclusively from Amazon or do you want to buy books from another retailer and also have access to Public Domain libraries and ebooks from the library? If you want to go with Amazon, you buy the Kindle. If you want the other option you buy one of the several e-readers on the market and cross your fingers that Amazon won’t kill it. Or you wait. Like with every other format war the only real casualty here is the consumer.

…and then there’s the issue of distribution

The past couple of years have seen the shocking decline of print media. It seems that if trends continue the way they are now, newspapers, magazines and even books printed on paper might be a thing of the past sooner than we expect. It could be attributed to a natural progression; spoken word becomes hand-written scrolls becomes printed paper becomes e-ink; but the forces at play here are much greater and more convoluted. Let’s not dwell on the “why” just now. Instead, let’s look at the “what happens next” part.

The truly great thing about the printed word, and the reason it was so revolutionary, was low cost and easy distribution. You can buy a book for under $10, read it as many times as you like and give it to someone else to read. If the book is lucky it may change hands hundreds of times and be read by all sorts of people. This is the very nature of the book — you can share it and it lasts forever.

But what happens when the book goes digital? Yes, the book – or file — itself will remain cheap, but accessing the book is no longer as easy. To read a printed book all you need is a light source. To read an ebook you require a device on which to display the book and electricity. It’s a whole new level of technological sophistication, and one that is not readily available to the majority of people living on this planet.

It has been said that the internet is the true democratization of information. But it has also cut a big chasm in society between those that have access and those that don’t. And with the ebook that chasm will grow larger.

Is the ebook a threat to the democratization of information?

Right now I can go to a book store, buy a book on any topic I please, put it in an envelope and send it to a friend anywhere in the world. The recipient, even if she lives in a place with no artificial light, no power and no computerized devices of any sort, can read the book and retrieve the information therein. If the book were not available in print but only in a digital format, my friend would never be able to read it.

“But that’s not going to be a problem” you might say. “The publishers will still print books for less technologically advanced regions, and in time the technology will become ubiquitous.” That last part may be true, in 50 — 100 years, but the first part not so much. Consider this: You are a publisher of books. One day you realize you can cut costs by 80% and increase your earnings at the same time by cutting the print department all together and just push everything out digitally. Why on earth would you not do this? That day is coming my friends.

The key question here is who cares about who reads the book? An author always wants her work to reach as many eyes as possible, but for the publisher it’s all about profit. In other words, a publisher may easily argue that if moving to ebooks and scrapping print means a loss in readership it is more or less irrelevant if the bottom line keeps moving up. Of course this will vary depending on the publisher and its mandate, but it’s a fairly obvious conclusion and one that will sound solid for shareholders and investors.

The problem is that the second a book is released in digital format only, the reader base is reduced substantially, not just in numbers but also socio-economically and geographically. So even though it may be good for the bottom line, and it pushes the evolution of the printed word forward, in the process it is leaving a lot of people in the digital dust. In the end it becomes a question for the author: Do I care who reads my book? And if so, do I care whether my book will be available to people who can’t access a digital version?

Ebooks for the wealthy, print-to-order for the rest?

Let’s perform a simple thought experiment here (we philosophers love thought experiments): Let’s assume that 10 years from now all major publishers have abandoned print altogether in favour of ebooks and that smaller publishers are being edged out of the market due to ever increasing printing overhead costs. We are now in a situation where if you don’t have the means to acquire a device that can read an ebook and you are not connected to the internet, you will have a hard time accessing new written materials.

In this imagined world a new type of service would likely emerge: That of licensed print-to-order businesses. You’ll already find the prototype of this industry at universities around the world. There either the university itself or the students have set up Copy Co-ops that reproduce compendiums of out-of-print books and selected articles that have been licensed to them. Without this service much of the required reading materials would be inaccessible to the students either due to availability or price. In this imagined world a larger version of the Copy Co-op would likely emerge from which the non-connected, non-e-reader carrying populace could order and get printed hardcopies of their chosen books.

The questions here are how expensive this will be, whether it will even be allowed by publishers and, just as importantly, what happens with censorship. We already know several countries, including the United States of America, censor the availability and distribution of books that are deemed undesirable, be it for religious, ethical or political reasons (Catcher In The Rye is but one mind-boggling example). In this imagined world such censorship would likely become more prevalent as the Copy Co-ops could be punished by having their licences revoked if they reproduced “undesirable” materials. I shudder at the thought.

Ebooks — status quo

What’s outlined above is speculation on my part. But the questions posed and the scenarios outlined are important aspects of this discussion and shed a different light on it. True, ebooks are revolutionizing the publishing and distribution process and making the written word accessible in new and exciting ways. But they also carry with them serious problems that are being overlooked or brushed under the carpet by publishers and fans alike. It is in times of rapid change that we have to take a step back and look at the wider ramifications of our actions so we can see not only the shiny new future but also what happens in the shadowlands.

10 Steps from Idea to Printed Book

Categories
My Book My Opinion Publishing

The Future of Book Publishing, Part 1: 10 Steps from Idea to Printed Book

As I wrap up the editing of the 3rd edition of my Expression Web book I figure it’s time to weigh in on the raging debate over the future of book publishing. Much has been written on the topic as of late and points both good and bad have been presented. In this three-part series I will present my views on the topic, part one focusing on how a book gets from idea to print and part two looking at distribution models, present and future, and the problems with an all-digital publishing model.

From my mind to your hands in 10 steps – the complex world of book publishing

If you’ve ever waited for a book to be published – the last link in a fictional series, an updated version for a new generation of software, the latest work of your all-time-favourite author – you have surely wondered why it takes so long for books to hit the shelves in your local book store or on Amazon.com. I know I did. This is largely because the world of book publishing is shrouded in mystery – or rather lack of information. To be honest it’s not all that interesting so it’s no wonder the many steps of book publishing are not common knowledge. But understanding how a book gets from the author’s mind to a printed work in your hands will give you not only a new appreciation for the work that goes into publishing a book but also a good foundation for understanding the complexities of the current debate over the future of book publishing and publishing in general.

Any serious author knows that without editors their work is unfinished and unpublishable.

To give you a first-hand look at what it takes to get a book out of the author’s head and onto a printed page I’ll walk you through my own experience in publishing Sams Teach Yourself Microsoft Expression Web in 24 Hours.

Step 1: Author Acquisition (time unknown)

For a publisher to release a book it first needs an author. Self-evident for sure, but nonetheless important. The task of finding an author is usually done by an Acquisitions Editor. There are many ways for the publisher and the author to connect; the publisher can go out looking for an expert on a particular topic (which is what happened in my case); an author can approach the publisher with a book proposal; a literary agent can approach the publisher with a new author either looking for a project or with a project in hand. Once initial contact is made the publisher will do an extensive review and vetting of the author to ensure that a) she is actually an expert and knows what she is talking about and b) she knows how to communicate her knowledge in a good way and how to write good copy. This might mean reading past works, requesting sample work or interviewing the author.

Step 2: Project Approval (1 – 2 weeks)

Once the author has been thoroughly vetted and the Acquisitions Editor is satisfied the author will deliver, the process of actually getting a book project off the ground and a publishing agreement in place can begin. This is a multi-step process with checks and balances built in to ensure that the book proposed actually will make money.

In my case the first step was to fill out a basic form with a description of the proposed book, the topic matter, target audience, projected sales, competing published works and information about myself. This form was then passed to a decision-making body where the Acquisitions Editor presented the book and hoped for a thumbs up.

Step 3: Table Of Contents (TOC) (1 – 3 weeks)

Once the overall outline of the book has been approved, a Table of Contents (TOC) is written, further specifying how the book will be organized and what it will cover. The TOC has chapter titles as well as bullet lists under each chapter describing in detail what will be covered.

The TOC is passed around internally in the publishing company to ensure it complies with their standards and, once approved, passed to other industry experts for questions, comments and suggestions. Each commenter is asked questions like “Does the outline cover the relevant topics?”, “What is the target audience for the proposal?” and “Would you buy or recommend this book?”. Depending on the feedback the TOC might get passed back to the author to be reworked, in which case the process starts over.

Step 4: Publishing Contract (1 – 2 weeks)

Once the TOC has been vetted and approved by all the right people it’s time to start talking contract. The publisher will propose a standard contract containing project scope, milestones, deadlines, estimated publishing date and royalties. This is a rather complicated process, especially for new authors, because milestones, deadlines and publishing dates have to be set and adhered to before the project is even started. Then there is the discussion of what kind of royalties should be paid out, how much of an advance the author wants and whether or not there should be a stipend attached to the project. This all depends on the projected success of the book and how famous and important the author is. And if there is a literary agent involved, the process can get even trickier because the agent will want her say as well.

Step 5: Writing the first draft (1 – 2 months)

With contracts signed and everything in order, the actual writing can start. At this point the author starts working on a very strict deadline. The publisher will expect percentages of the draft delivered at certain times. Depending on the type and length of book the deadline can span from a month to three months for 100%. More importantly there are strict milestones for 25%, 50%, 75% and 100%, usually with a monetary compensation at each point.

While writing, the author will at first feel like she is working in a complete vacuum. As she finishes her chapters and hands them in they are passed on to a series of editors (step 6) and while they are working the feedback is pretty much non-existent. In most cases the author will churn out 50% or more of the draft before the first edits start coming back.

Step 6: First Author Review (AR) (1 month – partially overlaps with step 5)

This is where things get serious. While working on the latter half of the first draft, edits will start coming back from the publisher with comments. There are at least three different editors involved at this point:

  • Tech editor – responsible for making sure everything is correct and all the examples make sense and work properly
  • Development editor – responsible for making sure the content is in accordance with guidelines for the book and or series
  • Language / Copy editor – responsible for making sure the language is publishable (i.e. the person who rewrites every sentence)

Each of these editors will make alterations to the text and leave comments and questions to each other and to the author. Each of these edits, comments and questions must be answered by the author to ensure the consistency of the book and that everything is still understandable. At the same time the author is expected to make her own edits to the text and move things around if need be. This process is extremely complicated because with so many different people editing the same document it can be hard to grasp what the finished text will look like. It is further complicated by the fact that the edits have a very tight deadline that falls within the deadline to finish the rest of the book. So while the author is writing the last part of the book she also has to start going through the first part with a fine toothed comb to make sure everything is correct. She may also have to do rewrites of paragraphs or even whole sections at this point so in effect she will be writing two separate parts of the book at the same time.

The first author review is the time to make major edits and changes to the text. Once the author review is completed the chapters are returned for more editing.

Step 7: Second Author Review (2 – 5 weeks)

3 – 4 weeks after the author review chapters were submitted, the second round of AR kicks in. This time the author receives PDF versions of the reviewed chapters with figures, headings and layouts included. These chapters will already have been passed through the same gauntlet of editors so they are again full of comments, questions and alterations that have to be answered by the author. This time around any edits should be minor as the book is being laid out and major changes will impact all the following pages. Edits here usually consist of font changes, typographical corrections and figure replacements.

Second author review is also where the Index and Front Matter (intro, acknowledgements etc) are introduced and must be edited.

This second author review might overlap with the first author review.

Step 8: Cover and publicity copy approval

In the midst of all this other stuff the author will receive two documents for approval: The cover and the publicity copy. These must be approved by the author as well as the editors, all of whom have to give them the go-ahead.

Step 9: Printing (6 – 8 weeks)

Once all the above steps are completed, the book is considered to be complete and is passed on to printing and distribution. This will usually take 6 – 8 weeks meaning if everything is done and wrapped up by the author in mid-August, the first run of books will hit shelves in mid-October.

When the book is printed the author will receive two shipments: Complimentary copies of the printed books and an unbound copy for future edits. The author now has a chance to make minute changes to the book in preparation for the second round of printing (if there is one) right in the pages of the unbound copy and send it back to the publisher.

Step 10: Digital and Online Publishing (varies)

By the time the book hits shelves it has in reality been done for 6 – 8 weeks. Meanwhile it could be published in digital format, either through an online subscription service like InformIT.com or through a 3rd party distributor like Amazon Kindle. Whether and when this happens is entirely up to the publisher. The digital and online versions of the book are usually identical to the printed version except they are in colour (the book may be printed in black and white only).

Time from inception to the reader’s hands: 6 months or thereabouts

This is of course assuming that the book was started from scratch and that the author took a full 2 months to complete it. Seasoned authors, or authors revising earlier works, tend to take a shorter time which would cut the time down by up to 5 weeks.

The necessity of complexity

Seeing this list, and realizing just how long it takes to get a book out there for people to read, it’s easy to think this process is unnecessarily complex. And judging from the current debate it’s obvious a lot of people, including some prominent authors, are of the opinion that most of the steps above are unnecessary time sucks. They could not be more wrong.

The steps above are there for two reasons: To protect the investment of the publisher and to ensure that the reader gets a quality product. One could say the first one is irrelevant to the author and the reader but the reality is they go hand in hand: A bad book will not be read and as a result the publisher suffers economically. So it’s in the publisher’s best interest to publish top quality books. And that in turn benefits the reader. To abolish the steps in an effort to push the content out faster would likely increase production, but it would also result in a dramatic decline in quality.

The rarely talked about reality of publishing, whether it be in the form of essays, scientific papers, newspaper articles or books, is that what the author originally produces and what reaches the readers are two entirely different products. All the vetting, editing and re-editing steps are in place because no matter how good the author is, she will never produce a perfect work. And she is always the worst judge of her own material. Any serious author knows that without editors their work is unfinished and unpublishable.

Taking this into account there really is only one step from the 10-step list above that can be removed to make things more efficient: Step 9: Printing. But this strategy has serious consequences both in how the material is distributed and how it is consumed. These issues will be the basis of the second half of this series, to be published shortly.

Categories
Applications My Opinion social media

Social Media Killed Google Wave

On Wednesday Google announced they are pulling the plug on Google Wave. Yes, this will piss you off, but this needs to be said:

Social media killed Google Wave. Or at least social media was instrumental in its demise.

Why? Because the people who fell over each other and sold their grandmothers to get an invite to this much hyped communication invention were not the people it was intended for and they did not need it, want it or know how to use it. As a result it was left, like an unsolved Rubik’s Cube, by the wayside to die a slow death – not because it was faulty or lacked uses but because those of us who had it didn’t understand it, grew tired of it and simply forgot about it altogether.

So how did this happen? Wind the clock back to last year and you’ll be sure to remember the insane frenzy that was the battle to get a Google Wave invite. Everyone and their grandmother (before she was sold) wanted in on this revolutionary “real-time communication platform” from Google. The video demos were awesome. Silicon Valley was all abuzz. The gadget blogs, geek blogs, dev blogs, social media blogs, tech blogs, mom blogs and cute-dog blogs were talking about it. The Twitter Fail-Whale got face time over it. Facebook became a trading ground for invites. It was truly crazy.

The description tells the story

But why? All the videos, the writeups and the demos showed the same thing: Google Wave was a real-time collaboration platform that allowed groups of people to work on the same project at the same time – in real time. Which is something that’s done. In organizations. And in companies. And that’s about it. Normal people, like me and the vast majority of the social media herd, do not need or use such collaboration platforms because we don’t work on projects where they are needed. And before you say “oh, but Google Wave was something new and different that I needed in my life” remember that there are already several services out there that do part of what Google Wave did that you rarely, if ever, use.

I remember sitting at my desk in those days and thinking “what the hell are people going to do on Google Wave anyway?” I kept seeing Facebook and Twitter updates like “I’m on Google Wave right now! Anyone want to chat?” and I thought “Why? You’re already on a different platform chatting about chatting somewhere else.”

Don’t get me wrong here. I was part of the frenzy and I got my invite and peddled invites to all that asked. I was just as bitten by the bug as everyone else. And I’m as much to blame for Wave’s demise as everyone else.

Don’t knock it ’till you’ve tried it – for real

When I finally got my invite it was for a reason. We were in the process of planning the first 12×12 Vancouver Photo Marathon and needed a way for the 6 members of our team to work together on a pile of disjointed odds and ends. My partner in crime Angela decided that Wave might be a good platform for this collaboration so she set up a wave for us to play around with. After watching some videos and reading some of the documentation she quickly became proficient and set up a really impressive wave with images, documents, videos, chat and map integration. The problem was no one else in the group had time to get familiarized with the app so Angela ended up working on it pretty much on her own while the rest of us just watched in awe.

What I saw was huge potential – if there was a huge project with a multi-tiered team in several locations involved. What I realized was that this thing was not made for me, my company, my colleagues or anyone I knew really. It was made for large corporations or groups with highly complex projects that require real-time data and content management.

And that was, and is, the crux of the problem: The people on Google Wave were not the people who would benefit from, or even find useful, the functionality of Google Wave. Thus it was discarded as a neat looking but useless Beta.

Too much, too soon and to the wrong people

In the wake of Wave’s demise a lot of people are saying it buckled because it didn’t have enough to offer, that it was too complicated and that it didn’t have an actual use. I disagree. Google Wave was something truly remarkable that introduced a whole new way of collaborating and creating content. The problem was the people who would actually use it were already using other more established platforms or were drowned out by the masses that were so eager to jump onto the newest and shiniest bandwagon that they didn’t realize the band was playing atonal black metal jazz with clarinets. Sure, it had its followers, but those were not the ones hitching a ride.

Additionally I think Google Wave was a bit too forward thinking. In a nutshell Wave introduced a type of non-linear stream-of-consciousness workflow that is hard for people to wrap their heads around unless they are already used to it. Although real-time collaboration might sound cool it takes time to get used to writing a document while watching someone else edit it. And it takes even more time getting used to having multiple conversations in multiple streams at the same time. Sure, social media is pushing us in that direction but we still have a long way to go. We are still too stuck in the linear task-oriented way of doing things to be able to incorporate this type of workspace into our lives and offices. It’s coming but it’s still a few years away. Google simply pushed the envelope a bit too far and it fell off the table.

What can we learn

Like I said, the problem with Google Wave was never the app itself but the people who (didn’t) use it. This begs the question “Why were these non-users involved in it to begin with?” The answer is social media hype, pure and simple. Everyone was talking about it. It was touted as the hottest thing since an overheating MacBook Pro. Everyone just had to have an invite. People actually paid money for invites. But no one (myself included) ever took a step back and asked themselves “Am I actually going to use this thing? Is it even for me?” It’s pretty clear that Google had asked, and answered, these questions and that both answers were “No!” Which is why the Beta was closed. Unfortunately the closing of the Beta seemed to have the unintended effect that people thought it was cool to get an invite, that they were part of something new and revolutionary, so rather than the Beta staying closed within the groups that were actually going to use the service it started spreading out to nerds like myself who just wanted their share of the fun.

Regardless of how it actually happened, the result was near-vertical growth in users followed almost immediately by a vertical drop in actual use. Not because the app was crap, but because the people enrolled in the Beta testing were not actually Beta testing or doing anything else with it.

The conclusion? Hype is just hype. It is not a measure by which you should decide whether or not to participate in or buy something. And closed Betas are usually closed for a reason: to get actual results from actual users. And maybe most surprisingly: social media has the power to destroy great things simply by overloading them with massive interest followed by complete abandonment.

Rest in peace Google Wave. We hope to see you again in another time.

Categories
Android Apple hardware My Opinion

iPatch – The Truth About iPhone Antennagate

Categories
Android My Opinion

Rogers treats Android as an unwanted step-child

UPDATE AUGUST 24, 2010: Rogers rep Mary Pretotto posted an update on the 2.1 OS upgrade for the Rogers HTC Magic+ on Androidforums.com, stating that the 2.1 upgrade has finally passed Rogers’ testing process, has been sent to Google for approval and will be available as an over-the-air update “soon”. The explanation for the long silence is that “we found an issue that required it to go back to HTC for further development” but that now “I’m happy to report that we reached a milestone yesterday and the 2.1 OS for Magic+ was approved by Rogers.”

The nagging question remains why Rogers has been keeping their customers in the dark about this process until this point. There is no good reason for this silence, and it has led to an uproar in the community, with a lot of people, me included, seriously considering bailing on them altogether and moving to a different carrier. More than anything this whole story has been a study in media and customer mismanagement, and I’ll probably use it as a cautionary example in future presentations on how to use social media technology to further your business.

Hats off to Mary Pretotto for staying with it through all the angry tweets she’s gotten over time, but there is something seriously wrong with the way Rogers thinks about communication with their customers. If anyone higher up in the system has their wits about them there should be a policy change, and probably a shakeup in management as well. Someone made the decision not to inform the customers about the progress of this update, and as a result Rogers lost not only credibility and loyalty, but clients.

Update July 14: Rogers just announced that they have indeed received the “draft 2.1” software from HTC and that it will be rolled out at the “end of August”. First off, that makes the Rogers Management Office look like they have no clue what’s going on, and secondly, it shows that they are dragging their feet. I think it’s time to start sending angry letters to Rogers to let them know how we feel about being given the runaround.

I realize this issue (cell phones and carrier behaviour) is a bit off-topic from what is generally posted on this blog, but this issue is something I’ve been mulling over for some time now and I feel it’s time to share what I’ve discovered with the world.

Last year my wife and I became the proud owners of two sparkling new HTC Magic phones from Rogers. The Magic was the newest and greatest Android-powered touch-screen phone at the time and we were hugely excited to get them. The phones worked great, and although the user interface felt a bit basic compared to other, more refined user experiences, we were happy in the knowledge that as Android phones the firmware (or operating system) was in a near constant state of development and that in short order new firmware would be rolled out and the experience would improve.

Which is what would have happened had it not been for the fact that we are in Canada and our phones are running on the Rogers network.

Upgrade? What Upgrade?

Things started to go sour in late 2009 when Google rolled out the Android 1.6 firmware (the phones were originally running 1.5). Subsequently the hardware manufacturer HTC rolled out a new handset with the Sense user interface, and all of a sudden our baseline Magics were starting to look really old and outdated. “Fret not,” we were told, “Sense runs fine on the Magic and HTC will make it available in short order.” Or so we were led to believe. Then came the crushing news that for unknown reasons Rogers had decided the 1.6 upgrade, with Sense in tow, would not happen. There was no official reason given, but rumours indicated that Rogers wanted to build custom branding into the operating system but didn’t want to pay HTC to do it. Rumours, mind you; I have no idea if that’s actually the case. The only word from Rogers was that no 1.6 would be released and that the next release would be 2.0 “some time in the summer of 2010.”

Regardless, the upgrade did not arrive, and as we watched our European and American friends get it, we, the people of the Android Nation of Canada, started getting really pissed. So much so, in fact, that a campaign was started to force Rogers to roll out the 1.6 upgrade, spearheaded by the I Want My One Point Six website. But it felt like the message was falling on deaf ears. Maybe Rogers was testing out some new noise-cancelling headphones or something.

Upgrade, or else!

Then, all of a sudden and out of nowhere, Magic owners across Canada got a weird text message saying they needed to upgrade their phones to the new Sense user interface immediately or lose data access. If I remember correctly, the message arrived on a Thursday and the cut-off point was the following Monday or Tuesday. At first it looked like a weird change of heart, but then it turned out the 911 features in the Magic phones were completely screwed up and the upgrade was necessary to fix the issue.

And true to their word, a few days later all internet service was cut from the phones and we were forced to do manual upgrades, which deleted a whole pile of data and caused major headaches for a lot of people. But in the end we got our Magics upgraded to Sense, so everything was fine.

Rogers, realizing they screwed things up for a lot of people, relented by offering up one month of free data for all Magic users. Good on them.

But then people discovered that the upgrade was purely cosmetic: even with Sense, the Magics were still running 1.5. Which was weird, because only months before, Rogers had argued Sense could only be installed on 1.6, which was supposedly why we wouldn’t get it.

Something was definitely rotten in Denmark.

2.1 is coming… in the summer… or something

So the debacle continued: Magic owners kept asking Rogers why the phone was still on 1.5, and Rogers kept saying the 2.0 upgrade would come some time in the summer. Which still made no sense at all. No explanation was ever given as to why the 1.6 upgrade was not released. The problem was compounded when app vendors started writing apps that only worked on 1.6 and higher, and the frustration grew and grew.

Then in the spring Rogers announced that they would release 2.1 “by the end of June”. That was still months after everyone and their dog outside of Canada had gotten the upgrade, but at least it was a step in the right direction. Or so we thought.

With the end of June comes … nothing!

As June started getting into the double digits a lot of Magic owners were starting to get anxious. Not only was there no word on when 2.1 would actually be released, but Google was rolling out 2.2 while we were still stuck in 1.5 land. The heat only increased when, after brushing off hundreds of requests for info, Rogers’ Twitter customer rep @RogersMary informed everyone that Rogers would receive the HTC version of 2.1 by the end of June and that the firmware would then undergo “testing” before being released. In other words, there would be no end-of-June release of 2.1. This was further compounded when it was announced that both American and French Magic owners were getting the 2.2 release.

Things were indeed rotten. In Rogers headquarters. And that brings us to today.

Who cares about moneybags customers anyway?

Needless to say, at this point we are all fed up. Not only are we still running software that is now over a year old and two generations behind (just imagine what would happen if Rogers did the same to iPhone owners. Wait, who am I kidding. That would never happen), but the complete lack of information from Rogers on the topic is mind-boggling. One would think that a company that prides itself on being “committed to Android” would care enough about their customers to tell them why they are stalling the firmware releases, or at least announce when the firmware will be released. But I guess that’s too much to ask. As of right now there is no official word on when or how 2.1 will be released, other than that it will be done “once it is finished”. This in spite of HTC rolling out both 2.1 and 2.2 to other carriers in other countries.

To put it plainly, this whole situation stinks of corporate greed and negligence. I wouldn’t be surprised if it turns out this lack of upgrading is actually some sort of convoluted plan to get people to buy new phones. Again, just a theory.

“The information will be released when the software is released”

So, being totally fed up with this mess, I called Rogers Customer Service and asked to speak to someone in charge. The Customer Service Representative told me that I was the third caller in the last hour to ask about the upgrade. One would think Rogers would take that as a warning sign. But that would mean they actually care, which as far as I can tell they don’t. But I digress.

I was passed on to Rogers Management Office and after about 15 minutes someone actually came on the line. Her name was Rokhaya. And she did not appreciate my business.

After a lengthy round of questions that turned into a discussion and then an argument, I asked her three simple questions:

  • When will we get information on when 2.1 will be released?
  • Why is there no information about the 2.1 release or why it is being delayed?
  • Can you confirm that Rogers has received the HTC version of 2.1 for testing?

The answers were truly astounding:

When will we get information on when 2.1 will be released?

“Right now as far as we (the employees) know we don’t have any information to release to our customers. That information will be released when the software is released”. (direct quote)

Why is there no information about the 2.1 release or why it is being delayed?

“We have no obligation to release such information to consumers. That information will be made available when the software is released”. (again, direct quote)

Can you confirm that Rogers has received the HTC version of 2.1 for testing?

Rokhaya: “I cannot provide you with any such information. There is another representative here who can answer this question, but he is currently on another call.”

Me: “Can you get him to call me back with that information?”

Rokhaya: “He will not call you back because you are on a call with me.”

Me: “Ok, can you ask him and then call me back?”

Rokhaya: “No, I will not call you back.”

Take your consumer rights and shove them!

My conclusion after this rather surprising conversation should be that Rogers does not care about their customers. But I’ll give them the benefit of the doubt and assume instead that this is a systemic failure in which information is not moving freely within the company. It is quite clear that someone has decided that Android, or at least the Magic, should not get first-rate service and should be treated like an unwanted step-child. Who knows why that is. It is also clear that when it comes to informing the consumer about what is going on the Rogers policy is “The consumer does not have the right to know.”

I’ll be more than happy to revise that stance if Rogers provides me with answers to the above questions, answers that should be pretty easy to obtain and just as easy to release. In fact, answering these questions would undoubtedly calm down the furore that is currently brewing over this issue on the web.

Right now Rogers is doing exactly what I tell people not to do: ignoring customer complaints and losing control of the discussion. A firm date, confirmation of receipt of the HTC upgrade, or even an explanation of why the upgrade is taking so long would do wonders. Because right now the best option seems to be sending the phones back and going with a different carrier.

Categories
My Opinion social media

Mastering Social Media Part 1: Treat Your Blog Like a TV Show

What if I were to tell you that successful blogs have some striking similarities to successful TV shows? That the whole realm of blogging actually looks so much like the world of broadcasting it is surprising institutions that currently have broadcasting programs don’t just merge the two? It may sound a little odd if you’re not used to working in a production environment, but having split my last 8 years evenly between TV production and online development, the similarities are so blatantly obvious that until now they’ve pretty much passed me by unnoticed.

I know what you’re probably thinking (especially if you read this blog every now and again or know me personally): OK, here we go again. Morten has some crazy idea and won’t let it go until he’s laid it out in excruciating, minute detail. And you’d be right. So why should you care? Because if my assertions are true (and of course they are), bloggers have a lot to learn from the trials and tribulations of their camera-lugging brethren. And, to be honest, broadcasters could learn a thing or two from bloggers as well.

Just so it’s clear from the get-go: to me the term “social media” encompasses a wide variety of technologies and can be further sorted into at least two sub-categories: Social Publishing (blogs, YouTube, etc.) and Social Networking (Twitter, Facebook and the like). There is quite a bit of a gray zone between the two, and there are also social media environments that fall outside these definitions entirely, but that’s for another time.

Make it or break it – it’s all about who you know … but mostly chance

I like to say television is one of the most volatile and insecure professions you could choose, maybe only beaten by radio, which is pure insanity. That’s because your job in TV is almost 100% dependent on audience approval and popularity. In other words, it doesn’t really matter if you work with the best people in the business on the best show ever written, produced and broadcast: if the unwashed masses don’t absolutely love the show, you are likely out of a job tomorrow. And this is especially true for the producers. If your show doesn’t get stellar ratings you get, at most, a couple of weeks, or maybe a month if you’re lucky, to save it by changing things up. And if that doesn’t work you’re out the door and your time slot is replaced with the latest and greatest in voyeuristic social pornography, often mislabelled as “Reality TV”.

Sound familiar? Well, it should, because blogs are pretty much exactly the same: you can have the best content ever written on the coolest blog ever created, but if the people out there on the internet don’t love it they won’t read it, you won’t get repeat visitors and your stats will devolve into a daily reminder of exactly how many friends and family members you have and how supportive they are. And although no one will call security and have you escorted out of the building with your potted plant and 7-fingered promotional foam hand from Bruce Almighty, your double-digit visitor numbers will do nothing to improve your financial status and you will eventually end up caving and getting a “real” job to keep the lights on.

So what is it that separates the successes from the failures? Or rather, what is it that launches some blogs from relative obscurity to 10,000 visitors a day and rising fame in seemingly no time? Exposure, friends and a fair bit of luck.

The first two, exposure and friends, often go hand in hand. To get anywhere in the media world, whether on TV or on the web, you need people to actually find your content. To make that happen you need people to talk about your content, and that usually starts with friends. Actually, “friends” might not be the right word here. I’m not referring to your beer-league buddies or shoe-shopping clique. By “friends” I mean people with power who for one reason or another take a liking to what you’ve made and tell their friends with power, and all their loyal followers, to check your content out. Sure, there’s always an off chance that your network of 100 or so friends and family will somehow generate the critical mass that lifts your blog out of the internet soup, but to get where you want to go within a reasonable amount of time you need to reach a bit higher and enlist the help of people with connections. To put it bluntly: while your mom may be able to get her entire kayaking club to visit your blog once a week, a single tweet from a local paper, a semi-celebrity or an established blogger with a solid fan base will make your stats look like an electrocardiogram.

But that’s just part of it. Even with friends in the right places pushing their loyal minions right into your lap, there is no guarantee they’ll actually stay there. And this has less to do with the quality of your content than you’d think. The ‘stick’ factor is usually a matter of luck; of being (or in this case writing) in the right place at the right time. That’s because once on your site, the viewers need to be in a receptive mood for your particular content. In other words, if they’re not open to the kind of material you are presenting, it doesn’t matter if it’s Pulitzer Prize material; they won’t care and they’ll likely never come back. So while your excellent article on the conflict in Burma may never get more than 200 views, a random post on an internet myth about an artist starving a dog to death may cause a furore and lead to an interview with BBC Radio.

Predicting the unpredictable

It must seem like TV producers have it easier: there is a finite number of networks and only so many hours in the day, so if their show is on TV people are far more likely to stumble on it than they are to ever land on your blog. The reality is quite different. For every show that makes it to air there are hundreds standing in line to take its place, with thousands in various stages of pre-production or pilot versions just waiting for the right time to shine. And unlike a blog, which can usually survive for weeks, months, even years without any major visitor numbers, TV shows have a tendency to get shut down at the first sign of weakness. Just ask Jay Leno and Conan O’Brien.

With this in mind, TV producers try to get to the top by predicting where things are going before they go there. Often they’ll sit on fully developed shows for years waiting for the right time to come, and occasionally shows that were originally produced years ago but never aired are revamped when the times change to fit their content. Unfortunately, though not surprisingly, this strategy can also be a disaster. Not only does it result in cascade effects where rival networks launch almost identical shows at the same time (case in point: Trauma, Mercy and Nurse Jackie, all new nurse/ER-themed shows that launched in the fall of 2009), but it produces duplicate shows hitching a ride on other popular shows and lots of shows that are either ahead of or behind the times and miss their mark altogether. Looking into the future and predicting what people want to watch 6 months from now is not easy.

At the same time there’s a real danger of burning out because you don’t adapt. Remember Pink Is The New Blog? That site was on everyone’s lips several years ago but was quickly outscooped and outcontroversied by other blogs like Gawker, PerezHilton and TMZ. The dethroning of PISTNB had little to do with its content and more to do with its lack of evolution: the world simply changed quicker than expected and it didn’t keep pace. Sure, the site is still there, but you don’t see it all over CNN and it doesn’t have its own TV show. The distance from the top of the world to irrelevance is measured in microns where the internet and television are concerned.

Getting to the top the hard way

Yes, I know. I paint a bleak picture. It’s what I do best. So what’s the solution? What can you as a blogger learn from my TV friends, whom I’ve so kindly portrayed as moguls one inch away from the homeless shelter? Like seasoned and successful TV producers, you need to invest in something that oozes quality and authority while staying ready to adapt at any time, even if it means abandoning what you’re doing and coming up with something totally different on the fly. To quote one of my favourite movies, Ghost in the Shell: “overspecialize, and you breed in weakness.” But don’t take that as an invitation to publish inconsequential drivel: even if you’re dead-on in your predictions of what’s popular, people will quickly abandon you if your content is crap.

Getting and sustaining success means you need to produce good quality content that people like and want more of. It’s a difficult and elusive combination that may require years of honing before it reaches perfection. But it’s doable. It just requires a lot of ideas, a willingness to fail, an ability to leave things behind and move on, and, most importantly, time.

Let me leave you with this: on average, a social publishing endeavour will take a year or more to achieve any type of success unless it’s already attached to a well-known brand. And even then it’ll take another 6 months to establish the trust of the reader that will elevate it from mildly successful to a force to be reckoned with. It’s an investment in time and energy that may or may not pay off in the long run, but only if you stick with it and learn to adapt.