
The Spirit of Flickr and the Problem of Intent

I’m trying something new here: audio versions of my essays. So, if you want to listen to me read this essay rather than read it yourself, hit the play button below and let me know what you think of this idea!

Over the weekend, a conversation started over the move by photo sharing site Flickr to begin selling canvas and other prints of photos published under various Creative Commons (CC) licenses – with attribution, but in some cases without financial benefit to the artist. The story started at the Wall Street Journal, was picked up by Dazed and went viral, and gained further traction when authoritative figures like Jeffrey Zeldman chimed in.

I’m not going to argue the legalities of this issue. As pretty much everyone who has spoken about it has stipulated, Flickr – and by ownership Yahoo! – are well within their rights to do what they are doing from a legal standpoint. If you publish content under the CC-BY license, you are explicitly granting anyone the right to republish that content in any way, including commercially (under which selling for money falls), without reimbursing the original creator, as long as they provide proper attribution to that creator. By contrast, the CC-BY-NC license grants anyone the right to republish that content under the same guidelines only for noncommercial purposes. If they wish to publish it for commercial purposes (including sale), they must be granted a separate individual license by the creator. (There is a lot more to Creative Commons and I urge you to educate yourself about these licenses, but that’s the gist of this particular story.)


Live Labs Pivot meets Flickr for the 12×12 Vancouver Photo Marathon

Guest post by Ole Rand-Hendriksen.

So my brother Morten came to me with this idea about making a Pivot project for the 12×12 Vancouver Photo Marathon 2009. He basically wanted to be able to sort the images by category, photographer, gender, winners and so on. He hadn’t really looked into how Pivot works, but he thought this was something I could probably figure out in a couple of hours or so.

I followed the links he gave me to the Pivot site and some instructional demo videos, but I didn’t really have the patience to go through them all. So I did what I normally do, which is take half a look at the specifications and then just try it out.

I started by downloading Pivot and looking at how it works, which is still a bit confusing to me because of the Seadragon technology and the image sorting, but I’ll get into that later.

Then I started reading about how Pivot works, and the data part is actually quite straightforward. It’s basically just XML files where each item has some properties, and the beginning of the file declares what kinds of properties there are and whether you should be able to sort by them.
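To make that concrete, here is a rough sketch of what such a collection file (CXML) looks like. This is from memory, so the exact attribute names and schema namespace may differ, and the facet names and values here are invented for illustration:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Collection Name="12x12 Vancouver Photo Marathon"
            SchemaVersion="1.0"
            xmlns="http://schemas.microsoft.com/collection/metadata/2009">
  <!-- Declared up front: which properties exist, their types,
       and whether the viewer can filter and sort by them -->
  <FacetCategories>
    <FacetCategory Name="Photographer" Type="String" IsFilterVisible="true" />
    <FacetCategory Name="Category" Type="String" IsFilterVisible="true" />
  </FacetCategories>
  <!-- Each item points at an image in the Deep Zoom collection
       and carries values for the declared facets -->
  <Items ImgBase="collection_images.dzc">
    <Item Id="0" Img="#0" Name="Photo 1">
      <Facets>
        <Facet Name="Photographer"><String Value="Jane Doe" /></Facet>
        <Facet Name="Category"><String Value="Winners" /></Facet>
      </Facets>
    </Item>
  </Items>
</Collection>
```

The Excel tool described below generates a file shaped like this for you, so you only need to care about the format if you go the make-the-tool-yourself route.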

The more confusing part is the Deep Zoom collection part, which is what causes all the trouble. Basically, Deep Zoom collections aren’t dynamic at all (someone please prove me wrong), which is very annoying, since it means you have to host all the images locally on the server where you have the Pivot collection.

And then I started reading up on how to make Pivot collections. According to the Pivot site, there are three ways of making them:

  1. by using the command-line tools
  2. by using the Excel tool
  3. by making the tool yourself.

Since I consider myself proficient in Excel, I decided to use that method for this project, so I downloaded the tool and installed it (link).

Then I went on to figure out how to get all the data I wanted from the 12×12 Vancouver Photo Marathon Flickr sets. The easiest way I could think of was to use the RSS feeds and try to parse them in some way or other. I ended up using the feedparser RSS parsing library for Python, and I wrote a very simple script that went through all the set feeds and parsed them into a more usable .csv file.

The print lines were just for debugging.

# -*- coding: utf-8 -*-
import feedparser

urls = [ *list of urls* ]

f = open(" *path to file* ", "w")
for u in urls:
    print u
    d = feedparser.parse(u)
    # pull the set name and number out of the feed title
    name = d.feed.title.split("- ")[1]
    print name
    num = d.feed.title.split(" ")[2]
    print num
    for entry in d.entries:
        h = entry.links[2].href  # link to the photo itself
        print h
        t = entry.title
        print t
        # one CSV row per photo: set number, set name, url, title
        s = num + ', ' + name + ', ' + h + ', ' + t + '\n'
        line = s.encode("utf-8")
        print line
        f.write(line)
f.close()

Then I imported the data into a new Excel file.

The Pivot plugin for Excel is a bit buggy, so you can’t really import data directly into the fields, but once you have the data in another document you can just copy each column in where you want it. The previews do take some time to load when you are working with a few hundred images that are all online, so be prepared to spend a few hours doing something else if you try.

Another bug that’s nice to be aware of: if you accidentally make too many rows in your collection, you won’t really be able to remove them. When you’ve added all the data you want, you just push the publish button and save the collection where you want it. This can also take a few hours. When it’s done, Pivot will open and you can view your collection.

Since Pivot utilizes Deep Zoom and Seadragon, the images are split into a gazillion small files that take forever to upload to a web host by FTP, so make sure you use as many connections as you can. It’s also very annoying that Deep Zoom is almost completely static unless you trick Seadragon by using the API like Lang Deng did for Deep Zoom images with his project, though I don’t know if there’s an easy way of doing something like that for Deep Zoom collections.
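One way to script that multi-connection upload is to deal the tile files out to several workers, each with its own FTP connection. This is just a sketch of the idea — the function names, the host, and the credentials are mine, not anything from the Pivot tooling:

```python
import os
from ftplib import FTP
from threading import Thread

def list_tiles(root):
    """Collect every file under the Deep Zoom output directory."""
    paths = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            paths.append(os.path.join(dirpath, name))
    return paths

def split_round_robin(paths, n):
    """Deal the file list out to n workers, one path at a time."""
    return [paths[i::n] for i in range(n)]

def upload_batch(host, user, password, root, paths):
    """One worker, one FTP connection: upload its share of the tiles."""
    ftp = FTP(host)
    ftp.login(user, password)
    for path in paths:
        # mirror the local directory layout on the server
        remote = os.path.relpath(path, root).replace(os.sep, "/")
        with open(path, "rb") as fh:
            ftp.storbinary("STOR " + remote, fh)
    ftp.quit()

def upload_collection(host, user, password, root, connections=8):
    """Upload the whole collection over several parallel connections."""
    batches = split_round_robin(list_tiles(root), connections)
    threads = [Thread(target=upload_batch,
                      args=(host, user, password, root, batch))
               for batch in batches if batch]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Something like `upload_collection("ftp.example.com", user, pw, "my_collection_files")` would then push the tiles over eight connections at once. Note that `STOR` with a relative path assumes the matching remote directories already exist; real code would create them first with `ftp.mkd()`.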

I’ve got some ideas for further Pivot projects, but I don’t know yet whether they are possible to make.