Simplified, Narrative, and Transient Attribution

I’ve been thinking through attribution models for many years. Mainly for books, but it is also interesting to consider how credit is attributed in open source projects. For both open source and books, I believe narrative attribution is a great model which escapes many of the problems of simplified attribution, but there are some interesting gotchas… so, let’s take a look at it….

A copyright holder is different from the people we credit with producing a work. Michael Jackson, for example, held the copyright to the Beatles’ songs for many years, yet we attribute the credit for those songs to Ringo, George, Paul and John.

I like to think of this ‘naming’ of contributors as a type of simplified attribution. It is handy to be able to hang credit on a couple of names: nice and clean, not complicated, and easy to remember. It is also the way copyright works. Copyright requires names to be cited as the copyright holder, and out of habit we often equate the named copyright holders with the credit for producing the work. It’s just how our brains work.

However, there are a couple of things we need to tease out here. First, named credit is not synonymous with the named copyright holder, and we should be careful not to conflate the two. Michael Jackson, for example, clearly did not write the Beatles’ songs.

Secondly, simplified attribution doesn’t actually tell us much. It is not rich information. It’s a couple of names, and we tend to attribute far too much credit to those names. We have a romantic notion of creation and want to think there is a ‘solitary genius’ behind things. If not a solitary genius, then a couple of identifiable people to whom we can attribute genius collectively. But this is really fantasy. Great things come from many people, not one. One person by themselves isn’t capable of much. How individuals work together is the real story, and these stories are far more complicated than the vehicle of simplified attribution can convey.

Those who know the story of the Beatles, for example, know there is far more complexity to how the songs were written and by whom. Each Beatles song has its own history and unique way of coming into existence. The song ‘Michelle’, for example, was mainly written by Paul McCartney, with a small portion being a collaboration between McCartney and Lennon. Even more interesting is that other characters came into play. According to Wikipedia:

McCartney asked Jan Vaughan, a French teacher and the wife of his old friend Ivan Vaughan, to come up with a French name and a phrase that rhymed with it. "It was because I'd always thought that the song sounded French that I stuck with it. I can't speak French properly so that's why I needed help in sorting out the actual words", McCartney said.

Vaughan came up with "Michelle, ma belle", and a few days later McCartney asked for a translation of "these are words that go together well" — sont des mots qui vont très bien ensemble. When McCartney played the song for Lennon, Lennon suggested the "I love you" bridge. Lennon was inspired by a song he heard the previous evening, Nina Simone's version of "I Put a Spell on You", which used the same phrase but with the emphasis on the last word, "I love you".

The actual story of how the song was written is far more interesting than the typical way we assign credit for these songs. I find the story of how something was made, and by whom, much more interesting and informative than a list of a couple of names. I also think it’s a better way to attribute credit, as it values the contribution of each individual and brings forward the interesting ways in which they made those contributions. I think this kind of narrative attribution is a far better way to credit people. It honours both the people and the process much more than the simplified naming of names.

I want to say something about transient attribution in a bit, but before moving on I’d like to give another example of why I think narrative attribution is necessary as we move increasingly into a collaborative future. At FLOSS Manuals, an org I founded 10 years ago to collaboratively produce free software manuals, we automated simplified attribution. If you made an edit, your name was automatically added to the credits page. That worked ok for the naming of names. But how useful is this?

Well, if one person had written the book, this might be useful. But below is an example that illustrates the problem. This is a screenshot of the credits page of a freely licensed book that was updated and rewritten every year by a different team of people for 4 or 5 years. So, the simplified attribution looks like this:

[Screenshot: the credits page of the CiviCRM user guide – docs.civicrm.org/user/en/stable/appendices/credits]

Well… you get the picture! It is pretty meaningless information, or at least there is not much utility in this kind of attribution. It is, in short, too much information and too little information at the same time. A list of names like this is nothing more than a long list. It doesn’t highlight the many interesting ways individuals contributed, and no one here actually gets much credit at all from such a list because no one will read it. It is pretty much useless.

So… what can we do? Well, I believe we need to move away from simplified attribution. We need to migrate to narrative attribution. We need to learn how to tell stories about how people make things and who was involved. It is for this reason that at FLOSS Manuals we started writing “How this book was written” chapters. Here is an example from the book Collaborative Futures, which is now stored in the Wayback Machine. A brief snippet:

This book was first written over 5 days (18-22 Jan 2010) during a Book Sprint in Berlin. 7 people (5 writers, 1 programmer and 1 facilitator) gathered to collaborate and produce a book in 5 days with no prior preparation and with the only guiding light being the title ‘Collaborative Futures’.

These collaborators were: Mushon Zer-Aviv, Michael Mandiberg, Mike Linksvayer, Marta Peirano, Alan Toner, Aleksandar Erkalovic (programmer) and Adam Hyde (facilitator).

It is a short story but it goes some way towards bringing out a little of the nuance of how the book was made. We could have gone further, but it gets the point across. We also made sure that this way of attributing credit was included as a chapter in the book, so wherever the book went, the story of how it was made travelled with it.

I think we could do the same with software. Let’s not conflate copyright with credit, for a start. Next, let’s proceed to a more interesting way of attributing credit: not a naming of names, but telling a story of who was involved and how. Finally, let’s make sure this story is part of the software and travels with it, i.e. put the story in the source code repository.

But there is an obvious flaw to this approach. What happens when many people are involved over a long time? If you had been paying attention, for example, to that 29,000-pixel-tall monstrosity I included as a screenshot above, you would have already deduced that narrative attribution is not going to do a better job of attributing credit than simplified attribution.

Just how long would the story be for that same book? It would be pretty long! So, how do we deal with this? Surely no one would read that story either? Good point! This is where I believe transient attribution comes into play.

Transient attribution is the recognition that large, complicated works take a long time to make. In the case of software, the job pretty much never ends. Software must be updated to add features, it may need to be refactored, it needs security fixes, or updates for new versions of an operating system… it just doesn’t stop. That could mean we simply keep adding to the story, making it longer and longer as we go. But I would like to suggest another approach. I think it is more interesting to tell only the story of the last phase leading up to the current version of the book / movie / software etc… There is really no need to tell the whole story in one go… break it up into smaller parts. Learn to tell the story as it evolves. The software world already, kind of, does this with Release Notes. We commonly include Release Notes that document the differences between the previous and latest versions of the software. It is, in a way, the story of the software, but it is not the story of the people. Why not adopt this model for attribution too? Focus on the latest part of the story, call people out, and celebrate their contributions for this most recent phase.

My recommendation is this:

  1. use narrative attribution to credit people for their work
  2. tell a story that says something about what they did
  3. use transient attribution to tell the story in smaller, more timely, pieces
  4. ensure the story travels with the software (i.e. stored in the repo)

It doesn’t feel like such a stretch. It’s not that difficult to do either. The point is, software takes a long time to make, happens in stages, and different people with a diversity of skills come into play at different times. Let’s celebrate all the people who made it happen, and celebrate them as closely as possible to the moment they did the work. Further, the story of how a piece of software is made is far more valuable to everyone than a simple naming of names. If we could take this short step (and it is not far) I think we would have richer attribution, happier communities, and a richer understanding of how software is actually made. And that, if you ask me, is a good thing.

Starting an Open Source Org

I was recently informed that only developers should start Open Source projects. Any alternative was ‘unwise’.

Yet I think there is a great need to diversify open source operational models, and starting a project is the most important culture-setting moment you will ever have. So this kind of advice unintentionally limits the possibilities for the new cultures and models that could evolve. Further, these new models are much needed and are the way forward if open source is to move into areas where it has not had much success. Open source needs to find better ways, for example, to produce software for ‘end users’, as the current cultures and models are not doing this very well (and there are good reasons for this).

So, let’s ignore the advice. Instead, I want to suggest to anyone out there who cannot or will not write code (‘never admit you can type’) that you are the future of open source. Your vision, by virtue of the fact that you do not write code, is exactly what we need to diversify cultures and methods in this sector. You need to bring this to the table as the ignition for a project and find a way to make it happen that is consistent with your ideas and your vision. I’m proof that it can work. Don’t listen to those who tell you it is a bad idea; just make it happen.

Workflow Cost vs Pain

Today I talked with Lisa Gutermuth about workflow and software. We explored what avenues are available for finding the right software for your workflow. It is a common pastime, and I suggested a simple taxonomy of solutions. It comes down to three categories:

  1. Just use anything – a low cost, high pain strategy
  2. Find something useful – a medium cost, medium pain strategy
  3. Build a custom solution – a high cost, low pain strategy

Just use anything – this is where many organisations start. Essentially they grab ‘whatever is out there’ and cobble together a process to ‘make it work’. It might be that everyone has an office suite, for example, so they simply use spreadsheets and email them around. Or they may grab some wikis, use Google Docs and Sheets, and rely on Etherpad when needed. This approach actually gets orgs quite far. The problem comes when your volume increases, your operations diversify, or your staff numbers grow. Over time, these types of tools can cause a lot of organisational pain, and the inefficiencies created can force you to think about moving up the stack in the solutions taxonomy.

Find something useful – looking around your sector, seeing what others use, and bringing these tools into your organisation is often the next step. There are some good things and some bad things about this approach. Firstly, unless there is a startlingly obvious solution out there, you can spend a long time looking for the right tool. This can be harder than you think, since software categories do not have a stable taxonomy; you can’t just look up a table somewhere and understand what kind of software you need. Secondly, ‘off the shelf’ solutions will (most likely) only approximate your needs. That might be enough to get going. Bring these tools on board and start work. You might then, over time, need to hack them a little, which might be cheap or might be (if the software is proprietary or if you get a bad vendor/developer) very expensive. Or you could fall back to ‘Just use anything’ and augment the tool with ‘whatever is out there’.

Sooner or later, though, you are probably spending increasing amounts of money on the solution, and it doesn’t quite meet your needs, so it is causing some amount of pain. So, while the taxonomy above suggests this is a medium cost, medium pain approach, it can also turn out to be a high cost, high pain choice. I believe this is the position of many publishers today using expensive proprietary solutions that do not meet their needs.

The high cost, high pain effect takes place when the org ‘grows around’ the sore point (dysfunctional software). It is like a hiker learning to limp to cope with the pain of a stone in their shoe. Orgs will employ all sorts of tools to make up for the deficiencies and even employ staff to cope with the broken workflow. Best not to learn to limp, as it can have long-lasting organisational effects that are hard to dig out.

Build a custom solution – the (seemingly) deluxe approach is to build the tool you need. This can be expensive if you take on all the costs yourself. The advantage is that you get what you need, and if you do it well you build tools that help you improve your workflow into the future. Savings come from efficiencies and, possibly, reduced staffing costs.

As you probably know, I am co-founder of the Collaborative Knowledge Foundation. Our approach to the above is to design open source custom solutions for organisations, but in such a way that they are easy to tweak and customise for similar orgs. Hence we are aiming to get the sectors we work with into the custom solution space and capture that elusive fourth combination – low cost, low pain.

Down in Mississippi all I ever did was die

I’m a bit of an old school blues fan, and 2 years ago I decided I would go check out Mississippi. It was kind of an odd setup. I was in Columbus, Ohio for work and I figured… hell… this is about as close to Mississippi as I’d ever been… so I should go check it out!

As it happened, that very weekend there was a festival in honor of one of my blues heroes – Mississippi John Hurt. Amazing timing.

So, I hired a car and away I went. How far could it be? As it happens, it was over 800 miles – a drive that would also take me through Kentucky and Tennessee. This was all new territory for me and I was up for the adventure.

On the way I had some interesting stops. First stop was in Kentucky at a Saturday morning community fair. Awesome… I love fairs… cupcakes, maybe an espresso truck, second hand goodies…


…fluffy fairies in goldfish tanks…


…large men selling guns…


…hand guns…


…hand guns, rifles and gospel CDs…


It wasn’t the kind of country fair from back home, where the most malicious offering is an old Scrabble set with some of the pieces missing. Instead I was surrounded by firearms in great quantities, casually sold to whoever wanted them.

I felt out of my depth. So I headed south again and watched as Kentucky faded away in the rear-view mirror. I was in a bit of a hurry. I knew I wouldn’t make it to Avalon, where the festival was, until the next day, but I had to get somewhere to sleep. My choice was Tupelo – a famous place for me, as my favorite John Lee Hooker song is about Tupelo.

The first few lines being:

Did ya read about the flood?
Happened long time ago
In Tupelo, Mississippi
There were thousands o’ lives
Destroyed

It rained, it rained
Both night and day
The poor people was worried
Didn’t have no place to go

The thing is… as I approached Tupelo it started to rain. I was still some miles from the city line, and as I came up to the city boundary the rain got harder and harder. It wasn’t long before I couldn’t see more than a few metres, forcing me to slow the car to walking speed. It was the hardest rain I have ever experienced in my life. I began to have the feeling that this was something more than just another road trip…

I got to Tupelo and stayed the night at a crappy soulless hotel. Getting up early I discovered that Tupelo is actually very famous as it was where Elvis was born. I stopped in to see the humble house, worked out which store sold him his first guitar and then headed out towards Avalon.

As I drove, the roads were long and narrow. The towns small. I saw unhappy posters taped to power poles calling for information about a missing young local woman. A short stop for gas allowed me to overhear the attendants agreeing that gas should be free for anyone in (army) uniform. I passed farms with large homesteads that I imagined were once plantations of old… reality was starting to agree with my imagination. I drove onward…

Avalon is famous in the blues world. It is where Mississippi John Hurt grew up, and ‘Avalon Blues’ is one of his best-known songs and the title of his first album. Avalon is also where he was rediscovered many years later, when blues fan Tom Hoskins went on a legendary journey to look for him, refusing to accept the general assumption that he had been dead for many years.

The thing about Avalon is, it doesn’t exist. At least, it doesn’t exist now. The spot where I thought I would find a small town and a festival was a wasteland of empty shacks and potholes. It was dusty, weird, and full of ghosts. Further, it had no connectivity so finding my way to the festival was going to be tricky.

I drove around a bit. I went down a long road which came to a dead end with a sign saying the road was closed. I turned back, but a car, the only one I had seen for a while, passed me and continued up past the sign. I followed them but lost them. The road turned bumpy. Somewhere along the way it became heavy-duty road works: heaped dirt and the impressions of giant graders.


Everything looked abandoned.


I drove on, turned down a narrow road that got narrower. The trees seemed to hang closer to the road and slowly obscure more and more of the sky… I passed a home with abandoned cars in the front and a family sitting outside staring at me as I drove slowly past.

I finally turned a corner and came into a clearing. There was a brick house on a small open lawn. Two cars were parked by the house but I couldn’t see anyone. I parked up and walked over to the house. Behind it I found half a dozen friendly faces looking at me… One of them walked up to me. She looked weirdly like Mississippi John Hurt. It was Mary Hurt, his granddaughter. She had a large smile and shook my hand. It was like history had just reached out and grabbed me.

We sat around and talked and played blues on the lawn. There is even a video of me hanging with the gang, playing guitar. I hadn’t brought my guitar, so Mary gave me one to play that had been given to her by John Sebastian of the Lovin’ Spoonful. I felt overwhelmed.

We played a bit and then Mary took us down to visit her grandfather’s grave. We drove for a while, then got out in a deeply wooded forest. On one side of the road was a reasonably maintained cemetery. It was where the white people were buried. On the other side of the road, the black people were buried throughout the forest in unmarked graves. Mary reminded us to be careful where we stepped…

We came to a small clearing.


It was the grave of Mississippi John Hurt. So simple and alone in this beautiful forest.


Mary told us stories about her grandfather. About Mississippi itself. How she hated it for how hard it is. How mean it has been. I wondered how I could have thought of the blues so romantically until now. How I hadn’t understood the sadness. I wondered about the history of rock and roll. How one of its giants was right here in front of me. How humble the scene was, how humbling it was.


We stood around and listened. We played some of his songs.


It was a very moving experience and I emerged from the forest with some new friends.


Oh manno

So, I realised I’m getting a little sick of talking about publishing. I love it, sort of. But I never thought I would ever be ‘in publishing’; I kinda just fell into it. Or maybe more accurately, I fell, then woke up, and slowly came to realise I was in publishing.

But actually I’m not in publishing. I’m in a fascinating world, and I kinda want to start talking about some of that. Isn’t that what blogging is meant to be about anyhow? So… first up, baths. Yes… one of my favorite things. In fact I just built a bath platform… oh… maybe I’m grabbing too much credit… I didn’t actually grab the hammer and wood and stuff… hohoho… anyways… it looks like this:

[photo: the bath platform]

It is at my cottage in NZ. I will be going there in December and can’t wait! My bath will fit on the platform and I can stare at the stars and the sky. It’s going to be awesome.

The view from the bath should be pretty good…looks something like this:

[photo: the view from the bath]

…don’t expect me to hurry back 😉

Typescript and publishing systems

Wendell Piez and I just co-wrote a post for the Coko website about the quandary of going from MS Word to HTML:

The point being that publishers take badly structured Word documents and process them, adding structure and then ‘throwing them over the wall’ to outsourced vendors to convert into other formats. When publishers add structure to documents, they often do this with MS Word and custom-built extensions. They simply click on part of the text, choose the right semantic tag, and move to the next. Just imagine how many publishers have built these custom macros (it is very common), and also imagine that each publisher must tweak the macro code with every new release of MS Word. Tricky and expensive!

So, the point is, why not do that in the browser using web-based editors? It not only brings the content into an environment that enables new efficiencies in workflow but it also means publishers don’t have to keep upgrading these macros all the time. Further, if the tools for doing this in the browser are Open Source…well… you get the picture – share the burden, share the love.

So the article is a small semantic manoeuvre to get the conversation away from the rather opinionated but dominant position that MS Word-to-HTML conversion is terrible because you can’t infer structure during the conversion process… The implication is that HTML isn’t ‘good enough’. Our point is, you don’t need to infer the structure because it wasn’t there in the first place. Plus, HTML is an excellent format for progressively adding structure since it is very forgiving – you can have as much, or as little, structure as you like. Hence we can look to shared efforts to build browser-based tools for processing documents rather than creating and maintaining one-off macros.
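To make the idea concrete, here is a minimal sketch (my own illustration, not code from the Coko post) of how a browser-based editor might let someone progressively add semantic structure: select a passage, pick a tag, and the HTML is updated in place. The function name and the button id are hypothetical; the sketch leans on the standard DOM Selection and Range APIs, and real editors of course handle far more edge cases.

```typescript
// Wrap the user's current text selection in a semantic HTML element.
// A minimal sketch: production editors also handle selections that cross
// element boundaries, undo history, schemas, and so on.
function tagSelection(tagName: 'h2' | 'blockquote' | 'cite' | 'em'): void {
  const selection = window.getSelection();
  if (!selection || selection.rangeCount === 0 || selection.isCollapsed) {
    return; // nothing selected, nothing to tag
  }

  const range = selection.getRangeAt(0);
  const wrapper = document.createElement(tagName);

  try {
    // surroundContents() throws if the selection only partially covers a
    // non-text node. That limitation aside, this is the "forgiving HTML"
    // point: structure can be added a little at a time, where it is valid.
    range.surroundContents(wrapper);
  } catch {
    console.warn('Selection crosses element boundaries; tag not applied.');
  }
}

// Hypothetical usage: a toolbar button adds a semantic tag to whatever
// the editor has highlighted, no Word macro required.
document.querySelector('#tag-as-quote')?.addEventListener('click', () => {
  tagSelection('blockquote');
});
```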

Remix and Reshuffle Revisited

[Screenshot: FLOSS Manuals Remix]

A few years ago, I wrote a brief post on Remix vs Shuffle. At the time, the Open Educational Resources (OER) movement was struggling to work out how existing teaching materials could be remixed and reused. No one had really cracked it. At the same time, we built remix into FLOSS Manuals. The primary use case was for workshop leaders to be able to compile their own workshop manual from existing resources. We had a large enough repository of works, so it was a question of how we went about enabling remix.

Recently, I have been in two separate conversations about remix (after not having thought or talked about it for some time). One conversation was in the context of OER, the other in the context of remixing many journal articles into a collection. So, I’m revisiting some earlier thoughts on the topic and updating them…

At the time, we expected the FLOSS Manuals remix feature to be used a lot. I was a workshop leader myself and thought I could benefit from the feature. However, remixing wasn’t used very much by me (I did get some very useful manuals from it, but didn’t use it often) or by anyone else. Hence I wrote the reflection on remix (linked above). My position is outlined by the following quotes from that article:

I have come to the understanding that ‘remix’ as such has only a limited use when it comes to constructing books from multiple sources.

And the following, where I liken book remix to remixing of music to illustrate the shortcomings:

Text requires the same kind of shaping. If you take a chapter from one book and then put it next to another chapter from another book, you do not have a book – you have two adjacent chapters. You need to work to make them fit together. Working material like this is not just a matter of cross-fading from one to the other by smoothing out the requisite intros and outros (although this makes a big difference in itself), but there are other aspects to consider – tone, tempo, texture, language used, point of view, voice etc as well as some more mundane mechanical issues. What, for example, do you do with a chapter that makes reference to other chapters in the book it originated from? You need to change these references and other mechanics as well as take care of the more tonal components of the text.

I think these are valid points but, revisiting this, there is one nuance I would like to add. Sometimes ‘shuffling’ is adequate when you are compiling an anthology, which is, as it happens, the case when you put multiple journal articles into a collection. Building tools to enable this kind of ‘reshuffle’ is very useful, but I would still question its usefulness in certain contexts. In my experience, it is the kind of feature that would be great as a tool used by, for example, a publisher or curator; I’m not sure of its usefulness in a more generic ‘user space’. Journal publishers do, in fact, make collections where several articles are compiled together to form one ‘bound’ work (often a PDF). In this space, such a tool could make life much easier. Whether members of the research community, for example, would want, need, or use such a tool is still an open question to me.

For information on how FLOSS Manuals Remix worked see here:

http://write.flossmanuals.net/floss-manuals/remixing/

It is still working here:

http://archive.flossmanuals.net/index.php?plugin=remix

Here is a video (Ogg) demo of it in action, with the resulting PDF linked below.

my_pdf (note: the colored text is because, as shown in the demo above, I edited the styles via CSS to make the body text red).

Video made with recordMyDesktop.

With Thanks to Raewyn Whyte

Many thanks to friend and colleague Raewyn Whyte who has been maintaining this blog, transferring over a heap of content, editing, forensically digging for images and old posts, filtering spam, tagging, cleaning, and helping me organise and maintain this new version of my site.

Now she has to read and edit this too without blushing. Thanks Raewyn 🙂

PS: if you need a good editor/writer, you can find her here.

The Case Against Demos

It is always tempting to develop demos when developing software. I have driven myself and others down this path many times. The aim being to quickly come up with inspiring, ‘facade-only’ features that can encourage ‘buy in’ from your target audience.

However, with a few exceptions, I have not actually found that they lead to many interesting places. Demos have a few paradoxes that are not immediately apparent when you get that great idea of where the software could be, or should be, or maybe, just maybe, might be. So it’s worth spelling out, for myself if for no one else, why demos are, generally speaking, a bad idea.

  1. a good demo works – the ultimate paradox. There is often no difference between building a good demo and building the thing itself. So, don’t kid yourself that a demo is going to be a magically shorter shot to the moon. It’s the same distance to the moon in a demo rocket as it is in a real one.
  2. demos are fake – demos are mock-ups; you think they will better demonstrate to people what your software is capable of doing, but it is not actually doing it, because it is fake.
  3. demos can yield unreasonable expectations – so you make a great demo. People buy into it! So when are you going to deliver? Soon, right!? It’s almost there! Wrong.
  4. demos waste development time – that speaks for itself.

The longer in the tooth I get, the more I think you should demo what you have. I think that many times demos are presented as a kind of proxy for the future state of the software, almost smoothing over some deep anxiety that you aren’t far enough down the road yet. You want people to think you are further down the road than you are. Sure. I get it. I’ve been there. However, I think you have to be confident about what you are doing. Show what you have now and stand strong. It is where it is. Talk about the future, don’t demo it.

note: I’m not talking about exploratory prototypes. I think these are another thing altogether. They are necessary and useful explorations even if they don’t immediately lead anywhere; sooner or later the learnings will emerge when you need them.