Projects

Some of my projects. As you can see, the list is partially complete. I will add more and then make this a static page.

Publishing-related

Collaborative Knowledge Foundation

Current
The Collaborative Knowledge Foundation’s mission is to evolve how scholarship is created, produced and reported. CKF is building open source solutions in scholarly knowledge production that foster collaboration, integrity and speed.

CKF envisions a new research communication ecosystem that gives rise to wholly unique channels for research output.

CKF was founded in October 2015 with support from the Shuttleworth Foundation.

Shuttleworth Foundation

Current
I have been awarded a Shuttleworth Foundation Fellowship; I’m deeply honoured to have been selected. I have since been awarded a second year of the Fellowship for my work on reformulating how knowledge is produced.

Recent Presentations

The Future of Text

Google Headquarters in Silicon Valley, Aug 2016
http://www.thefutureoftext.org/
The event was organised by friends and followers of Douglas Engelbart; I was invited to present on collaboration and book production.

Association of Learned and Professional Society Publishers

London, Sept 2016
http://www.alpsp.org/2016-Programme
I was invited to present at the annual ALPSP conference on ways that Open Source could change publishing.

International Society of Managing & Technical Editors

Brussels, Nov 2016
http://www.ismte.org/page/2016EuroConference
http://www.slides.com/eset/ismte
I was invited to present on Open Source tools for publishers at the Brussels edition of the 2016 ISMTE conference series.

Open Fields

Riga, Latvia, Oct 2016
http://rixc.org/en/festival/Open%20Fields%20Konference/
I was invited to speak about the intersection of art, science, and publishing at the cutting-edge Open Fields festival.

Unlearning Collaboration

Berlin, Oct 2016
http://www.supermarkt-berlin.net/event/un-learning-networked-collaboration/
I was invited to facilitate a one-day conference on collaboration and facilitation.


Development projects

PagedMedia

2016 – present
http://www.pagedmedia.org
I founded this blog about paged media.

Substance Consortium

2016 – present
http://substance.io/consortium/
Founding member of the Substance Consortium.

Book Sprints

2008 – present, New Zealand
http://www.booksprints.net/
Book Sprints is a methodology and a company I founded to rapidly produce books.

In November 2016, I transitioned from founder and CEO to the board, appointing Barbara Rühling as CEO.

A Book Sprint is a collaborative process in which a book is produced from the ground up in just five days. Even more importantly, this collaborative process captures the knowledge of a group of subject-matter experts in a manner that would be nearly impossible using traditional methods. The result at the end of a Book Sprint is a high-quality finished book in digital and print-ready formats, ready for distribution.

Book Sprints Ltd is a team of facilitators, book-production professionals, and illustrators specialised in Book Sprint facilitation and rapid book production. Our organisation developed the original methodology and has refined it since 2008 through the facilitation of more than 100 Book Sprints. Topics have ranged from corporate documentation to industry guides, government policies, technical documentation, white papers, academic research papers, and activist manuals.

Book Sprints clients include Cisco, PLOS, F5, the World Bank, USAID, the African Development Bank, Open Oil, Liturgical Press, Augsburg Fortress, Cryptoparty, OpenStack, the European Commission, JISC CETIS, UNECA, Mozilla, IDEA, Engine Room, Heidy Collective, Transmediale, Google… to name a few.

"If Book Sprints did not exist, we would be forced to invent them, so powerful is the knowledge production paradigm."
 --Allen Gunn, Aspiration

"Book Sprints get more brilliant work out of bright people in 1 week than most project can evoke across many months."
 --Loy Evans, Cisco

"Writing a book through a Book Sprint turned out to be efficient, thorough and enjoyable; I can’t imagine a better outcome."
 --Phil Barker, JISC CETIS

Aperta

2013 – July 2015, Public Library of Science, San Francisco
In 2013, I designed a platform for the Public Library of Science (PLOS), originally called Tahi and later renamed Aperta. In 2014, I was asked to lead a team to build the platform. I led the 15-strong team to the production-ready 1.0 release of this multi-million-dollar project, delivered on time and under budget in June 2015.

Aperta is a complete submission and peer-review platform for multiple scientific journals housed within a single instance. The entire system is designed to be highly collaborative and concurrent. The platform includes a manuscript production interface, HTML and LaTeX document editing support, Word ingestion, a workflow management system, task management interfaces, admin interfaces, reports, and user dashboards. The platform was built with Ember-CLI and Rails, implements a highly customised version of the Wikimedia Foundation’s Visual Editor, and uses Slanger for concurrency. It is an HTML-first system, takes many innovative new approaches to journal systems, and solved many long-standing problems in this space. The project also involved a separate codebase named iHat, which provides Aperta with an API service for queue-managed file conversions.

"At its core, this new PLOS editorial environment brings simplicity to the submission and peer review process by providing advanced task-management technology and a vastly improved user interface, which will enhance the publishing experience for our community of authors, editors, and reviewers." 
http://blogs.plos.org/plos/2015/07/publishing-initiatives-at-plos-a-look-back-and-a-look-ahead/

NB: I only work on Open Source systems. The sources are not yet available for this project.

PubSweet

2012 – present
PubSweet is a platform designed to assist the rapid production of books in Book Sprints. The platform is very simple to use, with very little overhead for new users. The system provides dashboards, publishing consoles, card-based workflow management (a task manager), discussions, data visualisations of contributions, dynamic table-of-contents management, and support for multiple chapter types. PubSweet can produce EPUB and leverages book.js (see below) to produce print-ready PDF (paginated in the browser). PubSweet is written in PHP, with Node on the backend and CKEditor as the content editor.

Lexicon

2012
Lexicon is a platform produced for the United Nations Development Programme to collaboratively produce a trilingual (Arabic, French, English) lexicon of electoral terms for distribution in Arabic-speaking regions. Lexicon provided concurrent editing of chapters containing multiple terms, sorting by language, discussion forums, and voting. It was written quickly in PHP with Node.

"The Lexicon was created with the aid of an innovative collaborative writing tool customized to suit the needs of this project. This web-based software allowed the authors, reviewers, translators and editors to simultaneously input their contributions to successive drafts from their various countries. "http://www.undp.org/content/undp/en/home/presscenter/pressreleases/2014/11/19/undp-launches-first-lexicon-of-electoral-terminology-in-three-languages.html

book.js

2012
book.js is a JavaScript library that you can use to turn a web page into a PDF formatted for printing as a book. Take a web page, add the JavaScript, and you will see the page transformed into a paginated book, complete with page breaks, margins, page numbers, table of contents, front matter, headers, etc. When you print that page, you have a book-formatted PDF ready to print.
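
To make that concrete, here is a toy sketch of the underlying trick – my own illustration, not book.js’s actual code – in which content is moved into fixed-height ‘page’ elements until each page is full:

```js
// Toy in-browser pagination (illustrative only, not book.js's real internals):
// move nodes into fixed-height "page" divs, starting a new page on overflow.
function paginate(source, pageHeightPx) {
  const pages = [newPage(pageHeightPx)];
  // iterate over a static copy of the child nodes, since we re-parent them
  Array.from(source.childNodes).forEach((node) => {
    let page = pages[pages.length - 1];
    page.appendChild(node); // appendChild moves the node out of `source`
    if (page.scrollHeight > pageHeightPx) {
      // the node overflowed this page: start a fresh page and move it there
      // (a single node taller than one page is not handled in this sketch)
      page = newPage(pageHeightPx);
      page.appendChild(node);
      pages.push(page);
    }
  });
  return pages;
}

function newPage(heightPx) {
  const div = document.createElement('div');
  div.className = 'page';
  div.style.height = heightPx + 'px';
  div.style.overflow = 'hidden';
  document.body.appendChild(div); // a page must be in the DOM to be measured
  return div;
}

// Usage: paginate(document.getElementById('content'), 800);
```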

book.js has inspired a number of other JS pagination engines. See Vivliostyle, BookJS Polyfill, Pagination.js, simplePagination.js, and CaSSiuS.

Booktype

2010 / 2012
Booktype is a book production platform. I started it as ‘Booki’ in 2010 and brought it to Sourcefabric (Berlin) in 2012, where it became Booktype. Booktype is written in Python (Django).

“Booktype resolves challenging issues in collaborative knowledge production resulting in high quality print and ebooks.” – Erik Möller, deputy director, Wikimedia Foundation

GSoC Doc Camp

2011, 2012, 2013
The GSoC Doc Camp was an annual event over three years. It was a place for documenters to meet, work on documentation, and share their documentation experiences. The camp improved free documentation materials and skills in GSoC projects and helped form the identity of the emergent free-documentation sector.

The Doc Camp consisted of two major components – an unconference and 3-5 short-form Book Sprints to produce ‘Quick Start’ guides for specific GSoC projects.

Each Quick Start sprint brought together 5-8 individuals to produce a book on a specific GSoC project. The Quick Start books were launched at the opening party of the GSoC Mentors’ Summit immediately following the camp.

Bookimobile

2011
The Bookimobile was a mobile print lab in a van – essentially a van that contained all the equipment necessary to create perfect-bound books. It was designed to take the ideas of Booki to people and make real books that had been created in Booki. The first Bookimobile was based on the Internet Archive’s Bookmobile, and we took it to several book fairs and events throughout Europe. It was sponsored by Mozilla, CiviCRM, Archive.org, Francophonie.org, Google Summer of Code, and iCommons.

Objavi

2008
Objavi is an API-based software service originally written for TWiki Book (see below) that later also serviced Booki and Booktype. Objavi converts books from their native HTML into PDF for printing. It also handled other file conversions (e.g. HTML to ODT, HTML to EPUB). I later produced a similar API-based conversion service, known as iHat, for PLOS. Objavi is written in Python.
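
The request pattern looked roughly like this – a hypothetical sketch, with the endpoint and parameter names invented for illustration rather than taken from Objavi’s documented API:

```js
// Hypothetical sketch of a conversion request to an Objavi-style service
// (endpoint and parameter names are invented, not Objavi's documented API).
const params = new URLSearchParams({
  book: 'my-book',   // which book's HTML to fetch and convert
  output: 'pdf',     // target format: pdf, epub, odt…
  size: 'A5',        // page size for the print-ready PDF
});

fetch('http://objavi.example.org/convert?' + params)
  .then((res) => res.arrayBuffer())
  .then((bytes) => console.log('received', bytes.byteLength, 'bytes of PDF'));
```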

TWiki Book

2006
TWiki Book didn’t have a real project name at the time. It was the first publishing system I built. TWiki Book was created solely to meet the needs of FLOSS Manuals (see below) and was built on top of TWiki, a Perl-based wiki. TWiki Book included book remixing features, side-by-side translation, table-of-contents building, publishing interfaces (I actually wrote a separate PHP-based system to manage this), edit notifications, versioning, diffs, live chats and many other features. It was a good system but reasonably difficult to extend and maintain, since it repurposed existing wiki software (hence my approach of building purpose-built book production systems after this point).

FLOSS Manuals

2006
FLOSS Manuals was the project I founded in 2006 that got me started on this whole publishing thing. FM was, and still is, an active community of volunteers that creates free manuals about free software. There is now a foundation and several language communities (notably French and English). The contributors include designers, readers, writers, illustrators, free software fans, editors, artists, software developers, activists, and many others. Anyone can contribute to a manual – to fix a spelling mistake, add a more detailed explanation, write a new chapter, or start a whole new manual on a topic. The aim was to produce high-quality free works, and we succeeded – creating many fantastic manuals in over 30 languages (and still growing).

"'Introduction to the Command Line' is at least as clear, complete, and accurate as any I’ve read or written. But while there are countless correct reference works on the subject, FLOSS’s book speaks to an audience of absolute beginners more effectively, and is ultimately more useful, than any other I have seen."
-- Benjamin Mako Hill, Wikimedia Foundation Advisory Board, Free Software Foundation Board

Presentations about publishing

I am asked to talk about publishing from time to time. Below are links to some of those presentations.

Choosing a document network
August 2015, Vancouver, Public Knowledge Project

Open Access and Open Standards
Oct 2014, San Francisco, Books in Browsers

Books are Evil
May 2014, Rotterdam, Off the Press

Regional Lexicon Project
May 2014, San Francisco, I Annotate

Changing the Culture of Learning
May 2013, San Francisco, I Annotate

The Death of the Reader
Oct 2013, San Francisco, Books in Browsers

A Web Page is a Book
May 2012, Berlin, re:publica

Writings

I have been writing about publishing here and there. Since last year, these efforts have been focused on this site. Below are links to some of my other works:

Fantasies of the Library: After the Proprietary Model

An interview with me about future models for publishing, published by k-verlag (Berlin).

Radar O’Reilly posts

When Paper Fails
What happens when books, ownership, authority and authors are all challenged by a network.

Visualizing Book Production
Why is no one visualising data on how we make books?

Zero to Book in 3 Days
A little bit about Book Sprints.

Forking the Book
When books are forked.

Over Thinking EPUB
Commentary on why EPUB might be confusing the issue.

Changing the Culture of Production
How to change the way we produce knowledge.

WYSIWYG vs WYSI
The evolution of the editor.

Math Typesetting
The sorry state of math typesetting.

InDesign vs CSS
Why HTML wins versus Desktop Apps.

The Blanche Dubois Economy
Why ‘Open API’ is really a silly idea.

Gutenberg Regions
Paginating in the browser.

Browser as Typesetting Engine
More commentary on the browser as a typesetting engine.

The New New Typography
The evolution of JavaScript typesetting.


Other Projects

Seaweed

2008, Quarantine Island, New Zealand
This project was actually called ‘Intertidal’, but I like the name Seaweed better. Douglas Bagnall and I created a one-day community project to discover a new species of seaweed. We hosted it on Quarantine Island in the Dunedin Harbour and invited anyone to come work with us and a marine biologist from the local research centre to search for a new species. It was a community project and a reflection on the notion of species as an outmoded idea, and on taxonomy as a dying art. About 50 or 60 people – individuals, groups, and families – came out on the free boat (provided by the local sea scouts) and hiked across the island on a coldish Dunedin day to search for and document seaweed. We possibly discovered a new species.

“…throwing into focus the ever-present potential for new knowledge. Drawing upon 19th-century methods of species discovery, involving collecting, looking and drawing, their work formed questions around what we don’t know.”

Geek-o-system

2008, Christchurch, New Zealand

Julian Priest, Dave Merritt and I drove about a tonne of old electronics 700 km or so in an old Land Rover (top speed 35 km/h) from Dave’s warehouse in Whanganui to an art gallery in Christchurch, New Zealand. In the gallery, we served up the old electronics as a participatory art project and invited anyone to come and build new objects from the old. It took three days to get there. It was an adventure.

A Geekosystem was a participatory workshop based on a redundant-technology collection created by David Merritt. Items were selected from the collection, packed into a Land Rover and driven to The Physics Room in Christchurch. A call for participation was issued by The Physics Room, and a group of geeks gathered to re-configure the technology into artworks. A workshop space was created, and plinths were placed at one end of the gallery and populated with artworks made from the e-waste. The workshop was open to the public and continually added to for the duration of the show. Old technology books were formed into a library. Proprietary software manuals were shredded and mixed with coffee grounds. This was mulched into soil, and silver beet seedlings were successfully germinated in floppy-disk trays.

A Geekosystem was shown first at the Physics Room in Christchurch in 2008 and then at The Green Bench during the Whanganui Open studio week in 2008. The Geekosystem garden was transferred to a permanent location and produced vegetables for a number of years.

Paper Cup Telephone Network

2006, Exhibited at the Exploratorium in San Francisco, Zero One Festival in San Jose, and many other venues.
This was a project I made with Matthew Biederman and Lotte Meijer. The Paper Cup Telephone Network (PCTN) was a free communication system and a comment on how ‘simplicity’ in technical systems is trickery, problematising the corporatisation and ever-increasing individualisation of modern communications.

The PCTN was a network of paper cup telephones. Just like the games played by children, anyone could put a PCTN cup to their ear to listen, or to their mouth to speak. The difference between the PCTN and the original game was that the “string” was connected to the World Wide Web, where your voice was streamed to all the cups on the network – blocks, miles, or even a continent away. We built the entire system from free software telephony (Asterisk and SIP phones), open, standards-based telephony protocols, cups, and string.

As simple as it was, it remains the most difficult technical project I have ever undertaken.

Wifio

2006, exhibited at the Waves exhibition in Riga, Latvia

Wifio was a project I did with Lotte Meijer and Aleksandar Erkalovic. Wifio was a comment on the naivety with which we broadcast our personal information. It was a hardware UI and software that allowed anyone to tune into the web traffic on a wifi network. If someone near you was browsing the web on a wifi network, you could simply tune in with Wifio by selecting the right channel and tuning into their IP address.

…but don’t worry, you don’t need to know what their “IP address” is – in fact, you don’t even need to know what an IP address is! Just move the dial until you hear their emails or what they are saying in chatrooms.

I was proud when Julian Oliver (an old buddy from NZ) referenced this project as inspiration for one of his works.

Sound Elevator

2007
r a d i o q u a l i a (see below) were commissioned to make a new work for Forte di Bard in Valle d’Aosta in Italy for “Cima alle stelle” (Stars), a large exhibition showing historical works by major masters like Dürer, Tintoretto and Guercino; contemporary artists such as Pierre Huyghe, Olafur Eliasson and others; and astronomical instruments and writings by Copernicus, Galileo, Kepler, Newton and Einstein.

We made a new site-specific sound installation inside two of the glass elevators which take visitors from the arrival area of Forte di Bard to the gallery levels. Much of the elevator travel is external to the Forte. Sound Elevator consisted of two linked sound environments inside the elevators. As the elevators ascended to the exhibition halls, visitors experienced an auditory journey from the local celestial environment to the edges of the Universe. In the first elevator, visitors sonically travelled through the Earth’s ionosphere and magnetosphere, hearing our closest star, the Sun, interacting with our planetary atmosphere. The upward and outward journey continued in the second elevator, with sounds from our planetary neighbours, the sonic echo of distant stars, and finally the sound of the Big Bang itself.

I-TASC

2006/2007 Antarctica
I did a two-month residency in Antarctica at SANAE (the South African research base) as part of I-TASC – ultimately a failed network of individuals and organisations working collaboratively in the fields of art, engineering, science and technology on the interdisciplinary development and tactical deployment of renewable energy, waste-recycling systems, sustainable architecture and open-format, open-source media. But it was still a great experience.

The coolest thing about it was the two weeks each way on the beautiful icebreaker the SA Agulhas (now decommissioned). I kept some diary pages on the Interpolar site. I also did a few other projects while there, including Polar Radio (see below).

The worst thing about it was that we shouldn’t have been there. There is no need for anyone to be in Antarctica. Most of the ‘science’ projects are strategic positioning for a land grab when the time comes. Some science might be justifiable… but arts projects?

Leaving Antarctica I cried my eyes out. It was just too much for me to deal with. Too amazing. After Antarctica, I gave up the art world. I couldn’t think of anything else the art world could do for me.

It was a conflicted but beautiful experience.

Polar Radio

2006/2007, Antarctica
Polar Radio was a community radio project initiated by I-TASC and r a d i o q u a l i a. The first prototype station began FM broadcasts on 29 December 2006 in the Dronning Maud Land sector of Antarctica, where South Africa maintains its base, SANAE IV. It was Antarctica’s first artist-run radio station and a first step towards establishing a permanent polar radio presence in Antarctica, which might eventually broadcast between geographically dispersed Antarctic bases.

But y’know… I wish I hadn’t done it. When I first got to Antarctica, I turned on a radio and went through many, many frequencies… and I heard nothing… that was amazing. Where else in the world can you not hear anything on your radio? I then went ahead and polluted the spectrum. Darn. I regret it.

Polar Radio was part of a series of projects run by I-TASC – the Interpolar Transnational Art Science Constellation.

Mobicast

2005, Trans-Siberian Express
Capturing the Moving Mind was a conference on board the Trans-Siberian train about new forms of movement and control, war and economy. The 50 international researchers, artists and activists participating in the mobile conference formed a mobile production unit aboard the train. For the audiovisual streams, Luka Princic and I developed a free software ‘mobicasting’ platform which enabled mobile transmission of material to the web from mobile phones on the train. Mobicast was initially developed during a residency I had at MAMA Media Lab (Zagreb, Croatia).

It was a great project but really, really fragile. The tech of the time was not up to it. Mostly it ran on Pure Data and some obscure bits of code from here and there. Still, it worked. The best moments were hanging out on the train laughing at people trying to be ‘artists’ in real time… huh? …getting sardonic with Dr Gillian Fuller – the world’s best queue hacker… and watching the train wind around the Gobi Desert… also kinda cool.

Mobicast was initially developed to overcome the problem of delivering live video from a moving train to the internet. Traditionally this is the domain of OB (Outside Broadcast) technologies or expensive vehicular satellite-uplink hardware. However, mobile phones are now very capable remote broadcast environments. Many modern phones record images, video and audio, and allow the editing and transfer of these media through wireless data networks (e.g. GPRS) with almost global coverage. The quality of these recorded media has generally been considered ‘lo-fi’, but fidelity is increasing and, importantly, expectations of networked media are becoming more appropriate. Once upon a time there was a mythic “broadcast quality” threshold all media had to pass before being accepted by broadcast organisations and (theoretically) audiences. Now, however, large-scale media organisations actively call for content generated by “on the spot” accidental observers. The tide and scale of remote media is changing. Experimental media on this type of platform is the intentional playground of mobicasting, and with this emerging type of media witness, new cultural forms are also emerging. Multiple networked media phones are in themselves a platform for collaborative cultural development, and open interesting doors for experimental media.

[Images from Ephemera Journal]

MiniFM and SilentTV

2001-2004, International
For many years I had a wonderful mentor – Tetsuo Kogawa, the father of MiniFM. I saw Tetsuo build a mini FM transmitter at the Next Five Minutes festival in Amsterdam. Sometime after that, I asked him if he would teach me how to make them too, and he very generously spent a good deal of time making sure I understood the ins and outs of the process. Together we designed a workshop, and Tetsuo worked out even simpler ways to build the transmitters. For many years I travelled the world leading transmitter-building workshops, and often Tetsuo would stream in from his studio in Tokyo to talk about the idea and give a quick demonstration before we started building.

Later, Tetsuo and I created a project called SilentTV, which was the same idea but using simple elements to broadcast TV.

I’m forever grateful to Tetsuo for his kindness and mentorship.

re:Play

November 2003, South Africa
re:Play explored the world of the computer game. It featured an exhibition of artists’ computer games by Andy Deck, Josh On + Futurefarmers, Mongrel, Natalie Bookchin, the escapefromwoomera collective and Max Barry, and a programme of workshops and lectures. re:Play was a collaboration between the Institute for Contemporary Art, Cape Town, and r a d i o q u a l i a. It launched at L/B’s – The Lounge at Jo’Burg Bar in central Cape Town, South Africa, and went on to be exhibited at Artspace and the Physics Room in New Zealand.

The games in the exhibition were not typical computer games. While all of them encouraged play and involved a gaming objective, unlike regular computer games they had a strong political dimension, exploring how play, interaction and competition can be utilised in an artistic context.

The re:Play education programme included talks and workshops led by Graham Harwood of Mongrel and r a d i o q u a l i a at Cape Town High School; Fezeka Senior Secondary School in Gugulethu; the Alexandra Renewal Project, Johannesburg; and the Wits School of Arts, University of the Witwatersrand, Johannesburg.

Free Radio Linux

2002, The World
Another project that got a lot of attention for r a d i o q u a l i a, most notably through being exhibited at the New Museum in NYC, but also at Banff and other places. I loved this project because it brought together several threads.

Formally, Free Radio Linux was an online and on-air radio station. The sound transmission was a computerised reading of the entire source code used to create the Linux kernel, the basis of all distributions of Linux.

Each line of code was read by an automated computer voice – a speech.bot utility I built for the work. The speech.bot’s output was encoded into an audio stream using the early open source audio codec Ogg Vorbis, and was broadcast live on the internet. FM, AM and shortwave radio stations from around the world also relayed the audio stream on various occasions.

The Linux kernel at that time had 4,141,432 lines of code. Reading the entire kernel took an estimated 14,253.43 hours, or 593.89 days. Listeners tracked the progress of Free Radio Linux by listening to the audio stream, or by checking the text-based progress field in the ./listen section of the website (which is no longer up)…
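
Those published totals imply a reading speed of roughly 12.4 seconds of speech per line – a quick back-of-the-envelope check (the per-line speed is inferred from the totals above, not measured):

```js
// Back-of-the-envelope check of the published totals.
const lines = 4141432;       // lines of code in the kernel at the time
const hours = 14253.43;      // estimated total reading time
console.log(((hours * 3600) / lines).toFixed(2)); // ≈ 12.39 s of speech per line
console.log((hours / 24).toFixed(2));             // ≈ 593.89 days
```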

Essentially, this was all about how free radio and free software were weirdly the same. If you ever worked with free radio geeks, you will know they are nerdy technophiles who believe in the purity of what they are doing – very much the same as Open Source geeks at the time. Most were interested in the tech and the political ideal of their respective mediums (radio and software). So FRL was a comment on this. It was also a comment on how free radio was, ironically, very difficult to achieve on the internet unless serious attention was paid to developing free codecs. But FRL had two other elements going for it. The first was to (again) poke fun at the ridiculous hyperbole that surrounded the open source movement. People were expounding this ‘amazing new phenomenon’ and extrapolating how it would change the world (much as they did about wikis a short time later) when they had never come into contact with code or geeks. So this was an attempt to expose those people to the code… or what it sounded like. Also, at the time there was a lot of early talk about how to preserve digital media (a problem still not solved), and radio waves apparently never die… so by broadcasting the Linux kernel into space we were preserving it on the oldest medium ever, forever. Hehe…

I did, however, feel very sorry for the attendants at the New Museum who had to work 8-hour shifts listening to “one dollar sign dollar sign comma hatch four new line seven two dollar sign dollar sign…”

Thing FM

2002, New York City
This was, in theory, a radio network, but in reality just a few transmitters got installed. Still, it was fun. Thing FM was based in NYC and built during a residency I did at the Thing – the same week that the Yes Men came into the office to film their ‘shit burger’ stunt. They came into the Thing and asked who wanted to go, and I didn’t go! Doh! Anyway, we built the network using internet audio (via wireless and wired connections) and miniFM. Each of the transmitters was about 0.1 W output and sourced its audio live from the internet using the Frequency Clock scheduling system I had built earlier.

This partly adopted the ethic of micro-radio as founded by Tetsuo Kogawa, where many low-powered FM transmitters are coupled to create an effective broadcasting entity that ‘falls beneath the radar’ of the communication authorities. fm.thing.net combined this ethic with that of net.radio, a relatively new phenomenon focusing on the use of the internet as a carrier signal, best illustrated by the practices of the Xchange network. By combining net.radio and micro-radio we hoped to build an efficient radio network in New York that used the internet as the primary carrier of audio for re-broadcast on legal, or almost legal, microFM transmissions.

Hanging out with Ted Byfield and Jan Gerber was a highlight of this experience. Wolfgang Strauss was also pretty fun, but I was so intimidated by him. He was just so cool. Also, sharing a tenement apartment on Ludlow Street with three people (bath in the kitchen) was pretty fun.

Radio Astronomy

2001
Radio Astronomy was an art and science project which broadcast sounds intercepted from space, live on the internet and on the airwaves. The project was a collaboration between r a d i o q u a l i a and radio telescopes located throughout the world. Together we created ‘radio astronomy’ in the literal sense – a radio station devoted to broadcasting audio from our cosmos.

Radio Astronomy had three parts:

  • a sound installation
  • a live on-air radio transmission
  • a live online radio broadcast

Listeners heard the acoustic output of radio telescopes live. The content of the live transmission depended on the objects being observed by partner telescopes. On any given occasion, listeners may have heard the planet Jupiter interacting with its moons, radiation from the Sun, activity from far-off pulsars, or other astronomical phenomena. Honor from rQ later gave a TED Talk about it.

Dino, drummer from HDU, did the website design… that’s gotta rate…

RT32

2001, Latvia
In 2001 I had the good fortune to be part of the Acoustic.Space.Lab project, which started a long love affair with the RT32 radio telescope. Formerly a Cold War device, this telescope was liberated when the Russian Army pulled out of Latvia. I worked with this telescope as an artistic device, and with its generous scientists, for many years after. A doco clip introduces the explorations of the international Acoustic Space Lab Symposium, which took place on the site of RT-32 in 2001.

Highlights of this period in Latvia included being evicted by Russian builders, getting a hernia, and being amazed that Marc Tuters survived eating so many dodgy-looking mushrooms he found in the forest.

Open Sauces

May 2001, Scotland and also later…
I have come to realise there is just too much stuff I have tinkered with to comment on it all. Open Sauces falls into that bucket. Google tells me this was 2001. Essentially, I got sick of all the Open Source blah blah of the time… everything was prefixed by OPEN and it got very tiring (Open Gov, Open Hardware, Open Society…). There was no critical reflection on the fact that geek methods are geek methods and are not transportable – AND – open processes and methods existed well before geeks came along and inherited the word. No geek invented openness. I’m still tired of this, I have to say… still… I created Open Sauces, which was an open database of recipes… anyone that did a residency could add their favourite recipe, and you could just tick all the ingredients you had in your fridge and get a recipe to suit… doesn’t sound too revolutionary, but at the time this sort of thing didn’t exist. It was a comment on this abuse of the word ‘open’, on how cooking preceded the sharing of ‘code’ / ‘instructions’ by a very long way, and on how food is probably the most important part of any collaborative project, whereas unsocial nerdy talk is optional. Later, Fo.am in Brussels were inspired by the idea and started an Open Sauces theme.

net.congestion

2000, Amsterdam
I’m particularly proud of this project. It came about when I was a very naive, newly arrived resident of Amsterdam. I suggested the idea for a festival to Geert Lovink and he said to speak to Eric Kluitenberg – both huge legends in my mind, you understand… I mustered up the courage to suggest it to Eric, who was a cultural curator at De Balie at the time. He said he would think about it, and I thought I hadn’t been very convincing. A week later he called me up and said let’s do it! Whoot!

The festival was held in Amsterdam in October 2000. Net.congestion was an intensive three-day celebration and critique of the new cultures that had arisen from all forms of micro-, narrow- and broad-casting via the internet, collectively known as streaming media.

The event covered most of the interesting ground of the time for streaming media: from the transformation of issues surrounding intellectual property, to the uses of streaming as a mobilisation tool for global resistance, through to the more rarefied questions of aesthetics and how narratives are transformed when embedded in networks. The overwhelming experience of many visitors to Net.congestion was a sense of tools, networks and sensibilities being re-purposed, returning us, again and again, to a primary experience of the net as a social space.

Net.congestion occurred just months before the dot-com bubble burst, exploding the ‘new economy’ and ‘the long boom’ with their fantasies of a world in which the economic laws of gravity had been repealed. There is no doubt that if the same event were held now, the atmosphere would be markedly different. It is not that Net.congestion was an industry event which depended on the hype for its existence; as the very title indicates, we mixed a healthy dose of skepticism with our festivities. But none of us, however critical, can entirely escape the zeitgeist, and there is no doubt that in those brief heady days Warhol’s aphorism was re-written: we could all dream of becoming billionaires, if only for 15 minutes. It was a strange historical phase when (particularly for anyone involved in streaming media) the normally fixed boundaries between business, art, technology, science fantasy and just plain bullshit temporarily blurred to create a moment of unique cultural hysteria. In that sense, our timing was perfect.

The Theory Machine

2000
In an attempt to make theorists a little more funky, I made software they could use to put their brainy thoughts to glitchy syncopation. It was mainly used by Eric Kluitenberg, including one memorable performance at Club Otok in Dubrovnik.

HelpB92

1999, Amsterdam
I helped found an organisation during the NATO bombing of Serbia and Kosovo in 1999: the international support campaign for independent media in Yugoslavia, including the famous Radio B92 media centre, in operation between March and July 1999. We did some pretty cool things, but mostly I was very happy to be involved in what must have been one of the web’s early large-scale activist campaigns. It was also the start of my longish relationship with Amsterdam, as XS4ALL offered me a job and I stayed for a few years. I still have a bike there somewhere.

What was tricky, though, was that I agreed to go to Skopje to assist an Albanian refugee radio station (Radio 21). It was kinda nerve-wracking. There were literally bombs set to explode to take out as many Albanians as possible. Some kids lost their legs across the street from where I was working. I was a milk-and-cookies boy from NZ… what was I doing here? Still, I stuck it out and we managed to set up quite an innovative way of getting radio transmissions out of the refugee camps to Radio Netherlands Shortwave… I’ll write that up when I get time.

The Frequency Clock

1998 – 2004 or so, The World
This was one of the earliest r a d i o q u a l i a projects and how I learned to program. To understand this project, you have to understand the dark ages of the internet, when video and audio hardly existed online. Essentially, we built a media scheduling system that allowed you to build archives of live and pre-recorded content, tag them, and then schedule them. All built in JavaScript… remembering these were the days when JavaScript was very rudimentary.

The Frequency Clock was originally conceived as a mechanism to control FM transmitters over the internet. In essence it was a networked timetabling system, connecting globally dispersed FM transmitters so they could broadcast the same internet audio simultaneously. The original player was a popup window, but we also built desktop apps to do the same thing using Visual Basic (Windows) and REALbasic (Mac). All open source.
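
The core idea can be sketched in a few lines of modern JavaScript (invented names and URLs, nothing like the original rudimentary code): a shared timetable maps wall-clock time to a stream URL, so every transmitter consulting the timetable relays the same audio at the same moment.

```js
// Sketch of a networked timetable (illustrative names, not the original code).
const timetable = [
  { start: '00:00', end: '06:00', stream: 'http://example.org/night.ogg' },
  { start: '06:00', end: '18:00', stream: 'http://example.org/day.ogg' },
  { start: '18:00', end: '24:00', stream: 'http://example.org/evening.ogg' },
];

function nowPlaying(date) {
  const hhmm = date.toTimeString().slice(0, 5); // e.g. "13:45"
  const slot = timetable.find((s) => s.start <= hhmm && hhmm < s.end);
  return slot ? slot.stream : null; // every transmitter resolves the same URL
}

console.log(nowPlaying(new Date()));
```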

However… then we realised that video could also work… and we used it to control community TV channels in Amsterdam and Linz, and we also controlled giant video billboards in Estonia and a whole lot of other things. It was exhibited a lot, most notably at the Walker Art Center when Steve Dietz was still there. We even installed a transmitter in the roof of De Waag! It was a remarkable experiment for its time. Yes, yes, pre-Napster and YouTube and all those other toys… while writing this I found some kind of prototype online.

Sound Performances

1996 – 2008, The World
Performing solo as ‘eset’ and with Honor Harger as r a d i o q u a l i a, I did a lot of sound performances, most using sounds from space, performed either live in real space or on radio. Some stuff still exists online:
https://soundcloud.com/radioqualia

r a d i o q u a l i a

This was the project that liberated me from the south and the reason I moved to Europe with no money and no return ticket. My plan was to make coffee and do some arty stuff in London. Thankfully, NATO bombed Serbia (hoho) and everything changed.

What I really loved about this time was that I felt part of a lovely international community of artists. We used to travel around and bump into each other in various crazy places. This group included people like Marko Peljhan, Heath Bunting, Rachel Baker, James Stevens, Luka Frelih, the Mama crew, Lev Manovich, Steven Kovats, Matthew Biederman, Giovanni D’Angelo, Zita Joyce, Adam Willetts, Rasa Smite, Raitis Smits and so many, many others… it was an awesome time.

r a d i o q u a l i a was an artist project that consisted of myself and Honor Harger. I have described some of our exhibitions and performance projects above, and listed some below. There were many more.

In August 2004,  r a d i o q u a l i a  was awarded a UNESCO Digital Art Prize for the project Radio Astronomy. In September 2003, we were awarded the Leonardo-@rt Outsiders 2003 New Horizons Prize together with the participants of the Open Sky installation at the @rt Outsiders exhibition at the Museum of European Photography in Paris.

Selected r a d i o q u a l i a exhibitions and performances:

Lecture & performance at the Centre Pompidou, Paris, France
Work: Sonifying Space, as part of the Space Art conference

Exhibition at the New Museum of Contemporary Art, New York, USA
Work: Free Radio Linux, as part of the OpenSourceArt_Hack exhibition

Exhibition at the NTT InterCommunication Centre, Tokyo, Japan
Work: Radio Astronomy, as part of open nature, a show curated by Yukiko Shikata

Online exhibition / commission / installation at Gallery 9, Walker Art Centre, USA
Work: Free Radio Linux

Exhibition at Arsenals Exhibition Hall, Riga, Latvia
Work: solar listening_stations, part of WAVES

Exhibition at HMKV, Dortmund, Germany
Work: solar listening_stations, part of Solar Radio Station

Exhibition at the Walter Philips Gallery, Banff, Canada
Work: Free Radio Linux, as part of The Art Formerly Known As New Media

Exhibition at Centre d’Art Santa Monica, Barcelona, Spain
Work: Radio Astronomy, as part of Sonar 2005

Performance at Tesla, Berlin, Germany
Work: from polar radio to solar wind

Performance, La Batie Festival, Geneva, Switzerland
Work: signals as part of signal-sever

Exhibition at Ars Electronica, Linz
Work: Radio Astronomy

Exhibition at ISEA 2004, Helsinki, Finland
Work: Radio Astronomy

Symposium & Performance, Ventspils International Radio Astronomy Centre, Latvia
Work: Acoustic Space: RT32: Orchestrating the Solar System

Broadcast on Radio New Zealand
Work: Revolutions Per Minute 1: Frequency Shifting Paradigms in Broadcast Audio

Broadcast on Radio New Zealand
Work: Revolutions Per Minute 2: Little Star

Exhibition at Small Gallery, Los Angeles, USA
Work: comma.data.space: 11 Ghz

Performance at the Moving Image Centre in Auckland, New Zealand
Work: comma.data.return :: 56:30 – 21:1

Performance at Version festival, Auckland, New Zealand
Work: listening_stations v0.3: langmuir waves

Exhibition at the Physics Room, Christchurch, New Zealand
Work: re:Play

Exhibition at Artspace in Auckland, New Zealand
Work: re:Play

Exhibition & education programme, South Africa
Work: re:Play

Exhibition at the Reg Vardy Gallery, Sunderland, UK
Work: Free Radio Linux, as part of the Art for Networks exhibition

Exhibition at the Museum of European Photography / Maison Européenne de la Photographie, Paris, France
Work: listening_stations, as part of @rt Outsiders exhibition

Exhibition at the Physics Room, Christchurch, New Zealand
Work: data.spac.ereturn, as part of the Audible New Frontiers exhibition

Locative media Residency at K2, Karosta, Latvia
Work: Locative Media

Exhibition at Turnpike Galleries, Leigh, UK
Work: Free Radio Linux, as part of the Art for Networks exhibition

Exhibition at Fruitmarket Galleries, Edinburgh, UK
Work: Free Radio Linux, as part of the Art for Networks exhibition

Radio show on Resonance 104.4FM, London, UK
Work: r a d i o q u a l i a  on resonanceFM

Exhibition at the Generali Foundation, Vienna, Austria
Work: listening_stations as part of the Geography and the Politics of Mobility exhibition

Exhibition at Chapter, Cardiff, UK
Work: Free Radio Linux, as part of the Art for Networks exhibition

Performance & Broadcast, Ars Electronica, Linz Austria
Work: Radiotopia @ Ars Electronica

Radio Broadcasts on Austrian National Radio, Vienna, Austria
Work: i s o l

Exhibition at CCCB (Sonar 2001), Barcelona, Spain
Work: Frequency Clock – gallery installation – [ sNr v.0.1 ]

Action & Broadcast, Ars Electronica, Linz, Austria
Work: Take Over Cultural Channel

Performance, Residency & Symposium at, Ventspils International Radio Astronomy Centre, Latvia
Work: Acoustic Space-Lab

Exhibition at Video Positive, Liverpool, UK
Work: Frequency Clock – gallery installation – [ vp00 v.0.0.3 ]

Workshop & Performance, Adelaide Festival of the Arts, Adelaide, Australia
Work: Closing the Loop 2000

Seminar & Performance at Lux Centre, London, UK
Work: Tuning the Net

Performance at the Stockton Festival, Stockton, UK
Work: transitions & undercurrents part of live-stock

Exhibition & Performance at OK Centrum, Ars Electronica, Linz, Austria
Work: pso.Net, as part of Sound Drifting

Exhibition at Experimental Art Foundation, Adelaide, Australia
Work: Frequency Clock – gallery installation – [ eaf v.0.0.2 ]

Exhibition at Experimental Art Foundation, Adelaide, Australia
Work: Illata

Exhibition at Contemporary Art Centre of South Australia, Adelaide, Australia
Work: e Q

Performance at LADA98 Festival, Rimini, Italy
Work: we are alive and well but terribly uncommunicative

Exhibition, Ars Electronica, Linz, Austria
Work: Frequency Clock – gallery installation – [ aec98 v.0.0.1 beta ]

Performance, Ars Electronica, Linz, Austria
Work: 56h LIVE!: Acoustic Space

Exhibition at Bregenz Festival, Bregenz, Austria
Work: gl^tch.bot

Performance and presentation at net.radio.days 98, Berlin, Germany
Work: self.e x t r a c t i n g.radio (.ser)

Online Project
Work: self.e x t r a c t i n g.radio (.ser)

Exhibition at the Machida City Museum of Graphic Arts, Tokyo, Japan
Work: The Qualia Dial

Exhibition at Fabrica New Media Art Institution, Italy
Work: Balance

More to do…

Streaming Suitcase

PliegOS

Re:mote

Low Res

Skint Stream

Open Source Streaming Alliance

Open Channels for Kosovo

self.extracting.radio

ovalmaschina

Gema

simpel

Books are Antagonistic to Free Culture

Books are proprietary knowledge repositories.

Books, whether they are electronic or paper, are built to do the same thing – enforce IP. If we want a definition of what a book is, books are proprietary knowledge repositories.

Now, in the era of free culture and digital media, we need to look back and realise that this ‘beautiful object’ is actually a premier mechanism of IP control.

Books are antagonistic to free culture.

Ad hoc-racy in Design

I’m just reading a great book – Peopleware, by Lister and DeMarco. They are talking about how to manage dev teams, but I would like to take a quote of theirs entirely out of context and celebrate it as a best practice for designing workflow systems:

“If the system has a sufficient degree of natural ad hoc-racy, it’s a mistake to automate it.”
– Lister & DeMarco

There is a great tendency in organisations to over-prescribe workflow through technical means. I think this is very often a mistake. Publishing has such broken workflows that I used to say they are in need of optimisation and then radicalisation. However, to use another term from Lister and DeMarco’s Peopleware, the workflow also has to be self-healing.

In other words, it is the people, not the machine, who are going to work out the better way to do things. Whether it is optimising an existing workflow or scrubbing down the decks and starting again, we need people to see the error and imagine the improvement. Those same people then need to be able to easily change the environment to effect change. Overly deterministic, prescriptive technical systems don’t allow for that. We must instead leave gaps in the machine’s innards so humans can ‘be the re-wiring’.

Ad hoc-racy is a necessary part of a self-healing system. Humans do it very well; machines don’t do it at all. If you want your workflow to improve, then design a system that allows for the ad hoc. Design a system that allows for the human.

Learning Node

It’s that time again… time to learn something new. On my radar right now is Node – the environment for executing JavaScript on the server. It’s a long time since I gave up learning to program. I built several systems in Perl and PHP and tinkered in Python. I also learned some JavaScript a long time ago… as I’m writing, I’m also remembering I did a few projects in Visual Basic and REALbasic (Mac)… eeeek… those were… err… the days… the days, that is, when I couldn’t afford to pay someone to do it properly. I would describe myself as a cut’n’paster, not a programmer.

However, Node really moves the game on. It has been around for a while now, but everything interesting I see online these days is more than likely to be using Node in some capacity. So, having recently finished a reasonably schedule-heavy job, I have time to work on my own projects and start tinkering again. I’m really enjoying it. I don’t want to be a programmer, but I find that gaining a working knowledge of interesting new technologies really helps me imagine what is possible and talk pseudo-intelligently with programmers about ideas.
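
To give a sense of why it appeals to a cut’n’paster like me: the canonical first Node program is a complete web server in half a dozen lines, using only the built-in http module.

```js
// A complete web server using only Node's built-in http module.
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node\n');
});

server.listen(8000, () => console.log('Listening on http://localhost:8000'));
```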

Node looks to be one very interesting option for whatever I do next. I’m using Talentbuddy to get a start. They have some good free starter tutorials. I’ll try them out and see where I get. Having said that, I have just this minute discovered learnjs, so it looks like there’s no shortage of learning opportunities. Awesome.

publish.js

A list of some interesting Open Source scripts (mostly JS) that hold good possibilities for knowledge production and publishing.

Content Production

Lens
License: FreeBSD
Code: https://github.com/elifesciences/lens
WWW: http://lens.elifesciences.org/about/

Tangle
License: MIT
Code: https://github.com/worrydream/Tangle
WWW: http://worrydream.com/Tangle/

Flatsheet
License: MIT
Code: https://github.com/flatsheet/flatsheet
WWW: http://flatsheet.io/

Realtime Markdown Editor
License: TBD (emailed dev)
Code: https://github.com/scotch-io/node-realtime-markdown-viewer
WWW: https://scotch.io/tutorials/building-a-real-time-markdown-viewer (tutorial)
Demo: http://realtimemarkdown.herokuapp.com/

jsPlumb
License: MIT & GPL
Code: https://github.com/sporritt/jsplumb/
WWW: http://jsplumbtoolkit.com/

Ice
License: GPL 2
Code: https://github.com/NYTimes/ice
WWW: http://nytimes.github.io/ice/demo/

Annotator
License: MIT & GPL v3
Code: https://github.com/openannotation/annotator/
WWW: http://annotatorjs.org/

Space.js
License: MIT
Code: https://github.com/gopatrik/space.js
WWW: http://www.slashie.org/space.js/

MathJax
License: Apache
Code: https://github.com/mathjax/MathJax
WWW: https://www.mathjax.org/

KaTeX
License: MIT
Code: https://github.com/Khan/KaTeX
WWW: http://khan.github.io/KaTeX/

Velocity.js
License: MIT
Code: https://github.com/julianshapiro/velocity
WWW: http://julian.com/research/velocity/

JuxtaposeJS
License: Mozilla
Code: https://github.com/NUKnightLab/juxtapose
WWW: https://juxtapose.knightlab.com/

TimelineJS3
License: Mozilla
Code: https://github.com/NUKnightLab/TimelineJS
WWW: http://timeline.knightlab.com/

StoryMap
License: Mozilla
Code: https://github.com/NUKnightLab/StoryMapJS
WWW: https://storymap.knightlab.com/

Chartbuilder
License: MIT
Code: https://github.com/NUKnightLab/Chartbuilder
WWW: http://quartz.github.io/Chartbuilder/

Soundcite
License: Mozilla
Code: https://github.com/NUKnightLab/soundcite
WWW: http://soundcite.knightlab.com/

Pagination

BookJS
License: AGPL
Code: https://github.com/booktype/BookJS
Web: none

Cassius
License: AGPL
Code: https://github.com/MartinPaulEve/CaSSius
Web: https://www.martineve.com/2015/07/24/getting-started-typesetting-with-cassius/

Vivliostyle
License: Apache
Code: https://github.com/vivliostyle
Web: http://vivliostyle.com/

BookJS Polyfill
License: AGPL
Code: https://github.com/BookSprints/bookjs-polyfill
Web: none

Typography

Hyphenation

Sweet Justice
License: BSD
Code: https://github.com/aristus/sweet-justice
WWW: http://carlos.bueno.org/2010/04/sweet-justice.html

Hyphenator
License: LGPL
Code: https://code.google.com/p/hyphenator/downloads/list
WWW: https://code.google.com/p/hyphenator/

Hypher
License: BSD
Code: https://github.com/bramstein/hypher
WWW: http://www.bramstein.com/projects/hypher/

Font Resizing

FlowType.js
License: MIT
Code: http://github.com/simplefocus/FlowType.JS
WWW: http://simplefocus.com/flowtype/

Squishy
License: unspecified (argh! please include a license file!)
Code: https://github.com/lemonmade/squishy
WWW: http://cmsauve.com/projects/squishy/

Fittext
License: WTFPL
Code: https://github.com/davatron5000/FitText.js
WWW: http://fittextjs.com/

SlabText
License: MIT
Code: https://github.com/freqDec/slabText/
WWW: http://freqdec.github.io/slabText/

Responsive Text
License: MIT
Code: https://github.com/ghepting/jquery-responsive-text
WWW: n/a

Line Spacing

Typeset
License: unspecified (argh… )
Code: https://github.com/bramstein/typeset
WWW: http://www.bramstein.com/projects/typeset/

Kerning

Kern.js
License: WTFPL
Code: https://github.com/bstro/kern.js
WWW: http://www.kernjs.com/

Kerning.js
License: MIT (license file is in the wrong place – check the README)
Code: https://github.com/endtwist/kerning.js
WWW: http://kerningjs.com/

TypeButter
License: CC-BY-SA 3.0
Code: https://github.com/hudsonfoo/typebutter
WWW: http://typebutter.com/

Drop Caps

DropCap.js
License: confused (All rights reserved and apache??)
Code: https://github.com/adobe-webplatform/dropcap.js
WWW: http://blogs.adobe.com/webplatform/2014/10/02/drop-caps-are-beautiful/

Color

Color Font
License: WTFPL
Code: http://manufacturaindependente.com/colorfont/media/js/colorfont.js
WWW: http://manufacturaindependente.com/colorfont/

Font Tricks

Lettering JS
License: WTFPL
Code: https://github.com/davatron5000/Lettering.js
WWW: http://letteringjs.com/

jqisotext
License: MIT & GPL
Code: https://code.google.com/p/jqisotext/downloads/list
WWW: http://workshop.rs/2010/01/jqisotext-jquery-text-effect-plugin/

Arctext.js
License: MIT
Code: http://tympanus.net/Development/Arctext/Arctext.zip
WWW: http://tympanus.net/codrops/2012/01/24/arctext-js-curving-text-with-css3-and-jquery/

Blast.js
License: MIT
Code: https://github.com/julianshapiro/blast
WWW: http://julian.com/research/blast/

Base Lines

Baseline.js
License: WTFPL
Code: https://github.com/daneden/Baseline.js
WWW: n/a

Baseline CSS
License: CC-BY-SA 3.0
Code: http://baselinecss.com/download/baseline.zip
WWW: http://baselinecss.com/

HUGrid
License: GPL
Code: http://bohemianalps.com/tools/grid/HeadsUpGrid_download.zip
WWW: http://bohemianalps.com/tools/grid/

CSS Stuff

Tufte CSS
License: MIT
Code: https://github.com/daveliepmann/tufte-css
WWW: http://www.daveliepmann.com/tufte-css/

Other Resources

This and That

https://github.com/yumyo/js-type-master
http://stateofwebtype.com/
http://nytlabs.com/projects/editor.html
https://www.youtube.com/watch?v=VbCqFQ1sTYQ
http://dat-data.com/
https://github.com/sharelatex/clsi-sharelatex
https://anmolkoul.wordpress.com/2015/06/05/interactive-data-visualization-using-d3-js-dc-js-nodejs-and-mongodb/
http://webapplayers.com/inspiniaadmin-v2.2/cssanimation.html
http://yusugomori.com/projects/css-sans/fonts
http://codepen.io/juliangarnier/pen/idhuG
https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills
https://github.com/WebComponents/webcomponentsjs

Some Good Reading

https://medium.com/@mql/self-host-a-scientific-journal-with-elife-lens-f420afb678aa
https://medium.com/@mql/centralised-vs-decentralised-publishing-626055376c81
http://book.pressbooks.com/

Platforms

Booktype
License: AGPL
Code: https://github.com/sourcefabric/Booktype
WWW: https://www.sourcefabric.org/en/booktype/

PressBooks
License: not specified 🙁 emailed org
Code: https://github.com/pressbooks/pressbooks
WWW: http://pressbooks.com/

PubSweet
License: Apache
Code: https://github.com/BookSprints/PubSweet
WWW: none

More coming…

Building Book Production Platforms p5

Workflow

Much more to come.

Most of the book production platforms in circulation have few workflow tools to speak of. This is not necessarily a bad thing. A platform that is ‘just an editing environment’ is still pretty powerful. If you do need tools to assist with workflow, then in situations where a small group knows each other well, they can use email or, if in real space, Post-it notes or paper to track what needs to be done next. In many cases, a live chat in the interface, or an integrated topic-based forum, will be enough to satisfy many workflow needs; in other cases, the platform can be augmented by external systems such as wikis, online spreadsheets, content management systems and other tools to meet particular requirements.

However, there are a number of situations where these ‘solutions’ become unsatisfactory. This is especially true for organisations which have a large number of people involved in processing content, or which have sophisticated content-processing needs (such as book publishers).

Before going too much further, let me clarify what “workflow tools” are. In the broadest sense, they are tools that help you know what needs to be done, and when it needs to be done by. Using this very broad definition, we can see that mechanisms such as discussion forums and live chats are workflow tools. By chatting with colleagues through a live chat or forum, you can work out what needs to be done next, or get a ‘notification’ (a shout-out) that it needs to be done now… From there, systems can evolve into complex technical environments which are either relatively open-ended (such as Trello) or relatively closed, such as hard-coded workflow pipelines.

The first book production system I built for FLOSS Manuals, ‘built’ on top of Twiki in 2006-2007, had some basic workflow tools, namely:

  • a basic live chat
  • a dropdown status-selector for marking chapter statuses (needs content, needs images, finished, and so on)
  • notifications in the table of contents when someone is editing a chapter
  • a mailing list where efforts could be coordinated

[image: notification screenshot]

These tools were simple and effective and served us well for a number of years. I also incorporated similar mechanisms into Booktype and PubSweet. In addition, when we used these platforms for Book Sprints, lots of whiteboard scribbles and Post-its were utilised.

[image: whiteboard scribbles]

In a Book Sprint, notably, the facilitator is the main coordinating workflow mechanism. I point that out because it is important to understand that workflow tools can include humans – often the easiest way to know what needs to be done, and when it needs to be done by, is to get someone else to tell you.

[image: Book Sprint participants]

And let’s not forget that human factor! We are living at a time when we tend to want to programmatically solve problems with overly prescriptive technical systems. But sometimes underdetermining the technical systems is the right way to go.

I first tried pushing past these basic software workflow tools with Booktype – a book production system I founded, now housed with Sourcefabric. I leveraged the kanban idea of multiple columns (phases) populated by ‘todo’ items to build the equivalent of a digital kanban system, making the first simple prototype in a demo for the Frankfurt Book Fair in 2012. The inspiration came from Pivotal Tracker and the Open Source Fulcrum.

Most often the technology used to set up a kanban system is a whiteboard, with marker pens to draw and label the columns, and Post-it notes as a marker of the tasks. This kind of system is popular in unconferences, and also often used by software development houses. We also use this type of kanban approach a lot in Book Sprints.

[image: Booktype task manager]

The task manager (as I called it) and the production system were linked to each book and worked nicely. Although this system didn’t make it into the core code of Booktype, this version got the idea across, and later Juan Gutierrez made an integrated version for PubSweet. (During 2014, I also built this idea into a system for PLOS).

The task manager used a whiteboard-like interface in which the user could create columns (phases). Cards could be added to each phase and simple notes kept on each card. It was simple but effective.

In time I discovered Trello, and Why Cards are the Future of the Web by Paul Adams – these examples placed cards nicely within evolving design paradigms of the Internet, and I started to think about this model in more detail.

There are many advantages to cards, not the least being that cards can ‘follow the user’ – think of them as powerful work-unit-applications that can be accessed by a user within any context where they are needed.

[image: cards on the web]

Additionally, when thinking of digital cards within the digital workflow-kanban paradigm, the nice thing is that it is a very simple model. There are essentially just 2 elements – cards and columns. You can create as many of each as you like. Further, you can name the columns and cards anything you like. That means these two devices can be used to represent any number of simple or complex workflows. You can start from the kanban default – three columns marked ‘to do’, ‘doing’ and ‘done,’ and add cards for each task – progressing them from left to right as tasks progress from ‘to do’ to ‘done.’ This is the default configuration when creating a new Trello board.
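
To make the point about simplicity concrete, here is a minimal sketch of the model in TypeScript. The names and shapes are mine, purely for illustration; they are not Trello’s API or any platform’s actual code.

  // The whole model: columns (phases) hold cards (tasks).
  interface Card {
    title: string;
    notes: string[];
  }

  interface Column {
    name: string;
    cards: Card[];
  }

  // The kanban default: three columns, tasks progressing left to right.
  const board: Column[] = [
    { name: "to do", cards: [{ title: "Write chapter 1", notes: [] }] },
    { name: "doing", cards: [] },
    { name: "done", cards: [] },
  ];

  // Progressing a task is just moving its card from one column to another.
  function moveCard(title: string, from: string, to: string): void {
    const source = board.find((col) => col.name === from);
    const target = board.find((col) => col.name === to);
    if (!source || !target) return;
    const i = source.cards.findIndex((card) => card.title === title);
    if (i !== -1) target.cards.push(...source.cards.splice(i, 1));
  }

  moveCard("Write chapter 1", "to do", "doing");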

Replicating this system in an application is pretty easy to do. Trello is an excellent example. While Trello is not easily integrated into another technical system (such as an in-house publishing system), it is interesting in that the designers, while surely tempted by all that a web application could offer, have endeavoured to keep the Trello system true to the kanban ideology of useful but simple. With Trello, therefore, you can add columns, and cards to columns, naming each as required. When you open a card, however, you have some nice widgets for making lists, comments, discussions, attaching files etc. This is something paper cannot easily do, at least not with the small real estate afforded by Post-it notes.

Trello is a lovely application precisely because these systems, like the paper kanban, have been designed to be simple to use and serve as many generic use cases as possible.

However, while digital kanban systems like this are useful as standalone ‘context agnostic’ systems, they could be much more powerful for publishers (or anyone) if this simplicity and flexibility could be preserved while the system also served their specific use case. The trick is to preserve the simplicity and flexibility to allow publishers to model existing and future workflows in an easily ‘grok-able’ drag and drop manner (similar to Trello), while building cards that reflect the publisher’s specific needs (to invite editors, push content to external vendor services, perform peer review etc).

Building cards like this means pushing cards away from the Trello/kanban generic-use paper metaphor towards a more sophisticated specific-use digital and networked paradigm. This means embracing the idea that cards are networked applications, and building cards that precisely serve the publisher’s needs and integrate into their existing internal and external systems.
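
As a sketch of what that might look like: below, a card is no longer a paper note but a small application with publisher-specific actions. Everything here is hypothetical; the action names simply stand in for whatever internal and external systems a given publisher needs to integrate.

  // A hypothetical publisher-specific card: still a unit on a board,
  // but each action can call out to internal or external services.
  interface PublisherCard {
    title: string;
    phase: string;
    inviteEditor(email: string): Promise<void>;
    sendToVendor(vendorUrl: string): Promise<void>;
    startPeerReview(reviewerEmails: string[]): Promise<void>;
  }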

The Four Phases of Knowledge Production

Knowledge production consists of four basic phases – manage, create, process, and share. When designing knowledge production systems, it pays to keep these four phases in mind and to build platforms with an eye on this high-level abstraction. The phases can be linear dependencies, overlap, and/or run concurrently.

‘Thinking from above’ can help us to better understand where the needs of each phase might be placed within the system we are designing.

Manage – when producing a knowledge asset, there needs to be some management of the context. This mainly includes things like the setting up and processing of users, roles, permissions, and groups. However, ‘manage’ can also be an appropriate way to think about the ways users access content. A dashboard is a management interface, as are interfaces to create a Table of Contents (book) or keep track of a Collection (eg a set of related journals). The usefulness of thinking of management needs in this way is that it really highlights that a dashboard (consider Google Drive as a dashboard) and a Table of Contents interface for managing an online book production system are the same category of interface. They might have a different treatment according to the use case, but they facilitate much the same kinds of activities.
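
A sketch may help here. Under this abstraction, a dashboard, a Table of Contents and a Collection are all the same kind of interface over different item types; the names below are illustrative only, not any platform’s schema.

  // One management abstraction: a view over a collection of items.
  interface ManagedItem {
    id: string;
    title: string;
  }

  // A dashboard manages documents, a Table of Contents manages
  // chapters, a Collection manages journals: same interface, with a
  // different treatment per use case.
  interface ManagementView<T extends ManagedItem> {
    items: T[];
    add(item: T): void;
    remove(id: string): void;
    reorder(ids: string[]): void;
  }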

Create – content needs to be created, so we need content-creation interfaces. We are familiar with these: think about blogging and where you write your posts – that is a content creation interface; now think about a book production system where you write a chapter – that is also a content creation interface. These interfaces both belong in the same high level abstract container.

We need to think of content creation interfaces at a high level of abstraction when designing these kinds of systems for many reasons.

First, it is useful to think about how components may be re-used. What, for example, is the difference between an interface that is used to create a blog post, and one used to create a chapter? For a great many use cases, the answer is nothing. This re-use approach is taken by PressBooks, for example. PressBooks uses WordPress as a book production platform (note: I don’t agree this is a good idea, as WordPress is an entire suite best suited to blogs and not books, but I am pointing out the similarity of the content production needs at a very high level).

Second, it is interesting to ask ourselves what kind of content we are trying to produce and whether we have the right type of content production interface. Think, for example, of a book production system. All (except one) of the online book production tools I am familiar with have one kind of content creation interface – a WYSIWYG editor attached to a blank page. You use the editor to fill up the page until your chapter is done.

But what if you need to produce a glossary? Most book production systems use the same tool. That doesn’t seem like a good idea. Glossaries have very specific needs – users need to be able to sort, create new items, perhaps even translate terms into other languages and sort by those languages etc. A ‘waterfall’ cascading content creation interface (WYSIWYG and a blank page) doesn’t meet that need very well. What if you wish to produce 2-page spreads (in the case of paper books), or an index? Then imagine that instead of books, we are looking to produce annotated data sets.
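
For example, here is a sketch of what a structured glossary model might look like; the field names are my own illustration, not any existing platform’s schema.

  // A structured glossary entry, as opposed to free-form WYSIWYG text.
  interface GlossaryEntry {
    term: string;
    definition: string;
    translations: Record<string, string>; // keyed by language code, eg "de"
  }

  // Sorting by term, or by a translated term, becomes trivial once the
  // content is structured; in a blank-page editor it is manual labour.
  function sortEntries(entries: GlossaryEntry[], lang?: string): GlossaryEntry[] {
    return [...entries].sort((a, b) => {
      const ta = lang ? (a.translations[lang] ?? a.term) : a.term;
      const tb = lang ? (b.translations[lang] ?? b.term) : b.term;
      return ta.localeCompare(tb);
    });
  }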

We need to liberate ourselves from the one-size-fits-all approach to content production, and placing these needs in a high level abstract container allows us to think of the needs and not the UI.

Process – one thing I have come to learn through working in STM publishing is that publishers add value to content by improving it. That is pretty much what every publisher does. In my mind, we should be re-framing publishers and calling them processors. After all, making content public these days is a doddle. And the difference between ‘self-publishing’ and ‘publishing’… is the processing bit. Self-publishers (generally speaking) do not have access to the same level of experienced and useful processing that can improve a work. Over time, the ‘publishing’ part will be valued less and less, and in the case of STM, the long wait to make science public is already an impediment to progress.

The processing of content is an important part of knowledge production. If we want good knowledge, we need to be able to bring into play all those people and (sometimes) machines at the right moment to improve the work. That is essentially what workflow is all about. Processing is a high-level abstraction for workflow. It’s important to consider processing at a high-level abstraction because far too many systems build hard-coded workflow pipelines into their platforms, and that retards the opportunity to reconsider, optimise, and even radicalise workflows. I also consider other UI elements such as discussions to belong in the ‘process bucket’. Discussions are workflow and they are often the only type of mechanism that can account for the high level of specificity and issue resolution required on a per-knowledge asset (eg issues for each manuscript, chapter etc) level.

Share – formerly this meant getting the book to the bookshop, but under current conditions it is something else entirely. Sharing works in the age of digital assets and network communications is all about file formats, APIs, and syndication. Avenues for sharing, and the requirements of the process, are proliferating daily, and the needs can be so complex that any system built to manage ‘sharing’ needs to be extremely flexible.

Thinking about sharing at a high level of abstraction helps us with this enormously. For example, ingestion of .docx into an HTML-based knowledge production system is actually an act of sharing. It consists of file conversion and feeding the result into the production system for others to access. That ‘feeding’ of content into our production system is a type of syndication. And the next ‘export’ of the finished product to some other system (eg a book sales system) requires a further process of file conversion (to the target book format) and feeding that format into the target system. Exactly the same high-level process as ‘ingestion’. They both belong to the ‘sharing’ bucket.

So, we can see that ‘import/ingestion’ and ‘export’ are actually the same thing. Consequently, we can save ourselves a lot of effort by building a framework in any knowledge production system that will ‘do both’ (ie recognise that there is no difference between import and export, and manage both rather than building redundant parallel processes).
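
A sketch of what such a framework might look like, with all names invented for illustration: both ‘ingestion’ and ‘export’ reduce to one operation, convert then feed.

  // One 'sharing' operation covering both import and export.
  interface Converter {
    convert(input: Uint8Array): Promise<Uint8Array>; // eg docx -> HTML
  }

  interface Target {
    feed(content: Uint8Array): Promise<void>; // production system, sales system...
  }

  // 'Ingestion' is share(docx, docxToHtml, productionSystem);
  // 'export' is share(html, htmlToEpub, salesSystem). Same function.
  async function share(input: Uint8Array, converter: Converter, target: Target): Promise<void> {
    await target.feed(await converter.convert(input));
  }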

The beauty of this ‘conceptual schema’ for knowledge production is that we can apply it to a wide variety of use cases to understand the knowledge production process at hand, and the variation and similarities between any of them. For example, Book Sprints traverse each of these 4 phases, as does a typical book from a publisher, as does a Wikipedia article, as does a grant application to a funder. Thinking of the process that way helps us see where the variance is – and helps us to better focus on designing for the needs of each. This suggests that a really good, efficient, single system can be designed to enable the production of a vast range of knowledge types, and accommodate apparently different processes which were formerly housed in standalone single use case platforms.

Colophon: written in Piha after walking on the beach, then cleaned somewhat by Raewyn. Written using Ghost blogging software (free software!).

Why Persistent Identifiers are the Wrong Idea

I think I will rewrite this. It seems to me it’s only half the solution… more coming shortly.

This afternoon I was reading Elizabeth Eisenstein’s “The Printing Revolution in Early Modern Europe” when I came across this passage:

To consult different books, it was no longer so essential to be a wandering scholar...The era of the glossator and the commentator came to an end, and a new "era of intense cross-referencing between one book and another" began.

The point here is that after the printing press came about, there were more books available. An obvious point. As a result, cross-referencing became a feature, since it was possible to access and, consequently, reference other literature more readily.

This made me think about the current discussions around persistent identifiers for scholarly content. It seems the current solution is to offer a layer of indirection: this enables a stable identifier to persist, and should the ‘actual location’ of the content be changed, then we can re-configure the redirect to point to the new location.
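
In essence, that indirection layer is just a mutable lookup table. A minimal sketch (the identifier and URL below are invented for illustration):

  // A persistent identifier resolves to a mutable location.
  const registry = new Map<string, string>([
    ["10.1234/example.article", "https://journal.example.org/articles/42"],
  ]);

  // If the article moves, only the mapping needs updating; and that
  // is exactly the weak point.
  function resolve(identifier: string): string | undefined {
    return registry.get(identifier);
  }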

Martin Fenner and Geoff Bilder point to this solution in their very good postings on this topic. However, this method does not overcome the real problem. What if either:

  • no one updates the redirection after the location has changed
  • the content really goes offline (through loss of domain, for example)

It appears to me that we have the wrong solution. There is really no way to solve this issue with URIs. We can only minimise it.

So, how to go about resolving this (so to speak).

One way to get some insight into the issues is to wind back the clock and look at the way content was located in the age of the newly-born printing press. In this age, scholars were liberated because they didn’t have to wander the world looking for a particular book. Instead, identical printed copies proliferated, and it was just a matter of finding a copy of the work you were pursuing. Preferably you found a copy in a library or shop nearby. To find that work, one merely needed the cross-reference information to track it down. That “persistent identifier,” comprising the author’s name, the book’s title, the page reference for the quoted material, the publisher’s name and location, and the publication date, was commonly referred to as a citation (we still use that term). The citation helped the reader or researcher find a copy of the work cited. Not a particular printed book, but a copy of that book.

So, how is it that in an age of digital media we have gone backwards? Where copying something is even easier than in the printed age, why are we still pointing to ‘one’ authoritative copy? In essence, we are still referencing a book by stating the exact book that sits in a specific institution, on a particular shelf, with the blue (not green) cover.

It feels a little like the great leap backwards to me.

A way to get around this problem would be simply to allow and encourage content to be copied. Let digital media do what it does best – copy and distribute itself. The ‘unique identifier’ would then not be a URL (with a layer of indirection) but would take the form of a checksum or hash. Finding the right work would then be a matter of searching for a copy of the material with the right checksum.
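
A minimal sketch of the idea, using the standard Web Crypto API; the function names are mine:

  // The identifier is a hash of the content itself, so any faithful
  // copy, wherever it lives, can be verified against it.
  async function contentId(content: Uint8Array): Promise<string> {
    const digest = await crypto.subtle.digest("SHA-256", content);
    return Array.from(new Uint8Array(digest))
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");
  }

  // Any copy that produces the same digest is the work.
  async function isCopyOf(content: Uint8Array, id: string): Promise<boolean> {
    return (await contentId(content)) === id;
  }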

I don’t know. I’m probably missing something. But it seems we have no problem tracking down YouTube videos when they spawn into the ether. We can also tell one version of software from another, no matter where it is and how it is labelled. Why not just let the content go and provide mechanisms to find a specific version of the content via hash search (also solving the issues of versioning URIs)?

Colophon: written on a lovely Sunday afternoon in the Mission. Adam got up Monday morning with that nagging feeling… rethought it. Talked to Raewyn. Rewrite coming. Written using Ghost (free) software.