Working with Folks

Yannis and Christos, the two Coko developers who have been working on Editoria from the beginning, have been in San Francisco for two weeks.


They are super people to work with. I met Yannis in the back of a taxi on the way to a mutual friend’s wedding in Monemvasia (Greece). It was a cool wedding, and somewhere along the way I realised he was also a great programmer. I eventually talked to Yannis about maybe working together on Coko; he was keen, and he also introduced me to Christos.

So, last week we did a number of presentations together to various people in San Francisco, showing Editoria and talking about the technology behind it (INK and PubSweet).

We have also been taking the time to work at UCP together with Kate Warne and Cindy Fulton. Over the last year I have facilitated Kate and Cindy in designing Editoria, so we took the opportunity to spend more time together and do some faster iterations.


This week Kate and Cindy iterated on ideas for an uploader – essentially they wanted to upload an entire directory of MS Word files at once into Editoria and have those files automatically populate the structure of the book. Yannis and Christos had a working version ready to demonstrate the next day. We were all pretty happy with the result.


In addition, we met with many of the UCP production staff, amongst others, to demonstrate and discuss Editoria.


We discussed where Editoria is now and where it is going, and there were many fantastic questions and pointers on things we need to keep in mind going forward. It was also a very cool meeting.


Finally, we are going to build out the diacritics interface, the multi-file uploader, and a few other small bells and whistles into production-ready code, and test them next week with Kate and Cindy before Yannis and Christos return to Athens on Thursday. All round, a cool couple of weeks. These weeks also reflect the ‘Coko way’ of working – having as many conversations as possible with those interested in the technologies and processes we employ, and designing systems with the major stakeholders (the people who will use the system). Not only does this produce better software, it’s way more fun.

Ghost 1.0

Cool to see Ghost at 1.0. I like Ghost as a blogging system, not because I want to use it, but because I love seeing a system that has made some design decisions and stuck to them. In Ghost the takeaway design decisions are minimalism and Markdown. They have seen those through to a harmonised, coherent system, and huge congratulations to them for that.

I haven’t changed my opinion on Markdown! But it’s great to see well-made and well-thought-through products in the open source world, even if you disagree with some of the decisions. Congrats to the Ghost team!

In case you missed it

I’ve been pondering innovation a lot recently. I think this has come about because of a whole bunch of 1.0 releases that the Coko team has produced over the last few weeks. There is a lot of publishing and software systems innovation popping out of those releases – Editoria 1.0, INK 1.0 (now already 1.1), and PubSweet 1.0 (alpha – beta coming soon).

However, I want to focus on PubSweet, because what it offers the publishing sector might be easy to overlook, and yet it is a rather astonishing rewrite of the legacy landscape we currently occupy.

You may wish to grab a sneak peek at the beta, and as yet unpromoted, PubSweet website (wait a few weeks before trying PubSweet; we are in alpha and need to test both the docs and the alpha release). The site looks beautiful, which is a testament to the hard work put into it by Julien Taquet and Richard Smith-Unna.

The layers of innovation happening in PubSweet are pretty remarkable, and this is mainly a product of the initial vision and the fantastic work Jure Triglav has put into architecting exciting software to realise that vision. You can read a little more about this here.

While I could enumerate the innovations, which include Authsome (attribute-based access control for publishing), a component-based architecture (back end and front end), and many others, I want to focus on the effect of PubSweet.

PubSweet, as the website now states, is an open toolkit for building publishing workflows. What does this mean exactly? Well, to find out we need to look in a little detail at the two features of the PubSweet ecosystem that illustrate this statement best:

  1. The PubSweet component library
  2. PubSweet CLI

First up – the PubSweet component library.

[Screenshot: the PubSweet component library page]

Just what are you looking at on this page? Well, PubSweet is not a publishing platform in the way we typically talk about these things. Typically a publishing platform is one big monolith or, at best, a partially decoupled platform that prescribes a workflow, or, in the case of Aperta (which I designed for PLoS), one that supports a variety of possible workflows after some additional software development.

However, PubSweet takes the idea that I designed into Aperta a whole lot further. PubSweet is not a platform but essentially a framework/toolkit that you can use to build any kind of publishing platform you want. You do this by assembling the platform from components, hence the component library.

The library consists of both back end and front end components, since PubSweet can be extended ‘on both ends’. The INK-backend component you see in that library, for example, enables the system to interact with INK for file conversions and so on. This component is not visible to the user but extends the overall functionality of the platform ‘under the hood’. The Editoria-bookbuilder component, on the other hand, is the book building interface for the book platform Editoria.


And the wax-editor component that you see further down the page is the best-of-breed editor we built on top of Substance.
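
To make the component idea a little more concrete, here is a minimal sketch of what assembling a platform from components could look like. This is illustrative JavaScript only, not the actual PubSweet configuration format; the only real names in it are the three components mentioned above.

    // Purely hypothetical illustration, not the real PubSweet config format.
    // The idea: a platform is just a list of reusable components, some running
    // on the server ('back end') and some in the browser ('front end').
    const platform = {
      name: 'my-book-platform',
      components: [
        'ink-backend',          // back end: talks to INK for file conversion
        'editoria-bookbuilder', // front end: the book building interface
        'wax-editor'            // front end: the editor built on Substance
      ]
    }

    module.exports = platform

Swap ‘editoria-bookbuilder’ for a submission interface, say, and you would have the beginnings of a journal platform rather than a book platform, without touching the rest of the stack.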


What does this mean? Well, it means that you can assemble the platform you want out of all these components. No more building the one platform for book production (e.g. Booktype) or the one manuscript submission system (e.g. Aperta); rather, you can take the components you want and assemble them as you like. Not only does this support a tremendous number of use cases, it also has the following knock-on effects:

  1. efficient reuse – if you need something new in your publishing platform you need only build the difference – making changes small and affordable.
  2. shared, reduced effort – if you contribute what you build back to the library, you reduce the burden of development for others.
  3. innovation – anyone with some JavaScript skills, or with JS folks on staff, can innovate at the component level and slot the result into their publishing platform, making innovation cheap and the associated risk easy to back out of.
  4. continual optimisation – no need to ‘jump platforms’ when you run out of functionality. Instead you can continually optimise your workflow by reconfiguring and extending the system.
  5. developing conversations – now this might not be so obvious, but I find it tremendously exciting. By sharing components and innovations, conversations will start to emerge between publishers. At the moment this doesn’t happen because the systems are owned by software vendors. Publishers don’t talk to each other about platforms, which also means that they hardly spend any time learning from each other about how to optimise their workflows (and even less about how to innovate). However, publishers will naturally fall into these conversations if they use each other’s components… I have already seen this in action at the PubSweet 2.0 meeting I facilitated last week in San Francisco. It was very exciting to see.
  6. jumpstart a new solutions ecosystem – currently publishers are reliant on ‘big box’ software vendors and service providers. If a publisher wants to improve their system they must deal with an expensive, slow-moving vendor. That vendor must then balance legacy code against a legacy client base to determine whether they will make the requested changes. However, with PubSweet and the component approach, we might see hundreds or thousands of small existing JavaScript shops assembling platforms out of the components for publishers. A small university publisher, for example, could work with a small local development house to assemble and extend a PubSweet component-based platform to meet their needs. Effectively that means publishers have a whole lot more choice about who they do business with; it could also mean opening up a market for programmers that did not previously exist.

This is tremendously exciting. It’s so exciting I can hardly speak right now! No hyperbole intended, I can literally feel the adrenaline flowing through my veins as I write this.

So… the second feature of the PubSweet universe that is going to blow people’s minds is the PubSweet CLI. CLI is an acronym for ‘Command Line Interface’. Geeky stuff. I won’t go into the full details of the PubSweet CLI, as it is intended as a tool for the technically minded, but the item that will probably interest and excite you is scheduled for the 1.2 release. And that is: single-line platform installs.

Yep…just imagine. If you want to try out a particular configuration of components which someone else has put together as a platform, then you need only run one command to install it.


Imagine something like:

pubsweet install editoria

or perhaps…

pubsweet install journal-platform

or…

pubsweet install open-peer-review-journal

…can you imagine? It is currently ridiculously difficult to install publishing platforms, even if you have the code (most publishing platforms are closed source, so you don’t even have the option). But… a single command and you will have the system you want up and running…

Interestingly, this has been part of the vision from the beginning (notes from the early sessions I had with Jure, and with Michael and Oliver from Substance, can be found here), but it was almost too exciting to say out loud in case we couldn’t get there. Now it is within striking distance, and we should see this functionality within a few weeks.

Even better is that this is all open source, so a big vendor won’t ever be able to buy it. They can use it and provide services on it, but they will never be able to have it all exclusively to themselves. We aren’t going to, nor can we, sell PubSweet to some proprietary vendor, so don’t even bother asking. It is free now and forever, and this might just be the most disruptive part of the whole plan.

Supporting Other People’s Innovations

As much as I have innovated, I also like to support innovation. I’ve done this throughout my career but in recent years I’ve been extremely lucky to have had the resources (both time and finances) to play a more substantial role in supporting some pretty interesting, and extremely innovative, projects.

First, with my Shuttleworth Fellowship, I invested in support for Substance.io. We then followed this up with additional funding from Coko. The aim here was to support what I believe to be a critical player in the open source publishing ecosystem.

Next, Coko helped Nokome Bentley and the wonderful Stenci.la project, both financially and with additional support and advice. Nokome has since received a grant from the Sloan Foundation, which is pretty awesome and very much deserved.

And most recently, Coko has supported ScienceFair financially so that Richard Smith-Unna can work half time on the project while continuing to work with us on PubSweet. I wrote a little more about ScienceFair here.

In addition, Coko has helped bring together a bunch of very interesting open science/open source/open publishing projects under the umbrella of the (still very new) Open Source Alliance for Open Science (I facilitated the first OSAOS meeting in Portland a few months ago). I’m hoping OSAOS will foster a lot of cross-collaboration and that innovations will appear out of this mixing of minds.

Anyway, innovation is as much about creating a culture of possibility and an ecosystem of connected thinking as it is about supporting individual projects and approaches, and I’m very proud to have played a part in supporting some of these people and bringing some very smart folks into conversation with each other.

INKy Thinky

Recently we released INK 1.1. It is a great milestone to arrive at, just a few weeks after 1.0. Charlie Ablett, the lead dev for INK, has been working very hard nailing down the framework and has done an amazing job. It is well engineered, and INK is a powerful application. You may wish to read ink1 if you want to learn more about where INK is now.

For the purposes of the rest of this post (and if you don’t wish to read the above PDF) I should provide a bit of background, cut directly from the PDF:

INK is intended for the automation of a lot of publishing tasks from file conversion, through to entity extraction, format validation, enrichment and more.

INK does this by enabling publishing staff to set up and manage these processes through an easy to use web interface and leveraging shared open source converters, validators, extractors etc. In the INK world these individual converters/validators (etc) are called ‘steps’. Steps can be chained together to form a ‘recipe’.

Documents can be run through steps and recipes either manually, or by connecting INK to a platform (for example, a Manuscript Submission System). In the latter case files can be sent to INK from the platform, processed by INK automatically, and sent back to the original platform without the user doing anything but (perhaps) pushing a button to initiate the process.

The idea is that the development of reusable steps should be as easy as possible (it is very easy), and that these can be chained together to form a ‘recipe’ that performs multiple operations on a document. If we can build out the community around step building, then we believe we will be able to provide a huge amount of utility to publishers who either can’t do these things because they don’t know how, or who rely on external vendors to perform these tasks, which is slow and costly.

A basic step, for example, might be:

  • convert an MS Word docx file to HTML

This is pretty useful to many publishers, who either prefer to work on an HTML document to improve the content, or have HTML as a final target format for publishing to the web. Another step might be:

  • validate HTML

This is useful in a wide variety of use cases. Then consider a step like:

  • convert HTML to PDF

That would be useful for many use cases as well; for example, a journal that wishes to distribute both PDF and HTML.

While these are all very useful conversions/validations in themselves, imagine if you could take any of these steps and chain them together:

  1. convert MS Word docx to HTML
  2. convert HTML to PDF

So, you now have a ‘recipe’ comprising several steps: the document passes through each step, with the result of one step feeding into the next, and at the end you have PDF and HTML converted from an MS Word source file. Or you could also add validations:

  1. convert MS Word docx to HTML
  2. convert HTML to PDF
  3. validate HTML

That is what INK does. It enables file conversion experts to create simple steps that can be reused and chained together. INK then manages these processes, taking care of error reporting, inspection of results for each individual step, logs, resource management and a whole lot more. In essence it is a very powerful way to easily create pipelines (through a web interface) and, more importantly, it takes care of the execution of those pipelines in very smart ways.
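
To illustrate the idea (and only the idea; this is not INK’s API, just a conceptual sketch in JavaScript), a recipe is essentially an ordered list of steps where each step’s output becomes the next step’s input:

    // Conceptual sketch of the recipe idea, not INK's actual implementation.
    // A 'step' is a function that takes files in and returns files out; a
    // 'recipe' simply runs the files through each step in order.
    async function runRecipe (steps, inputFiles) {
      let files = inputFiles
      for (const step of steps) {
        files = await step(files) // one step's output feeds the next step
      }
      return files
    }

    // Hypothetical steps standing in for 'docx to HTML' and 'HTML to PDF'.
    const docxToHtml = async files => ({ ...files, html: '<p>converted content</p>' })
    const htmlToPdf = async files => ({ ...files, pdf: 'rendered PDF' })

    // runRecipe([docxToHtml, htmlToPdf], { docx: 'source file' })

INK’s real value lies in everything wrapped around that simple loop: the error reporting, per-step result inspection, logs and resource management mentioned above.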

I think, interestingly, conversion vendors themselves might find this very useful to improve their services, but the primary target is publishers.

In many ways, INK is actually more powerful than what we need right now. We are using it primarily to support the conversion of MS Word to HTML for the Editoria platform. But INK is well ahead of that curve, with support for a few use cases that are just ahead of us. For example, INK supports the sending of parameters in requests that target specific steps in the execution chain, it supports accounts for multiple organisations, and it has a few other things that we are not yet using. It is for this reason that we will change our focus to producing the steps and recipes that publishers need, since it doesn’t make sense to build too far ahead of ourselves (I’m not a fan of building anything for which we don’t currently have a demonstrated need).

These steps will cover a variety of use cases. In the first instance we will build some very simple ‘generic steps’ and some ‘utility steps’.

A utility step is something like ‘unzip file’ or ‘push result to store x’: general steps that will be useful for common ‘utility operations’ (i.e. not file processing) across a variety of use cases.

A generic step is more interesting. There are a lot of very powerful command line apps out there for file processing, and we want to expose these easily for publishers to use. HTMLTidy, Pandoc and ebook-convert, for example, are very powerful command line apps for performing conversions, validations and so on. To get these tools to do what you want, it is necessary to specify some options, otherwise known as parameters. INK already supports the sending of parameters (options) to steps and recipes when requesting a conversion, so we will now make generic steps for each of these amazing command line apps.

That means we only have to build a generic ‘pandoc’ step, for example, and each time you want to use it for a different use case you can send the options to INK when requesting the conversion.

The trick, however, is that you need to know each of these tools intimately to get the best out of them, because they have so many options that only file conversion pros really know how to make them do what you need. You can check out, for example, the list of options for HTMLTidy; it is pretty vast. To make these tools do what you need, it is necessary to send a lot of special options to them when you execute the command. So, to help with this, we will also make ‘modes’ to support a huge variety of use cases for each of these generic steps. Modes are basically shortcuts for a grouping of options that meets a specific use case. So you can send a request to run the ‘pandoc’ step to convert an EPUB to ICML (for example), and instead of having to know all the options necessary to do this you just specify the shortcut ‘EPUBtoICML’…
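
As a rough sketch of the mode idea (the JavaScript wrapper below is hypothetical, though the pandoc flags themselves are real), a mode simply maps a friendly name to a bundle of command line options:

    // Hypothetical sketch of 'modes' for a generic pandoc step.
    // A mode is a named bundle of command line options, so a user can ask for
    // 'EPUBtoICML' without needing to know pandoc's flags.
    const { execFile } = require('child_process')

    const modes = {
      EPUBtoICML: ['-f', 'epub', '-t', 'icml'], // real pandoc reader/writer flags
      DocxToHTML: ['-f', 'docx', '-t', 'html']
    }

    function runPandoc (mode, inputPath, outputPath, callback) {
      const args = [...modes[mode], inputPath, '-o', outputPath]
      // e.g. pandoc -f epub -t icml book.epub -o book.icml
      execFile('pandoc', args, callback)
    }

    // runPandoc('EPUBtoICML', 'book.epub', 'book.icml', err => { /* handle result */ })

The step itself stays generic; the modes are just a growing list of pre-baked option sets for common use cases.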

This makes these generic steps extremely powerful and easy to use, and we hope that file conversion pros will also contribute their favourite modes to the step code for others to use.

Anyway… I did intend to write about how INK got to where it is, similar to the write-up I did about the road to PubSweet 1.0. I don’t think I have the energy for it right now, so I will do it in detail some other time. But in short… INK has its roots in a project called Objavi that came out of FLOSS Manuals around 2008 or 2009. I’ll get exact dates when I write it up properly. The need was the same: to manage conversion pipelines. It was established as a separate code base from the FLOSS Manuals book production interfaces, and that was one of its core strengths. When Booki (the second-generation FLOSS Manuals production tool) evolved into Booktype, Objavi became Objavi2. Unfortunately, after I left, the conversion code was integrated into the core of Booktype, which I always felt was a mistake, for reasons I will also leave for a later post. So when I was asked by PLoS to develop a new Manuscript Submission System for them (Tahi/Aperta), I wanted to use Objavi for conversions, but it was no longer maintained. So I initiated iHat. Unfortunately that code is not available from PLoS. Hence INK.

The good news is that this chain of events has meant that INK, while conceptually related to Objavi and iHat, is an order of magnitude more sophisticated. It fulfils the use cases better and does a better job of managing the processing of files. It also opens the door to community extensions (steps are just a light wrapper, and they can be released and imported as Gem files into the INK framework).

At this moment we are looking at a number of steps and still deciding which to do first, so if you have a use case let us know! A beginning list of possible steps follows:

Utility steps

  • Zip/tar into an archive (or unzip/untar)
  • Third-party API call
  • Collect all modified files from previous steps
  • Email all files
  • Download a file from a URI
  • SFTP files to external store
  • Generic terminal command

Generic Steps

  • Calibre
  • Mogrify (batch image processing)
  • Convert (ImageMagick command)
  • HTMLTidy
  • PDFtk
  • Vivliostyle
  • Pandoc
  • wkhtmltopdf

Steps

  • HTML Validation
  • JATS Validation
  • Plagiarism checks
  • Interpolation of document structure and extraction/annotation of parts (Abstract, methods, results etc)
  • Subject matter taxonomy identification
  • Image extraction
  • DOI Assignment and registration
  • EpubCheck (being written by Richard Smith-Unna)
  • Natural Language Processing to identify authors, title, institution names, etc in an academic paper
  • Identify place names in a document
  • Identify people names in a document
  • Convert HTML body to JATS
  • Create valid JATS metadata
  • Merge JATS body and metadata
  • Syndicate content to various services
  • Push to Continuum