[Image: Home made bio gas demo, Cairo 2010]

Why Persistent Identifiers are the Wrong Idea

I think I will rewrite this. It seems to me it’s only half the solution…more coming shortly.
This afternoon I was reading Elizabeth Eisenstein’s “The Printing Revolution in Early Modern Europe” when I came across this passage:

To consult different books, it was no longer so essential to be a wandering scholar...The era of the glossator and the commentator came to an end, and a new "era of intense cross-referencing between one book and another" began.

The point here is that after the printing press came about, there were more books available. An obvious point. As a result, cross-referencing became a feature, since it was possible to access and, consequently, reference other literature more readily.

This made me think about the current discussions around persistent identifiers for scholarly content. It seems the current solution is to offer a layer of indirection: this enables a stable identifier to persist, and should the ‘actual location’ of the content be changed, then we can re-configure the redirect to point to the new location.

Martin Fenner and Geoff Bilder point to this solution in their very good postings on this topic. However, this method does not overcome the real problem. What if either:

  • no one updates the redirection after the location has changed
  • the content really goes offline (through loss of domain, for example)

It appears to me that we have the wrong solution. There is really no way to solve this issue with URIs. We can only minimise it.

So, how to go about resolving this (so to speak)?

One way to get some insight into the issues is to wind back the clock and look at the way content was located in the age of the newly-born printing press. In this age, scholars were liberated because they didn’t have to wander the world looking for a particular book. Instead, identical printed copies proliferated, and it was just a matter of finding a copy of the work you were pursuing. Preferably you found a copy in a library or shop nearby. To find that work, one merely needed the cross-reference information to track it down. That “persistent identifier” – comprising the author’s name, the book’s title, the page reference for the quoted material, the publisher’s name and location, and the publication date – was commonly referred to as a citation (we still use that term). The citation helped the reader or researcher find a copy of the work cited. Not a particular printed book, but a copy of that book.

So, how is it that in an age of digital media we have gone backwards? Where copying something is even easier than in the printed age, why are we still pointing to ‘one’ authoritative copy? In essence, we are still referencing a book by stating the exact book that sits in a specific institution, on a particular shelf, with the blue (not green) cover.

It feels a little like the great leap backwards to me.

A way to get around this problem would be simply to allow and encourage content to be copied. Let digital media do what it does best – copy and distribute itself. The ‘unique identifier’ would then not be a URL (with a layer of indirection) but would take the form of a checksum or hash. Finding the right work would then be a matter of searching for a copy of the material with the right checksum.
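To make that concrete, here is a rough sketch in Python of what a checksum-style identifier looks like (the file name is made up; this is an illustration, not a proposal for a specific scheme):

```python
import hashlib

def content_id(path):
    """Return a SHA-256 checksum identifying this copy of a work.

    Any bit-identical copy, wherever it lives, produces the same
    identifier - so the identifier names the content, not a location.
    """
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()

# Hypothetical example: cite the work by its hash rather than by a URL.
print(content_id("the-printing-revolution.epub"))
```

A citation would then say, in effect, ‘find me any copy whose hash is X’, and any repository or search service holding a copy could answer.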

I don’t know. I’m probably missing something. But it seems we have no problem tracking down YouTube videos when they spawn into the ether. We can also tell one version of software from another, no matter where it is and how it is labelled. Why not just let the content go and provide mechanisms to find a specific version of the content via hash search (also solving the issues of versioning URIs)?

Colophon: written on a lovely Sunday afternoon in the Mission. Adam got up Monday morning with that nagging feeling… rethought it. Talked to Raewyn. Rewrite coming. Written using Ghost (free) software.

It’s not US and THEM, it’s a TEAM, stupid

hi y’all

I have just been reading some posts on Scholarly Kitchen about content creation and the next wave of authoring systems.

It seems the STM sector has long needed a solution to get its publishing processes out of various traps. The most obvious trap is MS Word. A horrible format, to be sure, but it has long been the default file format for manuscript production, with a small tip of the hat to LaTeX for the technologically gifted. Their recent discussions have mainly been about online ‘authoring systems’ that go beyond MS Word to anticipate documents that are fully transparent to whatever combination of machine and human interactions play a part in understanding and processing the information.

This ‘get-out-of-MS-Word-free’ card is a very attractive proposition. MS Word is basically a binary blob to most publishing systems (even though the .docx format is in fact XML – which also abruptly ends the false argument that XML inherently brings structure). As a ‘binary’ (go with me on this for now), MS Word is not transparent to the publication system: there is no record of when the author has worked on it; finding out what changed between version xxx.xxx and version xxxx.xxxxxxxx is very difficult; nobody else can work on it while the author is working on it; and there is no control over structure, etc.

So getting away from reliance on MS Word is the aim. But getting into (what I might rename as) an ‘authoring only’ platform is not the solution.

What is interesting about the SK forum is that there seems to be a very clear distinction in the minds of publishers between the worlds of the author and the publisher. Most of the comments make this split, and there is much talk of ‘authoring systems’.

It seems a little bizarre to me, as I don’t think it’s wise to think about the author and the publisher as being distinct entities. It’s not a matter of author and publisher working on separate processes to shepherd a manuscript through to publication: it is very much a team effort. Authors and publishers work together in a way that should not be dichotomised: they are a team.

If we don’t acknowledge that, then we will not be able to design good publication systems. There is a lot of unclear thinking around this topic at the moment. The “authoring system” model assumes that content is made in an authoring system by a writer, and then migrates to the publisher’s submission, processing and publishing system, where the publisher does some stuff, and then at various times pings the author back to make changes to metadata, submission information, the manuscript and attendant assets (eg figures)…

In this model, the author next takes the manuscript out of the publisher’s system, ingests it into the old authoring system, works on it, exports it, and re-ingests it into the publisher’s system… Hmmm… this cycle is exactly one of the pain points we were trying to avoid by getting away from Microsoft Word.

It seems to me that the current trend to build better authoring systems is a mistake. It is based on the false assumption that ‘MS Word’ is the problem, without realising that there is more to it. Word has been seen as the problem only because it has been the only problem in town. We don’t need better ‘authoring systems’ that repeat the separation between writing and publishing that is inherent in reliance on MS Word. We shouldn’t invest in new authoring systems and believe in them purely because they are ‘not Microsoft Word’. Rather, we need documents to be contained within submission and processing systems for the entire duration of their life, and they need to be completely operational and transparent within that system to all parties that must work on them. Without understanding that need, we are merely mitigating the problem by small steps whilst fooling ourselves that we have solved the larger problem.

We don’t want the author-publisher response/change cycle (a collaborative effort by the team which includes author and publisher) to be in separate systems. We want them working together in the same system. We need teams to work together in the most efficient way possible, and that is in the same (real-world or cyber) space. Teams work best when they work in the same space.

Though I see the current efforts towards authoring system development as interesting, unless they are integrated with processing and workflow features, they will sooner or later be made redundant.

Colophon: written by Adam in 30 mins in a tizz. Tinkered with by Raewyn for another 30 mins. Written using Ghost software (free software!)

Single Voice

version 1.0 ‘not as raw’

During a Book Sprint, or when talking about Book Sprints, the question very quickly arises – ‘what about the author’s single voice?’

The fear is that collaboratively produced books will lose that personal, individual voice that we know so well from all the books we have read and loved.

Wouldn’t Frankenstein be a little lumpy if it was written by a collective? Same goes for any Tom Clancy book (he famously said that “Collaboration on a book is the ultimate unnatural act”). Clancy’s books are not high art, but they do seem to contain a particular ‘Clancy’ style. What about good contemporary literature? Could, for example, the wonderful The Art of Fielding be as wonderful if written by anyone other than Chad Harbach? And what about poetry by the father of English literature – Chaucer? It’s unimaginable that his works could be produced by anyone other than Chaucer.

We believe that both high and low literature would suffer if the works weren’t produced by a single author. There is only one Chaucer, one Clancy (thankfully), one Harbach, one Mary Shelley. We can tell their works apart because each contains a distinctive authorial voice. We know these writers. We know those voices.

We can only imagine what a mess would be created if books were written by more than one person. They would lose the single point of view. That special perspective. That special voice.

Well… first of all, it might be worth knowing that each of these examples actually had more than one contributing author, and each in its own interesting way. From Erick Kelemen’s work in the forensic field of textual criticism, there is good evidence that both Byron and Percy Shelley had a hand in at least some of Frankenstein. According to Kelemen, the extent of the collaboration is not exactly known, and we need to be aware that the discussion is also tainted by a good ole sexist lens. However, there is good evidence of collaboration, not just in the Preface (which some say was written entirely by Percy Shelley), but also in the content of the rest of the story.

Tom Clancy, in his own mind the enemy of collaborative book production, actually collaborated with others on many of his books. Some of the books he has credit for were actually written mostly by others, a common practice amongst authors of best-selling thriller and mystery series for at least the past twenty years.

And in fact, manuscripts produced at the time Chaucer was writing were shared documents, and it is extremely likely the exact words that we now consider to be Chaucer’s were not his at all. As Lawrence Liang has noted, in his discussion of the process of Chaucer’s canonisation, the process was essentially a gathering of manuscripts after Chaucer’s death by experts who decided which words were, and which were not, Chaucer’s, for all time.

In the disclaimer before the Miller’s Tale for instance, Chaucer states that he is merely repeating tales told by others, and that the Tales are designed to be the written record of a lively exchange of stories between multiple tellers, each with different, sometimes opposing, intents.

Interestingly, Chaucer seems not only to recognize the importance of retelling stories, but also to embrace a mode of reading that incorporates the ability to edit and write.

If you want to understand the role of collaboration in single-author-culture right now, there is no better story to read than The Book on Publishing, which provides a great tale about the publishing of Harbach’s The Art of Fielding and acknowledges the huge value an editor can add in re-writing and restructuring a book.

There are two points here to keep in mind.

Firstly, we don’t know much about how books are written, nor how models of the writing process have changed over time. Paper is not a good medium for preserving versioning, and we lack an on-paper-process mechanism like git blame that can backtrack to show how the text was created. A great pity. The lack of this kind of tool for the vast majority of publishing history means publishing has been able to propagate the very marketable myth of the single author. Collaboration has been obscured and de-valued. Worse, the extent and value of collaboration is not understood. We don’t even have a good language for talking about it.

Secondly, we are left believing claims such as “books have a single voice because they are written by a single author” when this is demonstrably false. Almost every published book has had at least two authorial contributors – the author and the editor; and most books will have been improved during the drafting process by the contributions of test readers.

Collaboration exists to improve works. It is why there are editors in publishing. Editors give feedback and shape the work to, amongst other things, strengthen the impression of the single authorial voice. It is very probably true that an effective single voice can only be achieved by 2 or more people collaborating.

So next time you find yourself asking “how can an authoritative singular voice be preserved in collaborative book production?” it might be better to take a deep breath and ask yourself “how could a single voice ever be effectively realised without collaborating?” That is the real question at play.

Colophon: version 1.0 Written in an hour by Adam Hyde. Raewyn Whyte then improved it (‘made it stronger’). Also, some references still need to be checked as the needed books are in storage in NZ somewhere! Written with Ghost Blog free software (MIT) https://github.com/tryghost/Ghost.

Fantasies of the Library

Fantasies of the Library is a book released last week by Berlin publisher k-verlag. There is an interview in it with me about the future of book publishing beyond the proprietary model. I also talk about my current work for the Public Library of Science and the relationship between Open Access and Open Source.

[Image: cover of Fantasies of the Library]

The full interview is also online and can be read here.

My favourite passage is this:
Charles Stankievech: “But why should one value open source and open access? What are the political ramifications of such a philosophy and practice?”

Adam Hyde: “Because both provide more value to humanity. Political ramifications are vast and complex. I like to think about the personal aspects of this choice, however. Living a life of open source and open access forces you to peel away layer by layer the proprietary way of thinking, doing, and being that we have all grown up with. It can be a very painful process, but it’s also extremely liberating and healthy. Largely, it actually means learning to live without fear and paranoia of people ‘stealing your ideas’. That’s quite a freedom in itself.”

Books are Evil, Really Evil pt1

Right now books are something of an ironic artefact for me. I am involved in the rapid production of books through a process known as a Book Sprint. We create books. We throw a bunch of people in a room for a week and carefully facilitate them through a process, progressing them step by step from zero to finished book in 5 days or less. Write a book in a week?! An astonishing proposal. Most people who attend a Book Sprint for the first time think it is impossible. Most think that maybe they can get the table of contents done in that time. Maybe even some structure. But a book? Five days later they have a finished book, and they are amazed.

There are many essential ingredients to a Book Sprint. An experienced Book Sprint facilitator is a must. A venue set up just so… Lightweight and easy-to-use book production software. A toolchain that supports rapid rendering of PDF and EPUB from HTML. Good food… A writing team… and a lot more.

One of the contributing factors to success is the terror caused by the seemingly impossible idea that the group will create a book. It is a huge motivator. Such is the enormity of the task in the participants’ minds that they follow the facilitator and dedicate themselves to extremely long hours, working on minute details even when exhausted. There is a lot of chemistry in there. Camaraderie and peer pressure are pushed to maximum effect as a motivational factor, as is fear of failure, especially fear of failure before your peers, both inside and outside the Sprint room. The pleasure of helping your peers is a strong motivator, as is the idea that together we will do this! But the number one motivator is the idea that we are going to produce a book.

We all know that books these days, paper books, are published from a PDF. You send a PDF to the printer, and the final output is a perfect bound book. This happens for most Book Sprints – we send the final PDF to a printer for them to produce the printed book. So what we are creating is actually a PDF (along with an EPUB) …but imagine if we were to call the event “PDF Sprint”. At the beginning of the PDF Sprint we could announce that we have gathered everyone together…so that…at the end of the week…they will have….(gasp!)…a PDF!

Nope. Doesn’t work. Doesn’t even nearly work. A book is the seemingly impossible outcome that Book Sprint participants have come to conquer. Even though the definition of ‘what a book is’ is completely up for grabs, it is a book they are determined to produce. A book is the pinnacle of knowledge products, and writing one is roughly the cerebral equivalent of climbing Everest. A PDF is merely getting to base camp, or perhaps the equivalent of planning the trip from your armchair.

So, what’s the problem? Books are good then! A great motivator for Book Sprints. Where exactly is the irony? How can I complain?

Book Sprints are extraordinary events. The people are not just put into a room and left to write. They are led through a process where notions of single authorship and ownership of content just no longer make sense. Such ideas are unsustainable and nonsensical in this environment, and participants slowly deconstruct ideas of authorship over the 5 days.

The participants actively collaborate during the event. Really collaborate. Book Sprints are a kind of collaborative therapy. Each participant learns to let go of their own voice so they can contribute to constructing a new shared voice with the rest of the team. They learn new ways to contribute to group processes, to communicate, to improve each other’s contributions, to synthesize, to empower and encourage others to improve the work without having to ask permission.

The resulting book has no perceivable author. It has been delivered by what is now a community. And as a result, most of the books, about 99% I would say, end up being freely licensed. A book born by sharing is more easily shared. More easily shared than a book created with the notions of author-ownership. The idea of sharing is embedded in the DNA of the Book Sprint, part of the genesis of the product, and sharing more often than not becomes part of the life of the book after the Book Sprint is completed.

But books are evil

So, how is it possible I can take the position that books are evil? Where exactly is the irony? It is a lovely story I just painted. Lots of flowers and warm fuzzy feelings. Wow. Sharing, sharing, sharing… it’s a book love-in!

Well… with some regret, I have to admit that most books do not come into the world this way. They are produced and delivered through legacy processes. Cultural norms shape the production and reception of books, and the ideas contained within them are not born into freedom. These books are, normatively, created by ‘single author geniuses’, born into All Rights Reserved knowledge incarceration, and you cannot recycle them.

Try as we may, we are a little group of people. A small band of Book Sprinters, and it is unlikely that we can sway the mainstream to our way of doing things. We have many victories – Cisco released one of its Book-Sprinted books freely online! Whoot! That’s massive! But… as big as Cisco is, one Cisco book in the sea of publishing is merely a grain of salt in the Pacific. By adding our special grain of salt to this ocean we are by no means making our point more salient.

Books are doomed to be the gatekeepers of knowledge. If you make a book, you are, more than likely, sentencing the words in it to life + 50 years (depending on where you live).

Books are in fact the very artefacts that maintain proprietary knowledge culture.

It comes down to these three issues for me:
1. books gave birth to copyright
2. books gave birth to industrialised knowledge production
3. books gave birth to the notion of the author genius

These three things together are the mainstays of proprietary knowledge culture, and proprietary knowledge culture has been firmly encased and sealed, with loving kisses, between the covers of the book. Ironically these three things, through the process of the Book Sprint, are what we are trying to deconstruct.

many thanks to Raewyn Whyte for improving this post

Building Book Production Platforms p4

The renderer

Note: this is an early version. It has been cleaned up some, but is still needing links and screenshots…. Apologies if the rawness offends you 🙂

This series is skipping around the toolchain, depending on what’s most in my mind at the moment. Today it’s file conversion, otherwise known as ‘rendering’. This is the process of converting one file type to another, for example, HTML-to-EPUB or Word-to-HTML, and so on.

File conversion matters in the book production world because we often want to convert HTML to a book format – book-formatted PDF, EPUB, mobi and so on – or to import existing content from a file such as an MS Word document into a new document.

Manual conversions

It is, of course, quite possible to do all your file conversion manually.

Should you wish to convert HTML into a nice book-formatted PDF, one possible strategy is to go out to InDesign or Scribus and lay it all out like our ancestors did as recently as 2014. Or, if you want to convert MS Word, for example, to HTML, you can just save it as HTML in Word… Yes, Word copies across a lot of formatting junk, but you can clean it up using purpose-built, freely available software (such as HTMLTidy and CleanUp HTML), online services (like DirtyMarkup), or a handy app (such as Word HTML Cleaner)…
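As a rough illustration of what that manual clean-up amounts to, here is a small Python sketch (mine, not any of the tools named above, and the file name is made up) that strips the most common Word junk from saved-as-HTML output:

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def clean_word_html(html):
    """Strip the most common formatting junk Word leaves in saved HTML."""
    soup = BeautifulSoup(html, "html.parser")
    # Drop Word's embedded styles and metadata.
    for tag in soup(["style", "meta", "link"]):
        tag.decompose()
    for tag in soup.find_all(True):
        # Remove inline styles and Word-specific class names like MsoNormal.
        tag.attrs.pop("style", None)
        classes = [c for c in tag.get("class", []) if not c.startswith("Mso")]
        if classes:
            tag["class"] = classes
        else:
            tag.attrs.pop("class", None)
    return str(soup)

with open("chapter-from-word.html") as f:  # hypothetical file name
    print(clean_word_html(f.read()))
```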

Manual conversion is not too bad a strategy, as long as it doesn’t take you too long, and it is often more efficient and faster than those convoluted hand-holding technical systems which promise to do it for you in one step. Despite the utopian promises made by automation… you often get better results doing the conversion manually.

I sometimes hear people in Book Sprints, for example, complain something to the tune of “why can’t I just click a button and import part of this paragraph from Wikipedia into the chapter, and then if the entry is updated in Wikipedia, I can just click the button again and it will be updated here”…

I try not to sigh too loudly when I hear this kind of ‘I have all the solutions!’ kind of ‘question’. Some day that may be feasible, but in the meantime, all the knowledge production platforms I have built have an OS-independent trans-format import mechanism which allows those handy keyboard shortcuts ‘control c’ and ‘control v’… sigh. Don’t knock copy and paste! It can get you a long way.

You can also build an EPUB by hand…

But, who really wants to do any of this? Isn’t it better to just push a button and taaadaaa! out pops the format of choice! (I have all the solutions! haha).

I think we can agree it is better if you are able to use a smart tool to convert your files, and the good news is that within certain parameters and for loads of use cases, this is possible. But don’t under-estimate the amount of tweaking for individual docs that might, at times (not always), be required.

Import and export are the same thing

The process of ‘importing’ a document is also sometimes known as ingestion. Before delving into this, the first gotcha with file transformation is to avoid thinking about import and export as separate technical systems. That can cause, and has caused, a lot of extra work when building file conversion into a toolchain.

Both import and export are, actually, file conversion. The formats might differ – import might solely be Word-to-HTML in your system and export HTML-to-EPUB – however, the process of file conversion has many needs that can be abstracted and applied to both cases. A quick example: file conversion is often processor and memory intensive, so effective management of these processes is quite important, and in addition, fallbacks for errors or failures need to be managed nicely. These two measures are required regardless of the file types you are converting from or to. So don’t think about pipelining specific formats; try to identify as many requirements as possible for building just one file conversion system, not an import system plus an export system.
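A minimal sketch of what I mean by one abstracted conversion system – the command names and formats here are placeholders – where both ‘import’ and ‘export’ are just jobs handed to the same runner, which worries about timeouts and failure fallbacks no matter which formats are involved:

```python
import subprocess

class ConversionError(Exception):
    """Raised when any conversion step fails or times out."""

def run_step(cmd, timeout=300):
    """Run one external conversion step with a timeout and a clean failure.

    The same wrapper serves 'import' (e.g. Word-to-HTML) and 'export'
    (e.g. HTML-to-EPUB): the process management is identical.
    """
    try:
        subprocess.run(cmd, check=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        raise ConversionError("conversion timed out: %s" % cmd)
    except subprocess.CalledProcessError as err:
        raise ConversionError("conversion failed (%s): %s" % (err.returncode, cmd))

# Placeholder pipelines - both directions go through the same runner.
PIPELINES = {
    ("docx", "html"): [["soffice", "--headless", "--convert-to", "html", "in.docx"]],
    ("html", "epub"): [["ebook-convert", "in.html", "out.epub"]],
}

def convert(source_fmt, target_fmt):
    for cmd in PIPELINES[(source_fmt, target_fmt)]:
        run_step(cmd)
```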

Ingestion

In importing documents to an HTML system, the big use case is MS Word. Converting from MS Word is a road full of potholes and gotchas. The first problem is that there is no single ‘MS Word’ file format; rather, there are many different file formats that all call themselves MS Word. So to initiate a transformation, you need to know which variety of MS Word you are dealing with.

Your life is made much easier if you can stipulate that your system requires one variety – .docx. If you do have to deal with other forms of Word, then it is possible to do transformations on the backend from miscellaneous Word file type X to .docx and then from .docx to HTML. Libreoffice, for example, offers binaries that do this in a ‘headless’ state (it can be executed from the command line without the need to fire up the GUI). However, the more transformations you undertake, the more errors in the conversion you are likely to introduce. Obviously, this then causes QA issues and will increase your workload per transform required.
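A backend step of that kind might look something like this (a sketch; the exact soffice flags can vary between LibreOffice versions, and the file names are made up):

```python
import subprocess

def word_to_docx(path, outdir="converted"):
    """Convert a legacy Word file to .docx with LibreOffice in headless mode."""
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "docx", "--outdir", outdir, path],
        check=True,
    )

word_to_docx("old-manuscript.doc")
```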

Another real problem with MS Word versions before .docx is that .docx is transparent – it is actually just XML – so you can see what you are dealing with. Versions before this were horrible binaries: a big clump of ones and zeros followed by a bunch of gunk. The same problem exists when you use binaries like soffice (the LibreOffice binary for headless conversions), as it too is a big bucket of numbers. You can’t easily get your head into improving transformations with soffice unless you want to learn to etch code into your CPU with a protractor.
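You can see that transparency for yourself: a .docx is just a zip archive full of XML parts (a sketch; the file name is made up):

```python
import zipfile

with zipfile.ZipFile("manuscript.docx") as docx:
    print(docx.namelist())                # e.g. word/document.xml, word/styles.xml
    xml = docx.read("word/document.xml").decode("utf-8")
    print(xml[:200])                      # plain, human-readable XML
```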

If you have to deal with MS Word at all, I recommend stipulating .docx as the accepted MS Word format. I am not a file type expert, far from it, but from people who do know a lot about file formats I understand that .docx looks like it has been designed by a committee… and possibly a committee whose members never spoke to each other. Additionally, Microsoft, being Microsoft, likes to bully people into doing things their way. .docx is a notable move away from that strategy and does make it substantially easier to interoperate with other formats; however, there are some horrible gotchas, like .docx having its own non-standard version of MathML. Yikes. So, life in the .docx lane is easier, but not necessarily as easy as it should be if we were all playing in the same sandbox like grownups.

I have tried many strategies for Word to HTML conversion. There are many open source solutions out there, but oddly, not as many good ones as you would hope. Recently I looked at these three rather closely:

  • Calibre’s Python based ebook converter script
  • OxGarage
  • soffice (Libreoffice)

There are others…I can’t even remember which ones I have looked at in detail over the years. I have trawled Sourceforge and Github and Gitorious and other places. But the web is enormous these days and maybe there is just the oh-so-perfect solution that I have missed. If you know it then please email it to me, I’ll be ever so grateful (only Open Source solutions please!).

These three are all good solutions, but at the end of the day, I like OxGarage. I won’t go into too much detail about all of them but a quick top-of-mind whys and why-nots would include:

  • Calibre’s scripts are awesome and extendable if you know Python, however they don’t support MS MathML to ‘real’ MathML conversions. That’s a show stopper for me.
  • On the good side, though, Calibre’s developer community is awesome, and they are heroes in this field and deserve support, so if you are a Python coder or dev shop then, by all means, please pitch in and help them improve their .docx to HTML transforms. The world will be a better place for it.
  • soffice does an ok job but it’s a black box, who knows what magic is inside? It tends to make really complex HTML and it is also really heavy on your poor hardware. I have used it a lot but I’m not that big a fan.
  • OxGarage…well…I love OxGarage, so I really recommend this option…

OxGarage was developed by a European Commission-funded project and then, as is common for these kinds of projects, it dried up and was left on a shelf. Along came Sebastian Rahtz, a guru of file transformation, a big Open Source guy, and also a force behind the Text Encoding Initiative. Sebastian is also the head of Academic IT Services at Oxford University. The guy has credentials! He’s also a terribly nice and helpful guy. He has so much experience in this area that, feeling the trivialness of my questions about our .docx to HTML woes at PLOS, I was afraid he might absentmindedly swipe me out of the way as if I were an inconsequential little midge… but he’s such a nice chap, instead he invites midges out to lunch.

So, Sebastian picked up the Java code and added some better conversions. OxGarage is essentially a Java framework that manages many different types of conversions. You feed it, and are fed from it, by a simple web API. It doesn’t have the best error handling, but it does do a good job. The .docx to HTML conversion is multi-step: first the .docx is converted to TEI – a very rich, complex markup – and then from TEI via XSL to HTML. That means that all you really need to worry about is tweaking the XSL to improve the transformation, and that’s not too tricky. It could be argued that the TEI conversion is a redundant step. I think it is. But OxGarage works out of the box and does a pretty good job, so we have adopted it for the project I am working on for PLOS, and we are happy with it. We have added some special (Open) Sauce, but I’ll get to that later. We are using it and will shoot for more elegant solutions later (and we have designed a framework to make this an easy future path).
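To give a sense of what ‘tweaking the XSL’ means in practice, here is a sketch of applying a stylesheet to the intermediate TEI with Python and lxml. The file names are made up, and this is not OxGarage’s own code, just the shape of that final TEI-to-HTML step:

```python
from lxml import etree  # pip install lxml

def tei_to_html(tei_path, xsl_path, out_path):
    """Apply an XSL stylesheet to a TEI document to produce HTML."""
    transform = etree.XSLT(etree.parse(xsl_path))
    html = transform(etree.parse(tei_path))
    with open(out_path, "wb") as f:
        f.write(etree.tostring(html, pretty_print=True))

# Tweaking the .xsl file is where you improve the transformation.
tei_to_html("chapter.tei.xml", "tei-to-html.xsl", "chapter.html")
```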

If you are looking for Word-to-HTML conversion tools, I recommend OxGarage. I’m not saying it’s the optimal way to do things, but it will save you having to build another file conversion system from scratch, and from what I can tell from Sebastian, that would take considerable effort.

HTML to books

The other side of the tracks is the conversion of the HTML you have into a book file format. We live in a rather tangled semantic world when it comes to this part of the toolchain. Firstly, it’s hard to know what a book file format actually is these days… on a normal day, I would say a book file format is a file format that can display a human-readable structured narrative. Yikes. That’s not particularly helpful… Let’s just say for now that a book file format is EPUB, book-formatted PDF, HTML, or mobi.

So, transforming from HTML to HTML sounds pretty easy. It is! The question is really how do you want your book to appear on the web? Make that decision first, and then build it. Since you are starting with HTML this should be rather easy and could be done in any programming language.

The next easiest is EPUB. EPUB contains the content in HTML files stored in a zip file with the .epub suffix. That is also easy to create and, depending on your programming language, there are plenty of libraries to help you do this. So moving on…
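To show just how little an EPUB is, here is a Python sketch that zips a single XHTML chapter into a minimal container. The file names and metadata are made up, and a production EPUB needs a little more than this to validate:

```python
import zipfile

def make_epub(html_path, epub_path, title="My Book"):
    """Pack one XHTML chapter into a minimal (unvalidated) EPUB container."""
    container = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""
    opf = f"""<?xml version="1.0"?>
<package xmlns="http://www.idpf.org/2007/opf" version="3.0" unique-identifier="uid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:identifier id="uid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
    <dc:title>{title}</dc:title>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="chapter" href="chapter.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="chapter"/>
  </spine>
</package>"""
    with zipfile.ZipFile(epub_path, "w") as epub:
        # The mimetype entry must come first and be stored uncompressed.
        epub.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
        epub.writestr("META-INF/container.xml", container)
        epub.writestr("content.opf", opf)
        epub.write(html_path, "chapter.xhtml")

make_epub("chapter.xhtml", "book.epub")  # hypothetical chapter file
```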

Mobi. Ok.. mobi is a proprietary format and rather horrible. It contains some HTML, some DB stuff…  I don’t know…  a bit of bad magic, frogs legs… that kind of thing. My recommendation is to first create your EPUB and then use Calibre’s awesome ebook converter script to create the mobi on the backend. Actually, if you use this strategy, you get all the other Calibre output formats for free, including (groan) .docx if you need it. Honestly, go give those Calibre guys all your love, some dev time, and a bit of cash. They are making our world a whole lot easier.
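The Calibre route is just a command-line call. A sketch, assuming Calibre’s ebook-convert is installed and on your path (file names made up):

```python
import subprocess

def epub_to_mobi(epub_path, mobi_path):
    """Hand the EPUB to Calibre's ebook-convert and let it produce the mobi."""
    subprocess.run(["ebook-convert", epub_path, mobi_path], check=True)

epub_to_mobi("book.epub", "book.mobi")
```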

Ok… the holy grail… people still like paper books, and paper books are printed from PDF. Paper these days is a post-digital artifact. So first you need that awkward sounding book-formatted PDF.

Here there is an array of options, and then there is this very exciting world that can open up to you if you are willing to live a little on the bleeding edge… I’m referring to CSS Regions… but let’s come back to that.

First, I want to say I am disappointed that some ‘Open Source’ projects use proprietary code for HTML-to-PDF conversion. That includes Press Books and Wikipedia. Wikipedia is re-tooling their entire book-formatted-PDF conversion process to be based on LaTeX and that is an awesome decision. However, right now they use the proprietary PrinceML as does Press Books. I like both projects, but I get a little disheartened when projects with a shared need don’t put some effort into an Open Source solution for their toolchain.

All book production platforms that produce paper books need an HTML-to-PDF renderer to do the job. If it is closed source, then I think it needs to be stated that the project is partially Open Source. I’m a stickler for this kind of stuff, but also, I am saddened that adoption of proprietary components stops the effort to develop the Open Source solutions we need, while simultaneously enabling proprietary solutions to gain market dominance – which, if you follow the logic through, traps the effort to develop a competitive Open Source solution in a vicious circle. I wish that more people would try, like the Wikimedia Foundation is trying, to break that cycle.

The browser as renderer

There is one huge Open Source hero in this game: Jakob Truelsen. He created WKHTMLTOPDF when he was a university tutor because he wanted his students to be able to write in HTML and give him nicely formatted PDFs for evaluation. So he grabbed a headless WebKit, added some Qt magic and some tweaks, and made a command-line application that converts HTML to book-formatted PDF. We used it in the early days of FLOSS Manuals, and it is still one of the renderer choices in the Booktype file conversion suite (Objavi). It was particularly helpful when we needed to produce books in Farsi, which contain right-to-left text. No HTML-to-PDF renderer supported this at the time except WKHTMLTOPDF, because it was based on a browser engine with RTL support built in.
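Using it is as simple as pointing it at your HTML. A sketch (file names made up; page size and margin flags vary by version, so I leave them out):

```python
import subprocess

def html_to_pdf(html_path, pdf_path):
    """Render HTML to PDF with wkhtmltopdf's headless WebKit engine."""
    subprocess.run(["wkhtmltopdf", html_path, pdf_path], check=True)

html_to_pdf("book.html", "book.pdf")
```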

Some years later WKHTMLTOPDF was floundering, mainly because Jacob was too busy, and I tried to help create a consortium around the project to find developers and finance. However I didn’t have the skills, and there was little interest. Thankfully the problem solved itself over time, and WKHTMLTOPDF is now a thriving project and very much in demand.

WKHTMLTOPDF really does a lot of cool stuff, but more than this, I firmly believe the approach is the right approach. The application uses a browser to render the PDF… that is a HUGE innovation and Jakob should be recognised for it. What this means is: if you are making your book in HTML in the browser, you have at your fingertips lots of really nice tools like CSS and JavaScript. So, for example, you can style your book with CSS, or add JavaScript to support the rendering of math, or use typography JavaScripts to do cool stuff… When you render your book to PDF with a browser, you get all that stuff for free. So your HTML authoring environment and your rendering environment are essentially the same thing… I can’t tell you how much that idea excites me. It is just crazy! This means that all those nice JavaScripts you used, and all that nice CSS which gave you really good-looking content in the browser, will give you the same results when rendered to PDF. This is the right way to do it, and there is even more goodness to pile on, as this also means that your rendering environment is standards-based and open source…

Awesome. This is the future. And the future is actually even brighter for this approach than I have stated. If you are looking to create dynamic content – let’s say cool little interactive widgets based on the incredible Tangle library – for ebooks (including web-based HTML)… if you use a browser to render the PDF, you can actually render the first display state of the dynamic content in your PDF. So, if you make an interactive widget, in the paper book you will see the ‘frozen’ version, and in the ebook/HTML version you get the dynamic version – without having to change anything. I tested this a long time ago and I am itching to get my teeth into designing content production tools to do this.

So many things to do. You can get an idea how it works by visiting that Tangle link above… try the interactive widgets in the browser, and then just try printing to PDF using the browser… you can see the same interactive widgets you played with also print nicely in a ‘static’ state. That gets the principle across nicely.

So a browser-based renderer is the right approach, and Prince – which is, it must be pointed out, partly owned by Håkon Wium Lie – is trying to be a browser by any other name. It started with HTML and CSS to PDF conversion and now… oo!… they added JavaScript… so… are they a browser? No? I think they are actually building a proprietary browser to be used solely as a rendering engine. It just sounds like a really bad idea to me. Why not drop that idea, contribute to an actual open source browser, and use that? And those projects that use Prince: why not contribute to an effort to create browser-based renderers for the book world? It’s actually easier than you think. If you don’t want to put your hands into the innards of WebKit, then do some JavaScript and work with CSS Regions (see below).

This brings us to another part of the browser-as-renderer story, but first I think two other projects need calling out for thanks. Reportlab was for a long time one of the only command-line book-formatted-PDF rendering solutions. It was proprietary but had a community license. That’s not all good news, but at least they had one foot in the Open Source camp. However, what really made Reportlab useful was Dirk Holtwick’s Pisa project, which provided a layer on top of Reportlab so you could convert HTML to book-formatted PDF.

The bleeding edge

So, to the bleeding edge. CSS Regions is the future for browser-based PDF rendering of all kinds. Interestingly, Håkon Wium Lie has said, in a very emphatic way, that CSS Regions is bad for the web… perhaps he means bad for the PrinceML business model? I’m not sure; I can only say he seemed to protest a little too much. As a result, Google pulled CSS Regions out of Chrome. Argh.

However, CSS Regions are supported in Safari, and in some older versions of Chrome and Chromium (which you can still find online if you snoop around). Additionally, Adobe has done some awesome work in this area (they were behind the original implementation of CSS Regions in WebKit – the browser engine that used to be behind Chrome and which is still used by Safari). Adobe built the CSS Regions polyfill – a JavaScript that plays the same role as built-in CSS Regions.

When CSS Regions came online in early 2012, Remko Siemerink and I experimented with them at an event at the Sandberg (Amsterdam) for producing book-formatted PDF. I’m really happy to see that one of these experiments is still online (NB this needs to be viewed in a browser supporting CSS Regions).

It was obviously the solution for pagination on the web, and once you can paginate in the browser, you can convert those web pages to PDF pages for printing. This was the step needed for a really flexible browser-based book-formatted-PDF rendering solution. It must be pointed out, however, that it’s not just a good solution for books… at BookSprints.net we use CSS Regions to create a nicely formatted and paginated form in the browser to fill out client details. Then we print it to PDF and send it…

Adobe is on to this stuff. They seem to believe that the browser is the ‘design surface’ of the future, which seems to be why they are putting so much effort into CSS Regions. I’m not a terribly big fan of InDesign and proprietary Adobe strategies and products, but credit where credit is due: without Adobe, CSS Regions would just be an idea, and they have done it all under open source licenses (according to Alan Stearns from Adobe, the Microsoft and IE teams also contributed to this quite substantially).

At the time CSS Regions were inaugurated, I was in charge of a small team building Booktype in Berlin, and we followed on from Remko’s work, grabbed CSS Regions, and experimented with a JavaScript book renderer. In late 2012, book.js was born (it was a small team but I was lucky enough to be able to dedicate one of my team, Johannes Wilm, to the task) and it’s a JavaScript that leverages CSS Regions to create paginated content in the browser, complete with a table of contents, headers, footers, left-right margin control, front matter, title pages…etc… we have also experimented with adding contenteditable to the mix so you can create paginated content, tweak it by editing it directly in the browser, and outputting to PDF. It works pretty well and I have used it to produce 40 or 50 books, maybe more. The Fiduswriter team has since forked the code to pagination.js which I haven’t looked at too closely yet as I’m quite happy with the job book.js does.

CSS Regions is the way to go. It means you can see the book in the browser and then print to PDF and get the exact same results. It needs some CSS wizardry to get it right, but when you get it right, it just works. Additionally, you can compile a browser in a headless state and run it on the command line if you want to render the book on the backend.
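As a rough sketch of that last idea – and this is an assumption about tooling, not what book.js itself ships with; our own setups used WebKit builds with CSS Regions enabled – a headless browser can be driven from the command line to print the paginated book to PDF:

```python
import subprocess

def render_book(html_path, pdf_path):
    """Print a paginated HTML book to PDF using a headless browser."""
    subprocess.run(
        ["chromium", "--headless", "--print-to-pdf=" + pdf_path, html_path],
        check=True,
    )

render_book("book.html", "book.pdf")  # file names are made up
```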

Wrapping it all up

There is one part of this story left to be told. If you are going to go down this path, I thoroughly recommend you create an architecture that will manage all these conversion processes and which is relatively agnostic to what is coming in and going out. For Booktype, Douglas Bagnall and Luka Frelih built the original Objavi, which is a Python based standalone system that accepts a specially formatted zip file (booki.zip) and outputs whatever format you need. It manages this by an API, and it serves Booktype pretty well. Sourcefabric still maintains it and it has evolved to Objavi 2.

However, I don’t think it’s the optimal approach. There are many things to improve with Objavi; possibly the most important is that EPUB should be the accepted file format, and after the conversion process takes place, EPUB should be returned to the book production platform with the assets wrapped up inside. If you can do this, you have a standards-based format for conversion transactions, and then any project that wants to can use it. More on this in another post. Enough to say that the team at PLOS are building exactly this, and adding on some other very interesting things to make ‘configurable pipelines’ that might take format X through an initial conversion, a clean-up process, and then a text mining process, stash all the metadata in the EPUB, and return it to the platform. But that’s a story for another day…