It’s not US and THEM, it’s a TEAM, stupid

hi y’all

I have just been reading some posts on Scholarly Kitchen about content creation and the next wave of authoring systems.

It seems the STM sector has long needed a solution to get its publishing processes out of various traps. The most obvious trap is MS Word. A horrible format, to be sure, but it has long been the default file format for manuscript production, with a small tip of the hat to LaTeX for the technologically gifted. Their recent discussions have mainly been about online ‘authoring systems,’ going beyond MS Word to anticipate documents that are fully transparent to whatever combination of machine and human interactions play a part in understanding and processing the information.

This ‘get-out-of-MS-Word-free’ card is a very attractive proposition. MS Word is basically a binary blob to most publishing systems (even though in actual fact the format of .docx is XML – thereby also abruptly ending the false argument that XML inherently brings structure). As a ‘binary’ (go with me on this for now) MS Word is not transparent to the publication system: there is no record of when the author has worked on it; finding out what they have done between version xxx.xxx and version xxxx.xxxxxxxx is very difficult; nobody else can work on it when the author is also working on it; and there is no control over structure, etc. etc. etc.

So getting away from reliance on MS Word is the aim. But getting into (what I might rename as) an ‘authoring only’ platform is not the solution.

What is interesting about the SK forum is that there seems to be a very clear distinction in the minds of publishers between the worlds of the author and the publisher. Most of the comments make this split, and there is much talk of ‘authoring systems’.

It seems a little bizarre to me, as I don’t think it’s wise to think about the author and the publisher as being distinct entities. It’s not a matter of author and publisher working on separate processes to shepherd a manuscript through to publication: it is very much a team effort. Authors and publishers work together in a way that should not be dichotomised: they are a team.

If we don’t acknowledge that, then we will not be able to design good publication systems. There is a lot of unclear thinking around this topic at the moment. The “authoring system” model assumes that content is made in an authoring system by a writer, and then migrates to the publisher’s submission, processing and publishing system, where the publisher does some stuff, and then at various times pings the author back to make changes to metadata, submission information, the manuscript and attendant assets (eg figures)…

In this model, the author next takes the manuscript out of the publisher’s system, ingests it into the old authoring system, works on it, exports it, and re-ingests it into the publisher’s system… Hmmm… this cycle is exactly one of the pain points we were trying to avoid by getting away from Microsoft Word.

It seems to me that the current trend to build better authoring systems is a mistake. It is based on the false assumption that ‘MS Word’ is the problem, without realising that there is more to it. Word has been seen as the problem only because it has been the only problem in town. We don’t need better ‘authoring systems’ that repeat the separation between writing and publishing that is inherent in reliance on MS Word. We shouldn’t invest in new authoring systems and believe in them purely because they are ‘not Microsoft Word’. Rather, we need documents to be contained within submission and processing systems for the entire duration of their life, and they need to be completely operational and transparent within that system to all parties that must work on them. Without understanding that need, we are merely mitigating the problem by small steps whilst fooling ourselves that we have solved the larger problem.

We don’t want the author-publisher response/change cycle (a collaborative effort by the team which includes author and publisher) to be in separate systems. We want them working together in the same system. We need teams to work together in the most efficient way possible, and that is in the same (real world- or cyber-) space. Teams work best when they work in the same space.

Though I see the current efforts towards authoring system development as interesting, unless they are integrated with processing and workflow features, they will sooner or later be made redundant.

Colophon: written by Adam in 30 mins in a tizz. Tinkered with by Raewyn for another 30 mins. Written using Ghost software (free software!)

Single Voice

version 1.0 ‘not as raw’

During a Book Sprint, or when talking about Book Sprints, the question very quickly arises – ‘what about the author’s single voice?’

The fear is that collaboratively produced books will lose that personal, individual voice that we know so well from all the books we have read and loved.

Wouldn’t Frankenstein be a little lumpy if it was written by a collective? Same goes for any Tom Clancy book (he famously said that “Collaboration on a book is the ultimate unnatural act”). Clancy’s books are not high art, but they do seem to contain a particular ‘Clancy’ style. What about good contemporary literature? Could, for example, the wonderful The Art of Fielding be as wonderful if written by anyone other than Chad Harbach? And what about poetry by the father of English literature – Chaucer? It’s unimaginable that his works could be produced by anyone other than Chaucer.

We believe that both high and low literature would suffer if the works weren’t produced by a single author. There is only one Chaucer, one Clancy (thankfully), one Harbach, one Mary Shelley. We can tell their works apart because each contains a distinctive authorial voice. We know these writers. We know those voices.

We can only imagine what a mess would be created if books were written by more than one person. They would lose the single point of view. That special perspective. That special voice.

Well… first of all, it might be worth knowing that each of these examples actually had more than one contributing author, and each in its own interesting way. From Erick Kelemen’s work in the forensic field of textual criticism, there is good evidence that both Byron and Percy Shelley had a hand in at least some of Frankenstein. According to Kelemen, the extent of the collaboration is not exactly known, and we need to be aware that the discussion is also tainted by a good ole sexist lens. However, there is good evidence of collaboration, not just in the Preface (which some say was written entirely by Percy Shelley), but also in the content of the rest of the story.

Tom Clancy, in his own mind the enemy of collaborative book production, actually collaborated with others on many of his books. Some of the books he has credit for were actually written mostly by others, a common practice amongst authors of best-selling thriller and mystery series for at least the past twenty years.

And in fact, manuscripts produced at the time Chaucer was writing were shared documents, and it is extremely likely the exact words that we now consider to be Chaucer’s were not his at all. As Lawrence Liang has noted, in his discussion of the process of Chaucer’s canonisation, the process was essentially a gathering of manuscripts after Chaucer’s death by experts who decided which words were, and which were not, Chaucer’s, for all time.

In the disclaimer before the Miller’s Tale for instance, Chaucer states that he is merely repeating tales told by others, and that the Tales are designed to be the written record of a lively exchange of stories between multiple tellers, each with different, sometimes opposing, intents.

Interestingly, Chaucer seems not only to recognize the importance of retelling stories, but also a mode of reading that incorporates the ability to edit and write.

If you want to understand the role of collaboration in single-author-culture right now, there is no better story to read than The Book on Publishing which provides a great tale about the publishing of Harbach’s The Art of Fielding and acknowledges the huge value an editor can play in re-writing and restructuring a book.

There are two points here to keep in mind.

Firstly, we don’t know much about how books are written, nor how models of the writing process have changed over time. Paper is not a good medium for preserving versioning, and we lack an on-paper-process mechanism like git blame that can backtrack to show how the text was created. A great pity. The lack of this kind of tool for the vast majority of publishing history means publishing has been able to propagate the very marketable myth of the single author. Collaboration has been obscured and de-valued. Worse, the extent and value of collaboration is not understood. We don’t even have a good language for talking about it.

Secondly, we are left believing claims such as “books have a single voice because they are written by a single author” when this is demonstrably false. Almost every published book has had at least two authorial contributors – the author and the editor; and most books will have been improved during the drafting process by the contributions of test readers.

Collaboration exists to improve works. It is why there are editors in publishing. Editors give feedback and shape the work to, amongst other things, strengthen the impression of the single authorial voice. It is very probably true that an effective single voice can only be achieved by 2 or more people collaborating.

So next time you find yourself asking “how can an authoritative singular voice be preserved in collaborative book production?” it might be better to take a deep breath and ask yourself “how could a single voice ever be effectively realised without collaborating?” That is the real question at play.

Colophon: version 1.0 Written in an hour by Adam Hyde. Raewyn Whyte then improved it (‘made it stronger’). Also, some references still need to be checked as the needed books are in storage in NZ somewhere! Written with Ghost Blog free software (MIT) https://github.com/tryghost/Ghost.

Fantasies of the Library

Fantasies of the Library is a book released last week by Berlin publisher k-verlag. There is an interview in it with me about the future of book publishing beyond the proprietary model. I also talk about my current work for the Public Library of Science and the relationship between Open Access and Open Source.


The full interview is also online and can be read here.

My favourite passage is this:
Charles Stankievech: “But why should one value open source and open access? What are the political ramifications of such a philosophy and practice?”

Adam Hyde: “Because both provide more value to humanity. Political ramifications are vast and complex. I like to think about the personal aspects of this choice, however. Living a life of open source and open access forces you to peel away layer by layer the proprietary way of thinking, doing, and being that we have all grown up with. It can be a very painful process, but it’s also extremely liberating and healthy. Largely, it actually means learning to live without fear and paranoia of people ‘stealing your ideas’. That’s quite a freedom in itself.”

Books are Evil, Really Evil pt1

Right now books are something of an ironic artefact for me. I am involved in the rapid production of books through a process known as a Book Sprint. We create books. We throw a bunch of people in a room for a week, and carefully facilitate them through a process, progressing them step by step, from zero to finished book, in 5 days or less. Write a book in a week?! An astonishing proposal. Most people who attend a Book Sprint for the first time think it is impossible. Most think that maybe they can get the table of contents done in that time. Maybe even some structure. But a book? 5 days later they have a finished book and they are amazed.

There are many essential ingredients to a Book Sprint. An experienced Book Sprint facilitator is a must. A venue set up just so… Lightweight and easy-to-use book production software. A toolchain that supports rapid rendering of PDF and EPUB from HTML. Good food… A writing team… and a lot more.

One of the contributing factors to success is the terror caused by the seemingly impossible idea that the group will create a book. It is a huge motivator. Such is the enormity of the task in the participants’ minds that they follow the facilitator and dedicate themselves to extremely long hours, working on minute details even when exhausted. There is a lot of chemistry in there. Camaraderie and peer pressure are pushed to maximum effect as a motivational factor, as is fear of failure, especially fear of failure before your peers, both inside and outside the Sprint room. The pleasure of helping your peers is a strong motivator, as is the idea that together we will do this! But the number one motivator is the idea that we are going to produce a book.

We all know that books these days, paper books, are published from a PDF. You send a PDF to the printer, and the final output is a perfect bound book. This happens for most Book Sprints – we send the final PDF to a printer for them to produce the printed book. So what we are creating is actually a PDF (along with an EPUB) …but imagine if we were to call the event “PDF Sprint”. At the beginning of the PDF Sprint we could announce that we have gathered everyone together…so that…at the end of the week…they will have….(gasp!)…a PDF!

Nope. Doesn’t work. Doesn’t even nearly work. A book is the seemingly impossible outcome that Book Sprint participants have come to conquer. Even though the definition of ‘what a book is’ is completely up for grabs, it is a book they are determined to produce. A book is the pinnacle of knowledge products, and writing a book is about equal in cerebral achievement to climbing Everest. A PDF is merely getting to base camp, or perhaps the equivalent of planning the trip from your armchair.

So, what’s the problem? Books are good then! A great motivator for Book Sprints. Where exactly is the irony? How can I complain?

Book Sprints are extraordinary events. The people are not just put into a room and left to write. They are led through a process where notions of single authorship and ownership of content just no longer make sense. Such ideas are unsustainable and nonsensical in this environment, and participants slowly deconstruct ideas of authorship over the 5 days.

The participants actively collaborate during the event. Really collaborate. Book Sprints are a kind of collaborative therapy. Each participant learns to let go of their own voice so they can contribute to constructing a new shared voice with the rest of the team. They learn new ways to contribute to group processes, to communicate, to improve each other’s contributions, to synthesize, to empower and encourage others to improve the work without having to ask permission.

The resulting book has no perceivable author. It has been delivered by what is now a community. And as a result, most of the books, about 99% I would say, end up being freely licensed. A book born by sharing is more easily shared. More easily shared than a book created with the notions of author-ownership. The idea of sharing is embedded in the DNA of the Book Sprint, part of the genesis of the product, and sharing more often than not becomes part of the life of the book after the Book Sprint is completed.

But books are evil

So, how is it possible I can take the position that books are evil? Where exactly is the irony? It is a lovely story I just painted. Lots of flowers and warm fuzzy feelings. Wow. Sharing, sharing, sharing… it’s a book love-in!

Well… with some regret, I have to admit that most books do not come into the world this way. They are produced and delivered through legacy processes. Cultural norms shape the production and reception of books, and the ideas contained within them are not born into freedom. These books are, normatively, created by ‘single author geniuses’, born into All Rights Reserved knowledge incarceration, and you cannot recycle them.

Try as we may, we are a little group of people. A small band of Book Sprinters, and it is unlikely that we can sway the mainstream to our way of doing things. We have many victories – Cisco released one of its Book-Sprinted books freely online! Whoot! That’s massive! But… as big as Cisco is, one Cisco book in the sea of publishing is merely a grain of salt in the Pacific. By adding our special grain of salt to this ocean we are by no means making our point more salient.

Books are doomed to be the gatekeepers of knowledge. If you make a book, you are, more than likely, sentencing the words in it to life + 50 years (depending on where you live).

Books are in fact the very artefacts that maintain proprietary knowledge culture.

It comes down to these three issues for me:
1. books gave birth to copyright
2. books gave birth to industrialised knowledge production
3. books gave birth to the notion of the author genius

These three things together are the mainstays of proprietary knowledge culture, and proprietary knowledge culture has been firmly encased and sealed, with loving kisses, between the covers of the book. Ironically these three things, through the process of the Book Sprint, are what we are trying to deconstruct.

many thanks to Raewyn Whyte for improving this post

Building Book Production Platforms p4

The renderer

Note: this is an early version. It has been cleaned up some, but still needs links and screenshots… Apologies if the rawness offends you 🙂

This series is skipping around the toolchain, depending on what’s most in my mind at the moment. Today it’s file conversion, otherwise known as ‘rendering’. This is the process of converting one file type to another, for example, HTML-to-EPUB or Word-to-HTML, and so on.

It’s important to have file conversion in the book production world because we often want to convert the HTML to a book format – like book-formatted PDF, EPUB, mobi and so on – or to import existing content from a file such as an MS Word document into a new document.

Manual conversions

It is, of course, quite possible to do all your file conversion manually.

Should you wish to convert HTML into a nice book-formatted PDF, one possible strategy is to go out to InDesign or Scribus and lay it all out like our ancestors did as recently as 2014. Or, if you want to convert MS Word, for example, to HTML, you can just save it as HTML in Word… Yes, Word copies across a lot of formatting junk, but you can clean it up using purpose-built freely available software (such as HTMLTidy and CleanUp HTML), online services (like DirtyMarkup), or a handy app (such as Word HTML Cleaner)…
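If you go the Word-export route, the clean-up step can also be scripted rather than done entirely by hand. Here is a minimal sketch, assuming HTML Tidy is installed and the Word export is sitting in a file called chapter-from-word.html (the file names are mine, and flag spellings can vary slightly between Tidy releases):

```python
import subprocess

# Run HTML Tidy over Word's "Save as HTML" output to strip the worst of the
# formatting junk. Tidy exits non-zero when it merely has warnings, so we
# don't treat a non-zero exit as fatal here.
subprocess.run(
    ["tidy", "-indent", "-clean", "-asxhtml",
     "-output", "chapter-clean.html", "chapter-from-word.html"],
    check=False,
)
```

Run once per chapter, this gets you most of the way; whatever Tidy cannot decide is usually quicker to fix by hand than to automate.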

Manual conversion is not too bad a strategy, as long as it doesn’t take you too long, and it is often more efficient and faster than those convoluted hand-holding technical systems which promise to do it for you in one step. Despite the utopian promises made by automation… you often get better results doing the conversion manually.

I sometimes hear people in Book Sprints, for example, complain something to the tune of “why can’t I just click a button and import part of this paragraph from Wikipedia into the chapter, and then if the entry is updated in Wikipedia, I can just click the button again and it will be updated here”…

I try not to sigh too loudly when I hear this kind of ‘I have all the solutions!’ ‘question’. Some day that may be feasible, but in the meantime, all the knowledge production platforms I have built have an OS-independent trans-format import mechanism which allows those handy keyboard shortcuts ‘control c’ and ‘control v’… sigh. Don’t knock copy and paste! It can get you a long way.

You can also build an EPUB by hand…

But, who really wants to do any of this? Isn’t it better to just push a button and taaadaaa! out pops the format of choice! (I have all the solutions! haha).

I think we can agree it is better if you are able to use a smart tool to convert your files, and the good news is that within certain parameters and for loads of use cases, this is possible. But don’t under-estimate the amount of tweaking for individual docs that might, at times (not always), be required.

Import and export are the same thing

The process of ‘importing’ a document is also sometimes known as ingestion. Before delving down into this, the first gotcha with file transformation is to avoid thinking about import and export as separate technical systems. That can, and has, caused a lot of extra work when building file conversion into a toolchain.

Both import and export are, actually, file conversion. The formats might differ, import might solely be Word-to-HTML in your system and the export HTML-to-EPUB. However, the process of file conversion has many needs that can be abstracted and applied to both of these cases. A quick example – file conversion is often processor and memory intensive. So effective management of these processes is quite important, and in addition, fallbacks for errors or fails need to be managed nicely. These two measures are required independent of the filetypes you are converting from or to. So don’t think about pipelining specific formats, try and identify as many requirements as possible for building just one file conversion system, not an import system plus an export system.
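To make that concrete, here is a minimal sketch (my own illustration, not any particular platform’s code) of what ‘one file conversion system’ can look like: a single registry of format-pair converters, with the lookup and the error handling written once, so ‘import’ and ‘export’ are just different entries in the same table:

```python
from pathlib import Path

# One registry for every conversion the platform needs; import (e.g. docx -> html)
# and export (e.g. html -> epub) are just entries in the same table.
CONVERTERS = {}

def converter(source_fmt, target_fmt):
    def register(fn):
        CONVERTERS[(source_fmt, target_fmt)] = fn
        return fn
    return register

@converter("docx", "html")
def docx_to_html(src: Path, dst: Path):
    ...  # call OxGarage, soffice, or whichever backend you settle on

@converter("html", "epub")
def html_to_epub(src: Path, dst: Path):
    ...  # zip the HTML into an EPUB container

def convert(src: Path, dst: Path):
    """Shared plumbing: one lookup, one place to queue heavy jobs, one error path."""
    key = (src.suffix.lstrip("."), dst.suffix.lstrip("."))
    fn = CONVERTERS.get(key)
    if fn is None:
        raise ValueError(f"no converter registered for {key[0]} -> {key[1]}")
    try:
        fn(src, dst)
    except Exception as exc:
        # a single fallback/reporting path, independent of the formats involved
        raise RuntimeError(f"conversion {key[0]} -> {key[1]} failed") from exc
```

The point is not these particular lines of Python; it is that process management, queuing and failure handling live in one place, regardless of which formats pass through.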

Ingestion

In importing documents to an HTML system, the big use case is MS Word. Converting from MS Word is a road full of potholes and gotchas. The first problem is that there is no single ‘MS Word’ file format, rather there are many many different file formats that all call themselves MS Word. So to initiate a transformation, you need to know what variety of MS Word you are dealing with.

Your life is made much easier if you can stipulate that your system requires one variety – .docx. If you do have to deal with other forms of Word, then it is possible to do transformations on the backend from miscellaneous Word file type X to .docx and then from .docx to HTML. Libreoffice, for example, offers binaries that do this in a ‘headless’ state (it can be executed from the command line without the need to fire up the GUI). However, the more transformations you undertake, the more errors in the conversion you are likely to introduce. Obviously, this then causes QA issues and will increase your workload per transform required.
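As an illustration of that headless route (a sketch with made-up file names), the LibreOffice binary can be driven from the command line, or from a backend script, to normalise legacy Word files to .docx before the real conversion:

```python
import subprocess

# Convert a legacy .doc to .docx without starting the LibreOffice GUI.
# The converted file is written into the out/ directory.
subprocess.run(
    ["soffice", "--headless", "--convert-to", "docx",
     "--outdir", "out", "legacy-manuscript.doc"],
    check=True,
)
```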

Another real problem with MS Word versions before .docx is that they are not transparent. .docx actually is just XML, so you can view what you are dealing with. Versions before this were horrible binaries – a big clump of ones and zeros – and after that a bunch of gunk. That same problem also exists when you use binaries like soffice (the Libreoffice binary for headless conversions) as it is also a big bucket of numbers. You can’t easily get your head into improving transformations with soffice unless you want to learn to etch code into your CPU with a protractor.
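You can see that transparency for yourself: a .docx file is simply a zip archive, and the body of the document lives in word/document.xml. A quick way to peek inside (the file name is illustrative):

```python
import zipfile

with zipfile.ZipFile("manuscript.docx") as zf:
    print(zf.namelist())                  # word/document.xml, word/styles.xml, ...
    body = zf.read("word/document.xml").decode("utf-8")
    print(body[:500])                     # the actual content, as (verbose) XML
```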

If you have to deal with MS Word at all, I recommend stipulating .docx as the accepted MS Word format. I am not a file type expert, far from it, but from people who do know a lot about file formats I know that .docx looks like it has been designed by a committee… and possibly, a committee whose members never spoke to each other. Additionally, Microsoft, being Microsoft, likes to bully people into doing things their way. .docx is a notable move away from that strategy, and does make it substantially easier to interoperate with other formats, however, there are some horrible gotchas like .docx having its own non-standard version of MathML. Yikes. So, life in the .docx lane is easier, but not necessarily as easy as it should be if we were all playing in the same sandbox like grownups.

I have tried many strategies for Word to HTML conversion. There are many open source solutions out there, but oddly, not as many good ones as you would hope. Recently I looked at these three rather closely:

  • Calibre’s Python based ebook converter script
  • OxGarage
  • soffice (Libreoffice)

There are others…I can’t even remember which ones I have looked at in detail over the years. I have trawled Sourceforge and Github and Gitorious and other places. But the web is enormous these days and maybe there is just the oh-so-perfect solution that I have missed. If you know it then please email it to me, I’ll be ever so grateful (only Open Source solutions please!).

These three are all good solutions, but at the end of the day, I like OxGarage. I won’t go into too much detail about all of them but a quick top-of-mind whys and why-nots would include:

  • Calibre’s scripts are awesome and extendable if you know Python, however they don’t support MS MathML to ‘real’ MathML conversions. That’s a show stopper for me.
  • On the good side, though, Calibre’s developer community is awesome, and they are heroes in this field and deserve support, so if you are a Python coder or dev shop then, by all means, please pitch in and help them improve their .docx to HTML transforms. The world will be a better place for it.
  • soffice does an ok job but it’s a black box, who knows what magic is inside? It tends to make really complex HTML and it is also really heavy on your poor hardware. I have used it a lot but I’m not that big a fan.
  • OxGarage…well…I love OxGarage, so I really recommend this option…

OxGarage was developed by a European Commission-funded project and then, as is common for these kinds of projects, it dried up and was left on a shelf. Along came Sebastian Rahtz, a guru of file transformation, big Open Source guy, and also a force behind the Text Encoding Initiative. Sebastian is also the head of Academic IT Services at Oxford University. The guy has credentials! Also, he’s a terribly nice and helpful guy. He has so much experience in this area that I feel the trivialness of my questions about our .docx to HTML woes at PLOS… afraid he might absentmindedly swipe me out of the way like I was an inconsequential little midge… but he’s such a nice chap, instead he invites midges out to lunch.

So, Sebastian picked up the Java code and added some better conversions. OxGarage is essentially a Java framework that manages multiple different types of conversions. You feed it and are fed from it by a simple web API. It doesn’t have the best error handling, but it does do a good job. The .docx to HTML conversion is multi-step. First, the .docx is converted to TEI – a very rich, complex markup, and then from TEI via XSL to HTML. That means that all you really need to worry about is tweaking the XSL to improve the transformation and that’s not too tricky. It could be argued that the TEI conversion is a redundant step. I think it is. But OxGarage works out of the box and does a pretty good job so we have adopted it for the project I am working on for PLOS, and we are happy with it. We have added some special (Open) Sauce but I’ll get to that later. We are using it and will shoot for more elegant solutions later (and we have designed a framework to make this an easy future path).

If you are looking for Word-to-HTML conversion tools, I recommend OxGarage. I’m not saying it’s the optimal way to do things, but it will save you having to build another file conversion system from scratch, and from what I can tell from Sebastian, that would take considerable effort.

HTML to books

The other side of the tracks is the conversion of the HTML you have into a book file format. We live in a rather tangled semantic world when it comes to this part of the toolchain. Firstly, it’s hard to know what a book file format actually is these days… on a normal day, I would say a book file format is a file format that can display a human readable structured narrative. Yikes. That’s not particularly helpful… Let’s just say for now that a book file format is – EPUB, book formatted PDF, HTML, and Mobi.

So, transforming from HTML to HTML sounds pretty easy. It is! The question is really how do you want your book to appear on the web? Make that decision first, and then build it. Since you are starting with HTML this should be rather easy and could be done in any programming language.

The next easiest is EPUB. EPUB contains the content in HTML files stored in a zip file with the .epub suffix. That is also easy to create and, depending on your programming language, there are plenty of libraries to help you do this.
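To make the container concrete, here is a minimal sketch using only Python’s standard library. It skips the table of contents that a validator would also want, so treat it as an illustration of the structure rather than a production EPUB builder:

```python
import zipfile

CONTAINER = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

OPF = """<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Example Book</dc:title>
    <dc:language>en</dc:language>
    <dc:identifier id="bookid">example-book-0001</dc:identifier>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="ch1"/>
  </spine>
</package>"""

CHAPTER = """<?xml version="1.0" encoding="utf-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Chapter 1</title></head>
  <body><h1>Chapter 1</h1><p>Hello, book.</p></body>
</html>"""

with zipfile.ZipFile("example.epub", "w", zipfile.ZIP_DEFLATED) as epub:
    # the mimetype entry must come first and must be stored uncompressed
    epub.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", CONTAINER)
    epub.writestr("OEBPS/content.opf", OPF)
    epub.writestr("OEBPS/chapter1.xhtml", CHAPTER)
```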

Mobi. Ok.. mobi is a proprietary format and rather horrible. It contains some HTML, some DB stuff…  I don’t know…  a bit of bad magic, frogs legs… that kind of thing. My recommendation is to first create your EPUB and then use Calibre’s awesome ebook converter script to create the mobi on the backend. Actually, if you use this strategy, you get all the other Calibre output formats for free, including (groan) .docx if you need it. Honestly, go give those Calibre guys all your love, some dev time, and a bit of cash. They are making our world a whole lot easier.
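If you already have a valid EPUB, the Calibre step really is a one-liner around its ebook-convert command (shown here via Python’s subprocess to match the other sketches; the file names are mine):

```python
import subprocess

# EPUB in, Kindle-friendly mobi out; change the output extension to get
# Calibre's other output formats instead.
subprocess.run(["ebook-convert", "book.epub", "book.mobi"], check=True)
```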

Ok… the holy grail… people still like paper books, and paper books are printed from PDF. Paper these days is a post-digital artifact. So first you need that awkward sounding book-formatted PDF.

Here there is an array of options, and then there is this very exciting world that can open to you if you are willing to live a little on the bleeding edge… I’m referring to CSS Regions… but let’s come back to that.

First, I want to say I am disappointed that some ‘Open Source’ projects use proprietary code for HTML-to-PDF conversion. That includes Press Books and Wikipedia. Wikipedia is re-tooling their entire book-formatted-PDF conversion process to be based on LaTeX and that is an awesome decision. However, right now they use the proprietary PrinceML as does Press Books. I like both projects, but I get a little disheartened when projects with a shared need don’t put some effort into an Open Source solution for their toolchain.

All book production platforms that produce paper books need an HTML-to-PDF renderer to do the job. If it is closed source then I think it needs to be stated that the project is partially Open Source. I’m a stickler for this kind of stuff but also, I am saddened that adoption of proprietary components stops the effort to develop the Open Source solutions we need, while simultaneously enabling proprietary solutions to gain market dominance – which, if you follow the logic through, traps the effort to develop competitive Open Source solutions in a vicious circle. I wish that more people would try, like the Wikimedia Foundation is trying, to break that cycle.

The browser as renderer

There is one huge Open Source hero in this game. Jakob Truelsen. He created WKHTMLTOPDF when he was a university tutor because he wanted his students to be able to write in HTML and give him nicely formatted PDF for evaluation. So he grabbed a headless WebKit, added some Qt magic, some tweaks, and made a command line application that converts HTML to book-formatted PDF. We used it in the early days of FLOSS Manuals and it is still one of the renderer choices in the Booktype file conversion suite (Objavi). It was particularly helpful when we needed to produce books in Farsi which contain right-to-left text. No HTML to PDF renderer supported this at the time except WKHTMLTOPDF, because it was based on a browser engine that had RTL support built in.
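To give a flavour of it, a backend call to wkhtmltopdf looks something like this (the page size and margins are illustrative; the real design control comes from your CSS):

```python
import subprocess

# Render an HTML book to a print-ready PDF using wkhtmltopdf's headless WebKit.
subprocess.run(
    ["wkhtmltopdf",
     "--page-size", "A5",
     "--margin-top", "20mm", "--margin-bottom", "20mm",
     "--margin-left", "15mm", "--margin-right", "15mm",
     "book.html", "book.pdf"],
    check=True,
)
```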

Some years later WKHTMLTOPDF was floundering, mainly because Jacob was too busy, and I tried to help create a consortium around the project to find developers and finance. However I didn’t have the skills, and there was little interest. Thankfully the problem solved itself over time, and WKHTMLTOPDF is now a thriving project and very much in demand.

WKHTMLTOPDF really does a lot of cool stuff, but more than this, I firmly believe the approach is the right approach. The application uses a browser to render the PDF… that is a HUGE innovation and Jakob should be recognised for it. What this means is – if you are making your book in HTML in the browser, you have at your fingertips lots of really nice tools like CSS and JavaScript. So, for example, you can style your book with CSS, or add JavaScript to support the rendering of math, or use typography JavaScripts to do cool stuff… When you render your book to PDF with a browser, you get all that stuff for free. So your HTML authoring environment and your rendering environment are essentially the same thing… I can’t tell you how much that idea excites me. It is just crazy! This means that all those nice JavaScripts you used, and all that nice CSS which gave you really good looking content in the browser, will give you the same results when rendered to PDF. This is the right way to do it and there is even more goodness to pile on, as this also means that your rendering environment is standards-based and open source…

Awesome. This is the future. And the future is actually even brighter for this approach than I have stated. If you are looking to create dynamic content – let’s say cool little interactive widgets based on the incredible Tangle library – for ebooks (including web-based HTML)… if you use a browser to render the PDF you can actually render the first display state of the dynamic content in your PDF. So, if you make an interactive widget, in the paper book you will see the ‘frozen’ version, and in the ebook/HTML version you get the dynamic version – without having to change anything. I tested this a long time ago and I am itching to get my teeth into designing content production tools to do this.

So many things to do. You can get an idea how it works by visiting that Tangle link above… try the interactive widgets in the browser, and then just try printing to PDF using the browser… you can see the same interactive widgets you played with also print nicely in a ‘static’ state. That gets the principle across nicely.

So a browser-based renderer is the right approach, and Prince, which is, it must be pointed out, partly owned by Håkon Wium Lie, is trying to be a browser by any other name. It started with HTML and CSS to PDF conversion and now… oo!… they added JavaScript… so… are they a browser? No? I think they are actually building a proprietary browser to be used solely as a rendering engine. It just sounds like a really bad idea to me. Why not drop that idea, contribute to an actual open source browser, and use that? And those projects that use Prince, why not contribute to an effort to create browser-based renderers for the book world? It’s actually easier than you think. If you don’t want to put your hands into the innards of WebKit, then do some JavaScript and work with CSS Regions (see below).

This brings us to another part of the browser-as-renderer story, but first I think two other projects need calling out for thanks. ReportLab was for a long time one of the only command-line book-formatted-PDF rendering solutions. It was proprietary but had a community license. That’s not all good news, but at least they had one foot in the Open Source camp. However, what really made ReportLab useful was Dirk Holtwick’s Pisa project, which provided a layer on top of ReportLab so you could convert HTML to book-formatted PDF.

The bleeding edge

So, to the bleeding edge. CSS Regions is the future for browser-based PDF rendering of all kinds. Interestingly Håkon Wium Lie has said, in a very emphatic way, that CSS Regions is bad for the web…perhaps he means bad for the PrinceML business model? I’m not sure, I can only say he seemed to protest a little too much. As a result, Google pulled CSS regions out of Chrome. Argh.

However CSS Regions are supported in Safari, and in some older versions of Chrome and Chromium (which you can still find online if you snoop around). Additionally, Adobe has done some awesome work in this area (they were behind the original implementation of CSS Regions in WebKit – the browser engine that used to be behind Chrome and which is still used by Safari). Adobe built the CSS Regions polyfill – a JavaScript that plays the same role as built-in CSS Regions.

When CSS Regions came online in early 2012, Remko Siemerink and I experimented with CSS Regions at an event at the Sandberg (Amsterdam) for producing book-formatted PDF. I’m really happy to see that one of these experiments is still online (NB this needs to be viewed in a browser supporting CSS Regions).

It was obviously the solution for pagination on the web, and once you can paginate in the browser, you can convert those web pages to PDF pages for printing. This was the step needed for a really flexible browser-based book-formatted-PDF rendering solution. It must be pointed out however, that it’s not just a good solution for books… at BookSprints.net we use CSS Regions to create a nicely formatted and paginated form in the browser to fill out client details. Then we print it out to PDF and send it…

Adobe is on to this stuff. They seem to believe that the browser is the ‘design surface’ of the future, which seems to be why they are putting so much effort into CSS Regions. I’m not a terribly big fan of InDesign and proprietary Adobe strategies and products, but credit where credit is due. Without Adobe, CSS Regions would just be an idea, and they have done it all under open source licenses (according to Alan Stearns from Adobe, the Microsoft and IE teams also contributed to this quite substantially).

At the time CSS Regions were inaugurated, I was in charge of a small team building Booktype in Berlin, and we followed on from Remko’s work, grabbed CSS Regions, and experimented with a JavaScript book renderer. In late 2012, book.js was born (it was a small team but I was lucky enough to be able to dedicate one of my team, Johannes Wilm, to the task) and it’s a JavaScript that leverages CSS Regions to create paginated content in the browser, complete with a table of contents, headers, footers, left-right margin control, front matter, title pages…etc… we have also experimented with adding contenteditable to the mix so you can create paginated content, tweak it by editing it directly in the browser, and outputting to PDF. It works pretty well and I have used it to produce 40 or 50 books, maybe more. The Fiduswriter team has since forked the code to pagination.js which I haven’t looked at too closely yet as I’m quite happy with the job book.js does.

CSS Regions is the way to go. It means you can see the book in the browser and then print to PDF and get the exact same results. It needs some CSS wizardry to get it right, but when you get it right, it just works. Additionally, you can compile a browser in a headless state and run it on the command line if you want to render the book on the backend.

Wrapping it all up

There is one part of this story left to be told. If you are going to go down this path, I thoroughly recommend you create an architecture that will manage all these conversion processes and which is relatively agnostic to what is coming in and going out. For Booktype, Douglas Bagnall and Luka Frelih built the original Objavi, which is a Python based standalone system that accepts a specially formatted zip file (booki.zip) and outputs whatever format you need. It manages this by an API, and it serves Booktype pretty well. Sourcefabric still maintains it and it has evolved to Objavi 2.

However, I don’t think it’s the optimal approach. There are many things to improve with Objavi; possibly the most important is that EPUB should be the file format accepted, and then after the conversion process takes place EPUB should be returned to the book production platform with the assets wrapped up inside. If you can do this, you have a standards-based format for conversion transactions, and then any project that wants to can use it. More on this in another post. Enough to say that the team at PLOS are building exactly this and adding on some other very interesting things to make ‘configurable pipelines’ that might take format X through an initial conversion, through a clean-up process, and then a text mining process, stash all the metadata in the EPUB and return it to the platform. But that’s a story for another day…
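To sketch what an ‘EPUB in, whatever out’ conversion service could look like from the platform’s side – and this is purely hypothetical; the endpoint and parameters below are mine, not Objavi’s or the PLOS platform’s API – the whole transaction reduces to one upload and one download:

```python
import requests  # third-party: pip install requests

# Hypothetical conversion service: POST an EPUB, name a target format,
# get the converted file (or an enriched EPUB) back.
with open("book.epub", "rb") as fh:
    response = requests.post(
        "https://converter.example.org/convert",   # made-up endpoint
        params={"target": "pdf"},
        files={"file": ("book.epub", fh, "application/epub+zip")},
        timeout=300,
    )
response.raise_for_status()

with open("book.pdf", "wb") as out:
    out.write(response.content)
```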

Building Book Production Platforms p3

The editor

This series is based on HTML as a source file format for book production platforms. I have looked at many HTML editors over the years and can remember when the first in-browser editors appeared… it was a shock. Prior to that, all HTML creation was done by writing directly in HTML code; then came fully-featured environments like FrontPage and Dreamweaver which allowed you to create HTML in a desktop app; then came wiki markup to liberate us all from the tedium of writing HTML; and then finally… the browser-based WYSIWYG editor…

It’s worth noting that the Wiki markup and WYSIWYG solutions were a different category to the previous solutions in that they weren’t designed for creating web pages, rather they were designed to enable the production of wikis and content management systems.

What-You-See-Is-What-You-Get at that time was a refreshing and liberating idea, a newcomer to this scene (although WYSIWYG as a concept and approach to document creation predates the web, with the first true WYSIWYG editor being a word processing program called Bravo, invented by Charles Simonyi at the Xerox Palo Alto Research Center in the 1970s, the basis for MS Word and Excel). Many WYSIWYG strategies have been explored, and many weaknesses unearthed, including the very important critique that What-You-See-Isn’t-What-You-Get, because the HTML created by these editors is unreliable, but more on this later…

As far as I can tell, the first HTML-based WYSIWYG editor was Amaya World, first released in 1996. I don’t know which WYSIWYG editor was the first to be embedded in a browser (if you know, please email me). However, I remember TinyMCE like it was a revelation. According to the Sourceforge page, they started building it around 2004 to solve the need to produce HTML in content management systems. It was, and is, a great product. The strategy at the time was pretty much to emulate rendered HTML within an HTML text field. TinyMCE (and the others that followed until contenteditable came along) used a heap of JavaScript to turn a simple editable text field into a window onto the browser’s layout engine.

alt.typesetting

From this point, a number of plugins were developed for use with WYSIWYG editors like TinyMCE to extend the functionality.

Some of these plugins ventured into the ever-important area of typesetting. TinyMCE even tried at times to make up for the lack of browser functionality in this area – for example, there were some early and workable attempts to bring equation editing into TinyMCE. I can’t remember when it was, but it was surely around 2006/2007 that IMathAS had an experimental jab at this. I thought it was pure genius at the time as there was no other solution (I searched! a lot!). As far as I can remember, they used a very clever round-tripping to achieve the result… essentially, since browsers didn’t then support math, IMathAS supported inline equation writing using ASCIIMath syntax. When the user clicked out of the field, the editor sent the equation markup to the server, and the server returned the rendered equation as either a bitmap (PNG, JPG etc) or as vector graphics (SVG). It was genius and I built it into the workflow for FLOSS Manuals around 2010 because we wanted to write books with equations for software like CSound (produced in 2010/11). It worked great – the equations always looked a bit ‘bit-mappy’, but we could write and print books with equations using in-browser editors and HTML as source (the HTML produced included equations as images so we could render PDF direct from the HTML). Awesome.

It’s also worth noting that these days math typesetting has largely moved to the client side with the evolution of fantastic libraries like MathJax and KaTeX. These are JavaScript typesetting libraries designed to be included in web pages, rendering math from markup on the client side. There are one or two tools that still use server-side rendering, notably Mathoid, and this is often used to reduce the burden on the client’s browser; however, it has possible additional bandwidth costs, as the client and server must remain in communication with each other, otherwise nothing will be rendered or displayed.

Mature solutions for math and other typesetting issues are only just starting to come online – no surprise to historians, who inform us that notations such as math and music were the last to come online for the printing press as well. The first book to contain music notation post-Gutenberg, the Mainz Psalter, was printed with moveable type, and the music notation was added manually by a scribe. It seems the first thing to get right is the printing of text; all other notations come later in print systems. These solutions are slowly evolving – even music notation has its champions. However, what is really surprising is that Google, a company priding itself on being built from the ground up by math-heads, seems to struggle to bring native math typesetting to its own browser. I would say that is embarrassing.

Contenteditable

Moving on from typesetting… The initial WYSIWYG editors proved an admirable solution for many content management systems. The name persisted, but the background technology fundamentally changed when the first implementations of the W3C contenteditable specification for HTML5 were brought to the browser. Contenteditable is an attribute that you can add to a number of HTML container elements (like ‘p’ or ‘div’) to make their contents editable. So, in essence, you are directly editing the content in the browser rather than through some JavaScript text field trickery. This strategy might be called WYSI (What-You-See-IS). This strategy also spawned a whole new generation of editors leveraging this new native browser functionality. Aloha Editor was one of the first to grab the spotlight but there were many many others to follow. Additionally, the big legacy WYSIWYG editors such as TinyMCE and CKEditor added support for contenteditable, although they were a little slow to the party.

Contenteditable at first promised a lot… native editing of the browser … phew … that certainly lowers the technology burden and opens the door to innovation and experimentation. Additionally, the idea that this is a read-write web suddenly comes more keenly into focus when you can just edit the web page right there and inherit all the same JavaScript and CSS that operates on the element you are editing. It’s good stuff.

Inevitably, though, some problems soon emerged. First, some wobbly things, like not being able to place a caret (the text cursor) between block elements (eg between two divs), were a real problem, but later a more serious issue was identified – contenteditable does not produce stable results across different browsers, such that if you edit one page in browser A, the resulting HTML could look different if you edited the same page in browser B. That might not affect many people – if you just want some text with bold and italics and simple things, then it doesn’t really matter… the HTML created will render results that will look pretty much the same across any browser. However there are use cases where this is a problem.

In the world I work in at the moment – scholarly publishing – we don’t want a manuscript that contains inconsistent HTML depending on the browser it was edited in … it hurts us down the road when we want to translate that HTML into different formats (eg JATS) or if we want to render that HTML directly to PDF and get consistent results.

So, unfortunately, editors like CKEditor (used by many book production platforms including Atlas), TinyMCE (used by Press Books AKA WordPress), or Aloha (used by Booktype 2) have to use a lot of JS magic to produce consistent HTML to overcome the problems with contenteditable, and this doesn’t always succeed. I would recommend reading this article from the Guardian tech team about these issues. You also may wish to look at this video from the Wikimedia Foundation Visual Editor core devs for the comments on contenteditable (audio is lousy, jump to 1.14.00) (readable subtitles can be found here).

A better way

So…what can you do? The answer is kind of threefold.

First choice: decide not to care – an entirely legitimate approach. You can still do huge amounts with these editors, and if you need to tweak the HTML now and then, so what? I can clean up the HTML by hand for a 300-page book in an hour, not too tough really and it enables me to cash in on all the other enormous gains to be had from a single-source HTML environment.

Second choice: provide client-side and server-side cleanup tools. Most editors have these built in, but it’s also good to implement backend clean-up tools to ‘consistify’ the HTML at save-time (or at least at pre-render time) – there is a small sketch of this below.

Third choice: find an editor that is designed to produce consistent HTML.

In my opinion, the third choice is the best long term option and the ‘right way’ to do things. Being able to produce reliable results with ease, and without having to do things twice, will make everyone’s life easier.
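For the second choice, a save-time clean-up step can be as simple as whitelisting the markup you are prepared to store. The sketch below is my own illustration (using the BeautifulSoup library, with an arbitrary whitelist): it settles browser-specific habits like <b> versus <strong> and strips everything outside a small agreed subset of tags and attributes:

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

ALLOWED_TAGS = {"p", "h1", "h2", "h3", "em", "strong", "a", "ul", "ol", "li",
                "blockquote", "figure", "figcaption", "img",
                "table", "tr", "td", "th"}
ALLOWED_ATTRS = {"a": {"href"}, "img": {"src", "alt"}}

def consistify(html: str) -> str:
    """Normalise editor output into one agreed subset of HTML at save time."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all(True):
        # different browsers emit <b>/<i> or <strong>/<em>; settle on one form
        if tag.name == "b":
            tag.name = "strong"
        elif tag.name == "i":
            tag.name = "em"
        if tag.name not in ALLOWED_TAGS:
            tag.unwrap()          # keep the text, drop the unknown wrapper
            continue
        allowed = ALLOWED_ATTRS.get(tag.name, set())
        tag.attrs = {k: v for k, v in tag.attrs.items() if k in allowed}
    return str(soup)
```

Whatever the editor emits, the stored document then always uses the same small subset of HTML.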

Thankfully there is a new editor on the scene that is designed to do just this – the Wikimedia Foundation’s Visual Editor. This editor was developed to help the Wikimedia Foundation solve an uptake problem… essentially there are not enough people these days prepared to sit around learning wiki markup (which is by now pretty much a complicated scripting language). The need to lower the threshold of the Foundation’s contribution environment led to the development of the Visual Editor (VE). New contributors can use an easy WYSIWYG-like environment instead of having to learn markup.

Obviously, the entire Wikimedia universe is already stored as wiki markup, so the editor needs to be able to translate between HTML and wiki markup on-the-fly (interestingly, this is actually part of a much larger plan to store all Wikimedia Foundation content in HTML). To do this there is a back end called Parsoid that converts markup to HTML and vice versa. Also, the HTML produced by the editor obviously needs to be tightly controlled, otherwise the results are going to be a mess when converting back to wiki markup. VE does this by replicating the content in its own internal (JSON) model and displaying the results in a contenteditable region. When the content is edited, the edits are strictly controlled by the VE internal rules, and then rendered to display. The result… consistent HTML is produced across any edit session, regardless of the browser used…

That’s pretty good news. This is one reason amongst many that the platform I am working on for the Public Library of Science has adopted VE software (we were the first to use it outside of the Wikimedia Foundation) and we are extending it considerably and contributing the results upstream to the VE repos. So far we have added table, equation, and citation plugins – all of which are in an early alpha state. If you want a peek, you can see some of the work here.

I highly recommend to anyone building a book platform, or any other kind of knowledge production platform, that you examine VE more closely. It is sophisticated software and has been carefully thought through. It is still relatively immature, and development is happening at an incredible pace, which can make testing new plugins against an unstable API a little arduous… still, it is a great solution. VE also approaches content editing in a way that will open the door to concurrent editing via operational transformations in HTML, which is a hard problem and currently only solved by Google and Wikidocs (recently acquired by Atlassian).

If you are in the process of choosing an editor, choose VE and contribute to the effort to make it not just the best Open Source solution to editing in the browser, but the best solution, full stop.

Many thanks to Raewyn Whyte for improving this article.

PLOS

I’m working on a platform with the Public Library of Science (PLOS) in San Francisco. I’m the Designer and Product Owner, working with a talented team of approximately 15 full-time people. We are creating a platform for the production, processing, and publishing of science. It is a very versatile platform and could easily be utilised for many other purposes. Over the next months I’ll be blogging a little about some of the approaches we have adopted and highlighting some interesting technical solutions. The platform will be Open Source.

The platform is an HTML-first environment and includes ingestion of MS Word (and other formats) and conversion to HTML. I first presented some information about these strategies at Books in Browsers V last week in San Francisco. The video of my presentation can be found around the 26th minute here:
http://www.ustream.tv/recorded/54426830

Staticness as a Symptom of an Unwell Book

In the past few years, I’ve been constructing a set of practices around knowledge production. It’s been a Lego-like process. I add one brick, move it a bit, choose one of another colour and try and work out where it fits… It’s not so much a process of deconstruction of publishing as the construction of something else. Mainly because I don’t know enough about publishing to deconstruct it, so I have to start with what I know.

Sometimes, however, I realise just how odd that construction is. Usually, this occurs when I see an articulation of ‘how things are’ in ‘the real world’ and I realise… oops! I don’t at all relate to that or see the sense in it. That occurred recently with a discussion on the Read 2.0 list. Someone made a throwaway comment about how books might be changing and one day we might not think of them as static objects. A few comments followed about what the future of the book might be. I was left feeling very much on the outside in my Lego-constructed world. The only thing I could add to this conversation would have to pull apart the founding assumptions of the future ponderings – and I just didn’t know where to begin.

Books are mostly static objects in this world. You make them, ship them, consume them. Next. However, my experience with FLOSS Manuals is that this is exactly what we are trying to avoid. Since 2006, we have been avoiding staticness – rather the aim was, and is, to keep books alive. To a certain extent manuals about software present an obvious case where the value of ‘live’ books is evident. However, I don’t think that advantage is restricted to books about software. Books should be living entities and grow with time, expanding or contracting with input from many people.

So, staticness, through the lens of FLOSS Manuals and a ‘living book’ practice is actually a symptom of an ‘unwell’ book. A book that is not growing is a neglected work. It is left alone on the shelf to gather dust and die, where, by comparison, healthy books are attended to. They have growth spurts, or sometimes slower, prolonged periods of affection. They may fork, or become a central discussion, they might transit into other contexts entirely, or traverse languages. They are alive and more useful to us, vibrant and engaging. They also reveal the fundamental humanity behind the text… the living book as a conversation between living beings. A book, at its best, is a thriving community.

So, I have learned to look for staticness, and when I find it I literally get sad. I see this as a failed work, something that we were not able to diagnose, or failed to get to in time. At the same time, each failed work is a study and we have much to learn about how and why books die.

I think it’s important to learn to look for staticness as an early symptom of a failed book.

Building Book Production Platforms p2.

Amongst the core requirements for a book production platform are the source file format and the editor, and of course, these are intimately linked. The development team is usually faced with choosing the format first, then the editor.

Choosing a format

The choice is pretty much HTML? or not HTML?

Currently, HTML is the ruling choice of format for a web-based book production platform. HTML is native to the browser and has associated standards-compliant support, such as CSS and JavaScript. Conversely, not choosing HTML puts you in a bit of a hole and can create a lot of overhead.

It might be interesting to look back a little and learn from some others since there have already been projects in this space that started down non-HTML roads and then gave it up for HTML. Kathi Fletcher, originally the project manager and technical director for Connexions (now OpenStax) which built a custom XML editing environment for academic materials, later researched in-browser XML vs HTML editing environments for her Shuttleworth Foundation-funded OERPUB project. Kathi became convinced HTML was the way to go and did some great work on HTML editor usability with the Aloha HTML editor.

We have chosen to use HTML5 as the canonical format for open textbooks, because developers and tools are more plentiful for web technologies than XML technologies.

http://www.w3.org/2012/12/global-publisher/statements-of-interest/29-oerpub.html

The (closed source) O’Reilly Atlas platform also started with the complex AsciiDoc format (a lightweight markup language similar to Markdown) and eventually awoke to the power of HTML in 2012.

HTML5-based authoring offers a streamlined production workflow for producing both print and digital outputs, facilitates “digital first” content development, and is a perfect fit for creating a WYSIWYG, web-based writing experience.

http://radar.oreilly.com/2013/09/html5-is-the-future-of-book-authorship.html

They then got an extra dose of religion and started a project called HTMLBook, which is a suggested ‘spec’ for a subset of HTML elements to be used in books.

So far I have not seen a book production platform travel in the reverse direction, from HTML to something else. Instead, we are seeing more and more platforms start with, or change to, HTML as a source file format.

Markdown

Markdown is sometimes put forward as the way to go, but I’m not going to go into that in too much detail here. I have talked about this elsewhere. The only additional thing I will say is that Markdown causes even more issues for book production platforms than those covered in that article. Namely, in an in-browser Markdown environment, the Markdown will most likely be displayed as rendered HTML next to the authoring pane. That is a huge amount of lost screen space and extra UI junk for no apparent gain. Think of the UX cost. If you don’t have that rendered preview, then you will most likely see only raw Markdown in a text field. The user won’t really know whether their document looks right until it is rendered somewhere down the line, which is also a tremendous cost to the user for no apparent gain. Markdown: all pain, no gain.

NB: There is a possible good use case for Markdown as a helpful add-on for HTML WYSI editors, but I will cover that later.

LaTeX

There is a more valid use case for LaTeX in the browser, since some scientists and academics will never use anything else, and you’ll never convince them to adopt HTML regardless of the benefits. You are up against the great Church of Knuth, and I don’t fancy your chances. If your audience consists of LaTeX addicts, then I think you have no choice other than to support that.

Many times I have talked about remedies for unstructured MS Word documents (for scientific manuscripts) only to have someone earnestly comment that if everyone just learned LaTeX we would be in a much better position… They might be right, but I’m pretty sure it’s never going to happen.

The preference for LaTeX is a legacy issue, and a problematic one, but it needs to be dealt with. (Unfortunately, today’s Markdown heroes are growing legacy issues like this with each passing day, and that is going to cost us down the road.)

Recently there has been some interesting work on in-browser LaTeX editing, including the (closed source) Authorea platform and, most notably, the (open source) ShareLaTeX platform. ShareLaTeX round-trips the LaTeX: the syntax is displayed and edited in a text area (in the browser), rendered to a bitmap on the server, and returned to the browser for a side-by-side ‘WYSIWYG’ view. The effect is that you can see a just-in-time rendered view of the LaTeX as you type. It’s a neat trick and effective if you insist on LaTeX in a web-based platform. Then you just have to live with the UI costs. However, you only need this approach if you wish to support the full LaTeX syntax. If you wish to support just LaTeX equations, you can use an HTML editor with a LaTeX plugin based on MathJax or the Khan Academy’s KaTeX (and there are some other solutions, such as Mathoid).
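
To make the equation-only case concrete, here is a minimal sketch of rendering a single TeX expression with KaTeX inside an ordinary HTML page. The CDN paths and version number are illustrative assumptions, so check the KaTeX documentation for current ones; an editor plugin does essentially the same thing against whatever the author types.

    <!DOCTYPE html>
    <html>
    <head>
      <!-- KaTeX stylesheet and script; URLs and version are illustrative -->
      <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.16.9/dist/katex.min.css">
      <script defer src="https://cdn.jsdelivr.net/npm/katex@0.16.9/dist/katex.min.js"></script>
    </head>
    <body>
      <p>The quadratic formula: <span id="eq"></span></p>
      <script>
        // Deferred scripts finish loading before DOMContentLoaded fires,
        // so katex is available inside this handler.
        window.addEventListener('DOMContentLoaded', function () {
          katex.render('x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}',
                       document.getElementById('eq'),
                       { throwOnError: false });
        });
      </script>
    </body>
    </html>

MathJax works along much the same lines, and either library drops into a WYSI editor as a plugin without needing a LaTeX toolchain on the server.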

Incidentally, if you need to support full LaTeX, I highly recommend checking out ShareLaTeX over WriteLaTeX. They both take the same approach, but WriteLaTeX is proprietary, whereas you can pick up the ShareLaTeX code and integrate it straight away. You could even build your own ShareLaTeX-like interface; it’s not too tricky. A colleague, Rizwan Reza, and I (Riz did all the hard work) managed to develop a workable prototype in about two days, though there are many gotchas in setting up the LaTeX compiler correctly.

Not many book projects need LaTeX, so I will leave this as an interesting edge case. There are solutions if you need it, but not many people need it.

XML

I think I will just leave it to the words of the brilliant Dave Cramer (Hachette Book Group):

So we’ve chosen to describe our content with HTML, and build our production system around HTML.

When I tell people that, they smile condescendingly, and chuckle a bit. “That’s cute. Why don’t you use real XML?”

I then ask them what you can do in Docbook (or TEI, or NLM) that you can’t do in XHTML? I haven’t heard a good answer to that question yet. XHTML is XML, by definition. Calling something “para” rather than “p” doesn’t get you anything, except carpal tunnel syndrome and invoices from consultants

The problem with non-HTML XML is that it is essentially just XML the browser can’t use. Hence you lose all that other good stuff like WYSI editors, CSS design tools, cool tricks with JavaScript, and all the cool tools that are being developed for HTML. XML just can’t compete, plus you are going to need to convert the XML into HTML anyway. So don’t make life more complicated than it already is – continue your love affair with XML as long as it’s XHTML!
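
To put Cramer’s point in concrete terms, here is roughly the same paragraph marked up both ways (a simplified illustration, not drawn from any particular DocBook document):

    <!-- DocBook-style markup (simplified illustration): -->
    <chapter>
      <title>Choosing a format</title>
      <para>HTML is the ruling choice for web-based book production.</para>
    </chapter>

    <!-- The same structure in XHTML, which is XML by definition: -->
    <section class="chapter">
      <h1>Choosing a format</h1>
      <p>HTML is the ruling choice for web-based book production.</p>
    </section>

The element names differ, but the structure they capture is identical, and only one of the two can be opened, styled, and edited directly in a browser.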

HTML

HTML is king in the browser and it gives you all you need to make books. I don’t want to spend a lot of time arguing the merits of HTML in this post as there is a lot to say and I want to bring that in at other points of the conversation. But in brief:

  • HTML is supported by JS and CSS.
  • The DOM is known natively by the browser.
  • HTML is standards-based.
  • It is straightforward.
  • HTML is easy to read and easy to clean.
  • HTML is the most popular file format on the planet.
  • You can use HTML to build structure in documents with assigned class and id values, or microdata formats (see the sketch after this list).
  • HTML is the native file format for EPUB.
  • PDF can be rendered directly from HTML in the browser (more on this later).
  • HTML can be paginated in the browser.
  • CSS is moving towards supporting more and more page based elements.
  • The browser can act as a design environment.
  • You can create real what-you-see-is (WYSI) production environments.
  • Basic editing is built into the format itself.
  • HTML is supported by an enormous number of tools for conversion (in and out).
  • HTML is supported by an enormous repository of examples (the web).
  • HTML is cheap to develop with.
  • Even book designers are getting used to it.
  • Some schools teach it.
  • It has a million free tutorials online to help you use it.
  • A lot of people know HTML.
  • HTML is supported by a rapidly proliferating body of JavaScripts for typography, graph production, animation, interactions, dynamic rendering etc etc etc etc
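
To make a few of those points concrete, here is a minimal sketch of a chapter as HTML source: structure carried by ordinary class and id values, basic editing switched on with nothing more than the contenteditable attribute, and an @page rule hinting at how the same source can be pushed towards paginated print output. The class names and page dimensions are illustrative only, and @page support still varies between browsers and renderers.

    <!DOCTYPE html>
    <html>
    <head>
      <style>
        /* Page-based CSS for a print/PDF rendering of the same source.
           Dimensions are illustrative; renderer support varies. */
        @page {
          size: 140mm 216mm;
          margin: 20mm 15mm;
        }
        .chapter-title { page-break-before: always; }
      </style>
    </head>
    <body>
      <!-- Structure via plain class and id values; no custom schema required. -->
      <section class="chapter" id="chapter-01">
        <h1 class="chapter-title">Choosing a format</h1>
        <!-- contenteditable: basic editing built into the format itself. -->
        <p contenteditable="true">The choice is pretty much: HTML, or not HTML?</p>
      </section>
    </body>
    </html>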

The basic idea really comes down to this.

  • HTML is the cheapest format of our time.
  • HTML is the most popular format of our time.
  • HTML is the networked document format of our time.

Increasingly HTML is the way stories are told, whether that is in books or on the web. It’s a trite analogy perhaps, but HTML is the paper of our time. As Dave Cramer says:

why start with something other than HTML, when you have to turn it into HTML anyway?

It should be noted that Cramer also turns HTML into paper, and the Hachette Book Group have produced many beautiful paper books using HTML as the source format. You will now find many of these books in the best-seller sections of your local brick-and-mortar bookstore.

Other print producers are also using HTML as the source. Print-on-demand services, accustomed to producing very ugly books by ingesting MS Word and dealing with all that messy conversion, are also adopting HTML production environments. Books on Demand, Germany’s largest print-on-demand service, adopted Booktype so their customers could have an easy in-browser book production environment. The source format is HTML, but the users don’t know that, and the books look better. That’s the beauty of HTML.

Finally, helped a lot by the efforts of Dave Cramer and the Hachette Book Group, Sourcefabric, the people at O’Reilly, and others adopting HTML, we might be starting to see the very beginning of the changing of the guard.

HTML is the way to go for book production platforms. If you choose another format you will find you inherit a lot of costs and additional overhead and, sadly, you will soon be left behind. There is just no format moving forward at the same speed as HTML. Not even close. So, my advice is to first ask the question – can HTML do what you need? Push your team to answer that question. Will format X give you anything HTML can’t? As an exercise, ask your team to prove HTML is a bad choice, and if the answer comes back ‘not HTML’, then contact me and let me try to talk you into it!