publish.js

A list of some interesting Open Source projects (mostly JavaScript) that hold good possibilities for knowledge production and publishing.

Content Production

Lens
License: FreeBSD
Code: https://github.com/elifesciences/lens
WWW: http://lens.elifesciences.org/about/

Tangle
License: MIT
Code: https://github.com/worrydream/Tangle
WWW: http://worrydream.com/Tangle/

Flatsheet
License: MIT
Code: https://github.com/flatsheet/flatsheet
WWW: http://flatsheet.io/

Realtime Markdown Editor
License: TBD (emailed dev)
Code: https://github.com/scotch-io/node-realtime-markdown-viewer
WWW: https://scotch.io/tutorials/building-a-real-time-markdown-viewer (tutorial)
Demo: http://realtimemarkdown.herokuapp.com/

jsPlumb
License: MIT & GPL
Code: https://github.com/sporritt/jsplumb/
WWW: http://jsplumbtoolkit.com/

Ice
License: GPL 2
Code: https://github.com/NYTimes/ice
WWW: http://nytimes.github.io/ice/demo/

Annotator
License: MIT & GPL v3
Code: https://github.com/openannotation/annotator/
WWW: http://annotatorjs.org/

Space.js
License: MIT
Code: https://github.com/gopatrik/space.js
WWW: http://www.slashie.org/space.js/

MathJax
License: Apache
Code: https://github.com/mathjax/MathJax
WWW: https://www.mathjax.org/

KaTeX
License: MIT
Code: https://github.com/Khan/KaTeX
WWW: http://khan.github.io/KaTeX/

Velocity.js
License: MIT
Code: https://github.com/julianshapiro/velocity
WWW: http://julian.com/research/velocity/

JuxtaposeJS
License: Mozilla
Code: https://github.com/NUKnightLab/juxtapose
WWW: https://juxtapose.knightlab.com/

TimelineJS3
License: Mozilla
Code: https://github.com/NUKnightLab/TimelineJS
WWW: http://timeline.knightlab.com/

StoryMap
License: Mozilla
Code: https://github.com/NUKnightLab/StoryMapJS
WWW: https://storymap.knightlab.com/

Chartbuilder
License: MIT
Code: https://github.com/NUKnightLab/Chartbuilder
WWW: http://quartz.github.io/Chartbuilder/

Soundcite
License: Mozilla
Code: https://github.com/NUKnightLab/soundcite
WWW: http://soundcite.knightlab.com/

Pagination

BookJS
License: AGPL
Code: https://github.com/booktype/BookJS
WWW: none

Cassius
License: AGPL
Code: https://github.com/MartinPaulEve/CaSSius
WWW: https://www.martineve.com/2015/07/24/getting-started-typesetting-with-cassius/

Vivliostyle
License: Apache
Code: https://github.com/vivliostyle
WWW: http://vivliostyle.com/

BookJS Polyfill
License: AGPL
Code: https://github.com/BookSprints/bookjs-polyfill
WWW: none

Typography

Hyphenation

Sweet Justice
License: BSD
Code: https://github.com/aristus/sweet-justice
WWW: http://carlos.bueno.org/2010/04/sweet-justice.html

Hyphenator
License: LGPL
Code: https://code.google.com/p/hyphenator/downloads/list
WWW: https://code.google.com/p/hyphenator/

Hypher
License: BSD
Code: https://github.com/bramstein/hypher
WWW: http://www.bramstein.com/projects/hypher/

Font Resizing

FlowType.js
License: MIT
Code: http://github.com/simplefocus/FlowType.JS
WWW: http://simplefocus.com/flowtype/

Squishy
License: unspecified (argh! please include a license file!)
Code: https://github.com/lemonmade/squishy
WWW: http://cmsauve.com/projects/squishy/

Fittext
License: WTFPL
Code: https://github.com/davatron5000/FitText.js
WWW: http://fittextjs.com/

SlabText
License: MIT
Code: https://github.com/freqDec/slabText/
WWW: http://freqdec.github.io/slabText/

Responsive Text
License: MIT
Code: https://github.com/ghepting/jquery-responsive-text
WWW: n/a

Line Spacing

Typeset
License: unspecified (argh… )
Code: https://github.com/bramstein/typeset
WWW: http://www.bramstein.com/projects/typeset/

Kerning

Kern.js
License: WTFPL
Code: https://github.com/bstro/kern.js
WWW: http://www.kernjs.com/

Kerning.js
License: MIT (license file is in the wrong place – check the README)
Code: https://github.com/endtwist/kerning.js
WWW: http://kerningjs.com/

TypeButter
License: CC-BY-SA 3.0
Code: https://github.com/hudsonfoo/typebutter
WWW: http://typebutter.com/

Drop Caps

DropCap.js
License: confused (All rights reserved and apache??)
Code: https://github.com/adobe-webplatform/dropcap.js
WWW: http://blogs.adobe.com/webplatform/2014/10/02/drop-caps-are-beautiful/

Color

Color Font
License: WTFPL
Code: http://manufacturaindependente.com/colorfont/media/js/colorfont.js
WWW: http://manufacturaindependente.com/colorfont/

Font Tricks

Lettering JS
License: WTFPL
Code: https://github.com/davatron5000/Lettering.js
WWW: http://letteringjs.com/

jqisotext
License: MIT & GPL
Code: https://code.google.com/p/jqisotext/downloads/list
WWW: http://workshop.rs/2010/01/jqisotext-jquery-text-effect-plugin/

Arctext.js
License: MIT
Code: http://tympanus.net/Development/Arctext/Arctext.zip
WWW: http://tympanus.net/codrops/2012/01/24/arctext-js-curving-text-with-css3-and-jquery/

Blast.js
License: MIT
Code: https://github.com/julianshapiro/blast
WWW: http://julian.com/research/blast/

Base Lines

Baseline.js
License: WTFPL
Code: https://github.com/daneden/Baseline.js
WWW: n/a

Baseline CSS
License: CC-BY-SA 3.0
Code: http://baselinecss.com/download/baseline.zip
WWW: http://baselinecss.com/

HUGrid
License: GPL
Code: http://bohemianalps.com/tools/grid/HeadsUpGrid_download.zip
WWW: http://bohemianalps.com/tools/grid/

CSS Stuff

Tufte CSS
License: MIT
Code: https://github.com/daveliepmann/tufte-css
WWW: http://www.daveliepmann.com/tufte-css/

Other Resources
This and That

https://github.com/yumyo/js-type-master
http://stateofwebtype.com/
http://nytlabs.com/projects/editor.html
https://www.youtube.com/watch?v=VbCqFQ1sTYQ
http://dat-data.com/
https://github.com/sharelatex/clsi-sharelatex
https://anmolkoul.wordpress.com/2015/06/05/interactive-data-visualization-using-d3-js-dc-js-nodejs-and-mongodb/
http://webapplayers.com/inspiniaadmin-v2.2/cssanimation.html
http://yusugomori.com/projects/css-sans/fonts
http://codepen.io/juliangarnier/pen/idhuG
https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills
https://github.com/WebComponents/webcomponentsjs

Some Good Reading

https://medium.com/@mql/self-host-a-scientific-journal-with-elife-lens-f420afb678aa
https://medium.com/@mql/centralised-vs-decentralised-publishing-626055376c81
http://book.pressbooks.com/

Platforms

Booktype
License: AGPL
Code: https://github.com/sourcefabric/Booktype
WWW: https://www.sourcefabric.org/en/booktype/

PressBooks
License: not specified 🙁 emailed org
Code: https://github.com/pressbooks/pressbooks
WWW: http://pressbooks.com/

PubSweet
License: Apache
Code: https://github.com/BookSprints/PubSweet
WWW: none

More coming…

Building Book Production Platforms p5

Workflow

Much more to come.

Most of the book production platforms in circulation have very few workflow tools to speak of. This is not necessarily a bad thing. A platform that is ‘just an editing environment’ is still pretty powerful. If you do need tools to assist with workflow, then in situations where a small group knows each other well, they can use email or, if in real space, Post-it notes or paper to track what needs to be done next. In many cases, a live chat in the interface, or an integrated topic-based forum, will be enough to satisfy many workflow needs, and in other cases the platform can be augmented by external systems such as wikis, online spreadsheets, content management systems and other tools to meet particular requirements.

However, there are a number of situations where these ‘solutions’ become unsatisfactory. This is especially true for organisations which have a large number of people involved in processing content, or which have sophisticated content-processing needs (such as book publishers).

Before going too much further, let me clarify what “workflow tools” are. In the broadest sense, they are tools that help you to know what needs to be done, and when it needs to be done by. Using this very broad definition, we can see that mechanisms such as discussion forums and live chats are workflow tools. By chatting with colleagues through a live chat or forum, you can work out what needs to be done next, or get a ‘notification’ (a shout out) that it needs to be done now… From there, systems can evolve into complex technical environments which are either relatively open-ended (such as Trello) or relatively closed, such as hard-coded workflow pipelines.

The first book production system I built for FLOSS Manuals, ‘built’ on top of TWiki in 2006–2007, had some basic workflow tools, namely:

  • a basic live chat
  • a dropdown status-selector for marking chapter statuses (needs content, needs images, finished, and so on)
  • notifications in the table of contents when someone is editing a chapter
  • a mailing list where efforts could be coordinated

[image: notifications]

These tools were simple and effective and served us well for a number of years. I also incorporated similar mechanisms into Booktype and PubSweet. In addition, when we used these platforms for Book Sprints, lots of whiteboard scribbles and Post-its were utilised.

[image: whiteboard scribbles]

In a Book Sprint, notably, the facilitator is the main coordinating workflow mechanism. I point that out because it is important to understand that workflow tools can include humans – often the easiest way to know what needs to be done, and when it is to be done by, is to get someone else to tell you.

[image: sprinters]

And let’s not forget that human factor! We are living at a time when we tend to want to programmatically solve problems with overly prescriptive technical systems. But sometimes underdetermining the technical systems is the right way to go.

I first tried pushing past these basic software workflow tools with Booktype – a book production system I founded, now housed with Sourcefabric. I leveraged the kanban idea of multiple columns (phases) populated by ‘todo’ items to build the equivalent of a digital kanban system, making the first simple prototype in a demo for the Frankfurt Book Fair in 2012. The inspiration came from Pivotal Tracker and the Open Source Fulcrum.

Most often the technology used to set up a kanban system is a whiteboard, with marker pens to draw and label the columns, and Post-it notes as a marker of the tasks. This kind of system is popular in unconferences, and also often used by software development houses. We also use this type of kanban approach a lot in Book Sprints.

[image: Booktype]

The task manager (as I called it) and the production system were linked to each book and worked nicely. Although this system didn’t make it into the core code of Booktype, this version got the idea across, and later Juan Gutierrez made an integrated version for PubSweet. (During 2014, I also built this idea into a system for PLOS).

The task manager used a whiteboard-like interface in which the user could create columns (phases). Cards could be added to each phase and simple notes kept on each card. It was simple but effective.

In time I discovered Trello, and Why Cards are the Future of the Web by Paul Adams – these examples placed cards nicely within evolving design paradigms of the Internet, and I started to think about this model in more detail.

There are many advantages to cards, not the least being that cards can ‘follow the user’ – think of them as powerful work-unit-applications that can be accessed by a user within any context where they are needed.

[image: cards on the web]

Additionally, when thinking of digital cards within the digital workflow-kanban paradigm, the nice thing is that it is a very simple model. There are essentially just 2 elements – cards and columns. You can create as many of each as you like. Further, you can name the columns and cards anything you like. That means these two devices can be used to represent any number of simple or complex workflows. You can start from the kanban default – three columns marked ‘to do’, ‘doing’ and ‘done,’ and add cards for each task – progressing them from left to right as tasks progress from ‘to do’ to ‘done.’ This is the default configuration when creating a new Trello board.
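
To make the simplicity of that model concrete, here is a minimal sketch of the two elements in plain JavaScript (the names are mine, purely illustrative, not Trello’s API):

    // A board is just columns; a column is just an ordered list of cards.
    const board = {
      columns: [
        { name: 'to do', cards: [] },
        { name: 'doing', cards: [] },
        { name: 'done', cards: [] },
      ],
    };

    // Add a task card to a column.
    function addCard(column, title, notes) {
      column.cards.push({ title, notes: notes || '' });
    }

    // Progress a card from one column to the next as work advances.
    function moveCard(from, to, title) {
      const i = from.cards.findIndex(function (c) { return c.title === title; });
      if (i !== -1) to.cards.push(from.cards.splice(i, 1)[0]);
    }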

Replicating this system in an application is pretty easy to do. Trello is an excellent example. While Trello is not easily integrated into another technical system (such as an in-house publishing system), it is interesting in that the designers, while surely tempted by all that a web application could offer, have endeavoured to keep the Trello system true to the kanban ideology of useful but simple. With Trello, therefore, you can add columns, and cards to columns, naming each as required. When you open a card, however, you have some nice widgets for making lists, comments, discussions, attaching files etc. This is something paper cannot easily do, at least not with the small real estate afforded by Post-it notes.

Trello is a lovely application precisely because these systems, like the paper kanban, have been designed to be simple to use and serve as many generic use cases as possible.

However, while digital kanban systems like this are useful as standalone ‘context agnostic’ systems, they could be much more powerful for publishers (or anyone) if this simplicity and flexibility could be preserved while the system also served their specific use case. The trick is to preserve the simplicity and flexibility to allow publishers to model existing and future workflows in an easily ‘grok-able’ drag and drop manner (similar to Trello), while building cards that reflect the publisher’s specific needs (to invite editors, push content to external vendor services, perform peer review etc).

Building cards like this means pushing cards away from the Trello/kanban generic-use paper metaphor towards a more sophisticated specific-use digital and networked paradigm. This means embracing the idea that cards are networked applications and building cards that precisely serve the publisher’s needs and integrate into their existing internal and external systems.

The Four Phases of Knowledge Production

Knowledge production consists of four basic phases – manage, create, process, and share. When designing knowledge production systems, it pays to keep these four phases in mind and to build platforms that have an eye on this high-level abstraction. These phases can be linear dependencies, overlap and/or be concurrent.

‘Thinking from above’ can help us to better understand where the needs of each phase might be placed within the system we are designing.

Manage – when producing a knowledge asset, there needs to be some management of the context. This mainly includes things like the setting up and processing of users, roles, permissions, and groups. However, ‘manage’ can also be an appropriate way to think about the ways users access content. A dashboard is a management interface, as are interfaces to create a Table of Contents (book) or keep track of a Collection (eg a set of related journals). The usefulness of thinking of management needs in this way is that it really highlights that a dashboard (consider Google Drive as a dashboard) and a Table of Contents interface for managing an online book production system are the same category of interface. They might have a different treatment according to the use case, but they facilitate much the same kinds of activities.

Create – content needs to be created, so we need content-creation interfaces. We are familiar with these: think about blogging and where you write your posts – that is a content creation interface; now think about a book production system where you write a chapter – that is also a content creation interface. These interfaces both belong in the same high level abstract container.

We need to think of content creation interfaces at a high level of abstraction when designing these kinds of systems for many reasons.

First, it is useful to think about how components may be re-used. What, for example, is the difference between an interface that is used to create a blog post, and one used to create a chapter? For a great many use cases, the answer is nothing. This re-use approach is taken by PressBooks, for example. PressBooks uses WordPress as a book production platform (note: I don’t agree this is a good idea, as WordPress is an entire suite best suited to blogs and not books, but I am pointing out the similarity of the content production needs at a very high level).

Second, it is interesting to ask ourselves what kind of content we are trying to produce and whether we have the right type of content production interface. Think, for example, of a book production system. All (except one) of the online book production tools I am familiar with have one kind of content creation interface – a WYSIWYG editor attached to a blank page. You use the editor to fill up the page until your chapter is done.

But what if you need to produce a glossary? Most book production systems use the same tool. That doesn’t seem like a good idea. Glossaries have very specific needs – users need to be able to sort, create new items, perhaps even translate terms into other languages and sort by those languages etc. A ‘waterfall’ cascading content creation interface (WYSIWYG and a blank page) doesn’t meet that need very well. What if you wish to produce 2-page spreads (in the case of paper books), or an index? Then imagine that instead of books, we are looking to produce annotated data sets.

We need to liberate ourselves from the one-size-fits-all approach to content production, and placing these needs in a high level abstract container allows us to think of the needs and not the UI.

Process – one thing I have come to learn through working in STM publishing is that publishers add value to content by improving it. That is pretty much what every publisher does. In my mind, we should be re-framing publishers and calling them processors. After all, making content public these days is a doddle. And the difference between ‘self-publishing’ and ‘publishing’… is the processing bit. Self-publishers (generally speaking) do not have access to the same level of experienced and useful processing that can improve a work. As time goes on, we will place less and less value on the ‘publishing’, and in the case of STM, the long wait to make science public is already an impediment to progress.

The processing of content is an important part of knowledge production. If we want good knowledge, we need to be able to bring into play all those people and (sometimes) machines at the right moment to improve the work. That is essentially what workflow is all about. Processing is a high-level abstraction for workflow. It’s important to consider processing at a high-level abstraction because far too many systems build hard-coded workflow pipelines into their platforms, and that retards the opportunity to reconsider, optimise, and even radicalise workflows. I also consider other UI elements such as discussions to belong in the ‘process bucket’. Discussions are workflow and they are often the only type of mechanism that can account for the high level of specificity and issue resolution required on a per-knowledge asset (eg issues for each manuscript, chapter etc) level.

Share – formerly this was getting the book to the bookshop, but under current conditions this is something else entirely. Sharing works in the age of digital assets and network communications is all about file formats, APIs, and syndication. Avenues for sharing and the requirements of this process are proliferating daily, and the needs can be so complex that any system built to manage ‘sharing’ needs to be extremely flexible.

Thinking about sharing at a high level of abstraction helps us with this enormously. For example, ingestion of .docx to an HTML-based knowledge production system is actually an act of sharing. It consists of file conversion and feeding the result into the production system for others to access. That ‘feeding’ of content into our production system is a type of syndication. And the next ‘export’ of the finished product to some other system (eg a book sales system) requires a further process of file conversion (to the target book format) and feeding that format into the target system. Exactly the same high-level process as ‘ingestion’. They both belong to the ‘sharing’ bucket.

So, we can see that ‘import/ingestion’ and ‘export’ are actually the same thing. Consequently, we can save ourselves a lot of effort by building a framework in any knowledge production system that will ‘do both’ (ie recognise that there is no difference between import and export, and manage both rather than building redundant parallel processes).

The beauty of this ‘conceptual schema’ for knowledge production is that we can apply it to a wide variety of use cases to understand the knowledge production process at hand, and the variation and similarities between any of them. For example, Book Sprints traverse each of these 4 phases, as does a typical book from a publisher, as does a Wikipedia article, as does a grant application to a funder. Thinking of the process that way helps us see where the variance is – and helps us to better focus on designing for the needs of each. This suggests that a really good, efficient, single system can be designed to enable the production of a vast range of knowledge types, and accommodate apparently different processes which were formerly housed in standalone single use case platforms.

Colophon: written in Piha after walking on the beach, then cleaned somewhat by Raewyn. Written using Ghost blogging software (free software!).

Why Persistent Identifiers are the Wrong Idea

I think I will rewrite this. It seems to me it’s only half the solution…more coming shortly.
This afternoon I was reading Elizabeth Eisenstein’s “The Printing Revolution in Early Modern Europe” when I came across this passage:

To consult different books, it was no longer so essential to be a wandering scholar...The era of the glossator and the commentator came to an end, and a new "era of intense cross-referencing between one book and another" began.

The point here is that after the printing press came about, there were more books available. An obvious point. As a result, cross-referencing became a feature, since it was possible to access and, consequently, reference other literature more readily.

This made me think about the current discussions around persistent identifiers for scholarly content. It seems the current solution is to offer a layer of indirection: this enables a stable identifier to persist, and should the ‘actual location’ of the content be changed, then we can re-configure the redirect to point to the new location.

Martin Fenner and Geoff Bilder point to this solution in their very good postings on this topic. However, this method does not overcome the real problem. What if either:

  • no one updates the redirection after the location has changed
  • the content really goes offline (through loss of domain, for example)

It appears to me that we have the wrong solution. There is really no way to solve this issue with URIs. We can only minimise it.

So, how to go about resolving this (so to speak)?

One way to get some insight into the issues is to wind back the clock and look at the way content was located in the age of the newly-born printing press. In this age, scholars were liberated because they didn’t have to wander the world looking for a particular book. Instead, identical printed copies proliferated, and it was just a matter of finding a copy of the work you were pursuing. Preferably you found a copy in a library or shop nearby. To find that work, one merely needed the cross-reference information to track it down. That “persistent identifier,” comprised of author’s name, book’s title, page of reference for the quoted material, publisher’s name and location, publication date, was commonly referred to as a citation (we still use that term). The citation helped the reader or researcher find a copy of the work cited. Not a particular printed book, but a copy of that book.

So, how is it that in an age of digital media we have gone backwards? Where copying something is even easier than in the printed age, why are we still pointing to ‘one’ authoritative copy? In essence, we are still referencing a book by stating the exact book that sits in a specific institution, on a particular shelf, with the blue (not green) cover.

It feels a little like the great leap backwards to me.

A way to get around this problem would be simply to allow and encourage content to be copied. Let digital media do what it does best – copy and distribute itself. The ‘unique identifier’ would then not be a URL (with a layer of indirection) but would take the form of a checksum or hash. Finding the right work would then be a matter of searching for a copy of the material with the right checksum.
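
As a sketch of the idea (not a description of any existing identifier scheme), a content-based identifier is trivial to compute; in Node.js:

    // Compute a content-based identifier (SHA-256 hash) for a document.
    const crypto = require('crypto');
    const fs = require('fs');

    function contentId(path) {
      return crypto.createHash('sha256')
        .update(fs.readFileSync(path))
        .digest('hex');
    }

    // Any faithful copy of the file, wherever it lives, yields the same id.
    console.log(contentId('my-article.html'));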

I don’t know. I’m probably missing something. But it seems we have no problem tracking down YouTube videos when they spawn into the ether. We can also tell one version of software from another, no matter where it is and how it is labelled. Why not just let the content go and provide mechanisms to find a specific version of the content via hash search (also solving the issues of versioning URIs)?

Colophon: written on a lovely Sunday afternoon in the Mission. Adam got up Monday morning with that nagging feeling… rethought it. Talked to Raewyn. Rewrite coming. Written using Ghost (free) software.

It’s not US and THEM, it’s a TEAM, stupid

hi y’all

I have just been reading some posts on Scholarly Kitchen about content creation and the next wave of authoring systems.

It seems the STM sector has long been in need of developing a solution to get their publishing processes out of various traps. The most obvious trap is MS Word. A horrible format, to be sure, but it has long been the default file format for manuscript production, with a small tip of the hat to LaTeX for the technologically gifted. Their recent discussions have mainly been about online ‘authoring systems,’ going beyond MS Word to anticipate documents that are fully transparent to whatever combination of machine and human interactions play a part in understanding and processing the information.

This ‘get-out-of-MS-Word-free’ card is a very attractive proposition. MS Word is basically a binary blob to most publishing systems (even though in actual fact the .docx format is XML – thereby also abruptly ending the false argument that XML inherently brings structure). As a ‘binary’ (go with me on this for now), MS Word is not transparent to the publication system: there is no record of when the author has worked on it; finding out what they have done between version xxx.xxx and version xxxx.xxxxxxxx is very difficult; nobody else can work on it while the author is working on it; and there is no control over structure etc etc etc

So getting away from reliance on MS Word is the aim. But getting into (what I might rename as) an ‘authoring only’ platform is not the solution.

What is interesting about the SK forum is that there seems to be a very clear distinction in the minds of publishers between the worlds of the author and the publisher. Most of the comments make this split, and there is much talk of ‘authoring systems’.

It seems a little bizarre to me, as I don’t think it’s wise to think about the author and the publisher as being distinct entities. It’s not a matter of author and publisher working on separate processes to shepherd a manuscript through to publication: it is very much a team effort. Authors and publishers work together in a way that should not be dichotomised: they are a team.

If we don’t acknowledge that, then we will not be able to design good publication systems. There is a lot of unclear thinking around this topic at the moment. The “authoring system” model assumes that content is made in an authoring system by a writer, and then migrates to the publisher’s submission, processing and publishing system, where the publisher does some stuff, and then at various times pings the author back to make changes to metadata, submission information, the manuscript and attendant assets (eg figures)…

In this model, the author next takes the manuscript out of the publisher’s system, ingests it into the old authoring system, works on it, exports it, and re-ingests it into the publisher’s system… Hmmm… this cycle is exactly one of the pain points we were trying to avoid by getting away from Microsoft Word.

It seems to me that the current trend to build better authoring systems is a mistake. It is based on the false assumption that ‘MS Word’ is the problem, without realising that there is more to it. Word has been seen as the problem only because it has been the only problem in town. We don’t need better ‘authoring systems’ that repeat the separation between writing and publishing that is inherent in reliance on MS Word. We shouldn’t invest in new authoring systems and believe in them purely because they are ‘not Microsoft Word’. Rather, we need documents to be contained within submission and processing systems for the entire duration of their life, and they need to be completely operational and transparent within that system to all parties that must work on them. Without understanding that need, we are merely mitigating the problem by small steps whilst fooling ourselves that we have solved the larger problem.

We don’t want the author-publisher response/change cycle (a collaborative effort by the team which includes author and publisher) to happen in separate systems. We want them working together in the same system. We need teams to work together in the most efficient way possible, and that is in the same (real-world or cyber-) space. Teams work best when they work in the same space.

Though I see the current efforts towards authoring system development to be interesting, unless they are integrated with processing and workflow features, they will sooner or later be made redundant.

Colophon: written by Adam in 30 mins in a tizz. Tinkered with by Raewyn for another 30 mins. Written using Ghost software (free software!)

Single Voice

version 1.0 ‘not as raw’

During a Book Sprint, or when talking about Book Sprints, the question very quickly arises – ‘what about the author’s single voice?’

The fear is that collaboratively produced books will lose that personal, individual voice that we know so well from all the books we have read and loved.

Wouldn’t Frankenstein be a little lumpy if it was written by a collective? Same goes for any Tom Clancy book (he famously said that “Collaboration on a book is the ultimate unnatural act”). Clancy’s books are not high art, but they do seem to contain a particular ‘Clancy’ style. What about good contemporary literature? Could, for example, the wonderful The Art of Fielding be as wonderful if written by anyone other than Chad Harbach? And what about poetry by the father of English literature – Chaucer? It’s unimaginable that his works could be produced by anyone other than Chaucer.

We believe that both high and low literature would suffer if the works weren’t produced by a single author. There is only one Chaucer, one Clancy (thankfully), one Harbach, one Mary Shelley. We can tell their works apart because each contains a distinctive authorial voice. We know these writers. We know those voices.

We can only imagine what a mess would be created if books were written by more than one person. They would lose the single point of view. That special perspective. That special voice.

Well… first of all, it might be worth knowing that each of these examples actually had more than one contributing author, and each in its own interesting way. From Erick Kelemen’s work in the forensic field of textual criticism, there is good evidence that both Byron and Percy Shelley had a hand in at least some of Frankenstein. According to Kelemen, the extent of the collaboration is not exactly known, and we need to be aware that the discussion is also tainted by a good ole sexist lens. However, there is good evidence of collaboration, not just in the Preface (which some say is written entirely by Percy Shelley), but also in the content of the rest of the story.

Tom Clancy, in his own mind the enemy of collaborative book production, actually collaborated with others on many of his books. Some of the books he has credit for were actually written mostly by others, a common practice amongst authors of best-selling thriller and mystery series for at least the past twenty years.

And in fact, manuscripts produced at the time Chaucer was writing were shared documents, and it is extremely likely the exact words that we now consider to be Chaucer’s were not his at all. As Lawrence Liang has noted, in his discussion of the process of Chaucer’s canonisation, the process was essentially a gathering of manuscripts after Chaucer’s death by experts who decided which words were, and which were not, Chaucer’s, for all time.

In the disclaimer before the Miller’s Tale for instance, Chaucer states that he is merely repeating tales told by others, and that the Tales are designed to be the written record of a lively exchange of stories between multiple tellers, each with different, sometimes opposing, intents.

Interestingly, Chaucer seems not only to recognize the importance of retelling stories, but also a mode of reading that incorporates the ability to edit and write.

If you want to understand the role of collaboration in single-author culture right now, there is no better story to read than The Book on Publishing, which provides a great tale about the publishing of Harbach’s The Art of Fielding and acknowledges the huge value an editor can play in re-writing and restructuring a book.

There are two points here to keep in mind.

Firstly, we don’t know much about how books are written, nor how models of the writing process have changed over time. Paper is not a good medium for preserving versioning, and we lack an on-paper-process mechanism like git blame that can backtrack to show how the text was created. A great pity. The lack of this kind of tool for the vast majority of publishing history means publishing has been able to propagate the very marketable myth of the single author. Collaboration has been obscured and de-valued. Worse, the extent and value of collaboration is not understood. We don’t even have a good language for talking about it.

Secondly, we are left believing claims such as “books have a single voice because they are written by a single author” when this is demonstrably false. Almost every published book has had at least two authorial contributors – the author and the editor; and most books will have been improved during the drafting process by the contributions of test readers.

Collaboration exists to improve works. It is why there are editors in publishing. Editors give feedback and shape the work to, amongst other things, strengthen the impression of the single authorial voice. It is very probably true that an effective single voice can only be achieved by 2 or more people collaborating.

So next time you find yourself asking “how can an authoritative singular voice be preserved in collaborative book production?” it might be better to take a deep breath and ask yourself “how could a single voice ever be effectively realised without collaborating?” That is the real question at play.

Colophon: version 1.0 Written in an hour by Adam Hyde. Raewyn Whyte then improved it (‘made it stronger’). Also, some references still need to be checked as the needed books are in storage in NZ somewhere! Written with Ghost Blog free software (MIT) https://github.com/tryghost/Ghost.

Fantasies of the Library

Fantasies of the Library is a book released last week by Berlin publisher k-verlag. There is an interview in it with me about the future of book publishing beyond the proprietary model. I also talk about my current work for the Public Library of Science and the relationship between Open Access and Open Source.

[image: Fantasies of the Library cover]

The full interview is also online and can be read here.

My favourite passage is this:
Charles Stankievech: “But why should one value open source and open access? What are the political ramifications of such a philosophy and practice?”

Adam Hyde: “Because both provide more value to humanity. Political ramifications are vast and complex. I like to think about the personal aspects of this choice, however. Living a life of open source and open access forces you to peel away, layer by layer, the proprietary way of thinking, doing, and being that we have all grown up with. It can be a very painful process, but it’s also extremely liberating and healthy. Largely, it actually means learning to live without fear and paranoia of people ‘stealing your ideas’. That’s quite a freedom in itself.”

Books are Evil, Really Evil pt1

Right now books are something of an ironic artefact for me. I am involved in the rapid production of books through a process known as a Book Sprint. We create books. We throw a bunch of people in a room for a week, and carefully facilitate them through a process, progressing them step by step, from zero to finished book, in 5 days or less. Write a book in a week?! An astonishing proposal. Most people who attend a Book Sprint for the first time think it is impossible. Create a book in a week?! Most think that maybe they can get the table of contents done in that time. Maybe even some structure. But a book? 5 days later they have a finished book and they are amazed.

There are many essential ingredients to a Book Sprint. An experienced Book Sprint facilitator is a must. A venue set up just so… Lightweight and easy-to-use book production software. A toolchain that supports rapid rendering of PDF and EPUB from HTML. Good food… A writing team… and a lot more.

One of the contributing factors to success is the terror caused by the seemingly impossible idea that the group will create a book. It is a huge motivator. Such is the enormity of the task in the participants’ minds that they follow the facilitator and dedicate themselves to extremely long hours, working on minute details even when exhausted. There is a lot of chemistry in there. Camaraderie and peer pressure are pushed to maximum effect as a motivational factor, as is fear of failure, especially fear of failure before your peers, both inside and outside the Sprint room. The pleasure of helping your peers is a strong motivator, as is the idea that together we will do this! But the number one motivator is the idea that we are going to produce a book.

We all know that books these days, paper books, are published from a PDF. You send a PDF to the printer, and the final output is a perfect-bound book. This happens for most Book Sprints – we send the final PDF to a printer for them to produce the printed book. So what we are creating is actually a PDF (along with an EPUB)… but imagine if we were to call the event “PDF Sprint”. At the beginning of the PDF Sprint we could announce that we have gathered everyone together… so that… at the end of the week… they will have… (gasp!)… a PDF!

Nope. Doesn’t work. Doesn’t even nearly work. A book is the seemingly impossible outcome that Book Sprint participants have come to conquer. Even though the definition of ‘what a book is’ is completely up for grabs, it is a book they are determined to produce. A book is the pinnacle of knowledge products, and writing a book is about equal in cerebral achievement to climbing Everest. A PDF is merely getting to base camp, or perhaps the equivalent of planning the trip from your armchair.

So, what’s the problem? Books are good then! A great motivator for Book Sprints. Where exactly is the irony? How can I complain?

Book Sprints are extraordinary events. The people are not just put into a room and left to write. They are led through a process where notions of single authorship and ownership of content just no longer make sense. Such ideas are unsustainable and nonsensical in this environment, and participants slowly deconstruct ideas of authorship over the 5 days.

The participants actively collaborate during the event. Really collaborate. Book Sprints are a kind of collaborative therapy. Each participant learns to let go of their own voice so they can contribute to constructing a new shared voice with the rest of the team. They learn new ways to contribute to group processes, to communicate, to improve each other’s contributions, to synthesize, to empower and encourage others to improve the work without having to ask permission.

The resulting book has no perceivable author. It has been delivered by what is now a community. And as a result, most of the books, about 99% I would say, end up being freely licensed. A book born by sharing is more easily shared. More easily shared than a book created with the notions of author-ownership. The idea of sharing is embedded in the DNA of the Book Sprint, part of the genesis of the product, and sharing more often than not becomes part of the life of the book after the Book Sprint is completed.

But books are evil

So, how is it possible I can take the position that books are evil? Where exactly is the irony? It is a lovely story I just painted. Lots of flowers and warm fuzzy feelings. Wow. Sharing, sharing, sharing… it’s a book love-in!

Well… with some regret, I have to admit that most books do not come into the world this way. They are produced and delivered through legacy processes. Cultural norms shape the production and reception of books, and the ideas contained within them are not born into freedom. These books are, normatively, created by ‘single author geniuses’, born into All Rights Reserved knowledge incarceration, and you cannot recycle them.

Try as we may, we are a little group of people. A small band of Book Sprinters, and it is unlikely that we can sway the mainstream to our way of doing things. We have many victories – Cisco released one of its Book-Sprinted books freely online! Whoot! That’s massive! But… as big as Cisco is, one Cisco book in the sea of publishing is merely a grain of salt in the Pacific. By adding our special grain of salt to this ocean we are by no means making our point more salient.

Books are doomed to be the gatekeepers of knowledge. If you make a book, you are, more than likely, sentencing the words in it to life + 50 years (depending on where you live).

Books are in fact the very artefacts that maintain proprietary knowledge culture.

It comes down to these three issues for me:
1. books gave birth to copyright
2. books gave birth to industrialised knowledge production
3. books gave birth to the notion of the author genius

These three things together are the mainstays of proprietary knowledge culture, and proprietary knowledge culture has been firmly encased and sealed, with loving kisses, between the covers of the book. Ironically these three things, through the process of the Book Sprint, are what we are trying to deconstruct.

many thanks to Raewyn Whyte for improving this post

Building Book Production Platforms p4

The renderer

Note: this is an early version. It has been cleaned up some, but still needs links and screenshots… Apologies if the rawness offends you 🙂

This series is skipping around the toolchain, depending on what’s most in my mind at the moment. Today it’s file conversion, otherwise known as ‘rendering’. This is the process of converting one file type to another, for example, HTML-to-EPUB or Word-to-HTML, and so on.

It’s important to have file conversion in the book production world because we often want to convert HTML to a book format – like book-formatted PDF, or EPUB, mobi and so on – or to import existing content from a file like MS Word into a new document.

Manual conversions

It is, of course, quite possible to do all your file conversion manually.

Should you wish to convert HTML into a nice book-formatted PDF, one possible strategy is to go out to InDesign or Scribus and lay it all out like our ancestors did as recently as 2014. Or, if you want to convert MS Word, for example, to HTML, you can just save it as HTML in Word… Yes, Word copies across a lot of formatting junk, but you can clean it up using purpose-built freely available software (such as HTMLTidy and CleanUp HTML), online services (like DirtyMarkup), or a handy app (such as Word HTML Cleaner)…

Manual conversion is not too bad a strategy, as long as it doesn’t take you too long, and it is often more efficient and faster than those convoluted hand-holding technical systems which promise to do it for you in one step. Despite the utopian promises made by automation… you often get better results doing the conversion manually.

I sometimes hear people in Book Sprints, for example, complain something to the tune of “why can’t I just click a button and import part of this paragraph from Wikipedia into the chapter, and then if the entry is updated in Wikipedia, I can just click the button again and it will be updated here”…

I try not to sigh too loudly when I hear this kind of ‘I have all the solutions!’ kind of ‘question’. Some day that may be feasible, but in the meantime, all the knowledge production platforms I have built have an OS-independent trans-format import mechanism which allows those handy keyboard shortcuts ‘control c’ and ‘control v’… sigh. Don’t knock copy and paste! It can get you a long way.

You can also build an EPUB by hand…

But, who really wants to do any of this? Isn’t it better to just push a button and taaadaaa! out pops the format of choice! (I have all the solutions! haha).

I think we can agree it is better if you are able to use a smart tool to convert your files, and the good news is that within certain parameters and for loads of use cases, this is possible. But don’t underestimate the amount of tweaking for individual docs that might, at times (not always), be required.

Import and export are the same thing

The process of ‘importing’ a document is also sometimes known as ingestion. Before delving down into this, the first gotcha with file transformation is to avoid thinking about import and export as separate technical systems. That can cause, and has caused, a lot of extra work when building file conversion into a toolchain.

Both import and export are, actually, file conversion. The formats might differ – import might solely be Word-to-HTML in your system, and export HTML-to-EPUB. However, the process of file conversion has many needs that can be abstracted and applied to both of these cases. A quick example – file conversion is often processor and memory intensive, so effective management of these processes is quite important, and in addition, fallbacks for errors or fails need to be managed nicely. These two measures are required independent of the file types you are converting from or to. So don’t think about pipelining specific formats; try to identify as many requirements as possible for building just one file conversion system, not an import system plus an export system.
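
A minimal sketch of that single system, in JavaScript (all names here are mine and purely illustrative, not from any particular platform):

    // One registry serves both 'import' and 'export' - a conversion is just
    // a (from, to) pair plus a transform function.
    const converters = new Map();

    function register(from, to, transform) {
      converters.set(from + '->' + to, transform);
    }

    async function convert(from, to, input) {
      const transform = converters.get(from + '->' + to);
      if (!transform) throw new Error('No converter for ' + from + ' -> ' + to);
      try {
        // Heavy on CPU and memory - in production, queue and rate-limit here.
        return await transform(input);
      } catch (err) {
        // Shared fallback handling, regardless of direction or format.
        throw new Error('Conversion failed (' + from + ' -> ' + to + '): ' + err.message);
      }
    }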

Ingestion

In importing documents to an HTML system, the big use case is MS Word. Converting from MS Word is a road full of potholes and gotchas. The first problem is that there is no single ‘MS Word’ file format; rather, there are many, many different file formats that all call themselves MS Word. So to initiate a transformation, you need to know which variety of MS Word you are dealing with.

Your life is made much easier if you can stipulate that your system requires one variety – .docx. If you do have to deal with other forms of Word, then it is possible to do transformations on the backend from miscellaneous Word file type X to .docx and then from .docx to HTML. LibreOffice, for example, offers binaries that do this in a ‘headless’ state (it can be executed from the command line without the need to fire up the GUI). However, the more transformations you undertake, the more errors you are likely to introduce into the conversion. Obviously, this then causes QA issues and will increase your workload per required transform.
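
As a sketch of that headless step (assuming LibreOffice is installed and soffice is on your PATH; the exact flags can vary a little between versions):

    // Normalise a miscellaneous Word file to .docx with headless LibreOffice.
    const { execFile } = require('child_process');

    function toDocx(inputPath, outDir, callback) {
      execFile(
        'soffice',
        ['--headless', '--convert-to', 'docx', '--outdir', outDir, inputPath],
        callback
      );
    }

    toDocx('manuscript.doc', './converted', function (err) {
      if (err) throw err;
      console.log('wrote ./converted/manuscript.docx');
    });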

Another real problem is that MS Word versions before .docx were horrible binaries – a big clump of ones and zeros followed by a bunch of gunk – whereas .docx is transparent, actually just XML, so you can view what you are dealing with. That same opacity problem also exists when you use binaries like soffice (the LibreOffice binary for headless conversions), as it too is a big bucket of numbers. You can’t easily get your head into improving transformations with soffice unless you want to learn to etch code into your CPU with a protractor.

If you have to deal with MS Word at all, I recommend stipulating .docx as the accepted MS Word format. I am not a file type expert, far from it, but from people who do know a lot about file formats, I know that .docx looks like it has been designed by a committee… and possibly a committee whose members never spoke to each other. Additionally, Microsoft, being Microsoft, likes to bully people into doing things their way. .docx is a notable move away from that strategy, and does make it substantially easier to interoperate with other formats; however, there are some horrible gotchas, like .docx having its own non-standard version of MathML. Yikes. So, life in the .docx lane is easier, but not necessarily as easy as it should be if we were all playing in the same sandbox like grownups.

I have tried many strategies for Word to HTML conversion. There are many open source solutions out there, but oddly, not as many good ones as you would hope. Recently I looked at these three rather closely:

  • Calibre’s Python based ebook converter script
  • OxGarage
  • soffice (Libreoffice)

There are others…I can’t even remember which ones I have looked at in detail over the years. I have trawled Sourceforge and Github and Gitorious and other places. But the web is enormous these days and maybe there is just the oh-so-perfect solution that I have missed. If you know it then please email it to me, I’ll be ever so grateful (only Open Source solutions please!).

These three are all good solutions, but at the end of the day, I like OxGarage. I won’t go into too much detail about all of them, but a quick top-of-mind list of whys and why-nots would include:

  • Calibre’s scripts are awesome and extendable if you know Python; however, they don’t support MS MathML to ‘real’ MathML conversions. That’s a show stopper for me.
  • On the good side, though, Calibre’s developer community is awesome, and they are heroes in this field and deserve support, so if you are a Python coder or dev shop then, by all means, please pitch in and help them improve their .docx to HTML transforms. The world will be a better place for it.
  • soffice does an ok job but it’s a black box, who knows what magic is inside? It tends to make really complex HTML and it is also really heavy on your poor hardware. I have used it a lot but I’m not that big a fan.
  • OxGarage…well…I love OxGarage, so I really recommend this option…

OxGarage was developed by a European Commission-funded project and then, as is common for these kinds of projects, the funding dried up and it was left on a shelf. Along came Sebastian Rahtz, a guru of file transformation, big Open Source guy, and also a force behind the Text Encoding Initiative. Sebastian is also the head of Academic IT Services at Oxford University. The guy has credentials! Also, he’s a terribly nice and helpful guy. He has so much experience in this area that I feel the trivialness of my questions about our .docx-to-HTML woes at PLOS… afraid he might absentmindedly swipe me out of the way as if I were an inconsequential little midge… but he’s such a nice chap, instead he invites midges out to lunch.

So, Sebastian picked up the Java code and added some better conversions. OxGarage is essentially a Java framework that manages multiple different types of conversions. You feed it and are fed from it by a simple web API. It doesn’t have the best error handling, but it does do a good job. The .docx to HTML conversion is multi-step. First, the .docx is converted to TEI – a very rich, complex markup, and then from TEI via XSL to HTML. That means that all you really need to worry about is tweaking the XSL to improve the transformation and that’s not too tricky. It could be argued that the TEI conversion is a redundant step. I think it is. But OxGarage works out of the box and does a pretty good job so we have adopted it for the project I am working on for PLOS, and we are happy with it. We have added some special (Open) Sauce but I’ll get to that later. We are using it and will shoot for more elegant solutions later (and we have designed a framework to make this an easy future path).
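
To give a feel for the ‘feed it by a simple web API’ part, here is a rough sketch of calling an OxGarage-style service from Node.js (18+); the endpoint URL and form field name below are placeholders, not OxGarage’s documented API – check your deployment for its real conversion paths:

    // Sketch: POST a .docx to an OxGarage-style conversion endpoint and get
    // HTML back (OxGarage chains .docx -> TEI -> HTML internally).
    const fs = require('fs');

    async function docxToHtml(path, endpoint) {
      const form = new FormData();
      // 'fileToConvert' is a placeholder field name, not a documented API.
      form.append('fileToConvert', new Blob([fs.readFileSync(path)]), path);
      const res = await fetch(endpoint, { method: 'POST', body: form });
      if (!res.ok) throw new Error('Conversion failed: ' + res.status);
      return res.text();
    }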

If you are looking for Word-to-HTML conversion tools, I recommend OxGarage. I’m not saying it’s the optimal way to do things, but it will save you having to build another file conversion system from scratch, and from what I can tell from Sebastian, that would take considerable effort.

HTML to books

The other side of the tracks is the conversion of the HTML you have into a book file format. We live in a rather tangled semantic world when it comes to this part of the toolchain. Firstly, it’s hard to know what a book file format actually is these days… on a normal day, I would say a book file format is a file format that can display a human readable structured narrative. Yikes. That’s not particularly helpful… Let’s just say for now that a book file format is – EPUB, book formatted PDF, HTML, and Mobi.

So, transforming from HTML to HTML sounds pretty easy. It is! The question is really how do you want your book to appear on the web? Make that decision first, and then build it. Since you are starting with HTML this should be rather easy and could be done in any programming language.

The next easiest is EPUB. EPUB contains the content in HTML files stored in a zip file with the .epub suffix. That is also easy to create and, depending on your programming language, there are plenty of libraries to help you do this. So moving on…
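
Since an EPUB is just zipped HTML plus some manifest files, a minimal sketch with the jszip library shows how little there is to the container (the package file and navigation document a valid EPUB also needs are elided here):

    // Minimal EPUB skeleton: HTML in a zip, with the one quirk that the
    // 'mimetype' entry must come first and be stored uncompressed.
    const JSZip = require('jszip');
    const fs = require('fs');

    const zip = new JSZip();
    zip.file('mimetype', 'application/epub+zip', { compression: 'STORE' });
    zip.file('META-INF/container.xml',
      '<?xml version="1.0"?>' +
      '<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">' +
      '<rootfiles><rootfile full-path="OEBPS/content.opf" ' +
      'media-type="application/oebps-package+xml"/></rootfiles></container>');
    zip.file('OEBPS/chapter-1.xhtml',
      '<html xmlns="http://www.w3.org/1999/xhtml"><body><h1>Chapter 1</h1></body></html>');
    // ...add OEBPS/content.opf and a nav document here to make it valid.

    zip.generateAsync({ type: 'nodebuffer' })
      .then(function (buf) { fs.writeFileSync('book.epub', buf); });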

Mobi. Ok… mobi is a proprietary format and rather horrible. It contains some HTML, some DB stuff… I don’t know… a bit of bad magic, frogs’ legs… that kind of thing. My recommendation is to first create your EPUB and then use Calibre’s awesome ebook converter script to create the mobi on the backend. Actually, if you use this strategy, you get all the other Calibre output formats for free, including (groan) .docx if you need it. Honestly, go give those Calibre guys all your love, some dev time, and a bit of cash. They are making our world a whole lot easier.
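
With Calibre installed, that backend step is a single call to its ebook-convert command line tool; from Node.js, for example:

    // EPUB -> mobi via Calibre's ebook-convert (must be on the PATH).
    const { execFile } = require('child_process');

    execFile('ebook-convert', ['book.epub', 'book.mobi'], function (err) {
      if (err) throw err;
      console.log('book.mobi written');
    });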

Ok… the holy grail… people still like paper books, and paper books are printed from PDF. Paper these days is a post-digital artifact. So first you need that awkward sounding book-formatted PDF.

Here there is an array of options, and then there is this very exciting world that can open to you if you are willing to live a little on the bleeding edge… I’m referring to CSS Regions… but let’s come back to that.

First, I want to say I am disappointed that some ‘Open Source’ projects use proprietary code for HTML-to-PDF conversion. That includes PressBooks and Wikipedia. Wikipedia is re-tooling its entire book-formatted-PDF conversion process to be based on LaTeX, and that is an awesome decision. However, right now it uses the proprietary PrinceXML, as does PressBooks. I like both projects, but I get a little disheartened when projects with a shared need don’t put some effort into an Open Source solution for their toolchain.

All book production platforms that produce paper books need an HTML-to-PDF renderer to do the job. If it is closed source, then I think it needs to be stated that the project is only partially Open Source. I’m a stickler for this kind of stuff, but also, I am saddened that adoption of proprietary components stops the effort to develop the Open Source solutions we need, while simultaneously enabling proprietary solutions to gain market dominance – which, if you follow the logic through, traps the effort to develop a competitive Open Source solution in a vicious circle. I wish that more people would try, like the Wikimedia Foundation is trying, to break that cycle.

The browser as renderer

There is one huge Open Source hero in this game. Jakob Truelsen. He created WKHTMLTOPDF when he was a university tutor because he wanted his students to be able to write in HTML and give him nicely formatted PDFs for evaluation. So he grabbed a headless WebKit, added some Qt magic, some tweaks, and made a command line application that converts HTML to book-formatted PDF. We used it in the early days of FLOSS Manuals and it is still one of the renderer choices in the Booktype file conversion suite (Objavi). It was particularly helpful when we needed to produce books in Farsi, which contain right-to-left text. No HTML-to-PDF renderer supported this at the time except WKHTMLTOPDF, because it was based on a browser engine that had RTL support built in.

Some years later, WKHTMLTOPDF was floundering, mainly because Jakob was too busy, and I tried to help create a consortium around the project to find developers and finance. However, I didn’t have the skills, and there was little interest. Thankfully the problem solved itself over time, and WKHTMLTOPDF is now a thriving project and very much in demand.

WKHTMLTOPDF really does a lot of cool stuff, but more than this, I firmly believe the approach is the right one. The application uses a browser to render the PDF… that is a HUGE innovation, and Jakob should be recognised for it. What this means is: if you are making your book in HTML in the browser, you have at your fingertips lots of really nice tools like CSS and JavaScript. So, for example, you can style your book with CSS, add JavaScript to support the rendering of math, or use typography JavaScripts to do cool stuff… When you render your book to PDF with a browser, you get all of that for free. Your HTML authoring environment and your rendering environment are essentially the same thing… I can’t tell you how much that idea excites me. It is just crazy! All those nice JavaScripts you used, and all that nice CSS that gave you really good-looking content in the browser, will give you the same results when rendered to PDF. This is the right way to do it, and there is even more goodness to pile on, as it also means that your rendering environment is standards-based and open source…

Awesome. This is the future. And the future is actually even brighter for this approach than I have stated. Suppose you are looking to create dynamic content – let’s say cool little interactive widgets based on the incredible Tangle library – for ebooks (including web-based HTML). If you use a browser to render the PDF, you can actually render the first display state of the dynamic content in your PDF. So, if you make an interactive widget, in the paper book you will see the ‘frozen’ version, and in the ebook/HTML version you get the dynamic version – without having to change anything. I tested this a long time ago and I am itching to get my teeth into designing content production tools to do this.

So many things to do. You can get an idea of how it works by visiting the Tangle link above… try the interactive widgets in the browser, and then try printing to PDF from the browser… you will see that the same interactive widgets you played with also print nicely in a ‘static’ state. That gets the principle across nicely.

So a browser-based renderer is the right approach, and Prince – which is, it must be pointed out, partly owned by Håkon Wium Lie – is trying to be a browser by any other name. It started with HTML-and-CSS-to-PDF conversion and now… oo!… they have added JavaScript… so… are they a browser? No? I think they are actually building a proprietary browser to be used solely as a rendering engine. That just sounds like a really bad idea to me. Why not drop that idea, contribute to an actual open source browser, and use that? And those projects that use Prince: why not contribute to an effort to create browser-based renderers for the book world? It’s actually easier than you think. If you don’t want to put your hands into the innards of WebKit, then do some JavaScript and work with CSS Regions (see below).

This brings us to another part of the browser-as-renderer story, but first I think two other projects need calling out for thanks. ReportLab was for a long time one of the only command line book-formatted-PDF rendering solutions. It was proprietary but had a community license. That’s not all good news, but at least they had one foot in the Open Source camp. What really made ReportLab useful, however, was Dirk Holtwick’s Pisa project, which provided a layer on top of ReportLab so you could convert HTML to book-formatted PDF.

The bleeding edge

So, to the bleeding edge. CSS Regions is the future for browser-based PDF rendering of all kinds. Interestingly, Håkon Wium Lie has said, in a very emphatic way, that CSS Regions is bad for the web… perhaps he means bad for the PrinceXML business model? I’m not sure; I can only say he seemed to protest a little too much. As a result, Google pulled CSS Regions out of Chrome. Argh.

However, CSS Regions are supported in Safari, and in some older versions of Chrome and Chromium (which you can still find online if you snoop around). Additionally, Adobe has done some awesome work in this area (they were behind the original implementation of CSS Regions in WebKit – the browser engine that used to be behind Chrome and is still used by Safari). Adobe also built the CSS Regions polyfill – a JavaScript that plays the same role as built-in CSS Regions.

When CSS Regions came online in early 2012, Remko Siemerink and I experimented with them at an event at the Sandberg (Amsterdam) for producing book-formatted PDF. I’m really happy to see that one of these experiments is still online (NB: this needs to be viewed in a browser supporting CSS Regions).

It was obviously the solution for pagination on the web, and once you can paginate in the browser, you can convert those web pages to PDF pages for printing. This was the step needed for a really flexible browser-based book-formatted-PDF rendering solution. It must be pointed out, however, that it’s not just a good solution for books… at BookSprints.net we use CSS Regions to create a nicely formatted, paginated client-details form in the browser. Then we print it to PDF and send it…
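The core trick is small enough to sketch. In a WebKit build that shipped CSS Regions, content is poured into a named flow, and JavaScript adds page-sized regions until nothing is left over. This is the idea at the heart of book.js reduced to a toy; the ids, class names, and page size are placeholders:

    // Assumes a stylesheet along these lines:
    //   #content { -webkit-flow-into: book; }
    //   .page    { -webkit-flow-from: book; width: 148mm; height: 210mm; }
    function paginate() {
      var flow = document.webkitGetNamedFlows().namedItem("book");
      // While the named flow oversets (still has content with no region
      // to land in), keep appending empty pages for it to flow into.
      while (flow.overset) {
        var page = document.createElement("div");
        page.className = "page";
        document.body.appendChild(page);
      }
    }
    paginate();

A real renderer like book.js layers headers, footers, margin control, and a table of contents on top, but pagination itself is essentially this loop.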

Adobe is onto this stuff. They seem to believe that the browser is the ‘design surface’ of the future, which appears to be why they are putting so much effort into CSS Regions. I’m not a terribly big fan of InDesign or of proprietary Adobe strategies and products, but credit where credit is due: without Adobe, none of the above would be more than an idea, and they have done it all under open source licenses (according to Alan Stearns from Adobe, the Microsoft and IE teams also contributed quite substantially).

At the time CSS Regions were inaugurated, I was in charge of a small team building Booktype in Berlin, and we followed on from Remko’s work, grabbed CSS Regions, and experimented with a JavaScript book renderer. In late 2012, book.js was born (it was a small team, but I was lucky enough to be able to dedicate one of my team, Johannes Wilm, to the task). It is a JavaScript that leverages CSS Regions to create paginated content in the browser, complete with a table of contents, headers, footers, left-right margin control, front matter, title pages… etc. We have also experimented with adding contenteditable to the mix, so you can create paginated content, tweak it by editing it directly in the browser, and output it to PDF. It works pretty well, and I have used it to produce 40 or 50 books, maybe more. The Fidus Writer team has since forked the code into pagination.js, which I haven’t looked at too closely yet, as I’m quite happy with the job book.js does.

CSS Regions is the way to go. It means you can see the book in the browser and then print to PDF and get exactly the same results. It takes some CSS wizardry to get it right, but when you do, it just works. Additionally, you can run a browser in a headless state on the command line if you want to render the book on the backend.
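For the headless part, a sketch of one way to do it: driving a real browser engine from Node with Puppeteer. Puppeteer is my assumption here (it arrived after the era described above), but it is the same browser-as-renderer principle:

    // Load the paginated book in headless Chromium and print it to PDF.
    const puppeteer = require("puppeteer");

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto("http://localhost:8000/mybook.html",
                      { waitUntil: "networkidle0" }); // let scripts and styles settle first
      await page.pdf({ path: "mybook.pdf", preferCSSPageSize: true });
      await browser.close();
    })();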

Wrapping it all up

There is one part of this story left to be told. If you are going to go down this path, I thoroughly recommend you create an architecture that manages all these conversion processes and is relatively agnostic about what is coming in and going out. For Booktype, Douglas Bagnall and Luka Frelih built the original Objavi, a Python-based standalone system that accepts a specially formatted zip file (booki.zip) and outputs whatever format you need. It is managed through an API, and it serves Booktype pretty well. Sourcefabric still maintains it, and it has evolved into Objavi 2.

However, I don’t think it’s the optimal approach. There are many things to improve in Objavi; possibly the most important is that EPUB should be the accepted file format, and after the conversion takes place an EPUB should be returned to the book production platform with the assets wrapped up inside. If you can do this, you have a standards-based format for conversion transactions, and any project that wants to can use it. More on this in another post. Enough to say that the team at PLOS is building exactly this, and adding some other very interesting things to make ‘configurable pipelines’ that might take format X through an initial conversion, a clean-up process, and then a text mining process, stash all the metadata in the EPUB, and return it to the platform. But that’s a story for another day…
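To make the shape of that transaction concrete, here is a purely hypothetical sketch of such an EPUB-in/EPUB-out service in Node with Express. This is not Objavi’s or the PLOS team’s actual API; every name here is made up:

    const express = require("express");
    const app = express();

    // Accept an EPUB, run the requested pipeline over it, and return an
    // EPUB with the converted assets wrapped up inside.
    app.post("/convert",
      express.raw({ type: "application/epub+zip", limit: "50mb" }),
      async (req, res) => {
        const pipeline = req.query.pipeline || "pdf"; // e.g. pdf, mobi, cleanup
        const epubOut = await runPipeline(req.body, pipeline);
        res.type("application/epub+zip").send(epubOut);
      });

    // Placeholder: unzip, convert (say, via wkhtmltopdf or Calibre),
    // stash the metadata, and re-wrap. Identity for the sketch.
    async function runPipeline(epubIn, name) { return epubIn; }

    app.listen(8080);

The point is not the particulars but the contract: EPUB in, EPUB out, and everything between is a configurable pipeline any platform can call.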