Just pinning this here… from what I see there are three levels of review involved in publishing a text, and each requires different tools:
- word level – an extremely granular review, looking at each word. Changes are usually tracked, and each participant makes recommendations that someone then needs to validate – hence the entire reason track-changes functionality exists.
- paragraph level – commentary on the arguments, facts, and statements contained in a document. Annotation tools typically cover this requirement quite well.
- document level – a single commentary on the entire document, common in journal publishing, e.g. the review of a manuscript. Typically achieved by a single written review per reviewer, often synthesized into one ‘meta review’ by an editor.
Paragraph- and document-level tools are pretty easy to produce. Word-level tools are extremely complicated, which is why I personally avoided trying to build one until demand from publishers pushed us into it. I think a rethink of tools to enable word-level review – coming up with simpler tools that still cover the publisher’s requirements – is very necessary. I’ve tried various ‘diff’ tools for comparing changes on a timeline; that approach has never really caught on, although I’ve personally found it very productive and have seen non-publishing folk get a lot out of it.
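To make the word-level problem concrete, here's a minimal sketch of what a diff-based review tool has to compute underneath, using Python's standard-library `difflib`. The function name `word_diff` and the whitespace tokenization are my own assumptions for illustration – real track-changes tools also handle punctuation, formatting, and attribution, which is part of why they get complicated.

```python
import difflib

def word_diff(old: str, new: str) -> list[tuple[str, str]]:
    """Compare two texts word by word, returning (op, word) pairs.

    op is one of 'equal', 'delete', or 'insert' -- roughly the raw data a
    track-changes UI needs before a reviewer accepts or rejects each edit.
    Tokenization here is naive whitespace splitting (an assumption).
    """
    old_words = old.split()
    new_words = new.split()
    matcher = difflib.SequenceMatcher(a=old_words, b=new_words)
    ops: list[tuple[str, str]] = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            ops.extend(("equal", w) for w in old_words[i1:i2])
        else:
            # A 'replace' opcode becomes a delete + insert pair,
            # which is how most track-changes UIs render a substitution.
            ops.extend(("delete", w) for w in old_words[i1:i2])
            ops.extend(("insert", w) for w in new_words[j1:j2])
    return ops

print(word_diff("the quick brown fox", "the slow brown fox jumps"))
```

Even this toy version hints at the hard part: computing the diff is cheap, but layering validation workflows (who proposed each change, who may accept it) on top of these op lists is where the tooling complexity lives.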