1. Introduction and Goals

1.1. create awesome docs!

docToolchain is an implementation of the docs-as-code approach for software architecture plus some additional automation. The basis of docToolchain is the philosophy that software documentation should be treated in the same way as code together with the arc42 template for software architecture.

How it all began…​

1.1.1. docs-as-code

Before this project started, I wasn’t aware of the term docs-as-code. I just grew tired of keeping all my architecture diagrams up to date by copying them from my UML tool over to my word processor.

As a lazy developer, I told myself 'there has to be a better way of doing this'. So I started to automate the diagram export and switched from a full-fledged word processor over to a markup renderer. This enabled me to reference the diagrams from within my text and update them just before I render the document.

1.1.2. arc42

Since my goal was to document software architectures, I was already using arc42 - a template for software architecture. At that time, I used the MS Word version of the template.

But what is arc42?

Dr. Gernot Starke and Peter Hruschka created this template in a joint effort to establish a standard for software architecture documents. They distilled all their experience with software architectures not only into a structure but also into explanatory texts. These explanations are part of every chapter of the template and give you guidance on how to write that chapter of the document.

arc42 is available in many formats like MS Word, Textile and Confluence, and all these formats are automatically generated from one golden master which is written in AsciiDoc.

1.1.3. docToolchain

In order to follow the docs-as-code approach, you need a build script which automates steps like exporting diagrams and rendering the markup used (AsciiDoc in the case of docToolchain) to the target format.

Unfortunately, such a build script is not easy to create in the first place ('how do I create .docx?', 'why does lib x not work with lib y?'), and it is also not easy to maintain.

docToolchain is the result of my journey through the docs-as-code land. The goal is to have an easy-to-use build script which only has to be configured, not modified, and which is maintained by a community as open source software.

The technical steps of my journey are written down in my blog: https://rdmueller.github.io.

Let’s start with what you’ll get when you use docToolchain…​

1.2. Benefits of the docs-as-code Approach

You want to write technical docs for your software project, so it is very likely that you already have the tools and processes to work with source code in place. Why not also use them for your docs?

1.2.1. Document Management System

By using a version control system like Git, you get a perfect document management system for free. It lets you version your docs, branch them, and gives you an audit trail. You are even able to check who wrote which part of the docs. Isn’t that great?

Since your docs are now just plain text, it is also easy to do a diff and see exactly what has changed.

And when you store your docs in the same repository as your code, you always have both in sync!

1.2.2. Collaboration and Review Process

Git as a distributed version control system even lets you collaborate on your docs. People can fork the docs and send you pull requests for the changes they made. By reviewing a pull request, you get a perfect review process out of the box: by accepting the pull request, you show that you’ve reviewed and accepted the changes. Most Git frontends like Bitbucket, GitLab and of course GitHub also allow you to reject pull requests with comments.

1.2.3. Image References and Code Snippets

Instead of pasting images to a binary document format, you now can reference images. This will ensure that those images are always up to date every time you rebuild your documents.

In addition, you can reference code snippets directly from your source code. This way, these snippets are also always up to date!
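With AsciiDoc, for example, such a snippet reference is a one-liner (the file path and tag name below are invented for illustration):

```asciidoc
[source,java]
.the hashing routine, included straight from the code base
----
include::../src/main/java/demo/Checksum.java[tags=hashing]
----
```

The tags option picks up the region delimited by `// tag::hashing[]` and `// end::hashing[]` comments in the source file, so the snippet is re-read on every build.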

1.2.4. Compound and Stakeholder-Tailored Docs

Since you can reference not only images and code snippets but also sub-documents, you can split your docs into several sub-documents and a master document which brings them all together. But you are not restricted to one master - you can create master docs for several different stakeholders which only contain the chapters needed by them.
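A master document then boils down to a list of includes; a second master for a different audience simply includes fewer chapters (the file names below are invented for illustration):

```asciidoc
= Architecture Documentation for Operations
:toc: left

include::chapters/07_deployment_view.adoc[]
include::chapters/11_technical_risks.adoc[]
```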

1.2.5. many more Features…​

If you can dream it, you can script it.

  • Want to include a list of open issues from Jira? Check.

  • Want to include a changelog from Git? Check.

  • Want to use inline, text based diagrams? Check.

  • and many more…​

2. How to install docToolchain

Let’s get started…​

Assuming that you first create your solution architecture and then the code, you simply start by getting a copy of the current docToolchain repository. The easiest way is to clone the repository without history and remove the .git folder:

linux with git clone
git clone https://github.com/rdmueller/docToolchain.git <your project name>
rm -rdI .git

Another way is to download the zipped git repository and rename it:

linux with download as zip
wget https://github.com/rdmueller/docToolchain/archive/master.zip
unzip master.zip
mv docToolchain-master <your project name>

If you work (like me) in a Windows environment, just download and unzip the repository.

This should already be enough to start a first build:

linux with gradle wrapper
linux with maven wrapper
windows with gradle wrapper
windows with maven wrapper
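Assuming the Gradle wrapper that ships with the repository, such a first build can be started like this (the task names are explained in the next chapter):

```shell
# Linux: use the bundled Gradle wrapper
./gradlew generateHTML generatePDF

# Windows
gradlew.bat generateHTML generatePDF
```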

As a result, you will see the progress of your build together with some warnings, which you can ignore for the moment.

The first build generates some files within the <project>/build folder:


Congratulations! If you see a similar folder structure, you just managed to render the standard arc42 template as HTML and PDF!

If you didn’t get the right output, please raise an issue on GitHub.

3. Overview of available Tasks

This chapter explains all docToolchain specific tasks.

The following picture gives an overview of the whole build system:

Figure 1. docToolchain

3.1. Conventions

There are some simple naming conventions for the tasks. They might be confusing at first, and that’s why they are explained here.

3.1.1. generateX

render would have been another good prefix, since these tasks use plain Asciidoctor functionality to render the sources to a given format.

3.1.2. exportX

These tasks export images and AsciiDoc snippets from other systems or file formats. The resulting artefacts can then be included from your main sources.

What’s different from the generateX tasks is that you don’t have to run the export with each build.

It is also likely that you will have to put the resulting artefacts under version control, because the tools needed for the export (like Sparx Enterprise Architect or MS PowerPoint) are typically not available on a build server or on another contributor’s machine.

3.1.3. convertToX

These tasks take the output from asciidoctor and convert it (through other tools) to the target format. This results in a dependency on a generateX task and another external tool (currently pandoc).

3.1.4. publishToX

These tasks not only convert your documents but also deploy/publish/move them to a remote system - currently Confluence. This means that the result is likely to be visible to others immediately.

3.2. generateHTML


This is the standard asciidoctor generator which is supported out of the box.

The result is written to build/docs/html5. The HTML files need the images folder in the same directory to be displayed correctly.

If you would like to have a single-file HTML as result, you can configure Asciidoctor to store the images inline as data URIs.
Just set :data-uri: in the header of your AsciiDoc file.
But be warned - such a file can easily become very big, and some browsers might struggle to render it.
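A minimal document header with this attribute set could look like this:

```asciidoc
= My Architecture Documentation
:toc: left
:data-uri:
```

With :data-uri: set, Asciidoctor embeds every referenced image as a Base64 data URI directly into the generated HTML file.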

3.2.1. Text based Diagrams

docToolchain is configured to use the asciidoctor-diagram plugin, which is used here to create PlantUML diagrams.

The plugin also supports a bunch of other text-based diagram formats, but PlantUML is the most widely used.

To use it, just specify your PlantUML code like this:

.example diagram
[plantuml, "{plantUMLDir}demoPlantUML", png] (1)
----
class BlockProcessor
class DiagramBlock
class DitaaBlock
class PlantUmlBlock

BlockProcessor <|-- DiagramBlock
DiagramBlock <|-- DitaaBlock
DiagramBlock <|-- PlantUmlBlock
----
(1) The first element of this list specifies the diagram tool (plantuml), the second the name of the image to be created, and the third the image type.

The {plantUMLDir} prefix ensures that PlantUML also works for the generatePDF task. Without it, generateHTML works fine, but the PDF will not find the generated images.

Make sure to specify a unique image name for each diagram. Otherwise, images will overwrite each other and all diagrams will end up looking the same.

The above example renders as

Figure 2. example diagram
PlantUML needs Graphviz dot installed to work. If you can’t install it, you can use a Java-based version of the dot library: just add !pragma graphviz_dot jdot as the first line of your diagram definition. It is still an experimental feature, but it already works quite well!
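A diagram using the Java-based engine could look like this (the diagram name is invented for illustration):

```asciidoc
[plantuml, "{plantUMLDir}jdotDemo", png]
----
!pragma graphviz_dot jdot
class Frontend
class Backend
Frontend --> Backend
----
```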

3.2.2. Source

task generateHTML (
        type: AsciidoctorTask,
        group: 'docToolchain',
        description: 'use html5 as asciidoc backend') {

    attributes \
        'plantUMLDir'         : ''

    sources {
        sourceFiles.findAll {
            'html' in it.formats
        }.each {
            include it.file
        }
    }

    backends = ['html5']
}

3.3. generatePDF


This task makes use of the asciidoctor-pdf plugin to render your documents as a good-looking PDF.

The file will be written to build/docs/pdf.

The plugin used is still in alpha status, but the results are already quite good. If you want to create the PDF in another way, you can for instance use phantomJS and script it!

The PDF is generated directly from your AsciiDoc sources without the need of an intermediate format or other tools. The result looks more like a nicely rendered book than a print-to-pdf HTML page.

It is very likely that you will need to "theme" your PDF - change colors, fonts, page header and footer. This can be done by changing the src/docs/custom-theme.yml file. Documentation on how to modify it can be found in the asciidoctor-pdf theming guide.
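A small excerpt of what such a theme file can contain (the values below are invented; see the theming guide for the full key set):

```yaml
page:
  size: A4
  margin: [2.5cm, 2cm, 2.5cm, 2cm]
base:
  font_color: '333333'
header:
  height: 1.5cm
  recto:
    right:
      content: '{document-title}'
footer:
  height: 1.5cm
  recto:
    right:
      content: '{page-number} / {page-count}'
```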

3.3.1. Source

task generatePDF (
        type: AsciidoctorTask,
        group: 'docToolchain',
        description: 'use pdf as asciidoc backend') {

    attributes \
        'plantUMLDir'         : file('build/docs/images/plantUML/').path

    sources {
        sourceFiles.findAll {
            'pdf' in it.formats
        }.each {
            include it.file
        }
    }

    backends = ['pdf']
}

3.4. generateDocbook


This is only a helper task - it generates the intermediate format for convertToDocx and convertToEpub.

3.4.1. Source

task generateDocbook (
        type: AsciidoctorTask,
        group: 'docToolchain',
        description: 'use docbook as asciidoc backend') {

    sources {
        sourceFiles.findAll {
            'docbook' in it.formats
        }.each {
            include it.file
        }
    }

    backends = ['docbook']
}

3.5. generateDeck


This task makes use of the asciidoctor-reveal.js backend to render your documents as an HTML-based presentation.

This task is best used together with the exportPPT task. Create a PowerPoint presentation and enrich it with reveal.js slide definitions in AsciiDoc within the speaker notes.

3.5.1. Source

task generateDeck (
        type: AsciidoctorTask,
        group: 'docToolchain',
        description: 'use revealJs as asciidoc backend to create a presentation') {

    attributes \
        'plantUMLDir'         : '',
        'idprefix': 'slide-',
        'idseparator': '-',
        'docinfo1': '',
        'revealjs_theme': 'black',
        'revealjs_progress': 'true',
        'revealjs_touch': 'true',
        'revealjs_hideAddressBar': 'true',
        'revealjs_transition': 'linear',
        'revealjs_history': 'true',
        'revealjs_slideNumber': 'true'

    options template_dirs : [new File('resources/asciidoctor-reveal.js','templates/slim').absolutePath ]

    sources {
        sourceFiles.findAll {
            'revealjs' in it.formats
        }.each {
            include it.file
        }
    }

    outputDir = file(targetDir+'/decks/')

    resources {
        from('resources') {
            include 'reveal.js/**'
        }
        from(sourceDir) {
            include 'images/**'
        }
        logger.error "${buildDir}/ppt/images"
    }
}

3.6. publishToConfluence


3.6.1. Source

task publishToConfluence(
        description: 'publishes the HTML rendered output to confluence',
        group: 'docToolchain'
) << {
    evaluate(new File('scripts/asciidoc2confluence.groovy'))
}

/**
 * Created by Ralf D. Mueller and Alexander Heusingfeld
 * https://github.com/rdmueller/asciidoc2confluence
 *
 * this script expects an HTML document created with AsciiDoctor
 * in the following style (default AsciiDoctor output)
 * <div class="sect1">
 *     <h2>Page Title</h2>
 *     <div class="sectionbody">
 *         <div class="sect2">
 *            <h3>Sub-Page Title</h3>
 *         </div>
 *         <div class="sect2">
 *            <h3>Sub-Page Title</h3>
 *         </div>
 *     </div>
 * </div>
 * <div class="sect1">
 *     <h2>Page Title</h2>
 *     ...
 * </div>
 */

// some dependencies
@Grab('org.codehaus.groovy.modules.http-builder:http-builder:0.6')
import org.jsoup.Jsoup
import org.jsoup.parser.Parser
import org.jsoup.nodes.Entities.EscapeMode
import org.jsoup.nodes.Document
import org.jsoup.nodes.Document.OutputSettings
import org.jsoup.nodes.Element
import org.jsoup.select.Elements
import groovyx.net.http.RESTClient
import groovyx.net.http.HttpResponseException
import groovyx.net.http.HTTPBuilder
import groovyx.net.http.EncoderRegistry
import groovyx.net.http.ContentType
import java.security.MessageDigest
//to upload attachments:
import org.apache.http.entity.mime.MultipartEntity
import org.apache.http.entity.mime.content.StringBody
import org.apache.http.entity.mime.content.InputStreamBody
import org.apache.http.entity.mime.HttpMultipartMode
import groovyx.net.http.Method

def CDATA_PLACEHOLDER_START = '<cdata-placeholder>'
def CDATA_PLACEHOLDER_END = '</cdata-placeholder>'

def baseUrl

// configuration
def config
try {
    println "scriptBasePath: ${scriptBasePath}"
    config = new ConfigSlurper().parse(new File(scriptBasePath, 'ConfluenceConfig.groovy').text)
} catch(groovy.lang.MissingPropertyException e) {
    //no scriptBasePath, works for some scenarios
    config = new ConfigSlurper().parse(new File('scripts/ConfluenceConfig.groovy').text)
}

def confluenceSpaceKey
def confluenceCreateSubpages
def confluencePagePrefix

// helper functions

def MD5(String s) {
    // hex representation of the MD5 digest of the given string
    MessageDigest.getInstance("MD5").digest(s.bytes).encodeHex().toString()
}

// for getting better error message from the REST-API
void trythis (Closure action) {
    try {
        action.call()
    } catch (HttpResponseException error) {
        println "something went wrong - got an http response code "+error.response.status+":"
        println error.response.data
        throw error
    }
}
def parseAdmonitionBlock(block, String type) {
    content = block.select(".content").first()
    titleElement = content.select(".title")
    titleText = ''
    if(titleElement != null) {
        titleText = "<ac:parameter ac:name=\"title\">${titleElement.text()}</ac:parameter>"
    }
    block.after("<ac:structured-macro ac:name=\"${type}\">${titleText}<ac:rich-text-body>${content}</ac:rich-text-body></ac:structured-macro>")
}

def uploadAttachment = { def pageId, String url, String fileName, String note ->
    def is
    def localHash
    if (url.startsWith('http')) {
        is = new URL(url).openStream()
        //build a hash of the attachment
        localHash = MD5(new URL(url).openStream().text)
    } else {
        is = new File(url).newDataInputStream()
        //build a hash of the attachment
        localHash = MD5(new File(url).newDataInputStream().text)
    }

    def api = new RESTClient(config.confluenceAPI)
    //this fixes the encoding
    api.encoderRegistry = new EncoderRegistry( charset: 'utf-8' )

    def headers = [
            'Authorization': 'Basic ' + config.confluenceCredentials,
    ]
    //check if attachment already exists
    def result = "nothing"
    def attachment = api.get(path: 'content/' + pageId + '/child/attachment',
            query: [
                    'filename': fileName,
            ], headers: headers).data
    def http
    if (attachment.size==1) {
        // attachment exists. need an update?
        def remoteHash = attachment.results[0].extensions.comment.replaceAll("(?sm).*#([^#]+)#.*",'$1')
        if (remoteHash!=localHash) {
            //hash is different -> attachment needs to be updated
            http = new HTTPBuilder(config.confluenceAPI + 'content/' + pageId + '/child/attachment/' + attachment.results[0].id + '/data')
            println "    updated attachment"
        }
    } else {
        http = new HTTPBuilder(config.confluenceAPI + 'content/' + pageId + '/child/attachment')
    }
    if (http) {
        http.request(Method.POST) { req ->
            requestContentType: "multipart/form-data"
            MultipartEntity multiPartContent = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE)
            // Adding Multi-part file parameter "file"
            multiPartContent.addPart("file", new InputStreamBody(is, fileName))
            // Adding another string parameter "comment"
            multiPartContent.addPart("comment", new StringBody(note + "\r\n#" + localHash + "#"))
            headers.each { key, value ->
                req.addHeader(key, value)
            }
            req.entity = multiPartContent
        }
    }
}

def realTitle = { pageTitle ->
    confluencePagePrefix + pageTitle
}

def rewriteDescriptionLists = { body ->
    def TAGS = [ dt: 'th', dd: 'td' ]
    body.select('dl').each { dl ->
        // WHATWG allows wrapping dt/dd in divs, simply unwrap them
        dl.select('div').each { it.unwrap() }

        // group dts and dds that belong together, usually it will be a 1:1 relation
        // but HTML allows for different constellations
        def rows = []
        def current = [dt: [], dd: []]
        rows << current
        dl.select('dt, dd').each { child ->
            def tagName = child.tagName()
            if (tagName == 'dt' && current.dd.size() > 0) {
                // dt follows dd, start a new group
                current = [dt: [], dd: []]
                rows << current
            }
            current[tagName] << child.tagName(TAGS[tagName])
        }

        rows.each { row ->
            def sizes = [dt: row.dt.size(), dd: row.dd.size()]
            def rowspanIdx = [dt: -1, dd: sizes.dd - 1]
            def rowspan = Math.abs(sizes.dt - sizes.dd) + 1
            def max = sizes.dt
            if (sizes.dt < sizes.dd) {
                max = sizes.dd
                rowspanIdx = [dt: sizes.dt - 1, dd: -1]
            }
            (0..<max).each { idx ->
                def tr = dl.appendElement('tr')
                ['dt', 'dd'].each { type ->
                    if (sizes[type] > idx) {
                        if (idx == rowspanIdx[type] && rowspan > 1) {
                            row[type][idx].attr('rowspan', "${rowspan}")
                        }
                    } else if (idx == 0) {
                        tr.appendElement(TAGS[type]).attr('rowspan', "${rowspan}")
                    }
                }
            }
        }
    }
}


def rewriteInternalLinks = { body, anchors, pageAnchors ->
    // find internal cross-references and replace them with link macros
    body.select('a[href]').each { a ->
        def href = a.attr('href')
        if (href.startsWith('#')) {
            def anchor = href.substring(1)
            def pageTitle = anchors[anchor] ?: pageAnchors[anchor]
            if (pageTitle) {
                // as Confluence insists on link texts to be contained
                // inside CDATA, we have to strip all HTML and
                // potentially loose styling that way.
                a.wrap("<ac:link${anchors.containsKey(anchor) ? ' ac:anchor="' + anchor + '"' : ''}></ac:link>")
                   .before("<ri:page ri:content-title=\"${realTitle pageTitle}\"/>")
            }
        }
    }
}

def rewriteCodeblocks = { body ->
    body.select('pre > code').each { code ->
        if (code.attr('data-lang')) {
            code.select('span[class]').each { span ->
                span.unwrap()
            }
            code.before("<ac:parameter ac:name=\"language\">${code.attr('data-lang')}</ac:parameter>")
        }
        code.parent() // pre now
            .wrap('<ac:structured-macro ac:name="code"></ac:structured-macro>')
    }
}

def unescapeCDATASections = { html ->
    def start = html.indexOf(CDATA_PLACEHOLDER_START)
    while (start > -1) {
        def end = html.indexOf(CDATA_PLACEHOLDER_END, start)
        if (end > -1) {
            def prefix = html.substring(0, start) + CDATA_PLACEHOLDER_START
            def suffix = html.substring(end)
            def unescaped = html.substring(start + CDATA_PLACEHOLDER_START.length(), end)
                    .replaceAll('&lt;', '<').replaceAll('&gt;', '>')
            html = prefix + unescaped + suffix
        }
        start = html.indexOf(CDATA_PLACEHOLDER_START, start + 1)
    }
    html
}

//modify local page in order to match the internal confluence storage representation a bit better
//definition lists are not displayed by confluence, so turn them into tables
//body can be of type Element or Elements
def deferredUpload = []
def parseBody =  { body, anchors, pageAnchors ->
    [   'note':'info',
        'tip':'tip'            ].each { adType, cType ->
        body.select('.admonitionblock.'+adType).each { block ->
            parseAdmonitionBlock(block, cType)
        }
    }
    //special for the arc42-template
    body.select('.arc42help').select('.content')
            .wrap('<ac:structured-macro ac:name="expand"></ac:structured-macro>')
            .wrap('<ac:structured-macro ac:name="info"></ac:structured-macro>')
            .before('<ac:parameter ac:name="title">arc42</ac:parameter>')
    body.select('div.title').wrap("<strong></strong>").before("<br />").wrap("<div></div>")
    // see if we can find referenced images and fetch them
    new File("tmp/images/.").mkdirs()
    // find images, extract their URLs for later uploading (after we know the pageId) and replace them with this macro:
    // <ac:image ac:align="center" ac:width="500">
    // <ri:attachment ri:filename="deployment-context.png"/>
    // </ac:image>
    body.select('img').each { img ->
        img.attributes().each { attribute ->
            //println attribute.dump()
        }
        def src = img.attr('src')
        def imgWidth = img.attr('width')?:500
        def imgAlign = img.attr('align')?:"center"
        println "    image: "+src

        //it is not an online image, so upload it to confluence and use the ri:attachment tag
        if(!src.startsWith("http")) {
          def newUrl = baseUrl.toString().replaceAll('\\\\','/').replaceAll('/[^/]*$','/')+src
          def fileName = (src.tokenize('/')[-1])

          trythis {
              deferredUpload <<  [0,newUrl,fileName,"automatically uploaded"]
          }
          img.after("<ac:image ac:align=\"${imgAlign}\" ac:width=\"${imgWidth}\"><ri:attachment ri:filename=\"${fileName}\"/></ac:image>")
        }
        // it is an online image, so we have to use the ri:url tag
        else {
          img.after("<ac:image ac:align=\"${imgAlign}\" ac:width=\"${imgWidth}\"><ri:url ri:value=\"${src}\"/></ac:image>")
        }
        img.remove()
    }
    rewriteDescriptionLists body
    rewriteInternalLinks body, anchors, pageAnchors
    //sanitize code inside code tags
    rewriteCodeblocks body
    def pageString = unescapeCDATASections body.html().trim()

    //change some html elements through simple substitutions
    pageString = pageString
            .replaceAll('<br>','<br />')
            .replaceAll('</br>','<br />')

    return pageString
}

// the create-or-update functionality for confluence pages
def pushToConfluence = { pageTitle, pageBody, parentId, anchors, pageAnchors ->
    def api = new RESTClient(config.confluenceAPI)
    def headers = [
            'Authorization': 'Basic ' + config.confluenceCredentials,
            'Content-Type':'application/json; charset=utf-8'
    ]
    //this fixes the encoding
    api.encoderRegistry = new EncoderRegistry( charset: 'utf-8' )
    //try to get an existing page
    def page
    localPage = parseBody(pageBody, anchors, pageAnchors)

    def localHash = MD5(localPage)
    def prefix = '<p><ac:structured-macro ac:name="toc"/></p>'+(config.extraPageContent?:'')
    localPage  = prefix+localPage
    localPage += '<p><ac:structured-macro ac:name="children"><ac:parameter ac:name="sort">creation</ac:parameter></ac:structured-macro></p>'
    localPage += '<p style="display:none">hash: #'+localHash+'#</p>'

    def request = [
            type : 'page',
            title: realTitle(pageTitle),
            space: [
                    key: confluenceSpaceKey
            ],
            body : [
                    storage: [
                            value         : localPage,
                            representation: 'storage'
                    ]
            ]
    ]
    if (parentId) {
        request.ancestors = [
                [ type: 'page', id: parentId]
        ]
    }
    trythis {
        page = api.get(path: 'content',
                query: [
                        'spaceKey': confluenceSpaceKey,
                        'title'   : realTitle(pageTitle),
                        'expand'  : 'body.storage,version'
                ], headers: headers).data.results[0]
    }
    if (page) {
        //println "found existing page: " + page.id +" version "+page.version.number

        //extract hash from remote page to see if it is different from local one

        def remotePage = page.body.storage.value.toString().trim()

        def remoteHash = remotePage =~ /(?ms)hash: #([^#]+)#/
        remoteHash = remoteHash.size()==0?"":remoteHash[0][1]

        if (remoteHash == localHash) {
            //println "page hasn't changed!"
            deferredUpload.each {
                uploadAttachment(page?.id, it[1], it[2], it[3])
            }
            deferredUpload = []
            return page.id
        } else {
            trythis {
                // update page
                // https://developer.atlassian.com/display/CONFDEV/Confluence+REST+API+Examples#ConfluenceRESTAPIExamples-Updatingapage
                request.id      = page.id
                request.version = [number: (page.version.number as Integer) + 1]
                def res = api.put(contentType: ContentType.JSON,
                        requestContentType : ContentType.JSON,
                        path: 'content/' + page.id, body: request, headers: headers)
            }
            println "> updated page "+page.id
            deferredUpload.each {
                uploadAttachment(page.id, it[1], it[2], it[3])
            }
            deferredUpload = []
            return page.id
        }
    } else {
        //create a page
        trythis {
            page = api.post(contentType: ContentType.JSON,
                    requestContentType: ContentType.JSON,
                    path: 'content', body: request, headers: headers)
        }
        println "> created page "+page?.data?.id
        deferredUpload.each {
            uploadAttachment(page?.data?.id, it[1], it[2], it[3])
        }
        deferredUpload = []
        return page?.data?.id
    }
}

def parseAnchors = { page ->
    def anchors = [:]
    page.body.select('[id]').each { anchor ->
        def name = anchor.attr('id')
        anchors[name] = page.title
        anchor.before("<ac:structured-macro ac:name=\"anchor\"><ac:parameter ac:name=\"\">${name}</ac:parameter></ac:structured-macro>")
    }
    anchors
}

def pushPages
pushPages = { pages, anchors, pageAnchors ->
    pages.each { page ->
        println page.title
        def id = pushToConfluence page.title, page.body, page.parent, anchors, pageAnchors
        page.children*.parent = id
        pushPages page.children, anchors, pageAnchors
    }
}

def recordPageAnchor = { head ->
    def a = [:]
    if (head.attr('id')) {
        a[head.attr('id')] = head.text()
    }
    a
}

def promoteHeaders = { tree, start, offset ->
    (start..7).each { i ->
        tree.select("h${i}").tagName("h${i-offset}").before('<br />')
    }
}

config.input.each { input ->

    println "${input.file}"
    if (input.file ==~ /.*[.](ad|adoc|asciidoc)$/) {
        println "convert ${input.file}"
        "groovy asciidoc2html.groovy ${input.file}".execute()
        input.file = input.file.replaceAll(/[.](ad|adoc|asciidoc)$/,'.html')
        println "to ${input.file}"
    }
    confluenceSpaceKey = input.spaceKey?:config.confluenceSpaceKey
    confluenceCreateSubpages = (input.createSubpages!= null)?input.createSubpages:config.confluenceCreateSubpages
    confluencePagePrefix = input.pagePrefix?:config.confluencePagePrefix

    def html =input.file?new File(input.file).getText('utf-8'):new URL(input.url).getText()
    baseUrl  =input.file?new File(input.file):new URL(input.url)
    Document dom = Jsoup.parse(html, 'utf-8', Parser.xmlParser())
    dom.outputSettings().prettyPrint(false);//makes html() preserve linebreaks and spacing
    dom.outputSettings().escapeMode(org.jsoup.nodes.Entities.EscapeMode.xhtml); //This will ensure xhtml validity regarding entities
    dom.outputSettings().charset("UTF-8"); //does no harm :-)
    def masterid = input.ancestorId

    // if confluenceAncestorId is not set, create a new parent page
    def parentId = !input.ancestorId ? null : input.ancestorId
    def anchors = [:]
    def pageAnchors = [:]
    def sections = pages = []

    // let's try to select the "first page" and push it to confluence
    dom.select('div#preamble div.sectionbody').each { pageBody ->
        def preamble = [
            title: input.preambleTitle ?: "arc42",
            body: pageBody,
            children: [],
            parent: parentId
        ]
        pages << preamble
        sections = preamble.children
        parentId = null
    }
    // <div class="sect1"> are the main headings
    // let's extract these
    dom.select('div.sect1').each { sect1 ->
        Elements pageBody = sect1.select('div.sectionbody')
        def currentPage = [
            title: sect1.select('h2').text(),
            body: pageBody,
            children: [],
            parent: parentId
        ]

        if (confluenceCreateSubpages) {
            pageBody.select('div.sect2').each { sect2 ->
                def title = sect2.select('h3').text()
                def body = sect2
                def subPage = [
                    title: title,
                    body: body
                ]
                currentPage.children << subPage
                promoteHeaders sect2, 4, 3
            }
        } else {
            promoteHeaders sect1, 3, 2
        }
        sections << currentPage
    }

    pushPages pages, anchors, pageAnchors
}

3.7. convertToDocx


3.7.1. Source

task convertToDocx (
        group: 'docToolchain',
        type: Exec
) {
    workingDir 'build/docs/docbook'
    executable = "pandoc"
    new File('build/docs/docx/').mkdirs()
    args = ['-r','docbook',

3.8. convertToEpub


Dependency: [generateDocbook]

This task uses pandoc to convert the DocBook output from AsciiDoctor to ePub. This way, you can read your documentation in a convenient way on an eBook-reader.

The result can be found in build/docs/epub.
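Under the hood, this corresponds to a pandoc call along these lines (the file names are invented for illustration; the actual arguments live in the task definition below):

```shell
pandoc -r docbook -t epub3 -o ../epub/arc42-template.epub arc42-template.xml
```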

3.8.1. Source

task convertToEpub (
        group: 'docToolchain',
        type: Exec
) {
    workingDir 'build/docs/docbook'
    //commandLine "pandoc -r arc42-template.xml -o arc42-template.docx "
    executable = "pandoc"
    new File('build/docs/epub/').mkdirs()
    args = ['-r','docbook',

3.9. exportEA


3.9.1. Source

task exportEA(
        dependsOn: [streamingExecute],
        description: 'exports all diagrams and some texts from EA files',
        group: 'docToolchain'
) << {
    //make sure path for notes exists
    //and remove old notes
    new File('src/docs/ea').deleteDir()
    //also remove old diagrams
    new File('src/docs/images/ea').deleteDir()
    //create a readme to clarify things
    def readme="""This folder contains exported diagrams or notes from Enterprise Architect.

Please note that these are generated files but reside in the `src`-folder in order to be versioned.

This is to make sure that they can be used from environments other than windows.

# Warning!

**The contents of this folder will be overwritten with each re-export!**

use `gradle exportEA` to re-export files
"""
    new File('src/docs/images/ea/.').mkdirs()
    new File('src/docs/images/ea/readme.ad').write(readme)
    new File('src/docs/ea/.').mkdirs()
    new File('src/docs/ea/readme.ad').write(readme)
    //execute through cscript in order to make sure that we get WScript.echo right
    "%SystemRoot%\\System32\\cscript.exe //nologo scripts/exportEAP.vbs".executeCmd()
    //the VB script is only capable of writing ISO-8859-1 files -
    //we now have to convert them to UTF-8
    new File('src/docs/ea/.').eachFileRecurse { file ->
        if (file.isFile()) {
            println "exported notes "+file.canonicalPath
            file.write(file.getText('iso-8859-1'), 'utf-8')
        }
    }
}

The actual export is done by scripts/exportEAP.vbs:
    ' based on the "Project Interface Example" which comes with EA
    ' http://stackoverflow.com/questions/1441479/automated-method-to-export-enterprise-architect-diagrams

    Dim EAapp 'As EA.App
    Dim Repository 'As EA.Repository
    Dim FS 'As Scripting.FileSystemObject

    Dim projectInterface 'As EA.Project

    Const   ForAppending = 8

    ' Helper
    ' http://windowsitpro.com/windows/jsi-tip-10441-how-can-vbscript-create-multiple-folders-path-mkdir-command
    Function MakeDir (strPath)
      Dim strParentPath, objFSO
      Set objFSO = CreateObject("Scripting.FileSystemObject")
      On Error Resume Next
      strParentPath = objFSO.GetParentFolderName(strPath)

      If Not objFSO.FolderExists(strParentPath) Then MakeDir strParentPath
      If Not objFSO.FolderExists(strPath) Then objFSO.CreateFolder strPath
      On Error Goto 0
      MakeDir = objFSO.FolderExists(strPath)

    End Function

    Sub WriteNote(currentModel, currentElement, notes, prefix)
        If (Left(notes, 6) = "{adoc:") Then
            strFileName = Mid(notes,7,InStr(notes,"}")-7)
            strNotes = Right(notes,Len(notes)-InStr(notes,"}"))
            set objFSO = CreateObject("Scripting.FileSystemObject")
            If (currentModel.Name="Model") Then
              ' When we work with the default model, we don't need a sub directory
              path = "./src/docs/ea/"
            Else
              path = "./src/docs/ea/"&currentModel.Name&"/"
            End If
            ' WScript.echo path&strFileName
            post = ""
            If (prefix<>"") Then
                post = "_"
            End If
            MakeDir path
            set objFile = objFSO.OpenTextFile(path&prefix&post&strFileName&".ad",ForAppending, True)
            name = currentElement.Name
            name = Replace(name,vbCr,"")
            name = Replace(name,vbLf,"")
            ' WScript.echo "-"&Left(strNotes, 6)&"-"
            if (Left(strNotes, 3) = vbCRLF&"|") Then
                ' content should be rendered as table - so don't interfere with it
                objFile.WriteLine(strNotes)
            Else
                ' let's add the name of the object
                ' (write statements reconstructed - the original listing is truncated here)
                objFile.WriteLine(vbCRLF & "." & name & vbCRLF & strNotes)
            End If
            objFile.Close
            if (prefix<>"") Then
                ' write the same to a second file
                set objFile = objFSO.OpenTextFile(path&prefix&".ad",ForAppending, True)
                objFile.WriteLine(vbCRLF & "." & name & vbCRLF & strNotes)
                objFile.Close
            End If
        End If
    End Sub

    Sub SyncJira(currentModel, currentDiagram)
        notes = currentDiagram.notes
        set currentPackage = Repository.GetPackageByID(currentDiagram.PackageID)
        updated = 0
        created = 0
        If (Left(notes, 6) = "{jira:") Then
            WScript.echo " >>>> Diagram jira tag found"
            strSearch = Mid(notes,7,InStr(notes,"}")-7)
            Set objShell = CreateObject("WScript.Shell")
            'objShell.CurrentDirectory = fso.GetFolder("./scripts")
            Set objExecObject = objShell.Exec ("cmd /K  groovy ./scripts/exportJira.groovy """ & strSearch &""" & exit")
            strReturn = ""
            x = 0
            y = 0
            Do While Not objExecObject.StdOut.AtEndOfStream
                output = objExecObject.StdOut.ReadLine()
                ' WScript.echo output
                jiraElement = Split(output,"|")
                name = jiraElement(0)&":"&vbCR&vbLF&jiraElement(4)
                ' reset so that a failed lookup can be detected
                Set requirement = Nothing
                On Error Resume Next
                Set requirement = currentPackage.Elements.GetByName(name)
                On Error Goto 0
                if (Not requirement Is Nothing) then
                    ' element already exists - update it
                    requirement.notes = ""
                    requirement.notes = requirement.notes&"<a href='"&jiraElement(5)&"'>"&jiraElement(0)&"</a>"&vbCR&vbLF
                    requirement.notes = requirement.notes&"Priority: "&jiraElement(1)&vbCR&vbLF
                    requirement.notes = requirement.notes&"Created: "&jiraElement(2)&vbCR&vbLF
                    requirement.notes = requirement.notes&"Assignee: "&jiraElement(3)&vbCR&vbLF
                    requirement.Update
                    updated = updated + 1
                else
                    ' element does not exist yet - create it and place it on the diagram
                    Set requirement = currentPackage.Elements.AddNew(name,"Requirement")
                    requirement.notes = ""
                    requirement.notes = requirement.notes&"<a href='"&jiraElement(5)&"'>"&jiraElement(0)&"</a>"&vbCR&vbLF
                    requirement.notes = requirement.notes&"Priority: "&jiraElement(1)&vbCR&vbLF
                    requirement.notes = requirement.notes&"Created: "&jiraElement(2)&vbCR&vbLF
                    requirement.notes = requirement.notes&"Assignee: "&jiraElement(3)&vbCR&vbLF
                    requirement.Update
                    Set dia_obj = currentDiagram.DiagramObjects.AddNew("l="&(10+x*200)&";t="&(10+y*50)&";b="&(10+y*50+44)&";r="&(10+x*200+180),"")
                    x = x + 1
                    if (x>3) then
                      x = 0
                      y = y + 1
                    end if
                    dia_obj.ElementID = requirement.ElementID
                    dia_obj.Update
                    created = created + 1
                end if
            Loop
            Set objShell = Nothing
            WScript.echo "created "&created&" requirements"
            WScript.echo "updated "&updated&" requirements"
        End If
    End Sub

    Sub SaveDiagram(currentModel, currentDiagram)
                ' Open the diagram

            ' Save and close the diagram
            If (currentModel.Name="Model") Then
                ' When we work with the default model, we don't need a sub directory
                path = "/src/docs/images/ea/"
            Else
                path = "/src/docs/images/ea/" & currentModel.Name & "/"
            End If
            diagramName = Replace(currentDiagram.Name," ","_")
            diagramName = Replace(diagramName,vbCr,"")
            diagramName = Replace(diagramName,vbLf,"")
            filename = path & diagramName & ".png"
            MakeDir("." & path)
            ' reconstructed from the commented-out original: the Project interface expects the diagram GUID
            projectInterface.PutDiagramImageToFile currentDiagram.DiagramGUID,fso.GetAbsolutePathName(".")&filename,1
            WScript.echo " extracted image to ." & filename
            For Each diagramElement In currentDiagram.DiagramObjects
                Set currentElement = Repository.GetElementByID(diagramElement.ElementID)
                WriteNote currentModel, currentElement, currentElement.Notes, diagramName&"_notes"
            Next
            For Each diagramLink In currentDiagram.DiagramLinks
                set currentConnector = Repository.GetConnectorByID(diagramLink.ConnectorID)
                WriteNote currentModel, currentConnector, currentConnector.Notes, diagramName&"_links"
            Next
    End Sub
    ' Recursively saves all diagrams under the provided package and its children
    Sub DumpDiagrams(thePackage,currentModel)

        Set currentPackage = thePackage

        ' export element notes
        For Each currentElement In currentPackage.Elements
            WriteNote currentModel, currentElement, currentElement.Notes, ""
            ' export connector notes
            For Each currentConnector In currentElement.Connectors
                ' WScript.echo currentConnector.ConnectorGUID
                if (currentConnector.ClientID=currentElement.ElementID) Then
                    WriteNote currentModel, currentConnector, currentConnector.Notes, ""
                End If
            Next
            if (Not currentElement.CompositeDiagram Is Nothing) Then
                SyncJira currentModel, currentElement.CompositeDiagram
                SaveDiagram currentModel, currentElement.CompositeDiagram
            End If
            if (Not currentElement.Elements Is Nothing) Then
                DumpDiagrams currentElement,currentModel
            End If
        Next

        ' Iterate through all diagrams in the current package
        For Each currentDiagram In currentPackage.Diagrams
            SyncJira currentModel, currentDiagram
            SaveDiagram currentModel, currentDiagram
        Next

        ' Process child packages
        Dim childPackage 'as EA.Package
        ' otPackage = 5
        if (currentPackage.ObjectType = 5) Then
            For Each childPackage In currentPackage.Packages
                call DumpDiagrams(childPackage, currentModel)
            Next
        End If
    End Sub

    Function SearchEAProjects(path)

      For Each folder In path.SubFolders
        SearchEAProjects folder
      Next

      For Each file In path.Files
        If fso.GetExtensionName (file.Path) = "eap" Then
          WScript.echo "found "&file.path
          ' open and export the found project (call reconstructed - the original listing is truncated here)
          OpenProject file
        End If
      Next

    End Function

    Sub OpenProject(file)
      ' open Enterprise Architect
      Set EAapp = CreateObject("EA.App")
      WScript.echo "opening Enterprise Architect. This might take a moment..."
      ' load project (call reconstructed - the original listing is truncated here)
      EAapp.Repository.OpenFile(file.Path)
      ' make Enterprise Architect not appear on screen
      EAapp.Visible = False

      ' get repository object
      Set Repository = EAapp.Repository
      ' Show the script output window
      ' Repository.EnsureOutputVisible("Script")

      Set projectInterface = Repository.GetProjectInterface()

      ' Iterate through all model nodes
      Dim currentModel 'As EA.Package
      For Each currentModel In Repository.Models
        ' Iterate through all child packages and save out their diagrams
        Dim childPackage 'As EA.Package
        For Each childPackage In currentModel.Packages
          call DumpDiagrams(childPackage,currentModel)
        Next
      Next
    End Sub

  set fso = CreateObject("Scripting.fileSystemObject")
  WScript.echo "Image extractor"
  WScript.echo "looking for .eap files in " & fso.GetAbsolutePathName(".") & "/src"
  'Dim f As Scripting.Files
  SearchEAProjects fso.GetFolder("./src")
  WScript.echo "finished exporting images"

3.10. exportChangeLog


As the name suggests, this task exports the git changelog so that it can be referenced from within your documentation - if needed.

The source is the git log for the path src/docs, so it only contains the commit messages for changes to the documentation. Changes to the build or to other sources in the repository will not show up.

The changelog is written to build/docs/changelog.adoc and contains the changes with date, author, and commit message, already formatted as AsciiDoc table content:

| 09.04.2017
| Ralf D. Mueller
| fix #24 template updated to V7.0

| 08.04.2017
| Ralf D. Mueller
| fixed typo
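For illustration, the transformation of one commit into such a table fragment can be sketched in Python (a hypothetical helper, not part of docToolchain, which does this in Groovy as shown in the Source section):

```python
from datetime import datetime

def commit_to_adoc_row(iso_date, author, message):
    """Format one commit as an AsciiDoc table fragment in the
    dd.MM.yyyy layout shown above."""
    day = datetime.strptime(iso_date, "%Y-%m-%d").strftime("%d.%m.%Y")
    return f"| {day}\n| {author}\n| {message}\n"

print(commit_to_adoc_row("2017-04-09", "Ralf D. Mueller",
                         "fix #24 template updated to V7.0"))
```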

You simply include it like this:

| Date
| Author
| Comment



By excluding the table definition, you can easily translate the table headings through different text snippets.

It might make sense to only include certain commit messages from the changelog or to exclude others (e.g. those starting with # or //), but this isn’t implemented yet.

3.10.1. Source

task exportChangeLog(
        dependsOn: [streamingExecute],
        description: 'exports the change log from a git subpath',
        group: 'docToolchain'
) << {
    def res = "git log ./src/docs/arc42".execute().text
    def changes = []
    def change = null
    res.eachLine { line ->
        switch (line) {
            case ~/^commit.*/:
                if (change!=null) {
                    changes << change
                }
                change = [commit:line-'commit ',log:'']
                break
            case ~/^Author:.*/:
                change['author'] = line-'Author: '
                break
            case ~/^Date:.*/:
                change['date'] = line-'Date: '
                break
            default:
                change['log'] += (line ? line.trim()+ "\n" : '')
        }
    }
    changes << change
    def path = './build/docs/'
    new File(path).mkdirs()
    def changelog = new File(path+'changelog.adoc')

    changes.each { c ->
        try {
            changelog.append """| ${new Date(Date.parse(c.date)).format("dd.MM.yyyy")}
| ${c.author.replaceAll('<[^>]*>','')}
| ${c.log}
"""
        } catch (Exception e) {
            println c
        }
    }
}

3.11. exportJiraIssues


This task exports all Jira issues returned by a given JQL search and appends them as AsciiDoc table content to build/docs/openissues.adoc.

3.11.1. Source

task exportJiraIssues(
        description: 'exports all jira issues from a given search',
        group: 'docToolchain'
) << {
    def user = jiraUser
    def pass = jiraPass
    if (!pass) {
        pass = System.console().readPassword("Jira password for user '$user': ")
    }

    def stats = [:]
    def jira = new groovyx.net.http.RESTClient( jiraRoot+'/rest/api/2/' )
    jira.encoderRegistry = new groovyx.net.http.EncoderRegistry( charset: 'utf-8' )
    def headers = [
            'Authorization':"Basic " + "${user}:${pass}".bytes.encodeBase64().toString(),
            'Content-Type':'application/json; charset=utf-8'
    ]
    def openIssues = new File('./build/docs/openissues.adoc')
    println jiraJql.replaceAll('%jiraProject%',jiraProject).replaceAll('%jiraLabel%',jiraLabel)
    // the GET request itself is reconstructed - the original listing is truncated here
    jira.get(path: 'search',
            query:['jql': jiraJql.replaceAll('%jiraProject%',jiraProject).replaceAll('%jiraLabel%',jiraLabel),
                   'fields':'created,resolutiondate,priority,summary,timeoriginalestimate, assignee'],
            headers: headers
    ).data.issues.each { issue ->
        openIssues.append("| <<${issue.key}>> ",'utf-8')
        openIssues.append("| ${issue.fields.priority.name} ",'utf-8')
        openIssues.append("| ${Date.parse("yyyy-MM-dd'T'H:m:s.000z",issue.fields.created).format('dd.MM.yy')} ",'utf-8')
        openIssues.append("| ${issue.fields.assignee?issue.fields.assignee.displayName:'not assigned'} ",'utf-8')
        openIssues.append("| ${jiraRoot}/browse/${issue.key}[${issue.fields.summary}]\n",'utf-8')
    }
}


3.12. exportPPT


Analogous to exportEA, this task uses a VBScript to export all slides and some texts from .ppt presentations. The exported files are written to src/docs/ppt and src/docs/images/ppt so that they can be versioned.

3.12.1. Source

task exportPPT(
        dependsOn: [streamingExecute],
        description: 'exports all slides and some texts from PPT files',
        group: 'docToolchain'
) << {
    //make sure path for notes exists
    //and remove old notes
    new File('src/docs/ppt').deleteDir()
    //also remove old diagrams
    new File('src/docs/images/ppt').deleteDir()
    //create a readme to clarify things
    def readme="""This folder contains exported slides or notes from .ppt presentations.

Please note that these are generated files but reside in the `src`-folder in order to be versioned.

This is to make sure that they can be used from environments other than windows.

# Warning!

**The contents of this folder will be overwritten with each re-export!**

use `gradle exportPPT` to re-export files
"""
    new File('src/docs/images/ppt/.').mkdirs()
    new File('src/docs/images/ppt/readme.ad').write(readme)
    new File('src/docs/ppt/.').mkdirs()
    new File('src/docs/ppt/readme.ad').write(readme)
    //execute through cscript in order to make sure that we get WScript.echo right
    "%SystemRoot%\\System32\\cscript.exe //nologo scripts/exportPPT.vbs".executeCmd()
}

3.13. exportExcel


Sometimes you have tabular data to be included in your documentation. In that case the data is often available as an Excel sheet, or you would like to use MS Excel to create and edit it.

Either way, this task lets you export your Excel sheet and include it directly in your docs.

The task searches for .xlsx files and exports each contained worksheet as .csv and as .adoc.

Formulas contained in your workbook are evaluated and exported as static values.

The generated files are written to src/excel/[filename]/[worksheet].(adoc|csv). The src folder (and not the build folder) is chosen mainly to get a better history of all changes to the worksheets.
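Conceptually, the AsciiDoc export renders each worksheet row as a table row. A much-simplified Python sketch (ignoring the alignment, span, and color handling that the real task performs):

```python
def sheet_to_adoc(rows):
    """Render a worksheet (a list of rows of cell values) as a plain
    AsciiDoc table - a simplified sketch of the exportExcel output
    without alignment, span, and color handling."""
    lines = ["|==="]
    for row in rows:
        # escape literal pipes so they cannot break the table
        lines.append(" ".join("| " + str(cell).replace("|", "{vbar}")
                              for cell in row))
    lines.append("|===")
    return "\n".join(lines)

print(sheet_to_adoc([["Name", "Value"], ["answer", 42]]))
```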

The files can be included either as AsciiDoc


or as CSV file


The AsciiDoc version gives you a bit more control:

  • horizontal and vertical alignment is preserved

  • line breaks are preserved

  • column width (relative to other columns) is preserved

  • background colors are preserved.

3.13.1. Source

task exportExcel(
        description: 'exports all excelsheets to csv and AsciiDoc',
        group: 'docToolchain'
) << {
    File sourceDir = file(srcDir)

    def tree = fileTree(srcDir).include('**/*.xlsx').exclude('**/~*')

    def exportFileDir = new File(sourceDir, 'excel')

    //make sure path for notes exists
    //create a readme to clarify things
    def readme="""This folder contains exported workbooks from Excel.

Please note that these are generated files but reside in the `src`-folder in order to be versioned.

This is to make sure that they can be used from environments other than windows.

# Warning!

**The contents of this folder will be overwritten with each re-export!**

use `gradle exportExcel` to re-export files
"""
    exportFileDir.mkdirs()
    new File(exportFileDir, '/readme.ad').write(readme)

    def nl = System.getProperty("line.separator")

    def export = {sheet, evaluator, targetFileName ->
        def targetFileCSV = new File(targetFileName+'.csv')
        def targetFileAD = new File(targetFileName+'.adoc')
        def df = new org.apache.poi.ss.usermodel.DataFormatter();
        def regions = []
        sheet.numMergedRegions.times {
            regions << sheet.getMergedRegion(it)
        }
        logger.debug "sheet contains ${regions.size()} regions"
        def color = ''
        def resetColor = false
        def numRows = 0
        def headerCreated = false
        (sheet.lastRowNum+1).times { rowNum ->
            def row = sheet.getRow(rowNum)
            if (row && !headerCreated) {
                headerCreated = true
                // create AsciiDoc table header
                def width = []
                numRows = row.lastCellNum
                numRows.times { columnIndex ->
                    width << sheet.getColumnWidth((int)columnIndex)
                }
                //lets make those numbers nicer:
                width = width.collect{Math.round(100*it/width.sum())}
                // write the table definition (reconstructed - the original listing is truncated here)
                targetFileAD.append('[cols="'+width.join(',')+'"]'+nl+'|==='+nl, 'UTF-8')
            }
            def data = []
            def style = []
            def colors = []
            // For each row, iterate through each column
            if (row) {
                numRows.times { columnIndex ->
                    def cell = row.getCell(columnIndex)
                    if (cell) {
                        def cellValue = df.formatCellValue(cell, evaluator)
                        if (cellValue.startsWith('*') && cellValue.endsWith('\u20AC')) {
                            // Remove special characters at currency
                            cellValue = cellValue.substring(1).trim();
                        }
                        def cellStyle = ''
                        def region = regions.find { it.isInRange(cell.rowIndex, cell.columnIndex) }
                        def skipCell = false
                        if (region) {
                            //check if we are in the upper left corner of the region
                            if (region.firstRow == cell.rowIndex && region.firstColumn == cell.columnIndex) {
                                def colspan = 1 + region.lastRow - region.firstRow
                                def rowspan = 1 + region.lastColumn - region.firstColumn
                                if (rowspan > 1) {
                                    cellStyle += "${rowspan}"
                                }
                                if (colspan > 1) {
                                    cellStyle += ".${colspan}"
                                }
                                cellStyle += "+"
                            } else {
                                skipCell = true
                            }
                        }
                        if (!skipCell) {
                            switch (cell.cellStyle.alignmentEnum.toString()) {
                                case 'RIGHT':
                                    cellStyle += '>'
                                    break
                                case 'CENTER':
                                    cellStyle += '^'
                                    break
                            }
                            switch (cell.cellStyle.verticalAlignmentEnum.toString()) {
                                case 'BOTTOM':
                                    cellStyle += '.>'
                                    break
                                case 'CENTER':
                                    cellStyle += '.^'
                                    break
                            }
                            color = cell.cellStyle.fillForegroundXSSFColor?.rgb?.encodeHex()
                            color = color != null ? nl + "{set:cellbgcolor:#${color}}" : ''
                            data << cellValue
                            if (color == '' && resetColor) {
                                colors << nl + "{set:cellbgcolor!}"
                                resetColor = false
                            } else {
                                colors << color
                            }
                            if (color != '') {
                                resetColor = true
                            }
                            style << cellStyle
                        } else {
                            data << ""
                            colors << ""
                            style << "skip"
                        }
                    } else {
                        data << ""
                        colors << ""
                        style << ""
                    }
                }
            } else {
                //insert empty row
                numRows.times {
                    data << ""
                    colors << ""
                    style << ""
                }
            }
            // append the row to the CSV file (reconstructed)
            targetFileCSV.append(data
                    .join(',')+nl, 'UTF-8')
            // append the row to the AsciiDoc table
            targetFileAD.append([data, 0..<data.size()].transpose()
                    .collect{value, index ->
                if (style[index]=="skip") {
                    ''
                } else {
                    style[index] + "| ${value.replaceAll('[|]', '{vbar}').replaceAll("\n", ' +$0') + colors[index]}"
                }
            }
            .join(nl)+nl*2, 'UTF-8')
        }
        // close the AsciiDoc table (reconstructed)
        targetFileAD.append('|==='+nl, 'UTF-8')
    }

    tree.each { File excel ->
        println excel
        def excelDir = new File(exportFileDir, excel.getName())
        // make sure the target directory exists
        excelDir.mkdirs()
        InputStream inp = new FileInputStream(excel)
        def wb = org.apache.poi.ss.usermodel.WorkbookFactory.create(inp);
        def evaluator = wb.getCreationHelper().createFormulaEvaluator();
        for(int wbi=0; wbi < wb.getNumberOfSheets(); wbi++) {
            def sheetName = wb.getSheetAt(wbi).getSheetName()
            println sheetName
            def targetFile = new File(excelDir, sheetName)
            export(wb.getSheetAt(wbi), evaluator, targetFile.getAbsolutePath())
        }
        inp.close()
    }
}

3.14. htmlSanityCheck


This task invokes the htmlSanityCheck Gradle plugin, a standalone (batch and command-line) HTML sanity checker that detects missing images, dead links, and duplicate bookmarks.

In docToolchain, this task is used to ensure that the generated HTML contains no missing links or other problems.

This task is the last task in the default task chain and creates a report in build/report/htmlchecks/index.html

Figure 3. sample report

Further information can be found on github: https://github.com/aim42/htmlSanityCheck

3.14.1. Source

htmlSanityCheck {
    sourceDir = new File( "$buildDir/docs/html5" )

    // files to check - in Set-notation
    //sourceDocuments = [ "one-file.html", "another-file.html", "index.html"]

    // where to put results of sanityChecks...
    checkingResultsDir = new File( checkingResultsPath )
    checkExternalLinks = false
}

3.15. dependencyUpdates

This task uses the Gradle versions plugin created by Ben Manes to check for outdated build dependencies. Quite helpful to keep all dependencies up-to-date.

4. Further Reading

This chapter lists some additional references to interesting resources.

4.2. Books

Links to Amazon are affiliate links.

4.2.1. English Books

4.3. Past and upcoming Talks

4.3.1. Dokumentation am (Riesen-)Beispiel – arc42, AsciiDoc und Co. in Aktion

Using a large system as an example, Gernot and Ralf show how to produce appropriate, sensible documentation for different stakeholders with remarkably little effort - in a way that development teams even enjoy.

Our recipe: mix AsciiDoc with arc42, add automation with Gradle and Maven, and combine with the diagramming or modeling tools of your choice. The result: polished HTML and review-ready PDF documents, with DOCX and Confluence as a bonus on request.

We show how you can manage documentation just like source code, generate stakeholder-specific documents, and integrate diagrams automatically. Some parts of this documentation can even be tested automatically.

Along the way you will get numerous tips on how and where to systematically reduce the effort spent on documentation while still producing readable, understandable, and practical results.


4.3.2. Gesunde Dokumentation mit Asciidoctor

Authors want to document content efficiently and reuse existing content. Readers want the document presented in an appealing layout.

The text-based AsciiDoc format offers developers and technical writers all the markup elements needed to write even demanding documents. Among other things, tables, footnotes, and annotated source code are supported. At the same time it is about as lightweight as, for example, the Markdown format. For readers, HTML, PDF, or EPUB is generated.

Since AsciiDoc is checked in like program code and merge operations are straightforward, program code and documentation can be versioned together and kept in a consistent state.

The talk gives a short introduction to AsciiDoc and the associated tooling.


5. Acknowledgements and Contributors

This project is an open source project which is based on community efforts.

Many people are involved in the underlying technologies like AsciiDoc, Asciidoctor, Gradle, arc42, etc. This project depends and builds on them.

But it depends even more on the direct contributions made by giving feedback, creating issues, answering questions, or sending pull requests.

Here is an incomplete and unordered list of contributors: