π "what's wrong with writing (and maintaining) docs ?"
Very often, a single team has privileged access to a set of data, services and middlewares, and you cannot grant that access to others because of technical limitations (sometimes due to limited licence plans), costs, access control strategies, or plenty of other reasons.
Still, it is not rare that third parties need at least partial access to these resources, for example to develop new services on top of yours.
## The cost of manual legacy documentation
The consequence is that you may be asked to set up a wiki (e.g. on Confluence) that your team will be in charge of maintaining in addition to its core activities. Then you grant access to this wiki and everyone is happy... or so you think, because you have just increased the RUN load of your team, and thereby hurt your own "Time to Market" for delivering new bankable services.

To put it simply, the problem with this strategy is that:
- It will cost you a lot of time
- You have to stay up to date with sometimes (and hopefully for you) very fast-changing systems
- You waste your team's skills & workforce on writing documentation that is outdated most of the time
## Embracing continuous change
What we want is a SchemaCrawler-like portable and static report of our on-prem Kafka, so we can embrace change comfortably and continuously.
π― The "What" & the "How"
This post is about this common pattern and how we delivered a scalable, data-driven documentation pipeline around Kafka, based on data & CI:
- We already have the data automatically prepared & delivered as `csv` and `json` files by scheduled GitHub Actions on a third-party repository (see the workflow sketch after this list)
- On a dedicated Wiki-like repository, we `clone` the part of the data we need
- Thanks to templates & `gomplate`, we transform the `csv` and `json` into nice `markdown`
- We build & deliver the resulting static site with the HUGO GitHub Action on GH Pages
- We enjoy the HUGO shadocs theme
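For the first two steps, a minimal sketch of what such a scheduled workflow could look like is below. The repository names, paths, cron schedule and token are assumptions for illustration, not the exact pipeline from this post:

```yaml
# .github/workflows/refresh-kafka-docs.yml -- hypothetical sketch
name: refresh-kafka-docs
on:
  schedule:
    - cron: "0 5 * * *"   # assumed daily refresh; the post only says "scheduled"
  workflow_dispatch: {}

jobs:
  refresh:
    runs-on: ubuntu-latest
    steps:
      # Check out the wiki-like repository itself
      - uses: actions/checkout@v4

      # Clone only the exported csv/json we need from the data repository
      - uses: actions/checkout@v4
        with:
          repository: my-org/kafka-exports      # hypothetical third-party data repo
          path: source/import/auto
          token: ${{ secrets.DATA_REPO_TOKEN }} # hypothetical read-only token

      # Render one markdown file per Kafka topic from the csv datasource
      - name: Run gomplate
        run: |
          curl -sSLo gomplate https://github.com/hairyhenderson/gomplate/releases/latest/download/gomplate_linux-amd64
          chmod +x gomplate   # in practice, pin a specific gomplate version
          ./gomplate --config .gomplate.yaml
      # The generated markdown under outputs/topics/ is then committed
      # (or handed straight to the site build) -- omitted here for brevity
```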
## Benefits
Below are some of the benefits we got immediately:
- Up-to-date documentation
- Data security thanks to custom obfuscation (a sketch follows this list), so the doc can be shared with third parties with confidence
- DevEx impact: third parties get information "as a Service", whenever they need it, 24/7: no need to open an issue...
- Responsive wiki and amazing UI (PC, tablet and even phone)
- Continuously improving documentation, how-tos and resources
- "Documentation as a Service" delivery
- Linked to classical/legacy static documentation... in both directions
- GitHub-driven access management (Teams, SSO, ...)
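The custom obfuscation itself isn't detailed in this post. One way it could be done with gomplate's built-in functions — the regex pattern and the sample hostname here are assumptions — is to mask sensitive values inside the template, before they ever reach the generated markdown:

```
{{/* hypothetical example: mask internal broker hostnames before publishing */}}
{{ $raw := "broker-01.internal.corp:9092" }}
{{ regexp.Replace `([a-z0-9-]+)\.internal\.corp` "***.redacted" $raw }}
{{/* renders: ***.redacted:9092 */}}
```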
## Demo
## The magic of gomplate: `csv` to `markdown`s

The core magic that transforms a single input `csv` file into more than 100 separate `markdown` files (in less than a minute) is below:
```yaml
in: |
  {{ $topics := (datasource "topics") | data.ToCSV | data.CSVByRow }}
  {{ range $i, $e := $topics }}
  {{ $file := $e.name | strings.Slug }}
  {{ tmpl.Exec "topicT" . | file.Write (print "outputs/topics/" $file ".md") }}
  {{ end }}
templates:
  - topicT=template/topic.tmpl
datasources:
  topics:
    url: source/import/auto/kafka/node_kafka_topic.csv
outputFiles: ["csv"]
suppressEmpty: true
```
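The per-topic template `template/topic.tmpl` isn't shown in the post. A minimal sketch of what it could contain — the front-matter layout and the CSV columns other than `name` (`partitions`, `replication`) are assumptions — renders each CSV row as its own Hugo page:

```
---
title: "{{ .name }}"
---

# Topic `{{ .name }}`

| Property    | Value              |
|-------------|--------------------|
| Partitions  | {{ .partitions }}  |
| Replication | {{ .replication }} |
```

Because `tmpl.Exec "topicT" .` is called inside the `range`, the dot inside the template is a single row of the CSV, so each column is reachable by its header name.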
Then HUGO does the rest within GitHub to deliver a whole functional website: the overall process takes less than a minute.
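For that build & publish step, a sketch using well-known community Hugo actions — the action names, versions and trigger are assumptions, pick whatever the repository actually uses — could look like:

```yaml
# .github/workflows/publish-wiki.yml -- hypothetical sketch
name: publish-wiki
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Install Hugo and build the static site (theme: shadocs)
      - uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: "latest"
      - run: hugo --minify

      # Publish the generated ./public directory to GitHub Pages
      - uses: peaceiris/actions-gh-pages@v4
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./public
```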