Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
Dependency Management: Where the Fork are We? - Olivier Vernin, SUSE
JW Grand Ballroom 2
Speakers: Olivier Vernin
Today's software development relies more than ever on third-party dependencies. Even though solutions exist to help maintain them, we still see many of them being outdated. Why do so many teams struggle with this practice? Let's look into it together.
I think we are ready to start — it's 11:30 on my time. Is it good? Perfect, awesome. Thanks for joining me for my second presentation today. I'm going to talk about dependency management and where the fork are we today. This is based on my own learning: I'm working at SUSE as an engineering manager. Feel free to follow along and to reach out to me on Twitter if you want. Most of my work is open source, so I spend most of my time working on GitHub.
So, basically, dependencies are everywhere in our projects, especially when you think about the full life cycle of a project. You obviously have dependencies for your favorite languages, but you can also think about Kubernetes or infrastructure — for example, all the Helm charts that you may be using — or about the documentation that you build for your application, which also contains a lot of links. So dependencies are really everywhere, and automating them brings you quite a lot of benefits. Obviously, it reduces technical debt.
When you can automate dependencies, it increases your confidence in changing your projects. It increases security, because you can easily get to the latest stable version of whatever you're using. And — probably one of the benefits I like most — it allows you to use new features.
You have a lot of challenges when you think about versioning — the way teams version their applications — and about how you validate an upgrade. So let's first look at the data structures. You can think of two kinds of data structures that you want to automate. The first one is a pretty standard data structure: you know the file that you're looking for, and you know its contents. The example here is for JavaScript.
You know that if you have a JavaScript application, you should have a package.json in your git repository, and the structure of that package.json is pretty obvious, so you just look into it and fetch all the dependencies. That's, I would say, the easiest scenario, because you know in advance what file to look for, and so you can retrieve all the dependencies.
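As a minimal sketch of that standard case (the package names and versions below are purely illustrative), a package.json declares its dependencies in a well-known place, so any tool can parse it and know exactly which dependencies to check:

```json
{
  "name": "example-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.18.2",
    "lodash": "^4.17.21"
  },
  "devDependencies": {
    "jest": "^29.0.0"
  }
}
```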
But on the other side, you have a lot more dependencies that you need to automate, and those dependencies can live in many more files. So, for example, I'll offer a beer to the person who can tell me what the next version for the `image:` tag should be — and the simple reason is: this file is just a random file fetched from a git repository on the Jenkins project, and the thing is, it's really difficult to know.
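A sketch of that kind of non-standard file (the values here are illustrative, not the actual Jenkins file): nothing in the structure tells you where the image comes from or what versioning scheme the tag follows:

```yaml
# An ad-hoc configuration file found in a git repository.
# Which registry does this image come from? What versioning
# scheme does the tag follow? The file itself doesn't say.
container:
  image: example/tool
  tag: "2.332"
```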
Okay — what information should be there? Those kinds of dependencies are pretty much everywhere, and when you want to automate them, well, you don't really have a lot of choices. You could imagine an AI that can automatically detect the kind of file and what you use a specific file for. Just to come back to the previous example:
If you think about all those scenarios where you have files, documentation, the configuration of your environments, whatever, and you want to automate them: you could think about an AI that could automate that, but we are not there yet. We could enforce some standard way to update those data structures, but that requires agreeing on standards, which can be difficult. Or, finally, we can take care of those updates ourselves. The reality today is, that's what we do.
We just write bash scripts, or Makefiles, or maybe a small Python script — whatever script you want to use — and then we copy-paste those scripts between GitHub repositories, which can be really difficult to maintain in the long term.
So that's the data structure — the first challenge to keep in mind. The second one is that when you want to automate those updates, you have to think about what versioning the projects use, and here I really mean all the third-party dependencies: not only, for example, the project that the backend team is building in your company, but also external projects.
What's the versioning for Jenkins, for instance? You have a lot of different kinds of versioning, and when you want to validate that you can upgrade to a specific version safely, without breaking your own application, you need information such as: there is a new version — can I safely merge this one, and what kind of information do I get?
For example, Ubuntu just puts the date of the release in the version. So when a PR opens on your git repository telling you that there is a new version available, the only information you have in that situation is that you're trying to upgrade to a newer version. You don't have any other information, and in the end you still have to go look at the changelog.
The other way to look at versioning is hash versioning: for example, a Docker image digest, a git commit hash, or a checksum that you build. This kind of versioning is interesting if you want precision — you can identify your version exactly, so you really know what you have in your infrastructure, your project, however you use that version. But the reality is, you still need to validate the change introduced by this kind of version change.
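A sketch of what digest pinning looks like in a Dockerfile (the digest below is a placeholder, not a real one): the reference is exact and reproducible, but it tells a human reviewer nothing about what actually changed:

```dockerfile
# Pinned by digest: fully reproducible, but opaque to a reviewer.
FROM nginx@sha256:<digest-placeholder>
COPY site/ /usr/share/nginx/html/
```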
It can be really cumbersome, because, well, you have no idea whether this hash matches exactly the latest one. So in the end, each time a change is introduced, you need to manually run the validation, and that can be quite annoying in terms of validating. For example, on the Jenkins project we automated quite a lot of updates using hash versioning, because we really wanted to be able to identify exactly what was running in the cluster.
Another pretty popular way to version your application is to use the build number. I saw a lot of projects using this one, and I find it really annoying when I need to review a version change that uses build-number versioning, because I don't necessarily have access to the CI environment that generated the version. For example — still an example from the Jenkins project — the version has three main parts.
The first digits are the job id, so the version was generated by job id 165; then you have the build number; and then you have a git short hash. When I depend on that kind of third-party dependency as an external contributor, if I don't have access to the CI environment, it tells me nothing, and so in the end I still have to look into the git repository to know.
What is awesome with semantic versioning is: when a PR opens on your git repository telling you that there is a new version using semantic versioning, you know immediately whether you can merge the PR and proceed, or whether you have to plan some work in the coming weeks to take more time to test it and understand what's changing in this version. For example, going from 2.3.1 to 2.3.2 is a patch you can usually merge right away, while a jump to 3.0.0 signals breaking changes you need to plan for. What is also interesting, when you have to review projects using semantic versioning, is that you really see which people take care of not breaking that convention.
A
You
have
some
vendors
who
I
mean
are
not
really
paying
attention
and,
more
importantly,
you
have
just
project.
We
have
version
that
looks
like
a
semantic
versioning,
but
just
not
doing
70
versioning,
so
that
can
be
quite
confusing
and
then
I
put
in
all
other
kind
of
categories
of
projects
that
have
a
specific
versioning,
but
it
tells
you
nothing
about
what
changed
between
the
version
2
232
and
the
233,
and
you
still
need
to
look
at
the
change
log.
I
mean
in
the
case
of
the
jenkins.
I know the difference because I worked on it, but the first time, I had to find the right documentation that explained to me what the version means in this case. And you can put all the other projects in that category.
So the next big challenge that I faced while looking into dependency management is how you validate and apply upgrades — you have different ways to handle them. The first one is the pretty common one, the one everybody thinks of: the individual upgrade. A dependency bot bumps a dependency, opening, for example, a PR; you validate that change, you merge it, you test it, and job done. It's awesome.
A
The
thing
is
when
you
have,
let's
say
a
javascript
project
with
android
of
dependencies.
You
can
just
spend
your
days
validating
prs
and
merging
prs,
and
so
it
just
introduced
what
we
need,
what
we
called
it
noise.
So
we
have
a
lot
of
noise,
reviewing
dependency
change,
and
so
some
projects
started
using
what
we
can
see
as
batch
upgrade.
So
the
idea
of
a
batch
upgrade.
Is
you
just
group
all
the
changes
for
a
specific
reason?
For
example,
you
want
to
bump.
A
So
that's
how
I
started.
Building
a
project
named
update,
cli,
so
the
way
the
way
it
started.
I
was
maintaining
the
jenkins
infrared
project.
I
had
quite
a
lot
of
dependencies
and
I
had
a
lot
of
non-standard
structure
and
because
well
because
the
jenkins
project
has
the
age
that
it
has,
and
so
we
we
not
only
have
kubernetes,
we
have
puppet.
A
We
have
a
lot
of
application
developing
different
languages,
so
I
was
facing
quite
a
lot
of
the
non-standard
data
structure
and
also
I
needed
a
way
to
just
do
batch
upgrades
for
for,
depending
on
the
situation.
So
what
I
can
show
you
what
it
looks
like
now,
so
the
project
started
as
bashgrip
that
evolved
to
python,
and
then
I
switched
to
a
goal
line.
So
I
can
just
change
my
I
will
just
mirror
my
screen,
so
I
can
show
you
what
it
looks
like
you
see.
My
screen,
that's
awesome.
So I have a small demo here with two files. The first one is the data file: this is a random YAML file that I know I want to update, and this file contains two pieces of information — a container image, hosted on ghcr.io, and a tag. What I want to do here is automate the update, retrieving the latest information. Before doing so, I need to show you one thing that I had to do in advance.
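A sketch of what such a data file might look like — the demo file itself isn't shown on the page, so the values here are assumptions based on the description (an image on ghcr.io plus a tag):

```yaml
# data.yaml — the file the demo wants to keep up to date
container:
  image: ghcr.io/updatecli/updatecli
  tag: "v0.24.0"
```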
Otherwise my demo wouldn't work: because I need to interact with the GitHub API, I need some credentials, and so I'm using SOPS. For those who are not familiar with it, SOPS is a tool coming from the Mozilla project that allows you to encrypt information and publish it in a git repository. It's a really super project — I love it. And so here I have my manifest file.
A
The
name
means
madder
in
this
case,
because
I'm
leveraging
jsonshima
store
to
automate
to
have
syntax.
So
the
way
a
data
work
is
you
need
to
write
a
manifest
inside
that
many
files
you
tell
one
pipeline
means,
for
example,
batch
upgrade.
You
can
update
multiple
thing
at
the
same
time.
One
pipeline
is
one
manifest
and
you
specify
where
the
information
is
coming
from,
what
you
want
to
update
and
how
you
want
to
do
this,
so
I'm
just
okay
right
why
it
doesn't
work.
A
Okay,
so,
as
I
was
saying,
jesus
jason's
shema
store
is
a
it's
a
convention.
So
if
you
publish
the
decent
schema
for
your
project,
you
can
automatically
have
suggestions
when
you
write
this
one.
So
all
you
have
to
do
from
vs
code
just
sit
around
space,
and
so
it
suggests
so
I'm
going
I'm
going
to
to
do
I'll,
provide
a
title
to
my
pipeline,
the
mostly
con
for
example.
So, in the manifest, I specify the kind. You can see a kind as a plugin: for example, when you want to automate the content of a YAML file, you don't provide the same information as when you want to automate the content of a Dockerfile or a Helm chart. In this case it's just a YAML file. Then I specify the spec — the specific configuration for this one — and the file is cdcon2022/data.yaml.
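Assembled, a manifest matching what has been described so far could look roughly like this — a sketch, not the demo's actual file: the `sources`/`targets` layout and the `githubrelease` and `yaml` kinds follow Updatecli's documented manifest format, but the file path and key are assumptions from the narration, and credentials are shown via environment variables for simplicity (the demo used SOPS):

```yaml
name: "Bump updatecli version in data.yaml"

sources:
  updatecli:
    kind: githubrelease
    spec:
      owner: updatecli
      repository: updatecli
      token: '{{ requiredEnv "GITHUB_TOKEN" }}'

targets:
  datafile:
    kind: yaml
    sourceid: updatecli
    spec:
      file: cdcon2022/data.yaml
      key: container.tag
```

You would then run it with `updatecli diff --config <manifest>` to preview the change, or `updatecli apply` to write it.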
So I have the GitHub username and token, which is great — it seems to be working. From here: updatecli is a binary written in Golang, and we specify the manifest; you can provide either a file or a directory.
So let's run this one and see if it works. Okay — "changed", awesome, so it works. The way it works is that the core of Updatecli always does the same thing: first it tries to clone all the git repositories that your project relies on, then it retrieves all the sources.
In this case we only have one source, which is the latest Updatecli version, so v0.25.0. It identified that there is a changelog available on the GitHub release, so I also have that information here and I can see what is introduced in the latest version. Then we have the targets — we can have one or multiple targets, which allows us to do a batch upgrade, for example. And in this case it says — wait, we want to check — I'm using the wrong key.
In the spec, I want to specify ghcr.io/updatecli/updatecli. Before I execute this: a source retrieves a piece of information, and that information is injected into a condition or a target, so we don't have to re-enter the same value. Depending on the plugin we use, the value is automatically filled in: in this case it will automatically set the tag field, so this tag will be set to the value retrieved from the source.
What I want to do here is a typical workflow that I use everywhere in my projects: I retrieve the latest version of a specific application, I test that there is a Docker image available for that specific version, and only then do I bump the YAML file or, for example, the Helm chart. So if I run this — and I didn't make any mistakes — "changed", awesome. We now have one additional condition that appears here.
We have the correct Docker image tag that exists on the registry, so we can go further. This is really how the project evolved from building bash scripts. Sorry — so far so good, I can just continue.
The way we usually use it, we have one to three manifests per git repository that we try to automate, and we really focus on those non-standard data structures. If for some reason the YAML file changes, we can easily adapt the manifest to say: okay, we are no longer using a `container` key with an `image`, but, I don't know, an `nginx` key with an `image` — we can easily change the workflow. And we can also think of more advanced scenarios.
Let me show you one — this one, for example.
Okay, right. So this one is an example where we don't only want to change the file locally: we want to automatically open a pull request, or commit directly to our git repository. So we specify an SCM resource — in this case an SCM resource of type github — we specify the different parameters, and we retrieve the values from environment variables, which is a different way to provide secrets. Then we retrieve the latest version — in this case it's the Hugo project.
A
Then
we
realize
that
okay,
you
go
use
a
v
into
the
version
number
and
that's
so
that's
why
I
was
telling
you
that
versioning
can
be
really
annoying
is
because
some
project
will
put,
for
example,
information
such
as
change
dash
before
the
version,
and
in
this
case
you
go
use
the
vb
for
the
version,
and
I
don't
want
it.
So
I
just
drop
the
v
before
the
version
and
then
I
will
try
to
update
my
netlify
configuration
so
each
time
I
see
a
ego
version.
I
will
just
bump
to
the
latest
version.
I also bump the GitHub Action directly — I have a GitHub Action that specifies the Hugo version, so I can automatically test building my application. And finally, I have a specific configuration for the pull request. My pull request configuration looks like this: it's a pull request of type github, we specify the same id, and we say which target is related to this change.
We want to enable the automerge flag on the git repository, we want to use the squash merge method, and we specify some labels. So what it does is: if there is a new version of Hugo, it will automatically open a PR and trigger the tests — we will try to build the website — and if it builds, we can just merge the PR and move on.
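Sketched as a manifest, that Hugo workflow could look roughly like this — an assumption-laden sketch: the `scms` and `pullrequests` sections follow the structure Updatecli used around this time, but the repository details, file path, and patterns here are illustrative, not the talk's actual configuration:

```yaml
scms:
  website:
    kind: github
    spec:
      owner: example-org            # illustrative repository
      repository: website
      token: '{{ requiredEnv "GITHUB_TOKEN" }}'
      username: '{{ requiredEnv "GITHUB_USERNAME" }}'
      branch: main

sources:
  hugo:
    kind: githubrelease
    spec:
      owner: gohugoio
      repository: hugo
      token: '{{ requiredEnv "GITHUB_TOKEN" }}'
    transformers:
      - trimprefix: "v"             # Hugo tags releases as vX.Y.Z

targets:
  netlify:
    kind: file
    sourceid: hugo
    scmid: website
    spec:
      file: netlify.toml
      matchpattern: 'HUGO_VERSION = ".*"'
      replacepattern: 'HUGO_VERSION = "{{ source "hugo" }}"'

pullrequests:
  default:
    kind: github
    scmid: website
    targets:
      - netlify
    spec:
      automerge: true
      mergemethod: squash
      labels:
        - dependencies
```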
A
So
if
I
come
back
here
this
way,
so
that's
all
I
want
to
show
with
update
cli
it's
an
experiment
that
I
started.
I
mean
that
that
the
project-
it's
a
it's
a
project
that
I
started
yeah
maybe
two
years
ago
on
the
jenkins
project
and
we
started
using
in
many
places
it's
useful
not
only
for
dev
dependencies.
It's
useful
in
many
different
scenarios,
because
we
automate
undercar
file,
update,
we
automate
the
release
of
hem
chart.
So
typical
example
is
for
our
hem
chart.
A
Is
we
have
one
in
the
case
of
a
pino
where
we
retrieve
the
version
from
the
backend
team
and
the
front
temp
team?
And
so
as
soon
as
one
of
the
two
version
is
released,
we
can
automatically
trigger
a
new
hand
chart
and
the
back-end
team
use
github
release,
while
the
front-end
team
just
rely
on
git
tag
for
the
releases,
so
it
really
depends
for
the
documentation
as
well.
We automatically retrieve the latest version and we automate our links in the different projects. And in the case of the dev environments, we use it to change to the same version everywhere: we bump the CI environment, we bump the dev environment — we bump everything at once.
That's it. We've been using it a lot on different projects, and we learned quite a few things. The first one is that a command-line tool is really great, because it gives you feedback very quickly: you just go into a git repository, run `updatecli diff`, and you automatically see what would change in the project. If you want to make the change locally, you can do the change locally and then test everything. It's really portable, because it's a Go static binary.
So it works on all the distributions, architectures, and so on, and you can just run the command from a CI environment — we use it from Jenkins and GitHub Actions. On the other side, writing manifests is annoying. Even though we pay attention to it — we have never introduced breaking changes in the manifest format —
A
Yet
we
still
have
to
write
those
manifests.
We
have
to
store
those
manifests
somewhere
at
this
time.
I
don't
see
any
other
way
to
automate
dependencies
where
you
don't
have.
Where
you
don't
know.
The
data
structure
that
you
want
to
know
that
you
want
to
update
in
advance,
so
maintaining
is
a
bit
annoying
because
the
way
we
proceed
is
we
deprecate
some
keys,
for
example,
in
the
manifest,
and
then
we
wait
for
a
few
versions
before
totally
removing
that
key.
A
So
that
means
that
from
time
to
time
we
need
to
do
a
bit
of
cleaning
in
the
in
the
project
and
well.
We
learn
how
we
learn
constantly,
but
on
the
other
side
yeah,
it
brings
us
a
lot
of
flexibility.
So,
for
example,
if
we
realize
that
it
just
annoys
you
to
update
a
specific
dependency,
we
just
merge
them
into
one
specific
batch
into
one
pipeline.
So
it's
an
experiment.
So
it's
a
site,
it's
a
project
that
we
started
on
the
site,
so
I'm
not
working
on
it
full-time.
A
We
use
it
on
the
jenkins
project
and
few
other
places,
but
yeah
just
it's
still
at
an
experiment.
At
this
time
the
different
plugins
are
driven
by
what
we
need
so
yeah.
That's
it
yeah.
If
you
have
any
questions,
let's
talk,
I
would
be
curious
to
know
how
you
manage
our
dependencies
updates
and
yeah.
If
you
want
stickers,
have
some
stickers
and
check
out
the
project
thanks.
So yeah, if you are interested: the thing is, we built it in a way that we can easily extend with different plugins. For example, a source just retrieves a version, and that's the easiest scenario — you just have to write a function that says how you retrieve the version. I showed you GitHub releases; we also have, for example, Maven repositories, and that's really easy as soon as you have an API. When we want to run a condition,
that's the same: all you have to do is write a function that checks whether the thing works or not. When it comes to targets, it's a bit more difficult, because then you need to update a file, for example, and that can be a bit more tricky depending on what you want to update. So we have some fallbacks. I've been maintaining infrastructure for quite a long time, so I rely a lot on YAML files.
We usually have to manage YAML files and Dockerfiles, and the fallback that we have is to just bump a file based on regex rules, which is sometimes convenient. But, well — I like that saying where you had one problem, you solved it with a regex, and now you have two problems to solve. For me it's really a last resort: if I don't have any other choice, that's what I do. Otherwise, we can also run some small scripts.
An example of that: we use Docusaurus on one of the projects that I maintain, and I wrote a small script that just runs Docusaurus commands, so you can generate the website for the next version.