Description
Meeting notes: https://docs.google.com/document/d/1y7a6uaN8fwFJ7IRnvtxSfgz-OGFq6u7bKN6F7NDxKPg/edit
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on Github: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
A
Alright, good morning, good evening, everyone. Today is October 6 and we have a developer meeting in this session. We wanted to focus on the Keptn lifecycle controller, so Thomas will do a demo and then we will have a quick discussion. If you have missed it, the recording of the overview was published on YouTube — please review it — and there is also a repository with quick-start guidelines.

A
So I think this is the starting material for the presentation, and after that we have a few topics from the common agenda — I'm not sure whether Brett joins or not. Last week we had a discussion about the Prometheus service, so we can discuss that, and then there is a bunch of demos for new developments.
B
Yes, yes — hello, everyone. For everyone here who didn't see my presentation at KubeCon — oh, sorry, on YouTube — I will do a short recap and show a description of what the lifecycle controller is, how it works and, in the end, how it looks at the moment.
B
So let me share my screen and let's take this one — okay, okay! When we are thinking about the lifecycle controller: the lifecycle controller will be more or less an operator which resides in the Kubernetes cluster. Workloads are operated somehow — you might have a GitOps controller like Flux or Argo CD, but also some manual work, or a pipeline with a CLI tool which applies something on your Kubernetes cluster.
B
When we are thinking of typical Kubernetes applications, these applications are mostly not consisting of only one workload — a Deployment, a StatefulSet or whatever — they mostly consist of multiple workloads, and in the end we also want to know whether the whole application works and whether the whole application is tested properly.
B
So what you see here is kind of a sandwich of what our lifecycle controller does. Our lifecycle controller is capable of executing pre-deployment tasks. Then it lets Kubernetes do the deployment, and after we find out that the pods — the workloads — are technically healthy, we do the post-deployment part, and our lifecycle controller adds observability to the whole process. What we see in this window is that we get a full trace of the deployment.
B
Furthermore, we will get some new metrics, such as: how often has this workload or application been deployed? How long did it take, and how often did it fail? Furthermore, we are also issuing events on everything we are doing — so, for instance, if the pre-deployment phase of a workload or an application is finished, then we will issue a completed event, and therefore it will also be better observable.
B
The second part of the lifecycle controller standardizes some kind of task execution. When you annotate your workload with our Keptn annotations — and I will take a closer look at this afterwards — you are able to execute, at the moment, functions, which we will also see in the demonstration afterwards. This could also be containers and CLIs, and in the end we could also trigger external control planes — like Keptn, at the moment — via CloudEvents.
B
What we see on the lower part of the slide is that, in the end, the lifecycle controller will start the pre-deployment checks based on the application in the first step. Afterwards, it might do pre-deployment and post-deployment tasks based on the workload, and in the end, after all of the workloads are deployed, it will trigger the post-deployment checks of the application.
B
Currently there are some examples in our lifecycle controller repository — for instance, the PodTato Head deployment — and what you see here is a typical Kubernetes manifest, which consists of a namespace, deployments, services and so on. What we did to instrument this with our lifecycle controller is add some annotations — for instance, to the podtato-head-entry deployment, where we say that this belongs to the app podtato-head.
B
This belongs to the workload podtato-head-entry, in version 0.1.0. And the most important part for today's demo is that we also annotated which Keptn task we want to use as a post-deployment task.
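As a sketch of what this instrumentation looks like (names and values here are illustrative, taken from the podtato-head example described above — check the lifecycle-controller repository for the exact annotation keys):

```yaml
# Hypothetical sketch of the annotated podtato-head-entry deployment;
# annotation keys follow the keptn.sh/* convention described in the demo.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podtato-head-entry
  namespace: podtato-kubectl
spec:
  template:
    metadata:
      annotations:
        keptn.sh/app: podtato-head             # the application this workload belongs to
        keptn.sh/workload: podtato-head-entry  # the workload name
        keptn.sh/version: "0.1.0"              # the workload version
        keptn.sh/post-deployment-tasks: post-deployment-hello  # Keptn task run after deploy
    spec:
      containers:
        - name: server
          image: ghcr.io/podtato-head/entry:0.1.0
```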
B
Furthermore, I added an init container to this deployment, so that we see that our post-deployment checks are working.
B
Furthermore, I also added a pre-deployment task for the other services, which checks that the entry service is available before deploying the service. This is a very, very simple solution in this case: we simply wrote a TypeScript function — just a second — which checks whether an HTTP endpoint is available before we start the deployment, and what you see here is a typical function.
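Such a check could be expressed as a task definition with an inline Deno/TypeScript function, along these lines (a hedged sketch — the resource fields and the `url` parameter key are assumptions based on the demo, not verbatim from the repository):

```yaml
# Hypothetical sketch of a KeptnTaskDefinition with an inline function
# that fails the task if an HTTP endpoint is not reachable.
apiVersion: lifecycle.keptn.sh/v1alpha1
kind: KeptnTaskDefinition
metadata:
  name: pre-deployment-check-entry
spec:
  function:
    inline:
      code: |
        // parameters arrive as JSON in the DATA env var; "url" is an assumed key
        const data = JSON.parse(Deno.env.get("DATA") || "{}");
        const url = data.url || "http://podtato-head-entry:9000";
        const resp = await fetch(url);
        if (!resp.ok) {
          // a thrown error fails the task, so the deployment is held back
          throw new Error(`endpoint ${url} not available: ${resp.status}`);
        }
```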
B
So what we have seen in our pre- and post-deployment task annotations are names of Keptn tasks; the task definition for each of them is also specified in this repository, so it gets deployed with the workload. For instance, what you saw here is the check of the entry service, and what we can do is share such functions: we can host them on a web server — in our case this is hosted on GitHub — and simply add parameters to them.
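A shared function hosted on a web server would be referenced rather than inlined — roughly like this (again a sketch under assumptions; the URL is a placeholder, and the `httpRef`/`parameters` field names should be checked against the current CRD):

```yaml
# Hypothetical sketch: referencing a function hosted on GitHub and passing parameters.
apiVersion: lifecycle.keptn.sh/v1alpha1
kind: KeptnTaskDefinition
metadata:
  name: post-deployment-slack
spec:
  function:
    httpRef:
      url: https://raw.githubusercontent.com/example/functions/main/slack.ts  # placeholder URL
    parameters:
      map:
        text: "deployment finished"  # made available to the function at runtime
```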
B
This is a simple Slack function which posts something to a Slack channel — this gets shown more in depth in the YouTube video. Okay — with all of the things we have in there, I think we have everything we need to deploy this service. In the future it might also be possible to run post-deployment analyses and so on — the things we know from Keptn at the moment — so that we can trigger SLO evaluations and find out whether the application is really working.
B
I have exactly the same configuration as we've seen in the git repository on the upper left side. On the upper right side I watch the pods which get created on the Kubernetes cluster, and on the lower side I have the Keptn workload instances: a list of the KeptnWorkloadInstance resources, which describe a certain version of a workload that is deployed on the environment.
B
So I will simply do a kubectl apply of the manifests, and we should see in the log that something is happening now. We see that our pre-checks got started, and we also see that they are erroring at the moment — and this is okay, because we are taking over the behavior of Kubernetes and re-trigger tasks as often as necessary. Let me make this a bit smaller so that you see more.
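The demo flow above boils down to commands along these lines (illustrative only — the namespace and resource names assume the podtato-head example and a cluster with the lifecycle controller installed):

```shell
# Apply the instrumented manifests (deployment, services, task definitions)
kubectl apply -f .

# Watch the task pods spawned for the pre-/post-deployment checks
kubectl get pods -n podtato-kubectl --watch

# Inspect the per-version workload state tracked by the operator
kubectl get keptnworkloadinstances -n podtato-kubectl
```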
B
We also see that we have the services down here, and that the podtato-head-entry has started now. After it has started and is running, we trigger the post-deployment check — and this is completed now. What we also see is that, now that this is running, all of the other jobs are completed, and after some time all of the other pods should go to a running state.
B
Yes, now they are created, and after some time — the PodTato Head starts very fast — they are all running. What we're seeing in the workload view at the moment is the status of all of this, and we see that, for instance, for our podtato-head-entry workload the pre-deployment, deployment and post-deployment have succeeded, as they have for every other service.
B
We could also try to update one of these applications. For instance — let's say we'll use the web... sorry, let's say we use the entry service again. Then let's simply apply this, and we see that the new version gets created here. We also see that the pre-deployment has already passed; the deployment is pending.
B
For some reason this takes a bit longer, but that should not hold us back. Okay — so this was more or less the demonstration of all of this. What we are dealing with at the moment are mainly the observability parts — for instance, enhancing the traces; at the moment I think only the webhook is instrumented.
B
And, yes, with all of this we will do some demonstrations at KubeCon and try to find out whether this is something people need and whether there are use cases we didn't think of, because with this function execution part we can do lots of things. I think that's it from my side — do you have something to add at the moment?
A
No, it looks pretty good. I had a few comments, or rather questions, but no content-wise remarks. One question is rather about security matters, for Keptn functions.
A
When you download from a URL, it would be quite important to have support for checksums or whatever, because otherwise it's basically unreliable — especially for external URLs, like GitHub, today.
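Support like that usually takes the shape of pinning a digest and verifying before execution; a minimal sketch with standard tools (the file and the "pinned" hash here are stand-ins — nothing like this exists in the prototype yet):

```shell
# Minimal checksum-verification sketch: compare the downloaded function's
# SHA-256 against a pinned value and refuse to execute it on mismatch.
echo 'console.log("hello")' > check.ts            # stand-in for a downloaded function
expected="$(sha256sum check.ts | cut -d' ' -f1)"  # in practice this hash is pinned in config
actual="$(sha256sum check.ts | cut -d' ' -f1)"
if [ "$actual" = "$expected" ]; then
  echo "checksum ok"
else
  echo "checksum mismatch, refusing to run" >&2
  exit 1
fi
```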
B
Yeah — at the moment we are in a very early prototyping phase, and there are definitely things we have to add. Exactly — especially support for GPG keys or whatever, to verify that what we execute is exactly what we expect.
B
We might also — maintainers of other projects might also have the possibility to add their own things inside of all the things we are doing here.
B
At the moment we only implemented functions, to have something in the prototype, but it might also be possible to say: I do not want to execute only functions — I want, for instance, to trigger a CloudEvent, or trigger an event in some other tool. So it only depends on the integrations.
B
So if there is someone who integrates other functions — let's say Knative functions or whatever — this should not be a big problem, and we could also use such things.
B
This would make some use cases a bit easier. But at the moment, implementation-wise, this is very open.
A
So yeah — the project has gone really well, and it's interesting to see where it goes next. We have quite a lot of people on the call — does anyone have additional questions or feedback?
B
The only thing to mention at the moment is that this is limited to Kubernetes clusters on 1.25 — I think... so higher than 1.24, sorry — and it doesn't work on older clusters, but that's also more or less by nature. But otherwise...
A
The only real limitation for that is being able to run in public clouds which haven't updated to the recent Kubernetes versions, but by the time it reaches a production state, I believe everything will be updated anyway.
A
So I guess then we move to the next topic — Brett McCoy. He's not on the call, I guess, so I can just summarize what this topic is about, for those who haven't watched the previous meeting.
A
So there is an issue in the Prometheus service: basically, it stopped working with kube-prometheus-stack, which is the default distribution, and the feature request was adjusting the service so that it works with the Prometheus Operator too. Again, this is a feature request, so I don't think it's something that would be prioritized right now, but it's just for your information.
A
Since Brett is not here, we could probably take it offline in the chat — but if anyone is interested in the Prometheus Operator or has experience with it, I think it could be a good enhancement for the service.
A
So this is all for this part, and Giovanni is the next one.
D
Okay, now you should be able to see the agenda, right? Perfect. So the first ticket I would like to highlight is that we finally removed Highcharts and fully switched to D3 for Keptn. Now all the graphs are rendered with a new fancy library, D3, which boosts the performance — the response time of the page — and we can get rid of a lot of old code for Highcharts, a whole library that we were not able to update due to a paywall for any update.
D
The operator — now, here it is — automatically emits spans for the whole lifecycle. Oopsie, I'll scroll this. For the whole lifecycle, you can see here that two spans have been created. The first one is from the webhook that annotates the pod; the one we see here starts the trace, because there is no parent ID, and it contains all the important attributes — the Keptn deployment, app name, workload version and so on and so forth. And then we also have a second span, mixed in with different logs, which is a child span.
A
One question: for the traceparent, is it possible to pass it in the initial event — basically the one which triggers Keptn for evaluation?
A
It's super important, because it's the only way to troubleshoot if something goes wrong.
B
Yes — as long as we are triggering event-based executors or whatever, this is possible. On the other side, if we are triggered by GitOps tools, this might get a bit more interesting, because then we would need to get it into the manifest which is applied.
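For reference, the W3C trace-context value that would need to be propagated has the shape `version-traceid-spanid-flags`; one conceivable way to carry it through a GitOps flow is as metadata on the applied manifest (purely hypothetical — the values are placeholders and this is not an implemented mechanism):

```yaml
# Hypothetical sketch: a W3C traceparent carried on the applied manifest
# so the operator could continue an existing trace instead of starting one.
metadata:
  annotations:
    traceparent: "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
```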
D
I'm not sharing the screen anymore — sorry, everyone. I wanted just — well, anyway, the link is in the public document. Please go over the documentation and tell us if we need to enhance some sections or if something does not fully click.
D
It's not necessary to show it. I wanted just to say that we are trying to follow the OpenTelemetry semantic conventions as closely as possible, and we also plan to contribute semantic conventions for the deployment lifecycle back, since OpenTelemetry has a blind spot on this topic.
C
Let me share the screen real quick — can you hear me this time? I think it should be working. Alright, starting off with the bootstrap of the CI pipeline for the lifecycle controller: this one's pretty straightforward, so I have just a quick example here from our main branch.
C
It starts off with a preparation step, where we evaluate the changed files of the project, so it won't execute the whole matrix for all modules if not all files have changed.
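A common way to implement such a changed-files gate in GitHub Actions looks roughly like this (illustrative — the filter names are made up, and the actual pipeline may use a different mechanism):

```yaml
# Illustrative sketch: skip a module's build when none of its files changed.
jobs:
  prepare:
    runs-on: ubuntu-22.04
    outputs:
      operator: ${{ steps.filter.outputs.operator }}
    steps:
      - uses: actions/checkout@v3
      - id: filter
        uses: dorny/paths-filter@v2
        with:
          filters: |
            operator:
              - 'operator/**'
  build-operator:
    needs: prepare
    if: needs.prepare.outputs.operator == 'true'   # gate on the filter output
    runs-on: ubuntu-22.04
    steps:
      - run: echo "build the operator module"
```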
C
This is just for performance. After that step we have the compile step, where we just try to compile the module, and in parallel to that we execute the unit tests. If those two steps are complete, we build and push our Docker images, and after the images are built and pushed, we create our manifest files and attach them to the CI run here.
C
The next one is introducing Go and Yarn auto-caching for the CI pipeline. This one is also pretty straightforward: here, in the setup steps for Go — and, respectively, down here for Yarn — we just cache the dependencies and define a key for the cache, so it can easily be restored if nothing has changed.
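The setup actions ship this caching built in; a hedged sketch of the steps described above (versions are assumptions, not the exact pipeline configuration):

```yaml
# Illustrative sketch of the Go and Yarn dependency caching described above.
steps:
  - uses: actions/checkout@v3
  - uses: actions/setup-go@v3
    with:
      go-version: '1.19'
      cache: true            # caches the Go module and build cache, keyed on go.sum
  - uses: actions/setup-node@v3
    with:
      node-version: '16'
      cache: 'yarn'          # caches Yarn dependencies, keyed on yarn.lock
```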
C
And last but not least, we have a research ticket regarding the Go vulnerability checker. I did a quick PoC and tested the vulnerability checker, and I found that it's not a replacement for Snyk itself, since Snyk is also usable for different security scans — also for Node, for instance — while the vulnerability checker only checks Go modules, like the name implies. So I created a follow-up ticket to introduce the vulnerability checker into our pipeline — it can also be scheduled for, like, once a week or so. It's a pretty nice tool, since it's not only scanning for dependencies that have vulnerabilities: it checks whether the code actually invokes the vulnerable parts of those vulnerable packages.
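govulncheck (the Go vulnerability checker discussed here) is run roughly like this; whether and how it lands in the pipeline is the open follow-up ticket:

```shell
# Install and run the Go vulnerability checker; unlike a plain dependency scan,
# it reports only vulnerabilities in code paths the module actually calls.
go install golang.org/x/vuln/cmd/govulncheck@latest
govulncheck ./...
```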
E
I will just show one thing today, and that's the KLC release pipeline. We actually have it open already.
E
We already created the first preview release — basically with the first features of the lifecycle controller — thanks to this pipeline. We've switched away from the process that we had in the Keptn main repository, where we used standard-version as a release helper tool; now we're using Release Please, which comes with a nice GitHub Action that basically takes care of the release for us — versioning, changelog generation, GitHub releases, tagging and stuff like that.
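A Release Please workflow has roughly this shape (a sketch — the release type and package name are assumptions, not the exact KLC pipeline):

```yaml
# Illustrative sketch of a Release Please workflow: on every push to main it
# maintains a release PR and, once merged, creates the tag and GitHub release.
name: release
on:
  push:
    branches: [main]
jobs:
  release-please:
    runs-on: ubuntu-22.04
    steps:
      - uses: google-github-actions/release-please-action@v3
        with:
          release-type: go                    # assumed; drives version/changelog handling
          package-name: lifecycle-controller  # assumed package name
```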
E
So this is much simplified. Basically, this will do the release, and the rest is just building our images, building the manifests for the operator and then attaching them to that release.
G
Okay — hopefully you're seeing it. I don't have much to present, as most of the things were already presented by Thomas, just a few minor things. In the last days we introduced a bug fix for the shipyard controller in Keptn — not in the Keptn lifecycle controller.
G
The thing was that in the update project API endpoint it was not possible to update anything without specifying the git credentials — otherwise Keptn would complain that you should specify the credentials. The fix was basically about disabling the validation of the git credentials in the update endpoint when they are not present.
G
For example, now it's possible to update only the shipyard.yaml file without specifying the git credentials, if there is no need to update them. And the second thing, which was not part of the presentation by Thomas: recently we added metrics to the Keptn lifecycle controller.
F
Okay, I just want to run something by everybody that I'm thinking about doing — before I do it and everybody goes "oh my God, she's lost her mind, undo it fast", which would be a nightmare. The original plan: we had hoped that we were going to be able to release the LTS docs under Docusaurus.
F
It doesn't look like that's going to be possible; we're hoping that we may be able to quietly do that soon afterwards. The plan was, when we go to Docusaurus — since we will have true versioning for the docs — that we will get rid of these subdirectories for each release, which are kind of a mess. I can see how they worked at one time, but pretty much —
F
let's face it: we're just copying them over and that's the end of it, and it's a nightmare maintaining the cross-references, because there are a lot of cross-references from those top-level sections, like Concepts and now Installation, which I probably moved too quickly. So my proposal is that for LTS we do that restructuring: we pull everything up, so what is currently under 0.19 — Operate and Manage and Define and all that other good stuff — will be at the same level as Concepts and Installation
F
when you look at the landing page of the docs. The biggest nightmare I think about is the update piece, and I've already rewritten the updates to have a generic "this is how you update", and then we can have subsections for details. To cover us for the time being, I've put in a little table that links to the release notes. I think we have the last three releases or something showing, but there is a table that goes back all the way,
F
as far as we've got in the source, for every release, with a link to the release notes and a link to the upgrade instructions, even the sub-points. There's a lovely table, so they can get to that. Will there be anything else we do? I think it's just general — let's face it, if people are looking at three different releases, they don't read three different releases of documentation.
F
Does anybody see any major problems with that? It's a little bit of a kluge, but I think it's a short-term kluge. It will also mean — because we were anticipating having to do that restructuring at the same time that we ported the content over to Docusaurus, so we were going to have all of those xrefs to deal with — this means I'm going to have a few excruciating days before LTS, but then it will be done, and when we go to Docusaurus it will be straightforward.
F
The structure will remain identical, and I think that's going to be a good thing for us. I do have a little bit of a concern: a couple of times when I've had a CI failure, I've seen a nasty little message popping up that leads me to think we may have to do a whole lot of upgrading to get Hugo to work after mid-November.
F
I'm not quite sure, and the next time I get it I'm going to copy the message and send it to people who know. But this would reposition us. We don't have all the features we want in Docusaurus, but I think we are very close to where we could put out sort of a kluge version of the docs under Docusaurus, and if we were faced with that versus reconstructing everything we're doing with Hugo, we might want to go that way. So this is my proposal — does anybody object?
A
I would rather object to moving all the documentation to the top level, because if we talk about Keptn v1 — which is basically Keptn LTS — there might be new major releases in the future. Then, with our LTS policy, the previous releases would be supported for a longer time, and once you move all the documentation to the top level...
A
...documentation baselines, like we do now with minor releases — you would need the same for major releases in the future. So my preference would be to still have a folder, or whatever, like 1.x, where all the documentation is aggregated.
A
Right — well, my expectation would be, at least with what is in the current LTS proposal, that we have this version which is maintained for a long time. So I wouldn't expect us to see the documentation copied and pasted every several months like we had before, right? It should be much more stable in these regards. So basically...
A
Probably — but currently our problem is that we have some documentation on the top level and some documentation on the version level, right. So we would actually follow your suggestion, but a bit differently: put all the documentation under 1.x — so, for example, the references etc. that we extracted to the top.
A
We could just put that top-level content under 1.x, so that all the documentation is centralized and within it all cross-references are fixed — then there will be no problem for the future migration either.
A
So again: currently we have snapshots for minor releases, and we also have some items like Concepts, Installation etc. on the top level, right. So we basically have two options. One option, as you propose, is to get rid of this folder and move everything to the top; the alternative option is to have something like a versioned tree,
A
where we put — let's say — Installation, Concepts, Quick Start, maybe even Explore, and it remains under 1.x as a version which is gradually improved, which is solid with regards to cross-references, and which keeps evolving. So basically we'd have the same top root, but under 1.x instead of the top level
A
by default. And why I wonder about that — again, from the user-experience standpoint: now we have Keptn 1.x, and we also have two projects which would eventually be part of the documentation. One is the Keptn lifecycle controller, KLC or something like that, and another one is Keptn Lighthouse.
F
I don't know — it's a possibility, I guess. The other question — and we don't have an answer — is how long till we're going to be able to switch to Docusaurus?
A
I'd be against it, because it would be a double migration. So my proposal is actually to look into Docusaurus and what we need to finish it. We had a discussion with Rajiv — as he presented last week, there is a bunch of scripts for external repository support — so he basically went ahead and tried to implement it, and his own GitHub Action for assembling things is one possible way to handle the obstacle.
F
Yeah — and what if this means we have to do major retooling in November just to continue to support Hugo?
A
Yeah, I just think maybe this level should support that extension in the future on the top level. So instead of putting everything on the top level right away, we reserve one additional level, so that we can include more documentation sources — let's say for sub-projects and new major versions in the future.
A
I think yes — for users we can make it simple: we can put a redirect there, and then we take all the risks.
A
Okay, so we take it offline. Okay, yeah — we'll also, together with Mac, come up with a proposal for how we do that.
F
One more question: do we know what the tree is going to be named? I mean, we said it's virtually 0.20, but when we...
F
"You're going to have to do it this way, and if you're using 1.2.1 you're going to have to do it this way" — or whatever that needs to be. Hopefully most of that will be covered in the reference pages, where we actually have a whole "differences between versions" section; we'll be working that into the content. So yeah, okay, cool. If anybody else thinks about this and has more thoughts, feel free to chime in on Slack and we'll figure it out. Okay.
A
So then we are taking it offline. Yes — thanks, Mike, for bringing this topic up. I added another topic, about the GitHub tasklists alpha, but given the time I will also move it to the chat. I guess it's rather interesting for those who manage tickets and try to implement epics on GitHub and other things — so you'll find it at the next community meeting.
A
Okay, so I'll drop the proposal — I can just quickly show it; probably that's enough. Where is it? Okay. So basically, yesterday we had a meeting with GitHub. They are introducing a new alpha feature for visualization of tasks etc. The idea is to have even more capabilities right inside the GitHub interface, including grouping of tickets, automatic referencing and providing some metadata in the visualization — and it's an alpha feature.
A
Basically it will be hidden behind a tasklist macro, so it won't be enabled by default. But if we agree to participate, we can play around with such layouts and provide feedback to GitHub. Kubernetes already enabled it, so they have a document with their notes, but generally they are positive about the feature — so I think for us it would be quite interesting to try it out, especially since we are trying to move a lot of project management to GitHub at the moment.
A
Okay, so basically this is what I wanted to present. If nobody is against it, I will just post it to Slack, and if everyone agrees we'll enable it — just for the sandbox at the moment. Okay.
A
So yeah, that's it. Thanks a lot, everyone. I will get the video published, maybe within one to two hours, so no need to wait if you want to share the demo with others.