From YouTube: SIG Cluster Lifecycle - etcdadm 20210201
Description
etcdadm office hours
Agenda: https://docs.google.com/document/d/1b_J0oBvi9lL0gsPgTOrCw1Zlx3e7BYEuXnB3d2S15pA/edit#heading=h.e59q52mi9zxu
A: Let me share the doc link in the chat. I see Nate Taylor. Nate, I think you reached out; there was an issue that we talked about. It's great to see you in the meeting. I don't know if you wanted to talk about that issue; I don't have anything specific on the agenda. If you have anything, Nate, do you want to take it away and introduce yourself?
B: Sure. Nate Taylor, very amateur Go programmer, not very good, but I can read it, make my own sense of it, and sometimes add things. I'm looking to use etcdadm at my company to help with the deployment of our on-prem Kubernetes clusters.
B: One option is just pushing out a tarball, but I was hoping to try to use package management, and so I submitted a PR to skip fetching a package if it already found one.
B: Then Daniel pointed out to me that it handles cleanup too, which I had missed when I originally started looking at everything, and after talking it over, it seems like phases could be a nice way to handle things: skipping the cleanup phase, skipping the fetch phase, things of that nature.
B: I am definitely interested in helping with it. My biggest concern is that at times I have a hard time finding free time for this kind of stuff, and with my lack of experience I imagine it would take me quite a while to implement phases, especially when we take a look at kubeadm and the systems that they use to implement them.
A: Thanks for describing the issue. I remember we talked, I think in Slack or on the issue. There was a PR; I'll pull it up and put it in the doc in a minute. Yeah, we talked about phases. I wonder if maybe there's some smaller change that would sort of unblock this. Justin?
A: Do you have any thoughts? Right now etcdadm, the CLI, uses a dedicated cache. It looks in a specific folder, I think under /var/cache/etcdadm, for a tarball that corresponds to the etcd version that you want to use. If it doesn't find the tarball, it attempts to download it, I believe from the GitHub etcd releases, and then it writes the tarball into that directory. Once it's there, it extracts the artifacts and puts them into /opt/etcdadm. So it doesn't support a user-provided etcd, but it could, because even today etcdadm doesn't really rely on the location of the file; it uses a particular location to run etcd, but I think this might actually happen during install: it will run etcd to verify that the version it outputs is the one that it's looking for. So it's conceivable that a user could place their own etcd and tell etcdadm where to find it, and etcdadm would be fully capable of checking: is this the binary that I expect? But there's that issue of cleanup.
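The lookup flow described above, check a dedicated cache directory for a version-matched tarball and download only on a miss, could be sketched roughly like this in Go. The file-name pattern, paths, and function names are illustrative assumptions, not etcdadm's actual code:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// action says how to obtain the etcd release tarball: reuse the cached
// copy, or download it (e.g. from GitHub releases) and write it to Path.
type action struct {
	Download bool
	Path     string
}

// resolveArtifact checks the cache directory for a tarball matching the
// requested version; cached models the set of file names already present.
func resolveArtifact(cacheDir, version string, cached map[string]bool) action {
	name := fmt.Sprintf("etcd-v%s-linux-amd64.tar.gz", version)
	p := filepath.Join(cacheDir, name)
	if cached[name] {
		// Cache hit: extract from here into the install directory.
		return action{Download: false, Path: p}
	}
	// Cache miss: fetch the release and write the tarball into the cache.
	return action{Download: true, Path: p}
}

func main() {
	cache := map[string]bool{"etcd-v3.4.14-linux-amd64.tar.gz": true}
	fmt.Println(resolveArtifact("/var/cache/etcdadm", "3.4.14", cache))
	fmt.Println(resolveArtifact("/var/cache/etcdadm", "3.5.0", cache))
}
```

After extraction, the version check described above would run the installed binary and compare its reported version against the requested one.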
A: In theory, does that sound like... is there some smaller change, other than phases, that would unblock that? I think phases, like Nate was saying, would be a way to allow for cleanup. I'm sorry, I'm a little cloudy on what the cleanup issue is.
B: A little more detail; I had to pull up the code real quick to remind myself. When you run init, it does some things to make sure that it doesn't need to clean up an existing deployment, but when it gets to the point of unpacking the cache and everything like that, what it does is look to see if the cache already exists and grab it.
B: If not, it downloads it, and then it unpacks it into the install directories that you can specify; the default is /opt/bin. That logic is not skipped if the etcd binary, the etcd command, is already available. What happens is that etcdadm attempts to clean up, but if you don't specify the install dir, it just looks for the default, and so it skips cleaning up the binaries, which has worked for my needs. But obviously that's not a real solution; that's just some further information about it.
C: One thing which might be helpful would be to differentiate between the install and instances of a service. For example, Kubernetes, at least historically, has run separate etcd clusters for main and events, and those can be on the same machine, so we can imagine that there could trivially be three etcd installations on a single machine. There could be the system one, that you installed using packages or whatever.
C: If there's a Debian package for etcd, maybe I install it and bring up etcd for something completely different, outside of Kubernetes, for the sake of argument. Then I might want to run etcdadm to bring up a main cluster, and run it again to bring up an events cluster. So now, when I go to remove, say, the main cluster, I don't want to delete the events cluster, and I don't want to delete the software that I installed or that I have cached somewhere.
C: So I don't think that, in general, etcdadm should be deleting the binaries unless it knows that they are specific to an instance. What would that mean in practice? For example, underneath /opt/etcdadm we could have main, and then bin, which would hold the current version; we would literally copy the binaries from the cache, and then we would know that they were owned.
C: But if they were in /usr/bin, for example, we would not be deleting them, and if we were getting them straight from the cache, we would not be deleting them, because we don't know if someone else is using them. Similarly, with systemd unit files, we would only uninstall our own unit file, say the events one; we wouldn't uninstall the system one, unless you had actively pointed it to the shared, primary one that the system installs, or passed some flag to say: just clean up everything.
C: I don't know if that helps. I like what you did with changing the binary paths, but I'd argue that, in general, we shouldn't be deleting the binaries unless we know that they are in a location that's specific to our instance of etcdadm.
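The ownership rule being proposed, delete on reset only what lives inside the instance's own directory tree, comes down to a path-containment check. A minimal sketch, assuming a per-instance layout like /opt/etcdadm/&lt;instance&gt; that is not etcdadm's current behavior:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// ownedByInstance reports whether binPath falls inside the directory tree
// dedicated to one etcdadm instance. Only owned binaries would be removed
// on reset; anything in a shared location (/usr/bin, the shared cache) is
// left alone because someone else may be using it.
func ownedByInstance(binPath, instanceDir string) bool {
	rel, err := filepath.Rel(instanceDir, binPath)
	if err != nil {
		return false
	}
	// A relative path that escapes upward (".." or "../...") is outside
	// the instance tree and therefore not ours to delete.
	return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator))
}

func main() {
	inst := "/opt/etcdadm/main"
	fmt.Println(ownedByInstance("/opt/etcdadm/main/bin/etcd", inst)) // ours: delete on reset
	fmt.Println(ownedByInstance("/usr/bin/etcd", inst))              // shared OS binary: keep
	fmt.Println(ownedByInstance("/var/cache/etcdadm/etcd", inst))    // shared cache: keep
}
```

Using filepath.Rel rather than a raw string-prefix check avoids falsely claiming a sibling directory such as /opt/etcdadm/main2.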
C: For example, I'd need to look at exactly what I specify on init; it's been a while.
A: But yeah, it assumes that there's just one: it manages exactly one etcd process, and that etcd process is itself wrapped in a single systemd service.
C: Yeah, so we could add the notion of... well, I don't know whether the upstream project is going to continue to recommend, or even still recommends, splitting main and events into separate etcd clusters, but assuming they still do, we could certainly add the notion of everything-default, but with a second instance called events, or, actually, a second instance called cilium, which is the other use case.
C: Cilium, the CNI provider, or network-magic thing, also needs, or can in some modes be provided with, an etcd cluster. So there can actually be three instances of etcd running on a control plane node in certain configurations.
A: It sounds to me like what you are saying is that the problem that Nate has raised, which is, hey, a different source for, let's say, the etcd binary, and avoiding cleanup if etcdadm didn't install it, is a specific instance of maybe the more general problem of: hey, there might be multiple instances of etcd on a machine.
A: That's a valid use case; etcdadm should support that, and if it did support that, it would, well, maybe not necessarily solve this, but it would be a step in the direction of supporting different, or multiple, etcd binaries.
D: Prakash here; I just want to give a comment.
D: My understanding was that Kubernetes only deals with one cluster at a time, and if it is one cluster, it will be only one etcd at a time. However, the responsibility for multiple clusters goes to Cluster API, if they want to have and manage multiple clusters. So shouldn't it belong to Cluster API to invoke those extra etcds, rather than doing it internally?
C: I can take this. Certainly, originally there was one etcd backing Kubernetes, one instance of etcd backing a Kubernetes cluster, and then, I think for performance reasons, back in the etcd v2 days, the default configuration, or the configuration in the open source Kubernetes project, the kube-up configuration, started splitting that out into two.
C: This was a very long time ago: it split that onto two etcd clusters, and the idea is that the event objects, which are higher traffic, or higher throughput, actually don't matter. In other words, if you lose an event object in Kubernetes, it really shouldn't matter; they only last, I don't know, an hour anyway, and then they get garbage collected. Those would go into a separate etcd cluster. So every Kubernetes cluster that was brought up in this configuration, and then the ones which copied it, and there were strong reasons to copy it, because that was sort of the tested configuration, those Kubernetes clusters would have two etcd clusters backing them.
C: They would have one for event objects and one for everything else, and the Kubernetes API server, for this reason, supports a flag, or a set of flags, on the command line where you can say: these kinds of objects, or maybe this etcd path, goes to this etcd cluster. So there's no guarantee in Kubernetes, for example, that all the objects in your Kubernetes cluster come from the same etcd cluster, and the most visible effect of this is:
C: If you look at the resource version on events, it will not be in the same sequence as the resource version elsewhere. It probably doesn't matter, but there's no guarantee that, across two different kinds, the resource versions will be comparable, because they're not necessarily from the same etcd cluster, and you can actually observe that in a lot of configurations if you look at event objects.
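For reference, the API server flag being described here is `--etcd-servers-overrides`; a hedged example of routing events to a second etcd cluster might look like this (the endpoints are made up):

```shell
kube-apiserver \
  --etcd-servers=https://etcd-main:2379 \
  --etcd-servers-overrides=/events#https://etcd-events:2379
```

Each override is of the form group/resource#servers, so event objects go to the second cluster while everything else stays on the main one.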
C: I do agree that whatever tool is setting up the cluster should also be setting up etcd, in general, unless you want to do an external etcd cluster; but in general the tool, whether it's Cluster API or kOps or kind or whatever it is, should do that. I think what we'd like here is for all of those tools to use etcdadm to actually do the heavy lifting of setting up etcd.
C: And I like the way you expressed it, Daniel: if we address the broader issue, then the challenge that Nate has encountered can be seen as an instance of it, or related to it. I think what's different is that an instance of an etcd cluster doesn't necessarily own the binaries it is using, in any case.
B: I do believe that working on supporting multiple instances would, in effect, solve the use case that I was running into: instead of init immediately going and unpacking the cache, it makes sense to look to see if that's already done, in which case that covers my use case, and the cleanup as well.
A: I'm just trying to recall: kubeadm doesn't really have the same issue, I guess, because while kubeadm needs to deploy control plane components, it does that by delegating to the kubelet. It drops manifests that have image tags, and then the kubelet says: okay, I need to pull these. So kubeadm is not really installing, or cleaning up for that matter. You could use the same kubeadm multiple times, give it different Kubernetes versions, and then you'll end up with container images that span these different Kubernetes versions. So I'm just wondering: how can we have that same sort of simple experience with etcdadm?
A: I don't know off the top of my head. Or do we need something like an etcdadm install and then an etcdadm init, for example? Maybe, or maybe we require... I don't know; then I guess that breaks CLI compatibility, if we then require two separate steps.
B: I was going to say that kubeadm has the concept of a cluster, and its configuration files are great for being able to deploy multiple clusters onto the same nodes using kubeadm. If etcdadm had the same concept of a cluster, and it's not a cluster in this case, it would be an instance, but you could call it a cluster, I think it could accomplish the same sort of goal and help with the logic on what to process against and what to clean up.
C: Yeah, and we can continue to have etcdadm init do the default, whatever we define the default to be. So, for example, default to an instance that we call main, which might just not have a suffix, if we do suffixes; default to the standard ports, default to the standard version, and default to downloading from GitHub. Then we can have options to say: use the OS packages, or use this etcd version, or use this suffix to differentiate, and this port, that type of thing. That's fine. I think we can debate which should be the default, OS packages or downloading, but I don't think it needs to change.
C: I think we can continue to have the simple experience without ruining the capability to have more complicated options. Okay.
A: All right. I guess maybe we can sketch a little more on a GitHub issue, but that seems... I really like what you said. Tell me if I'm misquoting, but it's like: separate init from install, is that it?
C: It's separating it from ownership, I think, because an instance doesn't necessarily own the binaries; but if it downloaded them, it probably should delete them, unless they're in the cache. So, for example, one way to do that would be:
C: We expand those files into a directory that is specific to the instance, so that the instances are isolated, and if the cache gets deleted, it's not a problem, or if some copy gets updated in the shared cache, it's not a problem. And so, if there are binaries in the instance that I am deleting right now, and I delete that instance, I should delete that copy of the binaries; I should clean those up.
C: Obviously, if I'm using the shared OS binaries, I should not delete them, because I don't own those, unless I copied the shared OS binaries into my instance, in which case I should delete them. So it's this notion of: do I own this thing? Is it definitely mine, or is it not mine, and how do these things get cleaned up? That's what I'd be thinking about.
C: That's one way. The other way is... I think, actually, in a systemd unit file we can put additional metadata; I think we can put any sections we want in there, but don't quote me on that. I would probably lean on establishing our own subtree and saying that the things in this subtree are owned by this instance.
A: Okay, well, I'm just wondering: say I've got two instances; then, if I run etcdadm reset, that applies to the default, the instance that was deployed in the default fashion. Then how do we indicate the other one? Or is that just something we should talk about on the GitHub issue? I don't know if you have any ideas off the top of your head.
B: One of the things that I really like the idea of is following along with kubeadm's config file, but maybe taking it a step further for ease of use. For instance, if somebody wanted to just use the default method and just do an etcdadm init, my thinking is that, if a config file is not used to init an instance, we could drop one with the default settings somewhere, either in /etc/etcdadm or wherever, and then actions against that instance can be specified:
B: I want to act against this config file, and that config file would help it know: okay, I have the binaries, I need to clean them up, this is their location. And we could do the same thing even if the defaults aren't used; even if it's just flags on the command line, we could drop a config file with the options that were used, that overrode the defaults, as well.
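A dropped per-instance config file of the kind described might look something like this; every field name here is a guess for illustration, not an existing etcdadm format:

```yaml
# /etc/etcdadm/events.yaml (hypothetical; field names are illustrative only)
name: events
version: 3.4.14
installDir: /opt/etcdadm/events/bin
clientPort: 2381
peerPort: 2382
source: github-release   # vs. os-package or user-provided
ownsBinaries: true       # copied into the instance dir, so reset may delete them
```

Recording the effective settings, including ownership of the binaries, is what would let a later reset know exactly what it is allowed to clean up.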
C: I like that approach a lot. And then I can see there being... suppose I had two instances, one of which was, and maybe this is what you're getting at, the unnamed instance, the default one, and one of which was events, and I wanted to reset the events instance, but I forgot to pass it in.
C: I don't want to accidentally do that; I can see that being a big footgun. What we could do is recommend that, if people have multiple instances, they provide explicitly named instances, so that neither of them is the default. If you want to have two, you should probably call them primary and events, or primary and other, or whatever it is, rather than relying on one of them being the unnamed, default one, and one of them being the one where you have to remember to pass the flag; because if you forget to pass the flag, you do something you don't want to do.
B: Yeah, you could have restrictions on the instance name, based on the config files that exist, so that we know it's unique; and then, for things like the reset call, if no arguments are passed in, a warning: this will delete the default cluster, are you sure? Pass in a config file. A warning of that nature.
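The reset guard being suggested, refuse to touch the default instance when the target is ambiguous, could be sketched as below; this is illustrative logic, not the actual etcdadm reset command:

```go
package main

import (
	"errors"
	"fmt"
)

// guardReset picks the instance a reset should act on. If the user named
// one explicitly, use it; if not, only proceed when exactly one instance
// config exists, otherwise warn instead of silently resetting a default.
func guardReset(requested string, instances []string) (string, error) {
	if requested != "" {
		return requested, nil
	}
	switch len(instances) {
	case 0:
		return "", errors.New("no instances found")
	case 1:
		return instances[0], nil
	default:
		return "", errors.New("multiple instances exist; name one explicitly (this would delete the default instance)")
	}
}

func main() {
	if _, err := guardReset("", []string{"main", "events"}); err != nil {
		fmt.Println("refused:", err) // ambiguous: warn, do not reset
	}
	target, _ := guardReset("events", []string{"main", "events"})
	fmt.Println("resetting:", target)
}
```

The same check generalizes to any destructive subcommand that would otherwise fall back to an implicit default.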
A: Okay. You opened an issue, Nate, for implementing phases for the init and reset commands; maybe this is a new issue, and we can brainstorm a little bit more, but it seems reasonable.
A: I'd also like to hear from anybody else, because, maybe to play devil's advocate a little bit, we don't want to add complexity unless there is demand that justifies it, apart from something being useful on its own; and I agree that this is useful.
A: As an idea, though, it would help if we saw that it would be used, because I think it would add complexity. Anyway, who wants to create the issue? Nate, do you want to create it, or should I, and then we can go back and forth on where to go? I think we can.
A: Apart from that... actually, I guess I should ask: does that sort of wrap up that topic?
A: Okay, I didn't have anything apart from that. Oh, there was another fix PR.
C: I don't know if it matters for bash; I presume it doesn't matter for bash, otherwise none of our scripts would work. I have a vague concern around... I'll have to make sure that our copy... we have scripts that check that the correct headers are on top. I'll have to see what's gone wrong, what's happening here.
C: My guess is that the reason these are bash-y, and the reason the bash line is on line 15 instead of line one, is that we copied them from somewhere else, and I will have to go and see where we copied them from and find out why they did that. But this seems like a reasonable PR; I can certainly take this one, particularly if you want to look at 163.
A: Okay, let's follow up on the multiple-instance idea in an issue. It sounds like a nice direction to go, also just architecturally, to separate management of etcd from the idea of managing the binaries themselves; because etcdadm, in theory, all it needs is the etcd binary of the version that the user wants, and where that lives really is not relevant to managing etcd.
A: As long as that binary doesn't get moved or deleted; because etcdadm right now does create the systemd service, and that needs to point to the binary. But as long as that binary's there... interesting. All right, let's think about that a little bit more. Okay, that's it.
D: So, Nate, are you modifying 179 now, from phases to what we discussed, or is 179 still open?
B: I would say still open, and I was going to open a new issue for supporting multiple instances.
B: Okay, good. One thing that I would like to ask about as well: there was a pull request opened a little while back, adding certificate renewal functionality, and it looks like it was closed as stale. The last comment on it, I'll go ahead and put it in the chat, was that somebody was going to go ahead and check it.
B: I believe that the original contributor felt they had addressed everything, and so I was hoping to bring that up and see what we need to do to move it forward.
B: It doesn't, and I can comment on the PR if that's helpful. Okay.
A: Yeah, and let me see the PR itself; I don't think there are any conflicts, so we should be able to... actually, I don't know what we need to do. Do we need to reach out, since it was closed? I guess he closed it.
B: It was closed for being rotten.
A: Okay, excellent. So then we can reopen it. I'll ping him and see if he wants to make any updates.
B: Yeah. Unfortunately, I work in a slow environment, so being able to handle upgrades and everything is not easy, and so we have automation that renews all of our certificates every week, to make sure that we're always good. So I went to add etcdadm to that and saw this; I did pull the PR into my fork, played with it, and it's looking good. But I definitely don't want to be running anything that's not upstream.
A: All right, I will look at this, because I agree it's good functionality. I'm going to double-check to see if kubeadm has changed its approach; maybe there's something that we can learn, but otherwise I think this was in a pretty good state. There was just that question of whether we need to document a caveat for users, but it sounds like...
A: If you can add that info to the PR, that the API server will be able to just continue and pick up the rotated certificate, I would appreciate that. Okay, I've reopened it and I'll keep an eye on it. Thanks. Yeah, absolutely, thanks for bringing it up.
A: So, maybe in the Cluster Lifecycle meeting tradition, I will say: going once, going twice... all right, thanks, everybody.
A: Thank you for attending and for sharing your ideas and feedback, and we will see you, hopefully, next time around in a couple of weeks. Sure, thanks, have a good one. Thanks.