From YouTube: 20180516 sig cluster lifecycle
A: This is the kubeadm office hours meeting. If folks have a chance, I put a link to the agenda in chat — if you could add your deets and any topics you want to discuss there. Right off the bat we have the one we were talking about yesterday. Lucas, if you want to start us off with a discussion around the kubelet config?
A: For the edification — well, I read your PR and I already know most of how the workflow goes. I have a lot of comments about upgrades, but we'll save those till the end. So if you want to, like, give people the TL;DR of the details, I think that would probably be useful for educating those who've not been following this stuff. So, okay.
B: So currently we don't have any interface between kubeadm and the kubelets, so kubeadm can't tell the kubelets what to do at all, and this is a challenge because, for example, we set things like the service subnet in kubeadm. If we change that in kubeadm at init time, it's gonna break all the kubelets' DNS resolving.
B: So then you have to go and ssh into every single node and create a custom drop-in file, which is really tedious and error-prone, and only then your DNS resolution works. So getting this cluster-level configuration — making that information flow down to the kubelets — is something we've wanted to do for a long time. I remember discussions in Austin, or Seattle 2016 or something, when we talked about this being something we have to do, and here we are. Hopefully this is the first time we actually execute on it.
B: So now we have versioned config, and CLI flags are bad for a lot of reasons. The first thing is that there's no versioning: we have only one API and that's what's defined there. We can't do any conversions when things change, so we have to go through the six-month deprecation cycle and change every deployment just to, like, enable a flag or change something.
B: Changing defaults is also really tedious. But with the component configuration of the kubelet, we can do things like conversions between versions easily — it's actually automatically handled by the API machinery, and things like that. So the proposal to execute on, in time for 1.11, is to make kubeadm write a config file for the kubelet. Inside of the kubeadm API we will embed the whole structure of the kubelet component configuration. So you can do kubeadm init —
B: — that's with a config, and inside of that config you have what configuration all the kubelets in the cluster should be configured with. kubeadm will then do the normal init flow. The first thing it does in the preflight checks is to execute `systemctl start kubelet` if it's not already running. It then reads the config and uses it internally. kubeadm will in this case also enforce a few security-related options for the kubelet config — for example, securing... yeah.
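As a rough illustration of the embedding being described — the kubelet's own versioned component config carried inside the kubeadm config passed to `kubeadm init --config` — something like the following sketch (field names and API versions here are illustrative of the 1.11-era proposal, not an exact schema):

```yaml
# Hypothetical kubeadm config file for `kubeadm init --config`.
# The kubeletConfiguration block embeds the kubelet's versioned
# component configuration, so kubeadm can write it out for every kubelet.
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
networking:
  serviceSubnet: 10.96.0.0/12
kubeletConfiguration:
  baseConfig:
    clusterDNS:
      - 10.96.0.10
    authorization:
      mode: Webhook
    readOnlyPort: 0
```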
B: Then we have configuration for the kubelets in 1.10 as well, but when we're doing upgrades this is like a chicken-and-egg problem, as has been said, and hence we are working around it in various ways. But with this proposed solution we will have one static kubeadm drop-in for the kubelets. It won't change between minor versions, as it has done every time so far.
B: I don't think we should do that, because I don't see any benefit to it. So, I mean, again — with this new flow you only have to specify three parameters: the config, the bootstrap kubeconfig, and the kubeconfig. That's that's all, and as long as we just say you should have a running kubelet with these three pointers to files, it's okay, I don't —
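A sketch of the kind of static drop-in being described — one kubeadm-owned file whose only job is to point the kubelet at those three inputs (the path and file name here are illustrative):

```ini
# Sketch: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# A single static drop-in that never needs to change between minor versions.
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet \
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
    --kubeconfig=/etc/kubernetes/kubelet.conf \
    --config=/var/lib/kubelet/config.yaml
```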
B: That is also, like, one thing we have to configure there via a flag, but that is not — basically, anything that is still only specifiable via flag is either deprecated, purely experimental, or purely instance-specific, like node IP or hostname override. All right, so.
D: I think we still need to give ourselves the flexibility to be able to override those, and one of the things there was a discussion about on the PR — I really think that if we are going to manage a systemd configuration unit file, or a systemd drop-in, we really need to put that under /usr/lib/systemd, which is where a package-managed one generally goes.
B: I don't care strongly about that. But yeah — so what this proposal is about is the generic configuration: things like authorization, authentication, ports to enable, what DNS to use, stuff like that. That is cluster-centric, since it's generated for all the kubelets. But I still want the debs to manage — to have this static —
B: — a static drop-in file for the kubelet, from the kubeadm deb, that sets the kubeconfig, bootstrap-kubeconfig, and config flags. And, as you said, if we ever really, really, really have to change something in that, we can still do that — that's what we've been doing in all releases so far — but at least we're going to remove the churn of doing things
B: every minor release that affect the normal authorization and authentication stuff. For example, the cadvisor-port command-line flag is going to be removed in 1.12, and that means that if we do something wrong there, it's gonna start crash-looping if we have the wrong drop-in. With this new thing, we don't necessarily have to do things like that, I think.
A: ...a default unit file that gets dropped in with a package. But one piece that we're missing, and that we were talking about, is the minutiae of kubeadm init having the drop-in override capability as part of initialization, to cover a number of use cases that we've actually gotten coming in from issues — which include the cgroup override, which include a node IP, which include the other apparatus. So that way, if you do an init, it sets the other flags that we can't do as part of the dynamic kubelet configuration parameters.
B: Yeah, then I think we should do that, yeah. Another thing related to this is that I propose to split up the CRI-specific flags that we have currently, like --network-plugin=cni, the CNI config directory, and the CNI bin directory. These are all just for the Docker runtime and are useless for any other runtime, so I'm proposing to create one environment file for —
B: Cool, yeah — I'm in a meeting room on Wi-Fi, so it might not be the best here. Yes, I'll take it again. So one thing we, I guess, want to do at this time as well is to separate the CRI-specific flags. So, for example, everything that touches CNI — which is specific to the Docker runtime — isn't configurable via kubelet config, again as it's just specific to this one runtime and isn't necessarily useful for the kubelet as a whole. So I propose
B: that we make one file, like — let's see — kubernetes kubelet CRI dot env or something, that includes just one environment variable, which defaults to the CNI things needed for Docker. If you want to set or use any other runtime, we say: replace the contents in this file with your Docker or your CRI socket and flags.
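Something like the proposed runtime file might look as follows — the file name and variable name are hypothetical placeholders from the discussion, with the CNI flags for the Docker runtime as the default:

```shell
# Hypothetical /etc/kubernetes/kubelet-cri.env
# One variable, defaulting to the CNI flags the built-in Docker runtime needs:
KUBELET_CRI_ARGS="--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
# For any other runtime, replace the contents with its socket instead, e.g.:
# KUBELET_CRI_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock"
```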
B: So I think — so we already have the KUBELET_EXTRA_ARGS thing, so I mean it's already technically possible to specify new custom flags. I think having another file, like an environment file we source this KUBELET_EXTRA_ARGS parameter from, would be beneficial. And I also think three — like, three different files — would be useful: one for user settings, one for the kubelet things kubeadm might have to do at runtime, and one for the container-runtime-specific bits.
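The three-file split could be wired together in the unit roughly like this (file paths are hypothetical; `EnvironmentFile=-` is standard systemd syntax where the leading `-` tolerates a missing file):

```ini
# Sketch of a kubelet drop-in sourcing three environment files:
[Service]
# kubeadm-owned flags, regenerated by kubeadm at init/join/upgrade:
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# container-runtime-specific bits:
EnvironmentFile=-/etc/kubernetes/kubelet-cri.env
# user-owned overrides, never touched by kubeadm:
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_CRI_ARGS $KUBELET_EXTRA_ARGS
```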
F: I would suggest a different approach — like the one we have right now, with kubeadm-specific parameters which we can generate by a kubeadm init command, and where users would be able to add a drop-in with, like, a later-sorting file name, so a drop-in which will override something in what kubeadm generates.
D: The biggest thing is — really, the one thing that we don't have right now is a clear distinction between what is owned by what part of the process. And I think we really need to document that, and start giving instructions to users on how to actually interact with the files that we're providing.
A: Well, okay — we could rathole on this piece. So, like, I think we're in agreement that we need to modify this, and I want to make sure that Lucas can go on to the other pieces. I think we're in violent agreement, but the minutiae need to be hammered out. Does that seem fair to everyone?
B: ...configurable via the file that is versioned, and only have this, like, for small specific use cases — it's things like node IP that are interesting there, and similar. But in general, for like 90% of the options, I want us to use versioned config. And hence the proposal: kubeadm embeds the kubelet's component configuration in its own API, and we enforce things like authorization for the kubelets and stuff.
D: I have a question about that, Lucas. From what I've read, the initial config file, even when using dynamic config, is just read in at start time, and then the config it pulls down is written into the directory specified at the command line for the dynamic config. With that being the case, shouldn't the config file that we're writing down — for either dynamic or non-dynamic config, that kind of initial config — live somewhere under, like, /etc/kubernetes?
B: So the reason for that, I guess, is that we do clear the directory when we do kubeadm reset, to get everything into the clean state. And also — I mean, I don't have strong preferences on where to put it, but I think what is used by the kubelet at runtime is okay to have there. For example, we have the config.yaml file there; we have a PKI directory there with some...
B: If I proceed — it might be clearer, because this config.yaml under /var/lib/kubelet is gonna be changed when upgrading. So, I mean, we're basically emulating the dynamic kubelet config with this initial config. So now we've written down this config; then, in the init flow, we're uploading it to a config map which is versioned per —
B: Yes. So when you do join, it works as earlier, doing the discovery of things. kubeadm join will then grab the config map using the bootstrap token credential, put it onto disk in /var/lib/kubelet, and now the kubelet can start up, do the TLS bootstrap, and work normally. When we then have an upgrade — let's say we upgraded the API server to 1.12 —
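The download step described for join can be approximated by hand with kubectl — the per-version config-map name and data key below are assumptions based on the versioned-config-map scheme being discussed:

```shell
# Roughly what `kubeadm join` does for the kubelet config (sketch):
# fetch the versioned kubelet config map, write it to disk,
# then restart the kubelet against it.
kubectl -n kube-system get configmap kubelet-config-1.11 \
    -o jsonpath='{.data.kubelet}' > /var/lib/kubelet/config.yaml
systemctl daemon-reload && systemctl restart kubelet
```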
B: So flags take preference over the config. And then I think, for that advanced use case, would this kind of work: you do kubeadm upgrade, you're upgrading now to 1.12. You have the new 1.12-based configuration set in the cluster in a config map, and then, when you do the update-configuration step on the node before upgrading the kubelet, the local one is updated as well.
B: I think this could easily take an argument — a config map name or whatever — so that, if you have a specific machine profile for this node, you're just redirecting to another config map you own. So then you've, like, done whatever to your thing. I mean, if you have a specific machine profile, you can tweak it as you like and put it in another config map, with your reference, for download.
D: Yeah, I think that's fine for the standard kind of init-and-join scenario, or even an upgrade scenario, but that doesn't necessarily address the usability around modifying config out of band from a join or an upgrade. And I don't think we can just rely on the command-line args, because a lot of those are being deprecated and removed, and new configuration options are only being added in the config right now. So we need to take that into account at some point.
A: So I just want to be clear that, like, for most of this I'm not opposed to the workflow, not opposed to the idea — and in fact it cleans up the code from your PR in a couple of different ways that are actually beneficial. There are some questions I do have with regard to upgrades that are not clean at all. I don't know — where do you want to...? We talked a little bit about some of the details about init and join.
B: I could do some, but yeah — I mean, I definitely agree we should have a doc sprint or whatever at some point after code freeze — some kind of holistic planning. Totally, I agree there: identify, like, create a bunch of user stories. I'm basically doing this kind of thing in my notebook right now with regard to security profiles — trying to identify the different user stories for security we want to support in kubeadm — and have that for a future version. And we're — I mean, I —
A: The upgrade scenario is a little scary to me, because there's a bunch of people who currently have — I'm gonna segue right into it — people currently have a bunch of unit file overrides, right, and a bunch of systemd drop-in overrides that they've specified. And for this upgrade scenario we need to have a very detailed outline view of the world of how they're going to transition to this configuration change, because some things will map into the config and some things won't.
D: One thing, Tim, is: in general, there's been a lot of guidance where people are just overriding the drop-in file that we're managing in the package. I think we're already breaking those users today on upgrade, when they update the kubeadm package and it clobbers the drop-in file at that point. So —
B: I think the most common recommendation — I might be wrong — is to set the KUBELET_EXTRA_ARGS or something wrapping it, right? And that is setting the KUBELET_EXTRA_ARGS environment variable, which is not touched by kubeadm. So even though we do this whole upgrade switch to the new thing, that is still gonna work exactly as before, but —
B: So it has been this scary every time, and we're trying to make it less scary over time. So that upgrade flow looks like — right now, what we have in our docs is: you curl down a kubeadm standalone binary, and you use that standalone binary to upgrade your API server. So we have one thing which is really annoying, and that is that apt-get upgrade on the master would pull both a new kubeadm binary and a new kubelet version. If we did that, it would kill things: the kubelet would be of a higher version —
B: — let's say 1.12 — while our API server is 1.11, and hence we can't start with apt-get upgrade. This is troublesome, because I'm pretty confident that a lot of users do apt-get upgrade, like, all the time, without thinking about it. And I mean, nothing should happen, but in general we don't allow, nor e2e-test, our clients being of a higher version than the API server itself. So that's what we're really trying to do: not make that happen, and always have the API server at an equal or higher version. So yeah, I don't —
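One standard way to keep a routine apt-get upgrade from pulling a newer kubelet than the API server, as described above, is to pin the packages:

```shell
# Prevent routine upgrades from touching the Kubernetes packages;
# unhold explicitly when performing a controlled, ordered upgrade.
apt-mark hold kubelet kubeadm kubectl
# ...later, during a planned upgrade:
apt-mark unhold kubelet kubeadm kubectl
```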
B: — think we have that bit at least okay; we're doing worse in another area, I'd say. But anyway — so we upgrade the kube... but the problem is, with the current approach, we can't do apt-get install kubeadm, because that would grab the new drop-in file that is incompatible with the old kubelet that we have. So hence we have to curl it down, upgrade, and then upgrade.
B
B
B
And
yeah
on
on
after
upgrading
your
api
server
when
you,
when
you
should
upgrade
your
nodes,
it's
it's
like
read
the
new
config
kind
of
command,
and
then
you
can
do
apt-get
upgrade
first.
You
should
ideally
go
to
your
master
and
execute
cube
a
cube,
let
drain
and
cordon,
but
I'm
unsure.
If
users
actually
do
that,
because
yeah
I
don't
know
the
problem
is
that
it's
requires
basically
a
root
privileges
to
the
cluster,
to
do
it,
corn
and
grain.
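The per-node flow being described — cordon and drain from a machine with cluster credentials, upgrade the node, then uncordon — looks roughly like:

```shell
# Run from a machine with cluster-admin credentials:
kubectl drain <node-name> --ignore-daemonsets   # cordons the node and evicts its pods
# on the node itself: upgrade the kubelet package, e.g. via apt-get
kubectl uncordon <node-name>                    # mark the node schedulable again
```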
E: We, you know — we missed it. We missed some pretty big stuff the last round, and the testing plan is on everybody's to-do list, but it's not done yet, or even really started. So, like, you know, we can talk about having a month to put this through its paces as much as we want, but I don't know that we know what its paces even are.
D: Why not 1.11? Just because, you know, if we do formalize this into a KEP, and actually go through some of the discussion, and work through the different points of minutiae, and work up various different test cases — I don't see how we can do that and still get something merged by the code freeze deadline. I mean, my —
F: My current understanding is that our packaging is broken, and until we properly solve that — until we publish packages properly for all the most common distributions, with packages built for those specific distributions, not like the currently generated ones, and until we have, like, a repository for each major branch — I don't think we will have a good packaging solution.
F: Does that make sense to you? Well, I understand it's an incremental approach, but with the current way of, for example, generating new packages from the Bazel build, we cannot really produce packages which will be compliant with the standards for, for example, Red Hat or CentOS — and even the Debian packages we are producing are not fully Debian-compliant.
F: You should have it under /etc/sysconfig or something. Each of these distributions has its own rules for where systemd files should be located, and each of these distributions has, like, automatic packaging drop-ins for systemd — one requires that, when installing a systemd unit file, daemon-reload should be run; another says you should not. So this matrix of supported distributions is huge, and then — yeah.
D: I was going to say: I don't think we need to target, like, full distribution compliance right now. I think there are certain steps we can take that would get it closer. The biggest concern that I have right now is that we're talking about basically a relatively large change to the way that we're configuring the kubelet from how we are today, and to manage the complexity of that change, where users have been just kind of, like, modifying a file that we lay down in the packaging.
A: You have to go back to providing the testing cycles and the documentation cycles they're going to need to make this a success, because it is a huge change — I understand that — and it's a change in functionality and behavior. But it provides a set of capabilities in the long run that we've always wanted, because it gets us out of the business of eating it on every upgrade.
A: I think it is — I think you could have a drop-in file that basically is a commented-out section that says: for a person to opt in, uncomment this block, right? And a person would just do feature-gate blah-blah, and that would do the override for initialization. But the code paths are much cleaner with the current configuration inside of kubeadm, with your PR.