From YouTube: Kubernetes SIG Cluster Lifecycle 20170822
B
Everyone, this is the SIG Cluster Lifecycle call of August 22nd. We're currently filling out the agenda and attendance list, and we have a couple of items to discuss. Code freeze is coming up soon, next Thursday I think, so we have to discuss that a bit, and test status, and some new features. So, Jase, could you please take the first one? That was yours.
C
Oh yeah, so we're planning the alpha release for tomorrow, and we're looking for green test signal on that in the release-master-blocking tests. You can go to testgrid.k8s.io to see the list of flaky tests, if you click on summary for release-master-blocking. If we can get all-green testing on them, we can get a usable alpha, which I think would be great for all the consumers of cluster lifecycle to work against. So, yeah.
B
There we go, in the chat as well. So basically the kubelet isn't creating its own directory if it doesn't exist, at master. I don't know when this regression happened, but sometime in the 1.8 dev cycle. So we need to get that in before the kubeadm tests will go green, because right now the kubelet is just sitting there doing nothing, it just crash-loops.
B
I'm currently reading the thread here, and it seems like it got approved six minutes ago, so I hope it can get LGTM'd as well and eventually merged in time for tomorrow. The submit queue has been a little bit flaky the past three or four days, so we'll see what we can do. I don't know, Jase, do you mean that we should bump priority or something on the PRs that need to get in for tomorrow, or what?
C
It's really just green test signal. So if it comes down to this causing a failure in the tests themselves, that's really the only thing to prioritize. I mean, really, I think cluster lifecycle should be a huge consumer of the alpha. So if there are things that you'd like to see for your own needs and testing in the alpha, then I'd say try and get those PRs in, but really the main thing is that we need green test signal across all those release-master-blocking tests. Yes.
B
Cool. Actually, one thing that landed some days ago for kubeadm: Jeff from Google has done great work on this. Basically, control plane CI images are now pushed to GCR, so it's easy to consume, for the first time kind of ever — it's easy to consume CI builds of the control plane from master. So that's really awesome, and it can be done with just kubeadm init with kubernetes-version ci/latest, which is cool, yeah. Well, we'll watch this PR and hopefully get it in for tomorrow, should be.
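The invocation mentioned above is roughly the following; this is a sketch of the documented kubeadm flag, assuming a kubeadm build from the 1.8 dev cycle or later, where the `ci/latest` label resolves to the most recent CI build pushed to GCR:

```shell
# Bootstrap a control plane from CI images built off master,
# instead of a tagged release.
kubeadm init --kubernetes-version ci/latest
```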
D
That's like a fundamental contract of DaemonSets, and so I had to kind of disable some of that code in order to actually test the strategy. And so I just want to get your opinion, or more generally the community's opinion. So I'll write something up today on how to cleanly relax that constraint for this strategy, without, you know, causing regressions otherwise, or just general changes in behavior that I'm sure people wouldn't appreciate.
D
So that's the status on that, but once we kind of figure out the best way to solve that, then I think we're in good shape, you know, because it works quite nicely. So I should hopefully get a resolution for that today and put a PR up, and it should hopefully be not too stressful from there. It's a pretty clean PR, it's fairly isolated, so hopefully it's low-risk and not too contentious. Oh yeah.
B
So there are a couple of PRs up from me targeting 1.8. The most notable ones: there's our dry-run functionality, which makes it possible to do kubeadm init dry-run and eventually kubeadm upgrade dry-run, and something like that. And we have a new config command, a subcommand of kubeadm, which basically allows you to update the in-cluster configuration for kubeadm, which is used when upgrading. So if we don't have this, kubeadm upgrade wouldn't know what the current state of the cluster is, nor how to upgrade it. So that's a new thing in 1.8, and before 1.7 users can upgrade, they should create this ConfigMap using the kubeadm config command. And then we currently have two PRs for upgrades: one with a smaller scope, to get it merged faster, and one with the full kubeadm upgrade command and implementation. I think both of these are in good shape; we just need more reviewers of the code, and approvals.
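A sketch of the commands under discussion, assuming the 1.8-era kubeadm CLI (the exact subcommand names were still settling during this cycle, so treat these as illustrative rather than authoritative):

```shell
# Dry run: print what kubeadm init would do without touching the host.
kubeadm init --dry-run

# Upload the in-cluster kubeadm configuration (a ConfigMap in
# kube-system) that later upgrades read to learn the cluster's state.
kubeadm config upload from-flags

# The upgrade flow built on top of that configuration.
kubeadm upgrade plan
kubeadm upgrade apply v1.8.0
```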
G
Yes, I've got a PR open; I dropped it in the notes, comments welcome. I think it ended up being a pretty straightforward change. The change here is about when bootstrapping nodes authenticate for the first time to talk to the certificate signing API: right now, in 1.7, they always authenticate as a single group, so there's only one type of bootstrapping node. After this change, a bootstrap token can carry extra group information, which we'll be using for HA — a deliverable in 1.9 probably, not in 1.8. Right, in 1.9 the new bootstrapping master nodes will be able to talk to the current master through some new API that doesn't exist yet, and authenticate themselves as a new bootstrapping master node. And this is also potentially useful, more generically, to have kind of node pools with different identities.
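As a rough illustration of the mechanism being described — a bootstrap token Secret that carries extra group information — something like the following config fragment; the field names follow the bootstrap-token Secret format, but the specific token values and group name here are made up:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Name must be "bootstrap-token-<token-id>".
  name: bootstrap-token-abcdef
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: abcdef
  token-secret: 0123456789abcdef
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  # The extra groups this token authenticates into, in addition to
  # system:bootstrappers; must carry the system:bootstrappers: prefix.
  auth-extra-groups: system:bootstrappers:example-node-pool
```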
B
Yeah, you know, by the way, I'm gonna be away sometime next week, so I'm not sure I will be able to attend then. But yeah, I mean, looks like it's tomorrow. Cool, at which time — 11?
B
It would be nice to have them, but I'm fine with enabling them when this goes GA as well, or something like that. Or, yeah, I don't know. I mean, this replaces the static token CSV file, partially or fully, but it has some other features as well. The main concern here was that it shouldn't be enabled by default; not all clusters may need it, and, I mean, this is in contrast to all the others — the other authenticators and controllers are enabled on demand. So if you specify the CSV file, the authenticator is automatically enabled, but in this case we're relying just on the API, and since it's an API, to even change it, it's harder, because we don't have an explicit signal from flags or configuration whether we should enable these or not — unless we have a flag for it by default, which we have now. But yeah, Mike, do you have any specific comments on whether we should do this or not?
D
I think for our midterm plans, at least from our side, with this: distributing the CA isn't that much of a problem for us. It's the CSR endpoint we would want to use, so that, you know, that's managed by the kubelet itself; but I'm not sure that we would necessarily use the bootstrap token, at least in the midterm. As long as we can distribute the CA, this shouldn't necessarily be a problem, as long as we can leave the CSR endpoint hooked up to, like, a pre-shared token or something like that, right. But...
G
That's kind of similar to what I am doing in some internal installers. Basically, the authentication mechanism is still useful — having basically token-based authentication where the backend is a Kubernetes resource, the secret. That's good. The exact mechanics of it are not as important anymore.
G
As to enabling the controller by default: potentially, it opens an avenue for escalating privileges that you might not have considered. Imagine creating a secret in this namespace — and it has a lot of caveats, because the secret you create creates a user, creates a token that authenticates, you know, with a fixed prefix. So you can't, like, create a bootstrap token that gets you into arbitrary users and groups, but I could see this being something that an admin might not have considered, and that enabling this is, you know, potentially a breaking change. It's kind of a far-fetched scenario, but I think it is, again — I can understand the hesitation about enabling it by default.
B
Yeah, so bootstrap tokens as such are basically, as Matt mentioned, two different mechanisms: one for validation and one for authentication. And there are two controllers in the controller manager, one for validation and one for authentication, and one authentication module. So it's hard to know whether we should enable it as a best practice or just let everyone opt in, but yeah, we can bring it up to figure this out.
G
I don't think it's too onerous to say that this is something the installer that's going to use this functionality should enable, just because the bootstrap token functionality — the validation and authentication side of it — is pretty intricately linked with the installer. It's not sort of a generic cluster functionality that other applications in the cluster are necessarily using.
F
Just the API server count thing, yeah. So I threw this over the wall to propose a fix for the API server count, and I got some comments on it — actually quite a few comments on it. So I'll address those today, and probably into tomorrow, and update the PR, but it seems like it has a little bit of traction and it's doable.
F
So the problem statement is: if you have multiple master API servers, the endpoints within Kubernetes do not eventually reach consensus on the number of endpoints there are, and there's a hard-coded count in the command-line options to the API server. So the proposal is to dynamically create a ConfigMap and put those endpoints into it.
F
Right now, the algorithm for the API server count: if you have fewer API servers than that number, then the endpoints never reach the count, but it also never goes back to normal. So it's sort of a big problem in an HA sort of sense, if you want to have a dynamic number of API servers.
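The symptom described above can be observed directly: the API servers reconcile their own addresses into the endpoints of the `kubernetes` service, so with `--apiserver-count` misconfigured, stale addresses linger there after a master goes away. A quick way to inspect it:

```shell
# List the addresses the API servers have reconciled into the
# "kubernetes" service endpoints in the default namespace.
kubectl get endpoints kubernetes -n default -o yaml
```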
B
Yeah, so basically, when I wrote the kubeadm HA proposal — like an implementation proposal, it's still work in progress — anyway, I just proposed setting the API server count itself to 2000 or something, millions, which sort of papered over the issue. But I'm really glad we have somebody working on the real fix, so, appreciate it.
B
If, say, we have three API servers up and running and one of them goes down, the two others will race to update the endpoints, because this master three has gone down — like, the timer has expired — and then they will race to patch the endpoints to remove this API server. So it doesn't, like...