From YouTube: Kubernetes SIG Cluster Lifecycle 20190326
A: Last week we discussed that Robert Bailey from Google has been assigned other tasks and can't be a SIG co-lead anymore, and we've got a nomination from Justin Santa Barbara to step up instead. Do we have any other nominees? I haven't heard of anyone, personally at least, but last week, or two weeks ago, we said you could contact either myself or Tim, or, I don't know, some SIG communication channel, whatever. Do we know of any other person?
B: We have the addon operator KEP, which has been out for, I guess, a couple of months now. I think two weeks ago we were basically pretty close to merging it, but it's had a surge of interest from people working at Red Hat; it feels like they are interested in participating, which I think is great news. I'm going to get a commitment from them that they're going to do that, so I...
A: So let me check: it's targeted at SIG Cluster Lifecycle, right? It is. So, I mean, technically myself, you and Tim should be, like, the persons to actually hit that approve label and just get it in, I think. Obviously, if there are loads of outstanding comments, maybe not, but if we have, like, a general consensus that this is something we're going to do, and we write the KEPs, or have the status of the KEP being provisional, I don't see a problem with that. Okay, we're not even at the "merge fast" stage yet, because it's been, like, two months, but still, we could do our "merge fast" and iterate afterwards; I'm personally fine with that. What's the other thing, I...
B: I think we've always been, I feel, pretty consistent around saying that, like, the end goal is that the person or the team that develops CoreDNS would also own the CoreDNS operator, and we probably have to do some work to get there in terms of creating libraries, tooling, documentation, and probably building a couple of example operators first in a repo we have. But the long-term goal is sort of a migration of this, and I think API Machinery had a concern around, like, are all of those going to have to be reviewed by API Machinery, and I...
B: So I guess, like, do the API review when we establish those first patterns: get input from the API review team, on the grounds that then, if they like it or dislike it, they can review it once, and then the "pit of success" will open up for it. In that way... I like that, I like that phrase.
A: Cool, yeah. So, I mean, with the massive amount of comments in this thread, I definitely think we won't ever be so productive that we can sort them out in a GitHub thread, so I think that definitely needs to be just a merge now, and then we'll let Daniel and Justin start hosting the meetings, and then we'll just post a pointer saying, like: look, next Friday, show up here and tell us everything you've got about this, and we'll try to...
A: That, or what we also discussed last meeting was, like, we would have a place in the docs or something that is called "SIG Cluster Lifecycle owned stuff", maybe with a better name, but that kind of thing. So, like: here is kubeadm, here is kops, here is addon operator foo, and here is addon operator bar, etc., and then we'll have an aggregated place where people can go and discover, and whenever we need to say, hi, where are all the things you've built, we can point people at that place for discoverability. And that ties into the other question which Tim talked about, like the setup guides being extremely outdated. Both of those would be kind of now solved.
B: I'll say, yeah, if we can have that shared repository, or shared location I should say, so that, like, kops and kubeadm and Cluster API and kubespray and everyone doesn't develop their own ones, but rather all reference the same set. And, you know, that's where the bundle format could be an appropriate format for that sort of thing. It's a separate question, but it's a very closely related question, I think. And we can establish something, some location, where we can all basically share; they'll share the workload there.
K: Lucas, so: if you are a member of the Google Group, you should have seen a post and a document from me this morning, where we are trying to define the requirements and scope for Cluster API beyond v1alpha1. In the agenda doc here I have a link to the document; there's also a Doodle poll for trying to find a time between tomorrow and Friday, so over the next three days, where we can have an additional working session or sessions, if needed, to go over some of the content in the document.
K
We
would
love
for
everybody
to
provide
comments
and
input,
and
we
want
this
to
be
a
complete
collaborative
effort,
and
so
we
do
plan
to
talk
about
this
at
tomorrow's
cluster
API
meeting
and
the
following
one
next
week
and
we
are
hoping
that
we
can
come
to
agreement
on
the
project,
scope
and
requirements
by
the
end
of
next
week.
So
that's
April,
5th
I
believe
so.
If
you're
interested,
please
check
out
the
document
sign
up
for
the
doodle
poll.
If
you
want
to
have
and
any
additional
discussions
and
just
look
forward
to
everybody's
input,
Thanks.
K: v1alpha1 is scheduled for this Friday, March 29th. I would like to suggest that we disassociate Cluster API releases from Kubernetes version releases, so it's not really a 1.14 release status, it's a Cluster API release status. But v1alpha1 is due out this Friday, and then anything beyond that is up in the air while we figure out the requirements and scope, and then once we have that, our plan is to use the KEP process for tracking all the work going forward.
D: And, yeah, I actually haven't heard of that. So: in Kubernetes we bumped CNI to 0.7.5, because there was a CVE, a security fix, in the kubernetes-cni package. So the latest patch versions of all the release branches work, but the old patch versions in the release branches suddenly stopped working, as if our pinning to the old CNI is not working in the Debian packages.
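For context, that pin lives in the kubelet/kubeadm Debian packaging; an illustrative sketch of the kind of dependency relation involved (only the 0.7.5 bump is from the discussion; the rest is a guess at the shape, not the real control file):

    # debian/control (illustrative excerpt, not the actual file)
    Package: kubeadm
    Depends: kubelet (>= 1.13.0), kubectl (>= 1.13.0),
             kubernetes-cni (>= 0.7.5)

Whether such a relation is "=" or ">=" determines whether older patch-version debs keep resolving the old kubernetes-cni package or break when the archive moves on, which is the failure mode being described here.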
L: Yeah, we had a great meeting this morning. We currently have a patch for this, for a config serializer, that's up, and API Machinery needs to give the final stamp of approval on that; it's been in a pretty final state for a while now. Now that we have the code thaw, we're now merging into 1.15 again, so we have two patches that are currently pending, plus a global flag refactor to help just DRY out some of the code for the kubelet, and then our link to the meeting notes is there.
F: Thanks a bunch; the minikube update. So, as I mentioned before, we're doing the 1.0 release this week, which will default to K8s 1.14. We have a bunch of other user-friendliness patches, including support for changing image repositories from K8s's GCR, which is important for many minikube users in China, for instance. And we'll be slowing development down next week in order to focus on documentation, basically hosting a fix-it, not just for minikube's own internal documentation, but for the public documentation on the kubernetes.io site.
F: So, right, GCR is not available in China, so we now have an image repository flag where you can select an alternative mirror to fetch images from.

Oh, wonderful, thank you.

F: Eventually, we would like to make it automatic, so that if it detects that GCR isn't available, it'll switch to some vetted set of mirrors, but right now you have to manually pick your mirror.
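The flag being described is minikube's --image-repository; a minimal usage sketch (the mirror shown is one commonly cited example, and any registry that mirrors k8s.gcr.io works):

    minikube start --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers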
B: ...something with it, or... I've been really neglectful; I've been really focused on getting 1.12 out the door, and 1.13. Those are the ones where we actually switch users over to etcd3 and etcd-manager. We're still using the old setup, just because we haven't had time, but once we get past 1.13, we will then be free to do that.
B: Whether we should do that, whether, like, there will be another etcd3-to-etcd4 which will also, like, destroy everyone's data, I don't know, but we will see. But I guess if it happens, we can cross that bridge when we come to it, and we have the code in the etcd-manager, so that would greatly simplify etcdadm. I'm also inclined to say we should not create the possibility for such a disruptive change.
B: The thing is, etcd2 and etcd3 have nothing in common other than the four letters of their name. They are totally different databases, and so anyone that has been running a cluster that started on etcd2 has had a hard time switching to it. That is what the etcd-manager, which will then merge into etcdadm, addresses.
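For background, upstream etcd's own path for that v2-to-v3 data migration is an offline etcdctl step; a rough sketch, assuming a stopped member and a default data directory (tools like etcd-manager wrap steps of this shape):

    # With the member stopped, transform the v2 keyspace into the v3 store
    ETCDCTL_API=3 etcdctl migrate --data-dir=/var/lib/etcd
    # Then restart the member on etcd3 and verify before resuming traffic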
A: Cool. And with regards to etcdadm, any other general updates, like the to-dos we've got, I don't know, releases, plans, timing? If people want to get contributors to start contributing to etcdadm, what do they do and how? And do you have specifics about, say, do you have meetings, for example, and that kind of thing? Just to reiterate that here...
M: No, no, yeah, we don't, so that's something that we need to think about, and yeah, we'll probably do something similar to maybe the kubeadm model; that seems to be working. So yeah, maybe some contributor guidelines, and testing. That's one of the things that I'd like to work on soonish: having, you know, automated testing, so that PRs are not something that reviewers have to test manually.
M: You know, a well-defined API; the flags are a de facto API. So that's something that's probably worth working on, because people are asking for more knobs, you know, adding more flags in the interim. And, I suppose, etcdadm right now is, you know, alpha, so we're not bound to those flags, but yeah, the sooner we have an API and schema that we can iterate on and provide, the better. Oh, I guess, without further...
N: Okay, so for kubeadm: most of the stuff was mentioned by Lubomir already, but the other big thing is that we are currently gathering information on what changes will be included in the next version of the configuration. We have gathered already some substantial list, and a sort of initial mock-up, though not all the proposed changes are in the initial mock-up as of, like, an hour ago. This list is open for voting, and we'll probably, like, count the votes either tomorrow at the kubeadm office hours meeting or at next week's.
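For reference, the configuration being revised here is kubeadm's versioned config API; a minimal sketch of its shape (field values are illustrative; v1beta1 was the current version around the time of this meeting, and the changes being voted on would go into its successor):

    # kubeadm-config.yaml, minimal illustrative example
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    kubernetesVersion: v1.14.0
    networking:
      podSubnet: 10.244.0.0/16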
G: So, as some of you may be aware, there's an effort going on to figure out exactly how we're going to deal with CRDs that are kind of critical to cluster functionality, or could potentially get in the way of pod launch. So this is stuff like RuntimeClass, for instance, and also some of the volume-related things, which could be mounted as a persistent volume and therefore also get in the way of pod launch.
G: Most of the current examples have been moved into core APIs, because we didn't have a good solution for this, but we're trying to figure out a solution. So I put together a document that I shared with SIG Cluster Lifecycle and also SIG Architecture, because that's where the effort originally started, and I was asked to come talk about it a little bit at this meeting.
G: So, while CRDs are purely declarative, we could have stuff like conversion webhooks, which will eventually be necessary for some of these APIs, or even validating or mutating webhooks for defaulting or validation logic, that need to actually be run as processes. And so I think we need to treat these much the same way that we treat the controller manager or the API server in a typical deployment setup. So my proposal is that we produce a series of static pod manifests, and also obviously the CRD YAML, and we say: hey, this is...
G: ...this is one way to install it. This is our kind of canonical install path: if you're using a cluster that's managed by a kubelet with static pods, you can install these. If you're using an installation tool that doesn't do that, then here's kind of the path that we recommend you take. And so we provide these static pod manifests and some sort of add-on manager, a thing that would just reconcile the static pod manifests into the pod manifest directory and make sure that certificates are set up for webhooks and stuff like that. And that would be an option; like, we wouldn't require people to do this, this would just be our reference implementation.
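As a rough illustration of that proposal, not anything taken from the actual document: a static pod manifest for a hypothetical CRD conversion-webhook server that the add-on manager would reconcile into the kubelet's manifest directory (all names and the image are made up):

    # /etc/kubernetes/manifests/foo-conversion-webhook.yaml (illustrative)
    apiVersion: v1
    kind: Pod
    metadata:
      name: foo-conversion-webhook
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: webhook
        image: example.org/foo-conversion-webhook:v0.1.0  # hypothetical image
        volumeMounts:
        - name: certs
          mountPath: /etc/webhook/certs
          readOnly: true
      volumes:
      - name: certs
        hostPath:
          path: /etc/kubernetes/pki/webhook  # serving certs the add-on manager sets up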
A: So if I have a CRD schema and I want that to get applied to the cluster, will there be an official... how is that going to work? From the CRD schema, which I suppose is on disk somewhere, how is that going to be bootstrapped into the cluster, and using what credentials? Because there's something that needs to write the CRDs, to have write access to CRDs on the API server. So, like, will we have a well-defined path for the kubelet there? Like, I don't know.
G: That's a good point; I should address the RBAC aspects explicitly in the document. I'm assuming that we'll have a user that has permissions to create the CRDs, either a CRD bootstrap user or one of the admin users, because this would be a necessary part of the cluster in this scenario, in some capacity, whether your installation manager is doing it or whether you're using this add-on manager. So we'd have to have something with similar permissions.
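A minimal sketch of what such a bootstrap identity could look like in RBAC terms (every name here is hypothetical):

    # Illustrative RBAC for a hypothetical CRD bootstrap user
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: crd-bootstrapper
    rules:
    - apiGroups: ["apiextensions.k8s.io"]
      resources: ["customresourcedefinitions"]
      verbs: ["get", "create", "update"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: crd-bootstrapper
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: crd-bootstrapper
    subjects:
    - kind: User
      name: crd-bootstrap  # the identity the add-on manager would authenticate as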
G: Because we're dealing with things that will have, like, CA fields, we have kind of two options; there's two paths that the user could take. The first is for the user to provide a pre-existing CA and CA key, and we can generate a certificate from that. The second is we can just generate a self-signed CA, to make it easy for the user: like, if you don't have one, we'll generate a self-signed CA.
G: So there's not a whole lot of risk here; it's simply that webhooks require HTTPS, at least at the moment, even if they're just kind of running on the loopback adapter or whatever. So we need a certificate authority and certificate, but it doesn't necessarily have to be your master one. If your organizational policy dictates that, and you still want to use this, you could always supply the CA, or we could have an option for you to pass in certificates.
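The self-signed path can be sketched with stock openssl (file names and subject CNs are illustrative):

    # Throwaway CA for the webhook's serving certificate
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout webhook-ca.key -out webhook-ca.crt -subj "/CN=webhook-ca"
    # Serving key and CSR for the webhook endpoint
    openssl req -newkey rsa:2048 -nodes \
      -keyout webhook.key -out webhook.csr -subj "/CN=foo-webhook.kube-system.svc"
    # Sign it with the throwaway CA
    openssl x509 -req -in webhook.csr -CA webhook-ca.crt -CAkey webhook-ca.key \
      -CAcreateserial -days 365 -out webhook.crt

The CA certificate is then what goes into the webhook configuration's caBundle field, so the API server trusts the endpoint.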
G: That would basically be the default deployment topology. This is kind of matching up, at least a little bit, with how we currently have it: for instance, if you bring a cluster up on GCP, like for our e2e tests or whatever, you've got a master with a kubelet on it, and it launches a bunch of static pods, starting the API server and stuff. So it would just be sitting alongside those.
A: It's a really, like, complex thing, as I say; I'm just thinking through the cases, whether we could simplify it somehow, or whether we could build a small, like, bootstrapper into the kubelet. Like, is that... in the same way as we have the static pod bootstrapper, could we have, like, "register your admission webhooks here" or, like, "put your CRDs here"? Just thinking out loud, but, like, if we could... yeah, there's...
G: I think that document mainly discusses it from the perspective of having, like, the controller manager do it, but I think the same concerns apply to having the API server do it, which is: you end up with the kubelet oddly having extra permissions to do stuff that it wouldn't normally be able to do, and you also end up with a solution that isn't really usable as easily, or convertible as easily, for stuff that doesn't look like a normal Kubernetes cluster.
G
So,
for
instance,
if
you
are
one
of
the
folks
that's
doing
like
kubernetes
on
kubernetes,
you
don't
necessarily
need
to
run
this
as
a
static
pod
and
your
management
cluster.
You
could
and
you
can
easily
convert
these
into
something
that
just
runs
a
deployment
at
your
management
cluster
or
if
you
are
like
someone
who
is
trying
to
use
the
kubernetes
control
plane
without
cubelets,
which
we
have
had.
G
People
have
interest
interest
in
without,
like
other
cubelets,
you
could
convert
this
solution
to
run
wherever
you're
running
the
control
plane,
whereas
if
it's
built
into
the
cube
EULA
you're
kind
of
limited
to
a
certain
topology
and
while
it's
still
like
well,
it's
definitely
still
a
reference
implementation.
The
ability
for
that
reference
implementation
to
be
converted
into
a
useable
form
with
other
installation
topologies
is
a
useful
feature.
I
think.
L: Yeah, the suggestion of the controller manager makes a lot more sense to me than the kubelet, since the kubelet code is concerned with execution and the controller manager is concerned with making sure control loops have what they need.
G: ...that's an easy way to shoot yourself in the foot and get the cluster into an irrecoverable state, so, to prevent cluster instability like that, I really think it's important that we treat these components as infrastructure components, the same way we treat the controller manager itself, for instance.