From YouTube: Kubernetes SIG Cluster Lifecycle 20180724
Description
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.slrrqfmrca78
Highlights:
- New Cluster API Provider repositories
- Update on Cluster API alpha release
- PSA regarding 1.12 execution
- SIG Charter
- v1beta1 changes for config
- CRI and working with sig node
- Docker version for Ubuntu 18.04
A
Hello and welcome to the Tuesday, July 24th edition of the SIG Cluster Lifecycle meeting. Today I put a couple of things at the top of the agenda, just as announcements/FYIs. The first is just a reminder to people that we've created a few new repositories for the provider-specific code for the Cluster API. So far we have them for GCP, AWS, and OpenStack.
B
So, near-term discoverability is my main question. I'm not going to talk about kops, because the near is more far than near, but in our future with the Cluster API, discoverability for specific versioning is going to be a thing, right? So how are folks going to manage that? Because you have so many different providers, and people want the most up-to-date bits for deployment with the latest provider where possible. So, you know, are they common?
B
Are all providers commonly using kubeadm for deployment, to manage the actual control plane aspects, or are they using separate custom deployers? How is this being managed and disseminated so that there's some level of consistency? Otherwise I can see this sprawling pretty badly in the future. Yeah.
B
More than that is consistency across providers, right? Like, if I am going to say I want to know that I have a level-set bar for all the control plane components as well as the core add-ons — if you want to call it that, because you can't really run it without the core add-ons — you want to make sure that there's some level-set consistency across the security that has been enabled by default.
C
This is not specific to the provider or that component, right? This is the general distro question again, where any tooling that the user ends up using ends up making a decision about how to choose the set of things that it installs. And kops has effectively a mini-distro in that regard, in that it bundles a bunch of decisions for you — it's not that that wasn't a deliberate decision — but I think any tooling, like clusterctl, will pretty much make the same sort of decision.
C
I would love to see us come up with a non-tooling-specific way to express that, so that the discoverability problem that Tim mentions — and the testing problem, right, which right now is bad — like, we actually want an answer to that, but we don't really have an answer today, as far as I know, other than hope that kops or clusterctl or whatever it is, or kubeadm, does all of that work for you.
B
I think that's definitely a worthwhile topic for tomorrow. I'll be happy to join, because this is a meta-problem: it's very difficult when people want to have an expectation bar for consistency and it's very inconsistent across certain things. So as long as whatever we produce has that consistency bar, I think that should be something that we as SIG leads can say: there is a bar for, eventually, e.g., beta or GA, that says we will be at some level-set playing field.
A
I think a lot of the provider-specific stuff is actually more about the machines and the node configuration, and the control plane and sort of add-on configuration I think would be centralized in the Cluster API repo, in the clusterctl tool, right. So if you use clusterctl, you'd expect to get sort of a consistent control plane and consistent core add-ons. What's really provider-specific is: how is a node configured, how is a node bootstrapped, how are the machines provisioned? Does that make sense? Yeah.
A
No, I agree, and I think that one of the to-dos is to have some sort of conformance-type test for the Cluster API providers, and we need to figure out what we mean by conformance — what are we trying to validate and verify, and how far down does that go? Is it just that we can create and delete VMs — is that enough to be conformant? Is it that the VMs have to be secured in a specific way?
B
A reminder/PSA for those who may not be aware: the standard operating model that we use for triaging issues within the kubeadm repo — which is actually beneficial and, I think, could maybe be applied across other repos as well — is that we triage by default, maintain a milestone, and by default assign a person who's been active in a specific area. But that doesn't mean that they are the sole person responsible for that thing.
B
We use a separate label for "active", at least within our process for the time being, so that it denotes that somebody's actively working on a patch. It's applied to the kubeadm repo — it's not applied generally throughout all of k/k — but you can just modify it where needed, as long as the folks who are maintainers and who are active contributors already have ACLs on the kubeadm repo to mark it.
A
That
one
thing
we
found
in
the
cluster
API
repo
is
that
if
you
used
a
label
that
was
kind
to
slash
whatever
string
you
wanted,
you
didn't
actually
have
to
have
a
close
to
set
labels
directly.
You
could
actually
do
that
through
the
slash
kind
space,
something
so
it
might
be
worth
making
a
kind
slash
active,
because
then
anybody
who
wants
to
contribute
can
like
just
use
the
they
sort
of
built
in
the
proud
tooling
to
set
them,
which
is
kind
of
cool.
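The Prow behavior being described — letting anyone apply a prefixed label via a comment command, without repo-write ACLs — is driven by test-infra's label plugin configuration. A minimal sketch, assuming the `label` plugin's `additional_labels` key; the exact schema in the real plugins.yaml may differ:

```yaml
# plugins.yaml (kubernetes/test-infra) — hypothetical fragment
# The label plugin lets anyone apply labels with allowed prefixes
# (kind/*, area/*, priority/*, sig/*) by commenting "/kind foo" etc.
label:
  additional_labels:
    - active   # would allow "/label active" via comment, no ACLs needed
```

With a kind/active label defined on the repo, a contributor would simply comment `/kind active` on the issue and Prow applies the label.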
E
This might be a little off to the side, but from what I remember there's a check for a set of allowed prefixes, just as a simple way to avoid letting people do things like applying "approved" through the label command. But that's certainly something that could be improved, and that label could be one of the things that's whitelisted. If you'd like that, please file an issue against test-infra.
A
It goes to assigned versus active, right? I think that's the distinction they were making in the kubeadm repo that maybe isn't being made elsewhere. So this way people don't just lick the cookie, walk away, and never make progress while nobody else makes progress either — if somebody is not actually actively working on it, anybody else is free to take it. So, exactly.
B
We
have
default
designees,
but
we
assignee
may
never
get
to
the
issue
within
the
given
milestone
and
if
it
matters
to
the
consumer
or
the
person
who
wants
to
participate,
they
can
absolutely
get
involved.
We
want
to
make
sure
that
that's
that
that
process
is
clear
to
all
the
people
who
are
active
contributors.
B
The
mayor
started
a
an
issue
and
copied
the
template
to
the
dock,
and
I
did
a
first
pass
cut
through
the
dock.
I
think,
if
folks
want
to
have
comments,
please
comment,
maybe
by
the
end
of
the
week,
maybe
by
next
week.
We
do
our
submission,
and
it
gives
plenty
of
time
for
folks
to
add
comments.
It's
pretty
generic
and
simple.
Some
of
the
the
pieces
that
I
need
help
with
is
basically.
B
How
to
concisely
convey
scope
in
out
of
scope,
because
the
common
theme
that
people
typically
have
for
rejection
and
as
part
of
their
charters
is
that
their
scope
is
too
broad
and
they're
out
of
scope.
Items
are
not
specific
enough,
so
sick
apps
is
a
perfect
example
of
a
charter
whose
scope
is
way
too
broad.
D
But
it's
the
in
scope
and
I
will
scope
initial
drafts
and
I
I
honestly,
don't
know
what
else
to
do.
If
somebody
else
has
anything
I've
seen
some
charter
PS
rejected
because
they
don't
have
enough
information
like
the
first
scope
sentence
for
us.
Perhaps
we
should
extend
it
into
a
paragraph
of
text
to
provide
more
detail.
Yeah.
B
I think the updated template is the best one, honestly. The Node one has been updated too, but it still includes some of the old language. I know that Derek said he was going to do it, but it looks like he hasn't updated it. I think the biggest thing that is good inside of the Node one is the scope boundaries.
B
You should link to the appropriate ones — I'll take a look through your links section and then comment and update today. I do think that, as we add testgrid bits for some of these subprojects, we should definitely update sigs.yaml with the appropriate section that includes the link to where people can discover these binary artifacts. Also, taking a step back for a second, because this crosses over topics: how are we going to be publishing artifacts for these separate repositories?
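For discoverability, the sigs.yaml update being discussed might look roughly like the fragment below. The subproject entry and URL are hypothetical, and the real schema in kubernetes/community may differ:

```yaml
# sigs.yaml (kubernetes/community) — hypothetical fragment
sigs:
  - name: Cluster Lifecycle
    dir: sig-cluster-lifecycle
    subprojects:
      - name: cluster-api-provider-aws   # hypothetical entry
        owners:
          - https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/master/OWNERS
        # links to testgrid dashboards and published artifacts could be
        # recorded alongside each subproject entry
```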
B
Different
cluster
API
controllers
and
what
not
30
there's
container
artifacts
for
all
of
those
as
well
as
integration
for
cluster
funnel.
So
those
artifacts
need
to
be
published
by
somebody
with
some
key.
It
says
they're
good.
So
when
randos
take
it
off
the
interwebs
it
you
know,
isn't
some
botnet
virus
yeah.
A
And
I
think
that
we
expect
that
the
providers
to
be
published
as
containers
right
so
I
think
the
missing
part
is
that
last
piece
you
were
talking
about,
which
is
so
provider
X,
creates
container
says
this
is
the
container
which
abusing
or
this
is
confusing
for
this
specific
release.
And
how
do
we
link
that
into
what's
actually
being
used
right
or
so
the
the
mapping
there
yeah.
A
All right, anything else about the charter? The summary is a plea: if folks can, they should look at the doc. If you have comments — Lubomir is going to create a PR from the doc, maybe next Monday, and we'll link to that PR in next week's meeting notes. You'll be able to put any final comments on the PR and then we'll send it out for review.
B
If
folks
haven't
seen
the
tracking
issue
from
Fabrizio,
as
well
as
the
V
one
day,
one
doc,
which
she's
currently
updating
Liz,
Fabrizio
and
I,
had
sat
down
and
talked
about
in
more
detail
about
what
the
changes
are
all
of
them
kind
of
makes
sense,
but
they
are
again
yet
again.
Another
transition
towards
getting
ourselves
into
a
sustainable
path
for
configuration
for
rubidium,
so
comments
and
feedback
are
welcome.
B
If
you
have
comments
with
regards
to
those
changes,
nothing
is
really
her
shattering,
but
part
of
the
work
is
to
separate
out
who
owns
what
and,
namely
the
biggest
thing
I
think
is
separating
out
the
component
configuration
details
from
the
comedian
config.
This
will
also
allow
for
better
been
during
for
folks
who
would
like
to
take
the
configuration
from
Covidien
and
be
able
to
vendor
inside
of
their
repository.
This
is
useful
for
a
lot
of
people
who
do
tools
and
automation.
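The separation being discussed roughly corresponds to splitting the kubeadm config into multiple YAML documents, with component configs living in their own API groups instead of inside the kubeadm types. An illustrative sketch of that shape — treat the exact group/version names and fields as an approximation, not a spec:

```yaml
# kubeadm config split into separate documents — illustrative sketch
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
---
# Component config owned by the kubelet, vendorable on its own
# without pulling in the rest of the kubeadm types
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```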
B
The
problem
currently
with
the
current
vendor,
an
apprentice
of
the
master
config,
is
that
you,
whenever
you
touch
something
such
as
component
config,
you
swallow
the
whole
universe
of
kubernetes,
a
good
chunk
of
it.
That's
just
a
def
craft
problem
of
many
things
inside
of
kubernetes
and
that's
part
of
what
Lucas's
proposal
is
to
push
component
config
into
the
staging
repository.
So
all
of
that's
could
work,
but
it's
not
going
to
happen
anytime
soon.
B
I think this crosses over with Justin's last topic, which is that the version currently vendored as supported for Kubernetes is old: 17.03 was the latest validated version, and there have been security updates and other fixes that have occurred asynchronously, but Docker does not do back-patches for some of those issues — that's one main point.
B
So
this
is
kind
of
a
PSA
that
I'm
going
to
go
talk
with
sig
node.
We
know
about
this
problem.
We
talked
about
and
I
think
a
couple
times
within
this
thing,
but
we
don't
typically
have
opinions
on
one
CRI
versus
the
other.
We
just
need
to
basically
hook
up
all
the
automation
so
that
way,
it's
actually
validated
from
end
to
end.
D
The problem with the blog post is that it's already history — people have already read it and went away — and we should probably document somewhere that support is experimental for these CRIs.
C
I guess the thing which prompted this was — kops, as we talked about previously, sort of does take an opinion on everything, right? Kops is a complete set of opinionated sets of things, and I was looking at adding support for Ubuntu 18.04, which is the latest LTS; with 16.04 we're no longer passing as we change things, and so that's why I was doing that. But what are the tested, or verified, or recommended — whatever the word is — versions of Docker for Kubernetes 1.11?
C
We just do that, right? And the harder part, in my opinion, is figuring out which slices we are going to test, and how we avoid doing that in kops and in kubeadm and in clusterctl and in everything else. That is not a good use of everyone's time. But I think we can certainly agree in the meantime — I don't really want to go into it.
C
With
regards
to
the
particular
question
that
I
asked
the
like
doctor
thing,
I
think
a
good
auction
might
be
to
just
install
from
the
bear
buy
memories
rather
than
the
package,
but
I
guess.
The
question
is
like:
how
are
we
going
to?
Should
we
choose
configurations
and
try
to
get
them
under
testing,
whether
it's
with
cops
or
comedian
and
try
to
converge
on
those
sort
of
things?
I
do.
B
Think
we
to
talk
with
sig
node,
because
they
do
they
do
a
whole
vetting
process
for
see
our
eyes
and
it
is
non-trivial
and
they
usually
age.
Men
like
published
almost
like
a
white
paper
that
came
along
with
it
that
talked
about
here
are
the
performance
implications.
Here
are
the
artifacts
that
we've
seen
here's
how
the
Delta
has
applied,
because
history
has
taught
us
in
the
past
many
moons
ago
the
younger
version
of
docker
had
had
a
tendency
to
break
things
he
synchronously.
C
I
I
understand,
agree
I'm,
I,
guess.
My
viewpoint
is
that,
like
signal
looks
more
narrowly
at
let's
say
docker,
for
example,
and
this
sig
looks
at
combining
all
the
various
pieces
together
right,
so
this
sig
will
pull
in
the
the
cloud
provider
and
the
cloud
controller
and
make
sure
that
they
all
work
together
together,
which,
hopefully
you
know
we
can
pass
off
most
of
the
hard
work
of
CI
validation
to
sig
node,
and
then
it's
more
like.
Oh
we've
got
the
flags
wrong.
C
That
sort
of
level
of
complexity,
I
hope,
never
make
our
lives
better
right,
but
I
guess
figuring
out.
It
would
be
great
I
guess
this
can
feed
into
tomorrow's
discussion,
but
figuring
out
how
we
decide
what
a
slice
is
that
we
want
to
test
and
like
today,
we'll
have
to
test
it
in
in
different
ways
and
if
we
can
get
to
being
able
to
test
it
in
one
way
that
or
being
able
to
express
that
in
one
way
at
least
I
would
make
it
sounds
better.
I,
absolutely.
B
Think
that
we
should
take
ownership
stake
on
the
slice
of
components
for
a
given
version
and
make
sure
that
we
have
consistency
across
version,
a
diversion
B,
because
it's
in
our
it's
in
our
best
interest
and
it
gets
testing
cycles
done
and
it
avoids
the
np-hard
complexity.
We
could
find
ourselves.
And
if
we
don't.
A
I guess I want to come back to something that you both said earlier, and to reiterate — this goes back to the SIG charter — the position that we believe our SIG is sort of the integration point for the other pieces, you know, the control plane components, the node components, etc., to validate the overall configuration that we as a community want to support for a Kubernetes release.
A
That already exists. It's almost akin to the Cluster API, which says: okay, now we're going to own the machines as well, and we're going to own the OS and the CRI as part of the whole puzzle. And I think with kubeadm we maybe implicitly needed to do that anyway, because we had to have something under kubeadm to actually run tests against, and that was what was validated. I think that's what Tim was pointing out earlier.
A
Right, so it seems like something that would be really good to capture in the SIG charter. I don't know if there's a part of the template for it — I know there's a part for the things that are out of scope, but is there a part for things that you do that cut across other groups, where the coordination... oh.
E
So, on the note of keeping up with all the different versions and validating them: what I remember from discussing this problem with SIG Node is that they're interested in trying to push some of that off onto the different vendors of the CRIs. Right now they sort of need to handle Docker and the dockershim, but I think long-term they'd even like to push that towards Docker.
B
So this is this weird state space where it crosses over from SIG Cluster Lifecycle through SIG Node into policy about what it means to be a supported version — so it almost cuts into SIG Architecture, and Steering for that matter. What does it mean to say you're supported, right? There needs to be some level of guarantees from the consumer, besides just conformance, that allows the provider to say: XYZ is a supported thing for Kubernetes ABC.
E
I
also
remember
them
saying
that
that
it's
not
kubernetes
saying
that
this
one
is
supported
like
that.
We
like
a
lot
of
post
or
something,
but
it's
it's
the
vendor,
saying.
Okay,
our
integration
is
available
now
and
the
only
way
that
it's
like
you
guaranteed
fully
supported
is,
if,
like
the
complete
package,
including
the
CRA,
is
conformance
tested.
E
They're in a similar place, where I think they're not super interested in having to maintain, you know, an official Docker version, because it would put them in some sort of awkward place with the other CRIs, but for the moment they can't afford to break everything. I think we just need to push them: okay, the one you're supporting has gotten pretty old — can we support something newer?
C
I mean, "supported" is a tricky word. We don't, technically — like, you know, you can't phone me up and say: hey, you validated kops, so you've got to help me, Justin — right? You can hang out on Slack, but no guarantees. And I think we have the conformance test, which is a stamp. I think what we want...
C
I
think
one
approach
could
be
that
we
come
up
with
a
one
or
a
few
slices
that
we
ourselves
test,
and
ideally
they
pass
conformance
or
hopefully
pass
conformance
and
that
those
are
then
verified
if
not
supported
it,
but
we
have
we.
The
challenge,
I
think
is
came
up
on
the
Twitter
this
weekend.
Right
is
that
it's
it's
hard
to
put
the
open-source
communities.
We
want
to
still
be
a
workable
thing
right.
It
shouldn't
be
that
you
have
to
use
a
commercial
sort
of
a
certified
distribution
of
kubernetes
in
order
to
get
communities
to
work.
B
I think so. I did have that as maybe a separate item, but I'll bring it up tomorrow, with regards to Cluster API and on-prem stuff.