From YouTube: Kubernetes SIG Cluster Lifecycle 20190306 - Cluster API
Meeting Notes: https://docs.google.com/document/d/1Ys-DOR5UsgbMEeciuG0HOgDQc8kZsaWIWJeKJ1-UfbY/edit#heading=h.rwck60ac93yg
A
Hello and welcome to the Wednesday, March 6 edition of the Cluster API subgroup meeting for SIG Cluster Lifecycle. We have a relatively short agenda today, so let's just go ahead and dive in. The first one is from [inaudible] about the presubmit CI. Do you want to give some updates based on what you've put in the notes?
B
Yes, so there are two issues under presubmit CI. The first one is that there's a duplication between two jobs, at least two here: the Bazel verify job is basically covered by the cluster-api test job, since the cluster-api test job actually calls the other verification scripts, so I removed it. The second one is, I want to point out kind of a gotcha here: in the current verify-bazel script, we basically skip the verification when Bazel is not available on the machine. This happened to be the case with the CI container we're using, so basically we skipped all the Bazel verification in past CI job runs, and if your laptop happens to not have Bazel installed, it's also skipped. We never found this issue because it's buried in the log, and we never dug into the log to see the skip message. But I think we should remove this line: basically, don't skip it, just report failure.
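As a rough sketch of the change being discussed here, assuming a hack/verify-style bash script (the path, messages, and bazel invocation are illustrative, not the actual cluster-api code):

```bash
#!/usr/bin/env bash
# Illustrative sketch only; not the real cluster-api verify script.
set -o errexit -o nounset -o pipefail

if ! command -v bazel >/dev/null 2>&1; then
  # Old behavior (the gotcha described above): silently skip the check.
  #   echo "Skipping verify-bazel: bazel not installed."
  #   exit 0
  # Proposed behavior: fail loudly so CI cannot pass without running it.
  echo "ERROR: bazel is required but was not found on PATH." >&2
  exit 1
fi

bazel test //...  # run the actual verification
```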
A
So I dropped a link in chat to PR 633, which is where that got changed. The initial version of verify-bazel, it looks like, did not have the skipping check, and then when Alvaro submitted, or sent out, 633, he added it, and [inaudible] commented in December that it's probably acceptable to, you know, require Bazel for Kubernetes developers anyway.
A
So that sounds fine with me. We should maybe just double-check with Alvaro why he added that in that commit and whether he thinks it's still important to keep it there. I don't see him on the call today, so you might just ping him on the issue, or on a change removing it, and say: hey, we talked about this on the call, we're fine with removing it, please let us know if we can't.
C
The reason I really like Bazel is because it is really good at building images, particularly when you are building for a different architecture, and it is a pain to do that in the kops build. If you have a look at it, it has Docker that goes and, like, installs the right version of Go, builds the artifacts, and unpacks them on my system. It's just really complicated, whereas the Bazel one sort of just works; there's a lot of boilerplate, but it just works. And that's why I like Bazel for this. But I think, when we use Bazel, we should also be sure that plain go build and go test still work, because otherwise that breaks people's IDEs. So the basic flow should still work; but for building images, I would argue that in my experience it has been great to have it driven by Bazel.
E
Yeah, just, you know, my contributor experience coming back to Kubernetes after a little while, and discovering new things with Bazel, was a little bit frustrating, because I didn't know I had to go and check it. I got some weird failures, and the CI tests were on a different Bazel.
E
I mean, it turned out to be a fairly minor thing, but, you know, it was quite confusing to begin with. I too wasted a lot of time with Bazel last night, trying to figure out why it wouldn't build images; it turns out it's because I have a modern version of Python. But, I mean, from my point of view, if it was the default thing I would have had to figure it out from the beginning, but at that point I was already convinced that everything was fine, because go test had passed, so why is Bazel failing?
C
I think we can have the broader discussion of whether we should support, whether we should run, Bazel, make, or both; I think that could be a separate discussion rather than having it here in this meeting. That's almost, like, well, we can make a SIG decision, but we could also make a project-wide decision, I think, or we can try it in ours so we can see how we feel. But has anyone here tried to build the images?
E
I mean, the kind of... one point I wanted to make is that for a small project, if it's a six-hour project, it may be a little too much; it may be overkill. And so, from my experience, I don't know, it's like, you know: trying to introduce Bazel, just sort of reading the docs and whatever it takes to add it to a project, it felt like I had to do a lot, and then there's what every contributor is going to have to do. Make is so much simpler.
A
I think we're pretty settled on that one. If we want to take the discussion about Bazel slash build systems offline, maybe we could do that in a smaller setting with other people that are interested, like was mentioned on the mailing list, and then sort of circle back with some suggestions, instead of trying to debate it during this meeting. The next one: Jason, the pivot phase work looks like it's getting close to being done. Do you want to talk about that briefly? Yeah.
F
So I finally had a chance to pick back up the pivot phase work. Local testing has been pretty good so far; I've tested all sorts of scenarios with machine deployments and machine sets. And thankfully we were able to eliminate some of the complexity with some changes that we had go in upstream recently for machine deployment adoption: previously there had to be kind of a hack to re-adopt machine sets into machine deployments, but that's pretty straightforward now.
F
Currently, the only thing that I'm really finishing off right now is just getting the test harness back into shape. There's a small set of tests for the pivot phase itself, and I've been working on trying to get the cluster deployer tests patched up, because the changes in the ordering and some of that stuff broke some of those tests. So I think I have everything passing in the cluster deployer right now except for some of the delete tests; I'm hoping to get those wrapped up either today or tomorrow.
A
Great. I added the next one on here on behalf of Chuck, because it sounded like he was blocked on me: Chuck has an open PR for documenting the release process, which looks pretty complete. There's one thing that I think is missing that I'm adding a comment for right now, but if people want to go and take a look at it, it should merge relatively soon.
A
So, if you have thoughts or comments: I know Andy has left some great comments about how we can replace some of the manual steps with automated steps, which sounds amazing, because the manual steps are pretty onerous right now. So yeah, if people have experience from, you know, other projects or other tools they've used that would help automate some of this stuff, that would be awesome.
H
I want to ask, around that: is Chuck on the call? I didn't see him on the list. Well, I would ask about the GCR buckets for pushing images. I saw that there was a staging bucket, or a series of staging buckets, and I wasn't sure if there were any updates on getting access to those, but I can follow up with Chuck, because I know he was in conversation about it. Do you know what the name of the staging...
C
This, I think, is being driven by the working group for Kubernetes infra, which [inaudible] and Tim Hockin created a script for. So there should be GCR staging buckets, at least creatable, for all projects. The image promoter, which would be the thing that takes an image out of the GCR staging bucket into the prod bucket, is close; I think it's going to be demoed in two weeks at the next meeting. So it's not there yet, but very close. I don't know, but I feel like you should have permissions to push to the GCR staging buckets.
C
Per sub-project, and possibly more granular than that, but at least every sub-project gets its own GCR staging bucket. I don't know, I'm not sure on the details, but I imagine that each project therefore has its own image promotion file, which controls promotion, which then goes to a single... I think a single prod bucket, which I believe is going to be replicated, but basically a single prod GCR. Okay.
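A minimal sketch of the staging-to-prod flow being described, assuming Docker and GCR; the image and project names here are hypothetical, not the actual cluster-api ones:

```bash
# Hypothetical names throughout; the real staging/prod projects may differ.

# 1. A subproject maintainer pushes a release image to the subproject's
#    own GCR staging bucket:
docker tag cluster-api-controller:dev \
  gcr.io/k8s-staging-cluster-api/cluster-api-controller:v0.1.0
docker push gcr.io/k8s-staging-cluster-api/cluster-api-controller:v0.1.0

# 2. Promotion into the single prod GCR is then driven by the image
#    promoter, based on the project's promotion file, rather than by
#    a human pushing directly to prod.
```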
A
And right now we have our existing, sort of, quote, prod GCR that, you know, I think, like, you and I have access to. Is that going to be replaced with a different one? Or is the idea that the image promoter will push from the common staging bucket that, you know, everybody can get to, and that's essentially the only way to promote stuff to prod, through the promoter, with a human as a last-resort fallback?
H
I know this probably isn't the right way to look at it, but I tend to exclude the documentation ones from "absolutely critical", just because we can come in after the fact if we need to. There are currently 15 open cluster-api v1alpha1 issues, including the documentation ones, and several of them are documentation. The ones that aren't look like they may be either minor things that we can defer, or stuff that's in flight, with PRs that are either currently open or soon to be open. And, Vince, feel free to add stuff as well. Yeah.
H
Thanks, Vince. Yeah, so that one is about defining a health check strategy for machine sets. There's a lot of conversation dating back to April 2018 on the GitHub issue, but there haven't been any updates since mid-January, and I wouldn't say that I have the context, because I've only recently shifted over to Cluster API. So if there are others who have been involved, it would be awesome if we could get some movement: either decide we're going to push this out of v1alpha1, or figure out what we want to do in the...
J
Yes, I think it was discussed; there were a couple of things going on in the meeting. The conclusion eventually was that we implement two main objectives in terms of strategy. First of all, as mentioned in one of the comments: whenever the node is not reporting [inaudible] for a certain period, probably five minutes, then [inaudible] delete the machine. Those were the two very basic parameters that we decided on as part of the health check strategy. Then I'm not sure whether this should fall under v1alpha1 or should fall out.
H
Off of that, I had a question for the group: none of the pull requests is currently associated with a milestone. They may be marked to fix issues that are associated with milestones, but would you all be willing to make an effort to try and associate the PRs with milestones, so that it's a little bit easier to see which ones we need for v1alpha1 and which ones are longer-term?
A
Yeah, I think that's great. I was just looking at a PR that I'm the reviewer on, trying to figure out how urgent it was, and, you know, I poked it a while ago to ask for a rebase, and it hasn't been touched since, and I can't tell if that's, you know, because we're waiting for decisions to be made, or if it's just not something that's going to make the first cut.
A
This was number 651, if people are interested, about what we mean by taints; it's about updating comments, but it's also related to another PR that actually changes the behavior of the machine controllers in terms of how taints get applied to machines. So I was trying to go back through my backlog and make sure things were sort of kept moving forward, and again, I can't tell where things stand on that one: is it urgent? Is it important? Is it long-term? Right, so I think that's a great idea. Okay.
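For reference, on Kubernetes-org repos a milestone can be applied through the Prow milestone plugin by commenting on the issue or PR; the milestone name below is illustrative:

```
/milestone v1alpha1
```

Prow only accepts this command from members of the repo's configured milestone maintainers team.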
H
The other thing I'll just say about milestone tracking is that I'll periodically, over the next several days and couple of weeks, be going through and asking whether a given issue or PR is strictly necessary for the alpha, or asking for status updates. So if you can all just keep an eye on those: largely, I'm here to try and help get this v1alpha1 release out the door as smoothly as possible. So just keep an eye out for questions and comments, and please try to answer in a timely fashion if possible.
A
Going once, going twice... okay, one last thing to wrap up the meeting. I did want to make sort of an announcement and let people know that I will be switching to a different team at Google and, as a result, sort of stepping back from a lot of my responsibilities in SIG Cluster Lifecycle, including my work on the Cluster API project. For the people here: it's been a great pleasure, you know, personally and professionally, working with all of you.
A
I'm really going to miss coming to these meetings and seeing your lovely faces every week, and, you know, building open source software together in Kubernetes. And, you know, I think there are a number of other Googlers that are going to step up; I've talked to Tim St. Clair a little bit about the co-chair work for SIG Cluster Lifecycle and started that transition as well.
A
And, you know, also on my new team I have another meeting at exactly this time, so I probably won't be able to make this meeting any more. If you guys have questions for me, I'll still be around for a little while on GitHub and responding to comments on Slack, but I probably won't be in person nearly as much.