From YouTube: 20180829 sig cluster lifecycle kubeadm office hours
A
All right, I can agree. Hello, today is August 29th, 2018. This is the kubeadm office hours, part of SIG Cluster Lifecycle. Lubomir, you wanted to talk about a couple of things, and I've got to add some items to the agenda too. If folks are on the call, they can add their names to the attendee list. I will as well.
B
Okay, so the first item I added was about the end-to-end tests we currently have on testgrid, and I can share my screen. The idea here is basically to discuss which tests we should remove, and I can send a PR later this week to help the testing for folks, if we have a decision on what can be removed. Should I proceed with sharing the screen? Sure, okay.
B
Okay, can you see it? Yep. All right, so this is the SIG Cluster Lifecycle section. We have some test dashboards here that probably should be removed. The first one is the GCE 1.8 one; I think it's there probably due to kubernetes-anywhere, where the fix is still a work in progress. So this one, should we remove it?
A
I think we should probably wait, because the window for 1.9 still exists and people run the upgrade tests from 1.9. The standard 1.9 test could be removed; the 1.9-on-1.10 and the 1.9-to-1.10 upgrade tests should probably stay there for a period of time, until we're a couple of weeks into the 1.12 cycle.
A
I'm gonna defer this. This is stuff that we, in the kubeadm space, do not need to maintain. This is all Google-isms for their /cluster directory, because they support some of this stuff. So this falls squarely onto Robbie. We don't need to maintain this; we don't need to own this. This is squarely on Google, to be honest. Yeah.
A
The standard answer is that for anything that falls underneath these test suites, sometimes we need to do a minor triage, but then they go back to the Google owners. So there's a certain group of owners (Mike, Denis, and Robbie) who are usually the routers to whoever would be the people that would fix those issues. Okay.
A
I think what is kind of asked of the SIG right now is basically to start testing. You know, the more cycles we spend testing and sort of vetting some of this stuff, the better it gets. And one of the conundrums, or problems, we have with kubeadm is that we have a test suite, but that test suite is not exercised, and part of that test suite, I think, is meant to have tests.
A
We can add tests there that verify things like HA deployments, and eventually we'll have automation around that suite inside of the actual jobs that we post to testgrid. We don't have all that yet, but if folks want to help with that automation, it would be highly helpful. Like, if we had a test which verified portions of the HA deployment, to make sure that they were sane and correct.
A
I think we need to set up the apparatus for 1.13; I don't think it's gonna happen for 1.12. But I think right now we do need folks to kind of pivot away from feature development work. There are a couple of last features that need to get in: the last PR changes from Fabrizio need to get done, as well as Liz's PR. But other than that, I think we should switch to bug-triage mode early.
A
What we can do is prep PRs. I think prepping PRs for 1.13 and making a hard push to beta is totally legit now. So folks can get the work ready and in place for 1.13, and we'll just hold the PRs until 1.13 opens up. That makes total sense to me. The only reason why we wanted to wait was to make sure that the beta config was all done, so we have guarantees for the configuration, because it affects almost every single command line.
A
Now, we're changing these options, right? So if we're gonna give beta guarantees, we need to make sure that the CLIs don't change across the cycle, and if you pushed it into the other config, it would be weird, right? So if people want to prep the PR to change alpha phases into the specific verb for init or join or config, that is totally fine to do now, and then we'd have 1.13 already properly front-loaded.
A
I think moving the commands could probably happen before the config is settled, because then the config will percolate through. It would also kind of be a forcing function for us. Ideally we would wait for config first and then we would do the commands, but I think if we were to force the commands first, there would be this whole conundrum where we'd have to get the config in place, and we kind of punted on config this cycle.
A
Like, you could do this. I would wait until the last config change is in, because that would affect the CLI, and that's the last one; it'll happen by next week Tuesday. From that point on, the config should be locked for 1.13. Besides bug fixes, we should not be doing major changes to the config. So v1alpha3 should be locked for the 1.12 cycle, and that should be enough for you to make the shifts in the CLI. You could just let that PR hold until 1.13 opens up, and it should mostly just be a shuffling. We might need to, well, I would separate out some of the issues with some of the CLI things in alpha phases, since they will need to find a new home. I would do pure shuffling as one PR: just the stuff that goes straight to init.
A
I would do something very simple. It's always better to try to not do it all at once, because then you own all the dependencies that come along with it. If you just focus on init, and these are the subcommands that really belong in init, I think that will be unambiguous and pretty straightforward.
B
One more item from me: I wanted to talk about some things I have noticed in the config, so, the UID business. We were discussing this with you, Tim, with Ross, with Richie. Currently there is no way in Kubernetes to get the machine UID. There is only a small utility to generate compliant UUIDs, but there's no way to get the machine UID.
B
So do we want to go this way, like vendoring a separate third-party library? I don't know if we can ask the API machinery folks to add it, or to add the library, or perhaps we should add it in kubeadm. Or there's the option where we don't vendor the library and implement everything ourselves.
A
I'm OK with vendoring a UUID library that guarantees a unique ID per machine. There are plenty of utilities out there that do this, that are sort of platform-agnostic. This is a state space of problems that has existed for a long time and has been solved in many other languages.
A
All right, I'd have to dig; there have probably been multiple attempts. You probably found a couple of different PRs, and I'm sure there are other ones, or issues that have been filed. I can sync with folks and try to figure that out. I do think it would be ideal to have a UUID versus something like relying on hostname, because hostname is not guaranteed unique, right?
A
We can hash on this idea; I don't think we need everybody here to talk about this one. I think the general constraint of having a unique identifier per machine is not new. I think the question is whether or not we need to bake it into our status, and I think the answer is that we need some identifier, at least for the time being. Ideally it's gonna be unique, at least from my perspective. I don't know if other people have other thoughts there. Jason, any thoughts?
A
Let me... I think we can probably table this. This is not a unique state space of problems; let's loop in some other folks and try to see if we can get a resolution on this. If we need to: hostname is not unique, but it's what we use for everything else, for lack of a better term. It's what we've done for all the other goo. It will solve the problem and get us over the hurdle for a period of time, because it's just an index, right?
A
You would have to pass it through, so you would have the argument being passed all the way through. Mm-hmm. I don't know, I'd have to double-check this. I know you can do it on the kubelet with the node-name override; I have not seen whether or not you can actually plumb the node-name override through on a control-plane instance on an init. I don't actually know the answer to that. I know for certain you can do it on the kubelet on a node join.
A
Okay, we don't. This was the argument that we originally had in the config change about putting it in as a map: whether or not we need a key, or whether we can just have a list of API endpoints, right? So as you do a join, you're just appending to the list. So I think hostname for now is fine, and we can always address this, because the control-plane join is still experimental. Just put a comment block around it and say: for now, we will use hostname as the index for the key.
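The map-versus-list shape being discussed, with hostname as the interim key, could be sketched roughly like this. The type and field names here are illustrative stand-ins, not the final kubeadm API:

```go
package main

import "fmt"

// APIEndpoint describes one control-plane instance's advertise address.
// Names here are illustrative, not the final kubeadm types.
type APIEndpoint struct {
	AdvertiseAddress string
	BindPort         int32
}

// ClusterStatus keys endpoints by hostname. For now the hostname serves
// as the index; it is not guaranteed unique, which is the trade-off
// discussed above.
type ClusterStatus struct {
	APIEndpoints map[string]APIEndpoint
}

func main() {
	status := ClusterStatus{APIEndpoints: map[string]APIEndpoint{}}
	// Each control-plane join appends its own entry under its hostname.
	status.APIEndpoints["cp-1"] = APIEndpoint{AdvertiseAddress: "10.0.0.1", BindPort: 6443}
	status.APIEndpoints["cp-2"] = APIEndpoint{AdvertiseAddress: "10.0.0.2", BindPort: 6443}
	fmt.Println(len(status.APIEndpoints)) // prints 2
}
```

The alternative (a plain list that joins append to) avoids needing a key at all, at the cost of making lookup and replacement of a rejoining node's entry harder.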
B
Okay, okay, let's end this topic; I'm gonna make a comment in the PR. There's something else I wanted to talk about regarding the config. We have feature gates in the join configuration, but there are also feature gates in the init configuration, the cluster configuration more precisely. So do we want to have feature gates in the join configuration? Is this correct?
A
It depends on what the gates are, and I'd have to take a look, because I don't know what feature gates apply anymore. We have our own version of feature gates that are not the same as upstream, so is that for the kubelet, or for the actual other components? For the major feature gates, we have our own version of that. So if something applies to the join configuration for the kubelet, then maybe, but most of the feature gates that I'm aware of were control-plane related. What feature gates are you thinking of?
B
I'm gonna file an issue about that. So, the next topic; we spoke about it last time. We have the potential for feature gates on worker nodes, but we don't have a separate image repository for them. The only way to get the image repository is to join the cluster and then fetch the image repository from the config map, since it's stored in the cluster configuration.
A
The segregation of the two use cases: this is the one where we talked about it, and we said we should defer to what customers, people who use kubeadm, want. Having the segregation between nodes and the whole cluster space for registries should, I think, be driven by demand. If there is demand for it, then sure, but I don't see it as a
A
concrete thing that needs to get done. For all of the cloud-provider stuff moving out of tree, that should be an on-demand scenario. Rather than driving that type of feature addition, I think it's probably more prudent to address the current bug list that we have and try to burn it down for the rest of the cycle. I don't think making a config change is worth it unless we have users who are saying, we really need this.
B
The policy is that it's currently bound to the image repository in the cluster configuration, yes. And they pretty much wanted a custom location for this specific image. That's weird, though.
A
I mean, I can understand a different registry for things like add-ons, but for your control plane, the kubelet, the proxy, what you pull when you're doing a join, right, it's just part of the control plane; I consider it part of the control plane. So I don't see why you would have a separate location. We can always just defer it, saying there can be only one: you can only have one registry repository for your major images. It doesn't matter which; it doesn't have to be gcr.io.
A
We allow you to override it, but we call it the cluster configuration's image repository. I don't think that's a tough constraint for people to swallow. If there's enough pushback, then we'll make a change, but I'm not convinced, to be honest, given the fact that the whole world has revolved around gcr.io for the last three and a half years, you know.
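For illustration, the single-repository override under discussion amounts to prefixing every control-plane image with one configurable repository. The function name and default below are assumptions for this sketch, not a quote of kubeadm's implementation:

```go
package main

import "fmt"

// controlPlaneImage composes a fully qualified image name from a single
// configurable repository, mirroring the imageRepository override in the
// cluster configuration. The default registry chosen here is illustrative.
func controlPlaneImage(repo, component, version string) string {
	if repo == "" {
		repo = "k8s.gcr.io" // upstream default registry at the time
	}
	return fmt.Sprintf("%s/%s:%s", repo, component, version)
}

func main() {
	// Default registry.
	fmt.Println(controlPlaneImage("", "kube-apiserver", "v1.12.0"))
	// One override applies to every control-plane image.
	fmt.Println(controlPlaneImage("registry.example.com/k8s", "kube-proxy", "v1.12.0"))
}
```

The design point being made above is that because one prefix covers all major images, there is no per-image custom location unless the API grows one.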
B
So it first fetches the configuration and then applies it? Yes. So I got the order wrong here. It is, yep.
A
You can always file an issue. When in doubt, file the issue with the details of the user story and the use-case scenario. If there's a hole in the current logic, then we should fix that hole. But if there isn't, and if it's more of a feature request for splitting, which is kind of what it sounds like to me, then we should drive the feature request based on user demand, and we shouldn't drive that feature otherwise. I don't think there's a user story that makes a ton of sense for that.
A
I think the general PSA to focus on 1.12 burn-down and documentation updates is beneficial. The list has been triaged, so feel free, and the standard rules apply: if anything is not marked as lifecycle/active, anyone can take it. Some issues are finicky and thorny, but I do think they require addressing. I'm gonna try to dive into the proxy problem and see if I can force something through this cycle.
A
Thirty-eight, so most of them should be reasonable-ish. Some of them might need to get punted out if it's not possible to do them. I think the big ones around upgrading need to be verified; with those ones, I'm sure we're gonna find issues along the way. So if you start to do, you know, upgrading of the control plane for this cycle, and making sure that all the configuration works properly and migrates properly, that's going to be key, because we've had a number of issues over time, both with configuration migration and with upgrades.
A
There are 38 issues, and folks can jump on any particular one; we can also clarify. I left some things open that are kind of feature-based, like graduating bootstrap tokens. That's not even in our boat; somebody else is actually doing that within API machinery. We just need to make sure that when it's done, all the i's are dotted and the t's are crossed, and ideally we move some of that goop that's in our stuff, things that shouldn't be inside of the types that bear our configuration, out.
A
If folks want to do a separate session, maybe next week, we can actually iterate through this list and do some initial testing; then we can refine the list for the rest of the cycle, because right now this is like the first-pass cut. I expect it to be modified as people actually test stuff and find bugs.
A
It makes more sense to say: these are the highest-priority things that we've had for a long time, and we need to address those first. For other stuff, feel free to open an issue, but we're not going to spend a ton of time trying to get it onto a roadmap if folks aren't going to commit to it, because that has also kind of been a problem, right? So for 1.12, we had
A
a bunch of open issues; the bug count was +50, you know, between 1.11 and 1.12, and basically it's like a wish list of other things that folks have not committed to actually helping execute on. So I think we need to be realistic about the stuff we can accomplish within a reasonable time frame.
B
So we have three minutes, I guess. I think there was a PR; I mean, I'm going to leave this discussion, but somebody was adding a new add-on to the website again, like one of those add-on sets we don't have tests for. I think you suggested, Tim, to have a conformance output, so that maybe, if the PR... sorry.
A
Like, if you want to add some new XYZ to the default set of CNI plugins, or to their list of add-ons, you should vet it first, because there's no test coverage for any of that stuff. There's another major conundrum, and you know, folks are looking for things to do: the default configuration of CNI for containerd needs to be addressed, right? We're currently looking at some of that stuff, but it'd be helpful
A
if other people did that. Because there is no default Docker configuration anymore, we need to update the docs for the 1.12 cycle. It has to be containerd or CRI-O, and there need to be docs in place to point people to the right location to know how to get this stuff, and eventually we need to update the tests as well.
B
So the docs, they're a big subject as well. We didn't cover anything; I mean, I guess Jennifer was probably busy with other projects, and she didn't have the time to make the big move to help us with that. Do you think I should be moving this, like, in the next month? Do you think I should start doing this and see the reaction from SIG Docs?
A
Getting CRI documentation in place to recommend for users is super important, and we need to coordinate that with the SIG Node folks. There's an open issue I can loop you in on, which is basically: there needs to be a step in the installation instructions that points out, here are the CRIs, go pick one, and here are the installation instructions for each individual CRI on the different platforms, because Docker 17.03 is their baseline now, so...
B
Okay, well, that would be appreciated. So, one question about a particular page here; it's about kubeadm init. We have a lot of content here, and that is kind of part of the big move, but I really wanted to move this away from the page for 1.12. Tim, do you think that I should do it, and do you have an idea which way I should be moving stuff, like into the page on creating a single-master cluster, I guess?
B
We have a configuration example there, and everybody goes to this page and uses it, but I think we shouldn't be using this page for that. I wanted to do this for 1.12: move the config out of here. And Lucas promised that he's going to write a separate document explaining how the config works. But this is alpha, and I think...
A
Why don't we take this one offline? I'm still of the opinion, you know, as I have been for a while, that all of the configuration for the master config / cluster config / init breakout should really be in godoc that gets referenced from the mainline documentation for specific versions, because I don't feel like maintaining this is tenable, right?
A
So if we actually had proper godoc documentation where we have the examples listed out, where you have separate individual examples and you reference them in the mainline, that would be more beneficial and more maintainable as we shift toward, or mature, the API over time.
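For godoc examples of the kind described, Go's convention is ExampleXxx functions in *_test.go files, which godoc renders next to the documented type and which `go test` verifies against their trailing Output comment. A self-contained sketch, where the InitConfiguration type is a stand-in and not the real kubeadm type:

```go
package main

import "fmt"

// InitConfiguration is a stand-in for a kubeadm config type; in real code
// the example function below would live in a *_test.go file next to it.
type InitConfiguration struct {
	NodeName string
}

// ExampleInitConfiguration shows the shape of a godoc example: godoc
// renders the body as documentation, and `go test` checks that stdout
// matches the trailing Output comment.
func ExampleInitConfiguration() {
	cfg := InitConfiguration{NodeName: "cp-1"}
	fmt.Println(cfg.NodeName)
	// Output: cp-1
}

func main() {
	ExampleInitConfiguration()
}
```

Because `go test` executes these examples, the documented configuration snippets stay compilable and verified as the API matures, which is the maintainability argument made above.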