From YouTube: Kubernetes SIG Node 20200914
A
Hello and good morning, it's the SIG Node CI subgroup meeting. It's Monday, September 14th. Hi everybody. So today I wanted to start with a review of the action items from the last meeting. I think most of the items are already in the agenda, so I'll just review really quickly what we have and then we will go into the actual agenda items.
A
I didn't find any updates for the first one: check with Docker if they're still interested in running some tests. I think Roy is on the call, and Roy was commenting on that before. Roy, do you have any updates, or do we just need to move it to the next meeting?
A
Okay, I think we can skip to the next one. The next one is to put the documentation that we have into some of the community places, maybe contributor-facing places like kubernetes.dev or the community website. I have some updates: I went to the ContribEx meeting and asked whether we can put it there, and the response was, I mean specifically Joel was saying, that he has a concern with keeping the documentation up to date. So I want to discuss briefly in this meeting...
A
What
we
can
put
that
is
likely
to
be
stable
and
what
we,
what
other
places
we
have
to
keep
this
documentation
next,
one
yeah,
it's
the
same,
but
yeah.
I
I
think
it's
too
combined
next
one
duplicates
of
two
notes.
I
think
matt
I'll
put
a
oh.
No
I
mean
is
this
amin?
B
I had one down below, but then I saw you added that, so I just removed it.
A
I want to keep all the actions in the same place, so let's just copy it for the discussion, maybe after the meeting, okay. And yeah, the last one is to come up with a critical test example, how to move it, and I have some questions that I want to discuss, so bear with me.
A
I put the action items there. So Roy, are you still there? Yeah?
A
So I think the proposal from Rowan and Karan is to move from COS 81 LTS to COS 85 LTS. I know that on 85 we just hit a problem with containerd, with the Python version. Hopefully it's already resolved and should be working, so maybe we just migrate to 85.
C
He has a technical issue; his voice is getting cut.
A
Yeah, I figured, because he showed up on the call, and maybe he realized that we cannot hear him anyway. We can discuss it at the end of the meeting if we're still here. Okay, so Amin, you want... you have the next action item?
D
Okay, so I got one of the items from the spreadsheet and I came up with more questions than answers. So my first question about this: I got the GCI GCE flaky job.
D
My
first
question
is
why
this
is
running
periodically,
because
it's
only
running
flick
tests
and
if
it
was
supposed
to
be
solving
or
showing
which
one
are
flaky,
what
action
items
are
being
taken
on
their
outcomes
of
this
best
results.
You
know.
A
Can you give more context? Like, did you figure out what this test tests, how critical it is from your perspective?
D
So I have two questions about these tests in particular. The first one: why is the API machinery test there? It looks like the job is doing its own filtering in this step, so I'm not sure what this one in particular is doing. But for the second one, I could understand why it is flaky.
D
It
fails
sometimes
so
basically,
what's
going
on
is
that
the
container
price
stop
hook
is
not
obeying
the
graceful
termination
time
of
the
main
container,
and
the
test
is
doing
something
odd.
That's
testing
a
loop
in
this
spray
hole
price
top
hook,
and
it
should
be
there
running
yet,
even
when
the
the
first
spot
terminates
or
the
first
part
should
be
waiting
for
this
spray
hook
terminate.
D
A
So is your question why it's sometimes green, since it's supposed to be red all the time?
D
I
think
the
test
is
wrong.
It's
testing
something
wrong
because
it
expects
that
the
prey
stop
hook
terminates
before
the
the
first
spot
terminates.
You
know
the
main,
the
main
part
running
when
you
do
the
prey
stop.
There
is
a
loop
that
keeps
running,
but
this
this
first
spot
kills
and
the
the
prey
stop
kills
together.
A
Okay, so I mean, I would assume that the preStop runs within the grace period. So first of all, we have a bug in Kubernetes that if a preStop hook is an exec hook, then the runtime wouldn't respect timeouts. So if it runs past the timeout, then the timeout wouldn't be respected. But it passes sometimes, so I would suggest that it's not a timeout issue. So is the preStop just a loop that never ends, or is it supposed to be ending?
E
Sorry,
yeah
I'd
have
to
look
at
the
test
in
a
little
more
detail.
I'm
not
clear
exactly
what
it's
doing
but
you're
right
that
there
was
an
issue
with
with
pre-stop
hooksters
in
general,
not
respecting
grace
period
in
kubernetes,
but
they're
sublime
for
that
github.
E
A
I would expect that this test would check that we wait for the preStop hook to complete, so there should be a loop that ends, and then it terminates. So I would expect that to be the behavior here.
E
Yeah, just reading from the description of the test, it says the terminating pod should wait until the preStop hook completes, right? So it looks like the expected behavior. I'm not sure if the actual behavior is that, like, the preStop hook will just... you know, the pod won't be terminated until the preStop hook finishes, right? Yeah.
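The behavior being discussed can be sketched with a minimal pod manifest. This is an illustration of the preStop/grace-period interaction, not the actual e2e test fixture; all names here are made up:

```yaml
# Hypothetical pod showing the preStop hook and grace period discussed above.
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo                  # illustrative name
spec:
  terminationGracePeriodSeconds: 30   # kubelet should bound preStop + SIGTERM by this
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # A long-running hook: on pod deletion the kubelet is expected to wait
          # for this to finish (up to the grace period) before killing the container.
          command: ["sh", "-c", "sleep 20"]
```

The bug mentioned above is that, for exec-style hooks, the runtime did not always enforce the grace-period bound on the hook.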
A
So this is how I read the description of the test, and it's a very valuable test; I would love to have it tested. The question is, like, maybe somebody changed this preStop hook description at some point and, like, converted it to an infinite loop for some reason.
D
Reason
I
sent
in
the
what
this
is
doing
is
very
simple.
Maybe
you
can
check
to
get
a
better
overview
of
the
test.
A
No,
that's
just
oh
it's
flaky
test,
so
maybe
there
is
another
one
that
is
not
flaky
and
I'm
confused
it's
in
gci.
A
I
would
expect
this
test
to
be
in
because
gci
is
supposedly
like
google
image
specific.
Is
it
what'd?
You
say.
A
So
I
think
I
mean
you're
right
with
the
first
assessment
that
these
two
tests
shouldn't
be
here
at
all,
so
they
just
picked
up
because
they
are
flaky
and
perhaps
this
test
doesn't
filter
enough.
Maybe
it
should
filter
by
a
folder
and
flaky
and
then
it
will
be
running
what
you
want
to
be
running.
A
There
are
like
three
issues
right.
First
is
check
that
tab
is
filtering
by
folder.
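A folder-plus-flaky filter like the one proposed would typically be expressed in the job's Ginkgo focus/skip regexes in test-infra. A hypothetical sketch (the job name and exact regexes are illustrative, not the real config):

```yaml
# Hypothetical prow job fragment: restrict a flaky job to one area's tests.
- name: ci-gci-gce-flaky              # illustrative job name
  spec:
    containers:
    - args:
      # Run only tests tagged [Flaky] under the sig-node folder,
      # instead of picking up flaky tests from every SIG.
      - --ginkgo.focus=\[sig-node\].*\[Flaky\]
      - --ginkgo.skip=\[Serial\]
```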
A
Yes, and the preStop test investigation. Are you up for taking the action to check on the solution, to do this investigation into why it's failing and what it's supposed to test?
D
Yeah,
I
have
some
press
potions
about
this,
but
I
think,
as
you
say,
there
is
a
bug.
If,
if
you
have
later
the
issue,
we
can
bring
to
this
discussion
as
well.
Okay,
there's
a
bug
that
until
we
fix
it,
the
test
should
be
skipped.
I
don't
know
or
refactor.
A
Yeah, so by the way, thank you for joining. I understand that there is a release 1.20 meeting right now, right? Is that concerning you? Do you have a hard conflict now?
A
Okay,
thank
you
for
joining
anyway,
so
context
here
is
that
this
is
top
coat
end-to-end
gci
flaky
and
it
seems
to
be
picking
up
all
the
flaky
from
like
from
api
machinery
and
from
signot
in
general.
So
I
think
this
pre-stop
test
is
not
supposed
to
be
tested
inside
the
gci
tab.
That's
why
we
wanted
to
filter
by
folder.
So
these
two
wouldn't
show
up
here.
F
A
No, it's very logical. And then the next one is, like, since this is flaky and this seems to be a very important test, especially with all the CRI work and termination work that we plan to do, I really want this test to stay and be active. So I mean, if you can... I mean, right, sorry, I was calling you Amin before. So if you can check whether there is another one that does the same, or maybe, yeah, that would be very important to fix.
A
I
don't
think
we
need
to
to
look
at
api
machinery
test
it's
failing
here
and
it's
out
of
scope
of
this
group.
So
if
you
want
to
take
a
look
welcome,
but
I
don't
think
we
need
to
discuss
it
here,
trying
to
scope
it
out.
A
Yeah,
I'm
muted
yeah.
Thank
you
very
much
for
looking
into
this
tub
and
yeah.
I
really
appreciate
it
cool.
So
matt
are
you
here.
B
Yeah, so I took a diff of the two different jobs, and there was a bunch of differences that didn't seem consequential, like setting resources, and some related to the fact that the conformance tests run in Docker. And I, yeah, I also found a setting to skip serial tests in just, like, the node kubelet master job, but it seems to skip them in both, and it seemed like there was, like, an underlying flag in the Docker file that, you know, seemed to skip them.
B
But
this
significant
difference,
as
amin
pointed
out,
is
that
the
conformance
tests
run
in
docker
and
the
and
the
kubelet
master.
Don't
so
you
know
I
don't
I
don't
have
the
context
of
of
that
or
what
exactly
we
want
to
do
about
it.
But
you
can.
You
can
read
more
details
in
the
in
my
comment,
but
that
seemed
to
be
the
main
difference.
A
So your suggestion was to add something to the name of the master tab? I think that setup is very logical.
B
Well,
that
was,
I
think,
something
that
I
mean
brought
up.
A
I
think
it's
morgan
first,
maybe
yeah
morgan
brought
it
up
and
I
put
it
as
an
issue.
A
B
I
mean,
I
don't
know
my
guess,
my
guess
is
just
to
you
know.
Maybe
the
idea
is
that
running
a
docker
would
feel
more
like
the
real
world.
I
don't
know
I
I've,
I
really
don't
know
there
was
there
was
a
link
to
to
that
and
maybe
maybe
I
could
unearth
a
pull
request
or
something
that
has
more
details
but
yeah.
I
I'm
not
sure
on
that.
One.
B
Yeah, I don't know. And yeah, there were about 45 additional test cases inside the master job. I haven't had time to, like, track down exactly how they're different, but yeah, there are some additional ones that happen in the master.
A
When I first joined the group, I was thinking, like, the way I read these tabs is that master is what we run on an ongoing basis, and once we release a version, only the conformance tests will keep running, so we don't run any master tests any longer. Is that the right understanding?
F
Not quite. For previous release branches, there are a good handful of jobs. The conformance ones are something that definitely has to show up on the previous releases, but a lot of other tests may also make it in there.
F
Usually,
the
thing
that
happens
is
that,
with
the
master
with
the
with
jobs,
have
they
worked
mastered
in
the
name,
for
example,
if
you,
if
you
look
at
a
match,
a
comment
in
github
and
one
of
the
first
annotations
is
fork
per
release.
So
whenever
the
four-part
release
annotation
is
present
on
a
proud
job,
it
actually
means
that
it's
whenever
a
release
is
being
caught
and
the,
for
example,
north
cublet
master
is
going
to
be
copied
into
node,
cubelet,
119,
1a118
and
so
on
and
so
forth.
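The mechanism described above looks roughly like this in a prow job definition. The job name is illustrative, and the annotation spelling is given as I recall it from kubernetes/test-infra, so treat it as approximate:

```yaml
# Sketch of a prow periodic carrying the fork-per-release annotation:
# when a release branch is cut, the generator copies this job and
# rewrites "master" in its name to the new release version.
- name: ci-kubernetes-node-kubelet-master   # illustrative name
  interval: 1h
  annotations:
    fork-per-release: "true"
```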
C
Can I ask a question? So Kubernetes has a conformance working group, right? Does the conformance testing have something to do with that?
A
I'm
not
sure
either
and
that's
why
it
also
confused
me
when
master
like
comparing
mastering
conformance
tab,
because
I
feel
like,
like
I
looked
at,
how
features
being
g8,
so
we
remove
the
feature
flag
and
remove
this
feature
tag
on
from
a
test,
so
it
automatically
falls
into
both
conformance
and
mastered
up.
So
now,
like
any
features
that
graduated
became
a
conformance
feature
or
conformance
test.
A
B
There
inside
inside
the
configuration
there,
there
was
actually
one
additional
feature
flag
for
the
dynamic
kubla
config
inside
the
conformance.
B
So
it's
possible
that
you
know
that,
like
it
is
intended
to
run
something
a
little
different
but
and
in
the
dynamic
kubla
config
that
one
derek
recommended
removing
it
as
well,
so
that
if
we
did
that,
there'd
be
no
difference.
For
that.
A
C
Yeah,
so
maybe
we
can
ask
the
discretion
tomorrow
signal
to
meetings.
You
might
have
people
who
have
been
here
for
longer
right.
A
It's
actually
a
very
good
idea
to
escalate
this
because,
like
we
discuss
in
the
second
week-
and
we
can
escalate
it
to
signal
it.
B
Yeah
and
the
and
the
kubelet
master
is
release
blocking.
So
I
guess
maybe
that's
significant
too
in
terms
of
like
you
know,
enabling
you
know
basically,
testing
testing
changes
that
that
we
want
to
release
or
something
with
the
conformance
test,
but
then
not
actually
setting
them
inside
the
kublet
master
until
we're
ready
but
yeah.
We,
it
seems
to
make
sense
to
bring
it
up
in
the
meeting.
A
Tomorrow,
okay,
I
added
an
action
item.
Thank
you
for
looking
into
that.
It's
very
confusing
and
looking
more
into
test
grid
and
which
test
to
run
it.
It's
getting
more
and
more
complicated
for
me.
So,
like
more,
I
understand
more,
I
confused
and
it
it's
a
nice
sick
way
to
the
next
one,
a
runtime
class.
A
So
I
was
looking
into
making
runtime
class
to
be
a
critical
test,
because
runtime
class
feature
will
be
g8,
so
I
wanted
to
understand
how
to
move
it
into
a
critical
tab
or
like
this
new
tab
that
we
created
like
signal
critical.
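For context, the feature in question is the RuntimeClass API. A minimal example of its use — handler and names are illustrative, and the API version shown is the pre-GA one that was current around this meeting:

```yaml
# A RuntimeClass selects which pre-configured runtime handler runs a pod.
apiVersion: node.k8s.io/v1beta1     # GA'd later as node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed                   # illustrative name
handler: runsc                      # must match a handler configured in the CRI runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: sandboxed       # opt this pod into the runtime class above
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```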
A
My
problem
with
that
was
that
runtime
class
is
a
feature
runs
in
like
some
tests
run
as
a
orphan,
but
I
mean
we
can
fish
it
out
out
of
four
of
us.
It's
fine
and
it
was
part
of
different
type
like
and
other
tests
was
run
as
part
of
meister.
So
the
question
was
whether
we
want
entire
meister
to
be
critical
or
we
want
to
keep
internalized
separately.
A
So
I
would
say
we
need
master,
is
critical,
but
waiting
for
opinion
and
third
issue
was
that
some
tests
are
disruptive
tests
and
they
used
to
be
run
a
serial
test
before,
but
then
it
was
removed
from
serial
tab
because
they
just
stopped
you
so
comment
was
like
these
tests
are
disruptive,
let's
not
run
them
serial
tests.
So
I
wanted
to
ask
the
group:
does
anybody
have
any
history
with
like
understanding
what
the
disruptive
means
and
like
how
we
historically
run
them?
Can
we
round
them
as
separate
tab?
A
A
F
I
think
the
overall
comment
that
I
could
give
is
that
if
runtime
has
tests
that
are
maintained
and
are
supposed
to
be
meaningful,
it
will
be
worth
it
to
run
in
a
separate
app,
as
I
said,
as
a
separate
test
just
to
a
just
to
actually
just
to
actually
get
the
signal.
Okay,
I
think,
hey.
I
think
that
if
there
is
the
consensus
that
if
a
test
starts
tesla,
it
does
like
the
test
like
this.
A
F
Yeah,
yes,
yeah,
sorry,
that's
it
that's
kind
of
what
I
mean
like,
for
example,
if,
if
you're
running
a
disruptive
test
and
usually
the
way,
usually
disruptive
tests-
and
they
are
run
seriously
seriously-
and
I
mean
that's
what
that's
what
I
think
that
people
do
the
most.
So
if
this
is
not
the
this
is
not
a
rule,
but
usually
what
I
think
that
people
do
the
most
is
a
disruptive
test.
They
run
it
seriously
and
in
a
completely
different
tab
in
a
different
in
a
different
job
because
they
have
the
they
have.
F
F
A
So
it's
a
single
task:
do
we
need
to
have
so
easier
suggestion
to
take
all
the
tests
about
runtime
class
and
create
a
tab
and
then
run
it
serially
or
use
or
create
a
tab
only
for
disruptive
tests?
What's
the
best
practice
here
because
running
all
of
them
is
logical
because
you
see
in
one
place
or
the
test
about
runtime
class,
but
then
you
run
them
serially,
so
you
slow
down
the
execution
and
out
ultimately
increase
the
cost
of
run
so
running
is
just
a
single
test.
A
Is
less
logical
from
discoverability
perspective,
but
should
be
more
efficient.
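The trade-off above is usually expressed through the tags in the test names and the job's focus regex. A hypothetical fragment for the dedicated-tab option (the job name and regex are illustrative):

```yaml
# Sketch: a dedicated job for disruptive RuntimeClass tests, run serially.
- name: ci-node-runtimeclass-serial   # illustrative name
  spec:
    containers:
    - args:
      # Pick up only RuntimeClass tests tagged [Disruptive]; such jobs are
      # typically run with parallelism disabled, since [Serial]/[Disruptive]
      # tests cannot safely share a cluster with other tests.
      - --ginkgo.focus=\[sig-node\].*RuntimeClass.*\[Disruptive\]
```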
A
No, I don't... I can, yeah. This is what I saw in one of the documents, in this overview: somebody tried to create a tool to get all the test definitions from the k/k repository, so we can at least know which tests are supposed to be run. So maybe I need to start trying this tool and understand what's missing and whether I can automate it.
F
I
I
think
I
think
I
think
she'll
be
yeah.
At
the
very
least,
I
think
it
should
be
good
to
okay
to
create
a
new
tab
for
a
new
test.
You
know
the
test
is
meaningful
and
we
don't
want
to
disrupt
signal
from
other
tests,
even
if
we
only
have
one
and
possibly
create
attack
for
it
that
we
can,
for
example,
if
we,
if
we
make
a
cereal,
then
we
can
just
run
all
the
serial
testing
down
in
the
same
tab.
F
A
Okay, and next: any other comments about this?
A
So
yeah
and
what
you
all
think
about
moving
like
there
is
a
my
like
master
tab.
Do
you
want
to
move
master
and
create
it
under
like
market
critical,
so
it
will
show
up
here.
A
There is no way to move a subset of tests under critical; you can only move an entire tab. So the options here are to create, like, a transition tab, say kubelet-master-transition, and then, like, transition test by test. Or just, I mean, how crazy is the idea to create a tab for transitioning master tests into critical?
A
No, I just... like, so transitioning to critical, we discussed last time: it's not only just migrating the dashboard definition for the test tab. We also wanted to improve the description of the tab, so we want to make sure that each test has a good description, that the tab has a good description, and that we understand what we move into critical. So it's kind of like a revision of tests alongside the simple migration.
A
Maybe not many opinions, okay. Okay, then, so I will create an issue for that and try to do it by area and see how it goes.
A
And
last
question
I
want
to
discuss
is
place
for
documentation.
Last
time
we
discussed
with,
like
morgan,
wanted
to
take
this
action
and
migrate.
Some
of
the
documents
into.
A
Into
the
community
repository
specifically,
this
document
about,
like
this
is
this
document
about
description
of
tags
that
you
use
in
tests,
because
what
I
noticed
that
it's
really
hard
to
understand
why
this
tags
are
needed,
like
what
is
not
conformance.
What
is
not
feature?
What
is
not
alpha
feature
and
how
to
use
it,
so
you
have
a
document
describing
it.
It
will
be
really
good.
So
I
went
to
country
backs
meeting
and
suggested
that
you
move
this
document
to
community
repository
and
joel
from.
A
I
don't
know
where,
where
he's
from,
but
I
can
find
a
last
name,
he
suggested
that
it
may
not
be
the
best
idea
because
from
his
experience,
this
kind
of
documentation
get
stale
very
quickly,
so
his
suggestion
was
to
find
another
place
that
is
closer
to
code.
That
describes
the
same,
so
I
wanted
to
pull
some
opinions
from
from
this
group.
A
Like
do
you
want
to
push
to
move
it
to
the
community
and
promise
that
we'll
keep
it
up
to
date,
or
we
want
to
find
another
place,
like
maybe
testing,
for
a
repository.
F
I
don't
have
any
other
suggestions
other
than
community
repo,
but
test
infrared.
It
feels
a
little
that
one
feels
up.
F
I
don't
know
if
he
feels
a
little
bit
I
it
feels
a
little
bit
off
like
half
the
things
that
we
are
documenting,
the
the
the
bulk
of
what
we
are
documenting
actually
lives
in
kubernetes
kubernetes,
and
some
of
it,
like
the
job
configurations,
actually
live
in
testing
front.
A
Okay,
so
joel
suggested
that
we
created,
like
I
really
like
idea
to
put
it
in
community
because
it's
super
easy
discoverable
and
it's
already
have
some
documentation
about
enter
and
test.
So,
in
fact
there
is
an
entire
like
there
is
a
test
like
how
to
run
tests,
how
to
write
good
conformance
tests,
how
to
write
good
end-to-end
tests.
It's
mostly
generic
words
about
like
please
write
them
to
be
fast
and
don't
use
sleeps,
but
it's
actually
a
good
starting
point.
So
I
was
thinking.
A
Maybe
we
need
to
put
it
here,
so
it
will
be
very
logical
place
if
people
in
the
group
feel
it's
the
same.
I
will
start
conversation
on
country
backs
slack
repo,
slack
channel
suggestion.
If
you
want
to
do
that
and
here
for
more
opinions
than
just
drills,
that
sounds
good.