From YouTube: Kubernetes SIG Node 20200928
A: Good morning. It's September 28th, and it's not the continuous integration subgroup; it's the weekly meeting. Hi, everybody.
B: Yeah, sure. I left a note for myself here. I didn't necessarily need to go first, but I wanted to draw attention to the behavior of, basically, how the tests get run. Let me post a couple of links real quick. Basically, when we have the tests configured, it's important to understand what is actually going to get run.
B: It's important to look at the image config versus what's actually configured in the test definition itself, right? As part of test-infra we have the config of our jobs.
B: We also have the image definition of the jobs, and when those conflict, it seems that what's in the image wins out over what's in the job config. I'm not sure if they get combined or not, but basically, I saw it in Testgrid, if we go look at the Testgrid for the flaky kubelet job.
B: In the flaky job for the kubelet, we're actually not running most of the tests that are marked flaky; maybe there are other cases, but we're not running everything marked flaky. So, as part of that, I made a PR to move the test that's successfully running into its own job, so that we can have, say, a dedicated performance test group. I just wanted to make sure that everybody's aware of that, because it was sort of illuminating to me and it might be confusing.
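For reference, node e2e jobs select tests in two places: the Prow job definition and the image config file it points at. A minimal sketch of both, with illustrative job, file, flag, and image names (the exact fields in kubernetes/test-infra may differ):

```yaml
# Prow job definition (illustrative name and flags): the job-level focus
- name: ci-kubernetes-node-kubelet-flaky
  interval: 2h
  spec:
    containers:
    - args:
      - --test_args=--ginkgo.focus="\[Flaky\]"   # what the job asks for
      - --image-config-file=jobs/e2e_node/flaky-image-config.yaml
---
# Referenced image config (illustrative): a per-image test filter which,
# per the behavior described above, appears to win out over the job focus
images:
  cos-stable:
    image: cos-81-12871-59-0
    project: cos-cloud
    machine: n1-standard-2
    tests:
      - 'Node Performance Testing \[Serial\] \[Slow\]'
```

With a layout like this, only the per-image `tests` regex ends up selected, which would explain a flaky tab that runs only the performance suite.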
C: Morgan, sorry, I couldn't get that. Would it be possible for you to share the screen and show what you just said?
D: Yeah, that's the second link in that thing there. Okay, sorry, sorry! I only saw the first link you pasted; let me open that one.
B: Right, and so here's the tests line. You can see that it has basically selected the node performance testing. I'm pretty sure there are other tests that are marked flaky, but those aren't being run right now; or maybe we don't have any other flaky tests, and that'd be great. Either way, the end result was that I made a PR and moved those into a separate job that we can give enough resources, because those tests aren't really flaky anymore. They haven't been.
B: And that's literally the only thing I'm trying to say: you can do this. The primary use for this seems to be the benchmark run, the one where like 90 pods are created and timed, and that seems to be done such that you get a fresh image for every single individual little tiny test that you run.
B: So if we look at the... I didn't paste it in there, but if you find the image config for benchmark, it's like benchmark-image-config, it's literally like 20 or 30 image definitions, but they all run one single test each, probably the benchmark one. It's like this third file.
B
File
and
so
resource
tracking
did
it,
but
you
can
see
it's
a
giant
file,
but
literally
all
they're
doing
is
running
one
single
test
per
per
image,
so
yeah.
If
that
was
new
to
you
great
and
it
was
a
successful
use
of
the
time.
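A minimal sketch of what such a benchmark image config might look like; the image names, versions, and test regexes are illustrative, and the real file is much longer:

```yaml
# Each entry boots a fresh VM from the named image and runs exactly one
# focused [Benchmark] test, matching the behavior described above.
images:
  cos-resource1:
    image: cos-81-12871-59-0     # illustrative image/version
    project: cos-cloud
    machine: n1-standard-2
    tests:
      - 'resource tracking for 105 pods per node \[Benchmark\]'
  cos-resource2:
    image: cos-81-12871-59-0
    project: cos-cloud
    machine: n1-standard-1
    tests:
      - 'resource tracking for 35 pods per node \[Benchmark\]'
```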
B: Yep, and so I found it confusing. I just wanted people to be aware of it, and maybe I'll try to write it up in the testing guide as something to be aware of, so I'll take an action item for myself on that one.
B: It seems that... I think this was accidentally done to get those tests focused while we were getting them to work again, and so I made a PR to sort of tease it apart a little bit.
A: Yeah, I just wanted to look at this again. We have this entire flaky tab, and the way it picks the tests to run is just by the image file, right? So even though it says that I want to look at everything flaky, it was actually looking at the performance tests.
A: It will be separate, specifically for performance tests, and there will be no unclear mapping between what is in the image and what is here.
A: Yeah, yeah, this is great, because I'm still trying to understand which tests are critical and which are not, and I'm trying to understand the logic of all the tabs that we have.
A
Are
based
on
features
sometimes
based
on
whether
they're
running
fine
like
and
this
is
this-
is
a
puzzle
you.
A
Okay,
let's
move
to
follow-ups
from
last
meeting
because
I
don't
think
there
is
any
other
items
on
agenda
so
roy.
Do
you
have
any
update
on
this
docker
thingy.
F
I
I
try
to
like
ask
looks
no
response
within
google.
I
think
is
updated
the
ticket
yeah
also
from
the
test.
They
are
pretty
old
test.
Yes,
I
think
it
would
be
difficult
to
remove
it
if
yeah,
because
I
also
actually
include
the
lantern
and
yeah.
I
think,
if
no
response
for
a
while,
it
will
be
essence
to
be
safe.
Yeah,
that's
from
mine-
and
this
is
another
team
costume-
is
not
aware.
Yeah.
A
Okay,
next
action
item
filtering
so
yeah,
I
it's,
I
think
it
was
brought
by,
I
mean
and
I
I
was
suggesting
to
filter
by
subfolder,
but
then
I
looked
at
it
and
I
didn't
find
a
way
to
filter
by
subfolder
yeah.
A
I
was
under
easy
and,
like
I
was
like
okay,
I
will
spend
five
minutes
and
then
hour
and
a
half
later
and
like
no.
There
is
no.
A: So what would be the suggestion here? Do we just need to add a new tag, like GCE? Okay, the issue is that there is a tab for GCE tests, and this tab includes some tests that don't belong to GCE at all, so we wanted to filter them out. I think it's a safe assumption that what we can do is just add a GCE tag to the name of a test and start filtering by that.
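A minimal sketch of how that could work, assuming a hypothetical [GCE] tag embedded in test names; the tag and the flags below are illustrative, not an existing convention:

```yaml
# Hypothetical: test names would carry a "[GCE]" tag, for example
#   "... should report node status [GCE] [Flaky]"
# and the job would then select them by regex:
- args:
  - --test_args=--ginkgo.focus="\[GCE\]"
  # or keep the tab as-is and skip only the unwanted test:
  # - --test_args=--ginkgo.skip="graceful pod termination"
```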
G: Okay, so is it a freshly new tab where we move everything, or do we start with only this flaky one?
A: The issue is that this tab includes this graceful pod termination thingy, and it's under the GCI flaky tab, and I don't think we need it there. So we want to keep all these tests, but we don't want this one.
A: Yeah, I thought only one test was from the GCI subfolder, but there are a few, yeah.
G: Yes, it doesn't look like it, but I think it's not filtering by GCI specifically; it's just using the GCI image for this. So it's running everything.
A: We're specifically looking at this tab right now, node GCI flaky, and the problem is that the definition of this job just searches for flaky tests across the entire tree. That doesn't sound right, because supposedly it was designed to run the flaky tests for the GCI tab, not...
F: Before, those flaky tests, I saw they always failed for quite a while; this one part I'm not sure about.
A: Flaky means it fails inconsistently, but, I mean, I don't know why it's "GCI flaky". My assumption, based on the name, was that it's supposed to be tied to the GCI folder, and all the tests there should be running here.
G: But should this run all the tests, Morgan, or should we have a subset that's specific to GCI, whatever that means?
F: Yes, that's usually because of the Google Cloud image, yeah. This one I also wanted to change in the open source community: basically, I want to change all the GCI references to COS now.
A: So, Morgan, I mean, we're already running everything on COS, so this assumption doesn't seem to be right. The assumption that the GCI job just runs all the tests on a different image doesn't seem to be correct, and there is a folder specifically for GCI tests, as far as I remember.
B
I'd
have
to
I'd
have
to
see
the
the
test,
the
the
proud
definition,
the
job
definition
then,
and
then
also
it's
under
container
d.
So
I
haven't
been
looking
at
the
container
d-dab
at
all,
so.
A
Okay,
so
should
we
create
a
separate
work
item
and
just
discuss
it
there
I
mean:
can
you
follow
up
on
that
and
create
a
issue?
Yeah
sure,
thank
you.
Okay
and
next
one
is
yeah
escalation
to
signal
asking
about
different
node
conformance
and
not
master.
It's
still
unclear
for
me
what
the
difference
and
why
the
difference
I
don't
like.
I
know
that
just
conformance
tests
are
on
for
to
check
conformance
of
nodes,
but
I'm
not
sure
with
not
conformance
in
the
same
way.
E
So
from
the
from
the
dock
that
dawn
referenced
in
the
github
issue,
it
basically
sounds
like
it's
the
the
set
of
tests
that
should
be
agnostic
to
container
os,
and
all
of
that,
at
least
that's
that's,
that's
the
intention,
and
the
intention
is
that
it
will
run
after
with
every
pr,
but
it
seems
to
be
pretty
far
from
that
right
now
and
it
also
seems
to
be
pinned
to
running
with
docker.
E
So
I'm
not
so,
it
seems
like
there's
like
a
proposal
from
2018
what
it
was
supposed
to
be,
and
it
seems
like
it.
It
never
got
fully
moved
to
there.
So
I
don't.
I
don't
actually
know
what
the
what
the
intention
is
in
terms
of
like
how
how
how
we're
going
to
achieve
you
know
that
so
it
is.
It
is
agnostic
when
it's
currently
pinned
to
docker.
E
Yeah,
so
at
the
in
inside
the
github
issue
I
can
I
can
post
it
post
the
doc
in
chat
all.
A: Okay, context: we were trying to understand the difference between node conformance and node master, and we asked Dawn. She pointed to the document that you shared before, saying that this document explains the difference.
E: Yeah, so, I mean, it seems like we maybe need more information from Dawn, specifically as to where this stands, and then basically either we figure out how to get the situation to where it's supposed to be, or we decide that it's not a priority and act accordingly.
E: The doc I pasted basically talks about the node component tests and specifically calls out running in Docker, and those don't seem to match either. I know you put this on the agenda for the last few meetings, and we didn't get anywhere close to talking about it. So if it doesn't get talked about in Tuesday's meeting, then I think we probably need to reach out to Dawn directly and just have that conversation.
A
Okay,
I
will
take
an
action
item
to
follow
up
with
her
and
and
this
document
that
you
just
mentioned.
Second,
one
is,
I
think,
it's
about
conformance
test
that
is
not
node
conformance,
but
just
conformance.
A: Thank you; correct me if I misunderstood. Okay, this is it, and the next one is, yeah, the lane documentation. I'm still in the process of understanding where to land it and what to add; sorry, I didn't move it very far along.
A: Okay, I'm organizing the new items. I wanted to point out that we made progress on these action items here. They are all waiting for approvals; we have like four PRs to look at, so please do. Most of them just need an approval, so I will post them on the node Slack for Derek's and Dawn's attention.
A: Cool, yeah, go ahead. It's you, right, Morgan?
B: Oh yeah, I just wanted... last week the action was, you know, to make an issue for this, so I made an issue for it. What did I want to say about it? Nothing, really; it's just that this is the one that's sort of the last black mark, the last red, failing test on the node kubelet serial job, and it is keeping that entire tab from being blue, or success, or whatever you want to call it. So we're very close.
B: I come back to it every once in a while and poke at it, but I'm just not really sure how to debug it, because it seems like there's probably some kind of... either the tests break themselves by all running in the same job, or... you know, I don't know how to get started there.
A
Is
there
some
administrative
tasks
that
you
need
like
when
is
last
succeeded
or
that
kind
of
investigation,
or
you
already
did
all
that.
B
Well,
you
can
see
from
the
it's
actually
in
the
issue.
There's
a
there's,
a
there's,
actually
a
a
metric
that
is
collected
and
like
this
last
succeeded
like
990
days
ago,
or
something
ridiculous
okay,
and
so
it's
not
now
I
don't
think
it's
been
like
it
right
now.
It's
like
all
fails.
B
I
look
at
so
many
test
grids,
it's
hard
to
remember,
but
I
feel
like
this
one
was
maybe
intermittently
flaky
failing
and
now
it's
just
like
totally
her
splat,
but
I
could
be.
You
know
you
look
at
so
many
test
grids.
You
can't
really
be
sure,
so
it
might
just
be.
It's
been.
It's
been
blown
up
for
three
years
at
which
point
well.
This
is
the
only
place
these
tests
get
run.
So
how
much?
How
much
worth
are
they
being?
How
much
worth
are
they
providing?
B
If
nobody
is
looking
at
them,
and
if
nobody
cares,
I
guess,
but
they
seem
important
like
it's
serial
disruptive,
it
seems
these
seems
like
things
that
are,
you
know,
really
check.
I
know
the
dynamic
cube,
I'm
looking
at
them.
I
know
dynamic.
Config
can
probably
go
away
for
the
most
part.
I
know
that
you
know
obviously
legacy
docker
can
probably
go
away.
A: Okay, yeah, if somebody has time, please jump in and try to help here.
B: Well, disruptive should mean that it actually, like, changes the kubelet, but slow should just be slow. That's an interesting question, though.
A: The reason, like, the question I asked, you know, is: there is a pull request that I started on RuntimeClass, and there is a test that adds a taint on a node.
A
This
paint
when
test
completed
so
it
was
marked
as
disruptive,
even
though,
like
you
can
just
do
it
serial
right
or
serial.
I'm
sorry,
you
can
okay,
okay,
okay,
yeah!
Sorry!
I
something
clicked
in
my
head,
it's
monday,
so
I
marked
it
serial.
Instead,
because
I
mean
it's
supposed
to
be
working
fine,
it
will
remove
it.
So
it's
the
paint
it
creates
on
a
node
yeah.
So
I
wonder
if
there
is
any
anything
I'm
missing
here.
Oh.
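For reference, a minimal sketch of the kind of taint such a test might apply and later remove; the key, value, and effect here are hypothetical, not taken from the actual PR:

```yaml
# Hypothetical taint as it would appear in the Node spec while the test runs.
# A [Serial] test can apply this and remove it again during cleanup;
# [Disruptive] is usually reserved for tests that restart the kubelet or
# otherwise leave the node changed.
spec:
  taints:
  - key: example.com/runtimeclass-test   # hypothetical key
    value: "true"
    effect: NoSchedule
```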
A: But I cannot imagine what it could be, like how you could get the cluster into that situation. Maybe if there's some crash of the pods, or a crash of the kubelet.
C: So, okay, one thing: we are working on this RuntimeClass-to-GA, right? I just wanted to know how much time we have, like...
A
So
enhancing
like
in
to
approve
ga
for
enhancement,
it's
six
of
october
and
then
quote
complete
in
somewhere
in
either
late
october
or
november.
I
don't
remember,
but.
A: Derek promised to review. We just need him to review this document and say yes, it's reasonable: that we don't have any blocking issues or features that we need to implement, and that we have enough code coverage and documentation. If these two conditions are satisfied, then it's supposed to be fine to go to GA.
A
Okay,
if
there
is
no
more
topic
topics,
we
can
finish
this
meeting
today
and
thank
you.
Everybody
have
a
good
week.