From YouTube: Kubernetes SIG Node 20201109
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Hello and welcome to the CI group of SIG Node. It's Monday, November 9th. Good morning, everybody, and good day, whatever time zone you are in. Let's get started. So there is a first agenda item with no name on it.
B
I can talk about it. So, basically, the node conformance tests were failing for a couple of weeks. Morgan figured out that it was related to changes made to support insecure ports; the token just wasn't plumbed through properly.
B
So Morgan made the change to plumb it through properly, and that fixed the test, so the main issue is resolved. The test output is a little funky, though. What usually happens is that the OS is inserted into the test file name, but there's a parameter of that call that is not included for the node conformance tests, which basically means they're all missing this "ubuntu" or whatever.
B
I kind of figured that part out, but I wasn't able to figure out how exactly that gets set. So we can either keep tracking this under a different issue or... anyway, that's the main thing. The main issue is solved, but the output looks off.
B
To take a step back: if you go to Testgrid, you'll see that all of the test runs have the grid of all the tests, and there's a number as part of each test name. Usually those all line up properly, but in this case they don't, so it looks like there's a bunch of different tests that run and run, because they are essentially overriding each other.
B
So it's not a huge deal, but it does cause some problems there.
A
Okay, so, I mean, if you want to track it in the same issue, it's fine, I guess. Are you working on it?
A
So the containerd jobs are failing as well. I can speak about it a little bit. Derek from containerd changed the test input, the test YAML file, from the CRI one to a containerd one, and that somehow broke the test. I think it's just because the YAML was not downloaded properly or something. He promised to take a look, but two weeks have passed and he hasn't, so I will ping him again; if not, I will take it on myself. So this is here.
A
Okay, so this is... this one? Oh, it's the same.
A
Okay, does anybody know about this item? Going once, going twice. Okay, let's take a look at the NPD one, then. Can you update everybody on what's going on? Sure.
C
Oh, thank you; just want to make sure. So the NPD tests seem to have been failing since, I believe, October 22nd or 23rd, something like that. So it's been a couple of weeks, and there have been a couple of changes trying to fix it, specifically around the service account admission webhook (admission controller, sorry), but those don't seem to have fixed the test. One of the latest fixes is for the service account webhook, but that didn't fix it.
C
If you scroll down in the pull request, I'm trying to run it with the patch, and where I got was basically this: the test does a bunch of setup and then creates the NPD pod, but within that pod the command it runs is to touch a file and then run the container. It gets as far as touching that file, creating that file, and then the container just exits without any logs.
C
There are no logs in the kubelet for why it's not running. There are no logs in the Docker runtime or the containerd runtime, and there are no actual container logs on the file system. Weirdly, with the same file mounts and everything, I can run the container with dockerd manually, so we know the container should run.
C
The pod doesn't actually run, so this is going to block any NPD (node problem detector) changes; it's one of the pre-submit tests. I think I've exhausted all options from my end, but I'm hoping someone else can provide another set of eyes.
C
It's possible there's something pretty obvious that is wrong with it.
C
This test only creates the NPD pod. The pod runs, because it is creating the file that it's supposed to; it's basically "touch a file and run NPD in the container". So the container is being scheduled and it's being run, but then, for some reason, NPD itself is exiting without any logs.
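The behavior described here (marker file created, then the process exits silently) corresponds to a container command of roughly the shape below. This is a hypothetical sketch: the file paths, flags, and monitor config are assumptions for illustration, not the actual test's configuration.

```shell
# Hypothetical sketch of the NPD test pod's container command.
# All paths and the monitor config here are illustrative assumptions.
# 1) touch a marker file so the test can confirm the container started;
# 2) exec node-problem-detector so its exit status becomes the
#    container's exit status.
sh -c 'touch /var/log/npd-started && \
       exec /node-problem-detector --logtostderr \
            --system-log-monitors=/config/kernel-monitor.json'
```

In the failure described above, the first step succeeds (the file appears) but the second exits with no output, which is why neither the kubelet nor the runtime logs point to a cause.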
D
The test doesn't run under a separate lane; like, should I run it? It does not run automatically.
D
Yeah, so after Roy provided me some DaemonSet configuration that should add the additional kernel arguments, I tried yesterday, in interactive mode rather, just to see how I can inject it. In the end I saw that I succeeded in injecting it, but it failed to really enable fake NUMA, because in the end, to enable fake NUMA, you need to compile your kernel with additional config options, like CONFIG_NUMA and the NUMA emulation config.
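For context on what "compile your kernel with additional config options" entails: Linux NUMA emulation is a build-time kernel feature plus a boot parameter. A minimal sketch, assuming x86 and a kernel and bootloader you control (which, as discussed below, is not possible on stock COS images):

```shell
# Kernel build options needed for NUMA emulation on x86:
#   CONFIG_NUMA=y
#   CONFIG_NUMA_EMU=y
#
# With those compiled in, fake NUMA nodes are carved out at boot via
# the kernel command line, e.g. split memory into 4 emulated nodes:
#   numa=fake=4
#
# After reboot, the emulated nodes appear under sysfs:
ls /sys/devices/system/node/
```

This is why a pre-built image with these options baked in is needed: the options cannot be toggled at runtime.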
D
So the question for me is how I can proceed from this point, given that we want to test multi-NUMA.
D
We want to run multi-NUMA tests under Kubernetes CI, because we have, I think, two or three components, like the device plugin, the topology manager, the CPU manager, and in the future the memory manager, that it would be great to run tests for on multi-NUMA-node machines. But, to be honest, I don't currently have any additional ideas for how to achieve it, because I did not find any parameters related to additional CPU topology.
D
Nothing like that can be done on a GCE instance. For example, I know that under QEMU and libvirt you can just configure a NUMA topology that will be automatically available, but no such thing is available on a Google Cloud instance. And, like I said before, to use fake NUMA we need an image pre-compiled with these config parameters. So, if anyone has additional ideas: how can we test multi-NUMA components?
C
Yeah, I just wanted to add: because the tests are running on COS, and COS doesn't have that kernel config set and you can't change it at runtime, pretty much the only option, if you want to go with this path, is to talk to the COS team. And I think I saw internally we were looking into it.
F
Yeah, I checked the kernel config, and CONFIG_NUMA is already set; the only missing part is the CONFIG_NUMA_EMU option. We need to discuss internally whether to enable it; enabling it would also need a command-line change, right, basically like the numa=fake flag. I would try to persuade the team, whether it's good for the next milestone or release.
F
For this one, I think we have an internal bug tracking it. If the team agrees, most likely we can roll it out in a future release, pretty soon. That's one update from us. But if you need to change the command line, I think that's not the philosophy COS promotes, because that endangers security. And even with this option, if the VM has secure boot, you still cannot do the test; but this config is required.
A
Roy, do you have an idea? So this may or may not be accepted and released in the next version. If we know for sure, it would be great to have an update; if not, are there any alternatives you can think of for how to run the tests in open-source Kubernetes?
F
I think without this option, mostly, you cannot fake the NUMA; you'd have to patch the kernel, and then you have to write a... yeah, that's what I found from my research. Luckily, for this fake NUMA feature, the guy who wrote it, David, is a Googler. I can always contact him to see whether there are some security implications.
F
Yeah, I think maybe in the next two weeks. I already filed the bug, and I included the Googler in that email.
A
Yeah, I think it's easier for you to just wait for an update here than to investigate other options.
G
Hey, can you hear me?
G
Great, great, awesome. So I contributed another fix, which we found internally, and I think this fix also makes the test flow more robust and easier to follow. However, I will not hide that we are hitting a nasty interaction between the SR-IOV device plugin and the kubelet. I'm still trying to narrow it down; I can't say if it's the environment, something specific to the box, or something we will really need to track down.
G
I think I shared another Bugzilla, which is related, in the PR, so I think overall the PR is worth reviewing. There is some benefit, but there is still more to be found in this area. Also, I added a couple of lines to explain why, if you just run the lane, it will not fail.
G
The reason is that this part of the test is skipped unless you run on a box, virtual or real hardware, which has at least two NUMA nodes and some SR-IOV devices, because we want to actually test the topology manager alignment. So I added this explanation.
G
So, basically, a review request: please, people, when you have time, review, because I think it's good to have some discussion around this PR. That's it.
A
Okay, and is it failing now in Testgrid, or is it just an improvement?
G
It is an improvement, because in Testgrid you don't hit this code. If you look at the code in the location I linked here, you will see that part will be skipped in Testgrid. So Testgrid is going to be green, but green because that part is going to be skipped.
G
No, we need any device which is recognized by the kubelet, which has a device plugin. SR-IOV devices happened to be the easiest to get, but any device (we would, of course, need a code change) which has a device plugin is good enough.
A
Yeah, I'm just curious about the history of this test: why do we keep a test in the Kubernetes repositories that we don't intend to run on Testgrid?
G
We want to... no, no, wait, wait, this is a part of the test. Well, I can tell you the whole story, because I was there from the beginning. The intention was: okay, eventually we will have a test lane, and this is a conversation that Artyom, who is also on my team, started. So we wanted to add that, but getting there is taking some...