From YouTube: Kubernetes SIG Node 20210303
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Hello, it's the SIG Node meeting, March 3rd, 2021, welcome everybody. We don't have many agenda items today again, so let's get the ball rolling with the containerd test changes and the fixes that we need to make. The containerd tests have been failing for a long time, and I was trying to come up with a plan for which tests we actually need to run, because right now we are running containerd 1.2 and 1.3, and that is clearly not what we want to do.

A: Dims here has also been investigating some failures in the containerd tests for 1.5, right? So Dims, if you can give us some context.
B: Okay, so the story goes like this: in containerd we are trying to get a 1.5 release out, and if you look at the milestones there are plenty of things here that we need to work through.
B: Now, what happens in the containerd project is there is a prow job. Most of the jobs are GitHub Actions, and there is actually an OpenLab thing too, but for us the main two jobs are the build and the node e2e. Now, this node e2e fails, right? That's the observation we saw, and I was clicking into it and going through the artifacts.
B: One of the things that I ended up looking at was, I'll show you, there was an issue with pulling the right directory structure. They changed the directory structure of the build packaging, so there's an extra containerd directory in there now. So I fixed that, well, I was trying to fix that, so there's a change with author dims, and that's trying to fix the failure.
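To illustrate the kind of fix being described, here is a minimal sketch (not the actual test-infra or containerd packaging code) of tolerating both tarball layouts: the old one without, and the new one with, the extra containerd/ directory. The paths and the helper name are assumptions for illustration only.

```go
// Sketch: locate the containerd binary inside an extracted release tarball,
// tolerating both the old layout (bin/containerd) and the new layout with an
// extra containerd/ directory (containerd/bin/containerd).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// findContainerdBinary returns the first candidate path that exists under
// the extracted artifact root.
func findContainerdBinary(root string) (string, error) {
	candidates := []string{
		filepath.Join(root, "bin", "containerd"),               // old packaging layout
		filepath.Join(root, "containerd", "bin", "containerd"), // new layout with extra directory
	}
	for _, p := range candidates {
		if _, err := os.Stat(p); err == nil {
			return p, nil
		}
	}
	return "", fmt.Errorf("containerd binary not found under %s", root)
}

func main() {
	p, err := findContainerdBinary("/tmp/containerd-artifacts")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using containerd at", p)
}
```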
B: So once I was able to do this, it got past the problem where there was a 404 trying to pull the artifacts, and then it fails somewhere else, and that's where the problem is. I haven't been able to track down where it is; I'm quickly digging through it. Let me show you where exactly I ended up, so the serial log.
B
Is
not
bad
there's
not
not.
One
thing
you
can
see
here
is
failed
to
start
execute.
You
know
the
scripts
right.
So
that's
that's
a
problem
so
going
to
the
system.
Log.
B: Okay, maybe not here; let's go back to the build log. Okay, and it failed.
A: Yeah, I remember somebody investigating a similar thing, and the issue was the path to the script that needs to be executed. I don't remember the details, so I can try to dig into that.
B: All right, I think there was possibly one more thing, but if you do start looking into it, then I will take out my notes and try to see if I can see exactly the same things that I saw before in the previous log. And so one of the things that I ended up trying was the GCP project here.
B: If you look at this one, the cri cat pr node e2e project, this project is used only in the containerd project. Sorry, the CI jobs that we have in k/k that use containerd run on another GCP project, not this one.
B: So I did see some activity about the containerd failures in our repository. Who was looking into that? Did you look into that yet, Sergey, or was it just... no?
B: If we fix either side, then the other one will be easy. If both sides are broken, then it's difficult, especially because none of us outside Google have access to any of these projects, as far as I know.
B: Right, yeah, you need to ping the people who have the access. So one failure that I was seeing was a master-ip one, I believe. Let me see if I... yeah, see this one, and there is nothing that I can search for in the k/k code base or the containerd code base that has anything to do with this specific resource.
B: So maybe this is where we need to start trying to figure out if there is something wrong with the GCP project. I remember at one point some of the projects were locked down because of a security concern or something like that, and it took a while for people to figure out that that was the reason the projects were not working.
B
So
I
think
what
would
workers,
if
you
can
tap
aaron
or
other
people,
to
go,
look
at
these
projects
and
see
if
they
are
healthy
and
resurrected
resurrect
them?
If
not.
A: Okay, I will do that, but I still have a feeling that this is caused by the renames and the directory structure changes. But yeah, I can definitely do that.
B
But
just
to
capture
it
for
posterity.
This
is
what
I
was
looking
at
in
the
main:
build
log
search
for
master
dash,
ip
okay,.
C: Just a quick, sorry, Dims, just one quick point. Did you take a look at the image config? Because I'm just looking at it right now and I do see that, as part of the cloud-init, there's some containerd-specific, like CRI YAML stuff, that it downloads, and it looks like it 404s now, because in containerd 1.5 they moved the CRI plugin into the main repo.
C: I mean, there's so much, so I'm not exactly sure which image config, but notice the link I posted, if you can find which image config it's using, because I do notice that as part of the metadata it links to some e2e node YAML in the containerd repository, and if I go to that now, it 404s. So I'm...
B: That could be part of it, but I remember fixing these references when we collapsed the containerd CRI repository into the main repository. But yes, I will go look at it as soon as we are done with this call. Thank you, David.
A: Okay, let's switch to triage. Let me share my screen.
A: So I added all the new issues and categorized them really quickly. There is nothing that I failed to triage, and I could figure out where each one goes very fast, so most of them go into To Do. I see very few that are assigned to people, so if you're working on one, please just move it into In Progress. Arjun, you're here, right?
A: Okay, let's move here.
A: So, In Progress.
A: So let's go through the review column; there are nine PRs that need review, and we assigned most of them last week. So let me go through them again.
A: Okay, this is interesting. This is adding an n-minus-two test, so that we are actually testing a kubelet that is two versions behind the Kubernetes master control plane, rather than only testing the control plane two versions forward and not the kubelet two versions back.
D: Yeah, so the problem that we encountered is this: we restarted the kubelet, and because the e2e node server is also monitoring the kubelet process all the time, it restarted the kubelet again, and after it restarted the kubelet we created the pod. So we hit the situation where the kubelet receives the pod after it was restarted, and the pod dropped to the Failed state because it has the "do not restart" policy, and because of that the test failed.
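For context, here is a minimal sketch of the kind of pod involved: one with restart policy Never, so that if it fails around a kubelet restart it stays in the Failed phase instead of being retried. The pod name, image, and command below are placeholders, not the actual test fixture.

```go
// Sketch of a test pod with RestartPolicy Never; if it fails around a
// kubelet restart it remains in the Failed phase, which is what the test
// was tripping over. Names and image are illustrative placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func testPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "skew-test-pod"},
		Spec: corev1.PodSpec{
			// Never restart: a failure around the kubelet restart drops the
			// pod to Failed and it stays there.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
}

func main() {
	fmt.Println(testPod().Spec.RestartPolicy)
}
```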
D: So the one or two things that I did in the pull request: I do not restart the kubelet, I just stop it, and after that I wait until the e2e node server restarts the kubelet by itself, and I wait until the health probe for the kubelet succeeds, and only after that do I create the pod.
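A minimal sketch of that flow, under stated assumptions: stopKubelet and createTestPod are hypothetical stand-ins for the real test helpers, and the health check polls the kubelet's default healthz endpoint on localhost:10248. This is the shape of the fix (stop, wait for health, then create), not the actual PR code.

```go
// Sketch of the restart-tolerant flow: stop the kubelet, let the e2e node
// server bring it back on its own, poll the kubelet healthz endpoint until
// it succeeds, and only then create the test pod.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func stopKubelet() error {
	// Hypothetical: the real test would stop the kubelet service here.
	return nil
}

func waitForKubeletHealthy(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://127.0.0.1:10248/healthz") // default kubelet healthz port
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("kubelet did not become healthy within %v", timeout)
}

func createTestPod() error {
	// Hypothetical: create the pod via the test framework / API server.
	return nil
}

func main() {
	if err := stopKubelet(); err != nil {
		panic(err)
	}
	// The e2e node server is expected to restart the kubelet by itself.
	if err := waitForKubeletHealthy(2 * time.Minute); err != nil {
		panic(err)
	}
	if err := createTestPod(); err != nil {
		panic(err)
	}
	fmt.Println("pod created after kubelet became healthy")
}
```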
A: Yeah, sounds like an easy, straightforward fix to review. Anybody want to review it?
A: Who is this person who reviewed it? Oh yeah, cool.
A: Thank you, yeah. By the way, one of the things that we will need to do is to start removing dynamic configuration from the tests, so we will probably end up restarting the kubelet a lot to pick up new configuration and flags. So maybe that will give us some ideas on how to do it better.
A: Dynamic configuration is one of those features that was introduced a long time ago, and it's still in either alpha or beta, I don't remember. The idea was that we would have some subset of the configuration that can be changed dynamically without a kubelet restart, but nobody is using it, and most people were restarting the kubelet anyway, so the suggestion was that we remove this feature instead of graduating it to GA.
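For reference, dynamic kubelet configuration worked by pointing a Node's spec.configSource at a ConfigMap holding a kubelet config file; the field has since been deprecated. A minimal sketch of that wiring, where the ConfigMap namespace, name, and key are placeholders:

```go
// Sketch of the dynamic kubelet configuration wiring: the Node's
// spec.configSource references a ConfigMap containing the kubelet config.
// The namespace, name, and key below are illustrative placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func dynamicConfigSource() *corev1.NodeConfigSource {
	return &corev1.NodeConfigSource{
		ConfigMap: &corev1.ConfigMapNodeConfigSource{
			Namespace:        "kube-system",
			Name:             "my-kubelet-config",
			KubeletConfigKey: "kubelet",
		},
	}
}

func main() {
	node := corev1.Node{}
	node.Spec.ConfigSource = dynamicConfigSource()
	fmt.Println(node.Spec.ConfigSource.ConfigMap.Name)
}
```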
A: Okay, this one is similar to what we discussed.
B: I reviewed this one this afternoon; it was very straightforward. I think I just put in a small comment.
D: Yeah, I will refactor it a little bit because of the fix that I had for the huge pages test, so I want to refactor it the same way for the memory manager, just to prevent any additional issues with the kubelet restart. So I will just ping you once I do.
A: Thanks! Okay, we only have four PRs to review now, once you all review those. We now have a lot of flaky test issues in To Do. We didn't create them before because we already had To Do items, but now we are running out of things to do, so there are many things that we can pick up from this list. If you're picking something up, move it to In Progress, or we can just move it to In Progress next time we meet, but yeah.
E: Yeah, I did, I reviewed one item. The process to join the Kubernetes organization is still ongoing, so the item is probably in the wrong state. I did a couple of reviews, so I think I will pick up another task from the column in the coming days. At the very least, things can't be assigned to me right now, so it's a bit confusing, but until the process to join the org is completed, it is what it is.
A: Oh, no worries. I mean, I was sick most of last week, so I also didn't do much. There's always something happening; we're trying to do our best.
D: Today I have a small update regarding the serial job, because one of the CPU manager fixes was merged last week. So I checked against the serial job, and what I found is that the job timed out because it has some out-of-memory issues; I found errors in the kernel log. So my next step will probably be to increase the memory for the serial job a little bit.
D: I believe it runs with one gigabyte of memory, so I want to use a different GCE instance type with two gigabytes. I hope we will at least have a better picture after this.
A: Okay, great. And I mean, this just started; I remember the serial job wasn't like that before. Do you know if somebody changed the machine type, or...