From YouTube: Kubernetes SIG Node 20210526
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Okay, let's start. Good morning — it's the SIG Node CI subgroup meeting. Welcome, everybody. It's May 26, 2021. So, on the agenda today: Francesco, you're our first item, and then the others.
B: There is some initial good progress. I have a machine which can run those tests — not 24/7, but still, it can run them. It failed after 24 hours and I collected a full log, so we have a backup plan — well, a fallback — to understand what's actually going on in CI. I'm sharing the logs and all the parameters for the CI run.
B: For the test run, this machine could not be put online — it's very impractical to put it online so that, for example, whoever needs those logs has them easily accessible. So I need to publish them somewhere. Unfortunately it's not really interactive, but it's something we can build on to understand what's actually going on. And this is it — you'll see there are five megs of logs, so one needs to read them to understand what's going on and really debug. But I wanted to share them first, so we can go in parallel, iterate from there, and eventually fix the CI lane. So this is the plan; this is basically what I'm up to. If anyone wants to reach out — maybe try this, try that, let's have a look at the logs — I'm all ears, and I'm happy for the help. It will take some time to understand what's going on. There are a few suggestions flying around, for example from Benjamin, but it's going to be a while.
C: Oh, okay. I saw that there were a couple of PRs to help with this test. There was one to increase the number of — I guess — open file handles that the API server can have, because it was running over SSH or something like that, which was causing it to get artificially limited super low.
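For reference, the kind of limit being discussed is `RLIMIT_NOFILE`, the per-process open-file-descriptor cap, whose soft value can be inherited low from an SSH session. A minimal sketch of lifting it from inside a Go process on Linux — `raiseNofileLimit` is an illustrative name of mine, not code from the PR being discussed:

```go
package main

import (
	"fmt"
	"syscall"
)

// raiseNofileLimit lifts the soft open-file limit up to the hard limit
// and returns the resulting soft and hard values. Raising the soft
// limit up to the hard limit requires no extra privileges.
func raiseNofileLimit() (uint64, uint64, error) {
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		return 0, 0, err
	}
	lim.Cur = lim.Max // soft := hard
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		return 0, 0, err
	}
	return lim.Cur, lim.Max, nil
}

func main() {
	soft, hard, err := raiseNofileLimit()
	if err != nil {
		panic(err)
	}
	fmt.Printf("RLIMIT_NOFILE now soft=%d hard=%d\n", soft, hard)
}
```

An inherited soft limit of a few hundred descriptors is easily exhausted by a local API server holding many watch connections, which would match the "artificially limited super low" symptom.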
C: Although I don't know if there's any way to verify that — but there was a claim. There was another one that will catch the first SIGINT that it receives, so that in CI it will upload artifacts. So I had not —
B: Yes, I did — I reviewed, I think, both of them, or at least the double-SIGINT one. And yes, Odin is helping a lot, but they are pretty much shots in the dark, because we don't really know what's going on yet. My take is that both of them are helpful. I'm not super happy with the one which ignores the first SIGINT, but Odin is well aware that it's not the definitive fix. To get out of the woods we need to start from somewhere, and I agree it's a good enough first step.
B: I've just started running the very same command line the infra — the CI lane — is doing, because I hope to get the complete picture. Once that is up, we can look at it, and I will start trying to pinpoint the failures and iterate from there.
B: Okay, the machine — first of all, a bit of context to put things in perspective. I have a pretty old machine, but it still has like 12 cores, SSDs, and a few gigs of RAM, and it timed out after 24 hours. I honestly can't tell yet whether it got stuck somewhere and then timed out — so it may have been sitting idle for like 10 hours, I don't know — but that's the time frame we're talking about. So I'm not sure six hours is enough anymore.
A: Perfect. I think Mike was going to try and look into serial — Mike, are you here?
A
Yes,
I'm
here
yeah,
so
yeah,
please
take
a
look,
looks
take
a
look
at
logs.
Look
at
looks,
yeah,
yeah
I'll,
take
a
look
at
them.
A: Perfect. Any other topics? I know, Elana, you're enabling some serial tests on presubmit — I wonder how that's going. A long time ago there was a discussion about whether we need to give machines to people, just to have some machines available for node tests, and that discussion never went anywhere, because it's really hard to work out who would own those machines and how we'd run them. And I was always wondering why test-infra wouldn't just run the tests...
C
On
demand,
I
think
it
was,
I
think
it
was
honestly
just
an
oversight,
and
I
submitted
a
pr
to
add
the
jobs
and
then
dims
was
like
well,
I
want
them
to
like
be
on
container
d,
not
docker
and
I'm
like
well.
I
want
this
whole
job
to
be
on
container
d
and
not
docker.
C
I
don't
think
that
we
should
block
it
just
because
the
pre-existing
job
that
we
care
about
and
get
ci
signal
from,
or
at
least
want
to
get
ci
signal
from
happens
to
be
on
docker
right
now,
like
the
flight
we
can
fix
in
the
future,
but
that
shouldn't
block
us
from
making
it
a
pr
able
thing.
So
I
submitted
the
pull
request
for
that
in
test
infra.
I
haven't
like
checked
on
it
in
a
bit.
I
don't
know
if
anybody
asked
me
to
make
any
changes.
C
Yeah
somebody
put
an
lgtm
on
it,
but
it
doesn't
have
an
approved.
C: So I think that's fine. And then Odin asked if I could increase the timeout too, and I'd be fine with that. I think we added a CRI-O one for cgroup v1 and v2, and those are failing, but we don't have one for either containerd or Docker, and that was the one I was trying to add. Let me share the link to the PR in the notes.
A: Do we know what the timeout will be? Like, do you know?
C
Exactly
the
same
as
the
periodic
right
now,
which
I
guess
is
300
minutes
and
odin
asked
if
I
can
bump
it
to
like
420
minutes
to
see
if
that
helps
at
all.
So
I'm
happy
to
do
that.
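In test-infra, a bump like that is a one-line change to the job's decoration timeout. A hedged sketch of what the Prow job entry could look like — the job name below is a placeholder, not the job from the PR being discussed:

```yaml
presubmits:
  kubernetes/kubernetes:
  - name: pull-kubernetes-node-serial-example   # placeholder job name
    decorate: true
    decoration_config:
      timeout: 420m   # was 300m, same as the periodic
```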
C: Thank you. But somebody needs to cancel the hold, figure out the Docker-versus-containerd situation, and we need an approver as well. So that's the status of that — but yeah, happy to go and bump the timeout on that PR.
A: So, the very first item was —
C: I almost wonder if at some point we should consider — the key difference between the two boards that we have is that we're also triaging issues on what's currently the test board. And I know people have given feedback that it's confusing that we have two boards. I wonder if we should put all the PRs on one board — do one board for issues and one board for PRs — because I know that we have not been dealing with, at least for, you know, this sort of thing.
A: I mean, the main difference is that this one concentrates on tests, so it's PRs affecting tests that we're discussing here.
C: So I wonder if maybe what we do is just have a filtered view on the other board for the PRs, because other than, you know, the theme — where the split is — they're very similar in how they work.
C: I think it's not just Renault — I've had other people ask me about that and get confused about it.
A
Okay,
yeah:
we
can.
F
A
Smaller
board
for
tests
specifically,
unless
you
want
to
converge
just
meeting
into
more
than
tests,
we
can,
we
can
merge
in
that
case
yeah,
I
know,
github
is
not
very
flexible
on
how
to
triage
and
how
to
like
merge
things.
A: Every PR has a nice checkmark — this is cool. So, do you want to go through the issues? Anybody want to grab one?
C: That flaky test at the bottom there — I think I dragged that onto the board as a thing we need to triage. I saw there was an email to k-dev about CI signal, and this was one of the ones that had come up on the report and had not been triaged by us, so I figured I'd bring it in for today.
A: Because one time the CI team created an issue for a flaking test and it wasn't actually flaking at all. Does anybody want to take a look?
C: On that particular run that they linked, there are a lot of tests that failed — like 35 — so it could have been an infrastructure failure or something like that, too.
A
Yeah
I
know
that
configmec
was
flaking
on
119
board,
but
flaking
like
once
in
in
a
blue
moon.
So
but
maybe
I
don't
know
whether
this
one
is
coming
from.
I
think
this
master.
G: Hey, yeah, I'm Ryan Phillips. I'm the team lead at Red Hat for node. We're not doing a lot of containerd stuff, so I don't know if we're the best fit for this issue.
A: — and take this issue. I just looked at who's on the call, and — oh.
E
And
I'm
a
member
of
no
team
and
I
joined
the-
I
become
the
kubernetes
member,
like
maybe
two
months
ago,
yeah
just
first
time
joining
this
meeting
just
out
of
here
to
see
what's
happening
thanks.
A: Right, and you both joined on the Memorial Day holiday — I guess you're in the US, so this is nice. Many Googlers are currently on vacation. Okay, let me check.
F
Oh
sorry,
hello.
Please
no
worries,
I'm
peter.
I
also
I'm
on
the
note
team,
usually
working
on
cryo
but
hello,
nice
to
meet
y'all.
A: Cool, okay. Let me check with maybe our team to see who can take this.
A: Yeah, I think so. Let's put the specifics in text and close this out — cool. I think we need it. Does anybody want to start working on organizing stuff? Okay, yeah. Let me ask differently: is there anybody looking for a task, or do we just go directly into triage?
A: Not today, then — I think we're done. Josh? I'm sorry, I was very busy and didn't file the issues I promised to file; I will do that.
A
I
don't
think
it's
blocking
anybody
yeah,
let's
finish
with
a
test
part
of
it
and
let's
go
to
product
triage
as
usual
disclaimer.
We
are
not
like
the
second
part
of
the
meeting
you
can
like.
This
is
a
good
moment
to
drop
off,
because
you
will
stop
recording
and
we
will
move
into
product
backward
cache.