From YouTube: Kubernetes SIG Node 20220420
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Hello everybody, and welcome to today's edition of the SIG Node CI subproject meeting. What is the date today? Wednesday, April 20th, 2022. We have some agenda items for today. The first thing on the agenda:
A: I want to make a brief announcement. I am going to be stepping away a bit for the duration of the 1.25 release, taking a little bit of a Kubernetes break, and so, as a result, for some of the stuff that I work on in the project, including leading this subgroup, I'm hoping to make sure that those things keep moving smoothly in my absence. So I would like to step down from co-leading this group, effective...
A: ...basically now, because, you know, we're coming into the 1.25 cycle, and I'd like to nominate Danielle to step up to replace me to lead the subgroup. Danielle just became an approver in the test-infra repo, congratulations, and Danielle, I hope that this is something that you're excited and/or willing to do. I think you'll do a great job, and I think you've been very active. So yeah, that is my announcement.
A: Thank you. Yeah, I've been going through the list of things in my backlog and there are so many things, so I'm trying to spin off as many as I feasibly can to make sure that they don't get dropped. I guess I've been doing this for a little over a year, and I'm very excited to be able to pass on the hat and make sure that stuff keeps going, and not just, you know, sit here and do all the work.
C: Thank you, Elana. Thank you. I'm not ready to make any announcement, but I might be taking some time off as well during the 1.25 timeframe for personal reasons. But yeah, welcome, Danielle, in the new capacity.
A: I don't have anything I'm going to maintain, probably; like, I won't go approver emeritus or anything like that. This is mostly just... subproject leads are weird, it's kind of a very informal thing, and so yeah, I just wanted to make sure that that doesn't get dropped, and I'm very excited to be able to see others grow and get the opportunity to lead.
C: Cool, okay, next topic. This is from Jim, who started this effort of analyzing all the CI jobs across everything related to Kubernetes, and he published this document with many, many jobs that are failing for many days. Some of them are deceptive, like 15 days, but it may be that they're failing all the time; it's just that we only have results for 15 days. So it's not a surprising list.
C: I think we're tracking all the SIG Node related tests already, and we know about those: you know, the reasons, and why nobody is looking at that, or who is planning to look into that, that kind of thing. But yeah, I think we found a couple that are surprising in this list. I think I pinged Brian, Brian McQueen, right, on Slack. Okay, so yeah, one of the jobs I found interesting here also was... containerd is in the name.
C: Surprisingly, I mean, it's not a SIG Node job, but it's somehow related, so maybe interesting. I don't know, do you want to...
E: Yeah, I just want to clarify the effort here. Basically, the initial focus of that initiative is jobs failing for more than a year, because we have jobs failing for three years non-stop. So the main focus is basically all the jobs failing for more than a year. Everything under 90 days is not really relevant in that effort, so we don't want to spread the effort thin; we want to focus on long-term or perma-failing jobs.
C: Is there any tracking of past failures of the jobs?
E: So basically, what Jim is trying to do here, and this is my understanding, is reach out to the different SIGs and ask them to care about their jobs. Like, you need to basically go through the Testgrid dashboard, check your jobs, and see what's failing for more than 90 days.
C: I see, okay. So I think we have most of the jobs tracked already, so SIG Node is in good shape. But yeah, if you're interested: I started systematizing these tests, like some Windows ones, something like Azure and such, but I haven't finished it yet. So maybe when I have a more systematic list of extra jobs that may be relevant to the SIG Node group, I can publish it somewhere on Slack.
C: Makes sense. Okay, so yeah, this is it. If you're interested in looking at some extra jobs, please do, but I think on our board we're already tracking quite a few failing tests. So I mean, if you're interested in SIG Node specifically, you can look at existing issues as well.
C: Okay, next one. Yeah, this is interesting: I see cri-tools. I don't think we own cri-tools as a... I think as SIG Node we own cri-tools, but we never discussed it in this group before. Apparently all the CI jobs there were failing for a long time, and I wonder... and this is not part of Prow; all the jobs here are controlled by GitHub Actions, GitHub workflows.
C: So I wonder if we want to bring this back into this group at some point to extend the scope, the same way as we have some interest in cAdvisor, some interest in NPD, some interest in cri-tools. So maybe this is something that we may consider adding back.
C
No
not
much
interest.
Okay,
let's
see
like
I,
I
will.
I
will
double
check
with
the
maintenance
of
this
see
right
tools
like
I
don't
see
much
activity
there
and
it
doesn't
break
too
often.
So
maybe
we
don't
have
to
discusses
in
this
white
forum,
not
everybody
interested
in
sierra
leone.
F: So Sasha is on our team at Red Hat, and so I can talk to him about this one.
C: Yeah, and my question was: do you want to bring it to this group and track it here? But I think if there is not much interest, you can just handle it among the maintainers of that repo. Yeah, sounds good. Okay. So what about... what is this one? Okay, so this is the release-blocking bug. Just to give a small update here: we have a release-blocking bug with the summary API where network stats are empty, and it's flaking, so we are trying to reproduce it.
C: We have quite an easy way to reproduce it on regular PRs: if you just open an empty PR and keep retesting this job, it will fail at some point. On the PR with a little bit of logging it fails; on the PR with extensive logging it has never failed yet. So I think we need to start narrowing it down and removing logs from it; so far it doesn't...
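For reference, retriggering a presubmit on a Kubernetes PR is done with Prow comments; a minimal sketch of what "keep retesting this job" looks like in practice (the job name below is illustrative, not necessarily the job under discussion):

    # Comment on the pull request:
    /retest                                     # rerun all failed presubmit jobs
    /test pull-kubernetes-node-e2e-containerd   # rerun one specific job by its Prow name

Repeating the /test comment enough times is what eventually surfaces a flake like this one.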
C: I mean, it's very unclear why it's happening, and code analysis didn't give any good results yet, so it seems something bad is in the infrastructure, because it seems that the code is working as expected. But yeah, I think we need to start breaking these big log statements down into smaller log statements and try to reproduce it. I remember Mike was saying that he was able to reproduce it locally. Mike, did you try it again?
C: I tried it again, but I wasn't able to reproduce it again. Yeah, this stuff is weird. I can give it another shot.
C: Yeah, it seems that it reproduces... I mean, not easily, but at least one time out of 10 it will fail for sure.
C: Okay, and yeah, this is Danielle being promoted to approver, yay. Congratulations.
C: Yeah, I put a hold for Derek just in case. I don't expect any issues with that. If you can, ping or poke him there.
A: Yeah, I'll see who I could ping. Mrunal's on leave this week, so I think he's coming back next week, in case we're waiting to hear from... like, you know, if we're waiting on every single approver, then it might take a little bit.
G: Oh yeah, I filled this in this week, so let's see. For the kubelet tests, same as last week: the Ubuntu one got fixed with one of the PRs, but the PR was supposed to fix both Ubuntu and Fedora. I think a new issue came up for the Fedora one, showing an SSH error. You can take a look at that pull request; it's still failing for the Fedora case.
G: Yeah, let's see. Maybe we should create a new issue to track the SSH error problem.
G: I'll do that, okay. And for the containerd tests, it's the same as last week; I just put the link here. I was looking into the fix for the performance tests, as you mentioned before, for the CNI init error, but it's still work in progress. I tried to make...
C: Yeah, I submitted a PR for the CNI issue, switching to master, and I broke the performance job completely, so yeah, I will need to double check and fix it up. I think that I forgot to pull the containerd sources, but I will double check and see. And for the performance tests I also found an issue.
C: Let me see. So it's beyond CNI; CNI is something that we need to fix across the board. Just to give some update here: we have different ways that we use containerd. Some tests use master containerd; some tests use a specific version of containerd, and in this case we pull the containerd sources, build it locally, and configure it on the GCE nodes. And some tests use this image...
C: ...config file: they use the pre-installed containerd on Ubuntu or COS, and this image config file defines some configuration settings to overwrite the configuration of that pre-installed containerd version, and this configuration seems to be wrong. At the minimum, for some reason, after these configuration steps we end up with unconfigured CNI, which is bad. Some tests don't care about it and sometimes just pass even without CNI, but some tests fail horribly without it, so we need to fix that.
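For context, "unconfigured CNI" here means containerd finds no network config on the node, so pod networking never comes up. A minimal sketch of the kind of file the node images are expected to end up with, assuming containerd's default /etc/cni/net.d directory (the file name and subnet are illustrative):

    # /etc/cni/net.d/10-containerd-net.conflist
    {
      "cniVersion": "0.4.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

If the image's configuration steps overwrite or never write a file like this, tests that need pod networking fail, which matches the behavior described above.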
C: So yeah, maybe we can sync up later and understand what the policy here should be. Maybe we can write it down and decide when we use the pre-installed version and when we use the compiled version. How do we decide?
G: Sure, yeah. I just want to understand why the CNI config is missing in either case. Okay, yeah, I'll follow up.
C: Yeah, another problem with this test is that our image config doesn't include this... like, the cloud-init logs from the user scripts are not written into the cloud-init log file. I think we're missing some statement in the image config, but I'm not sure. So we cannot even check the logs to see whether everything worked correctly as we expected.
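If the gap is cloud-init's output redirection (an assumption; the actual missing statement in the image config may be something else), the usual knob is the cloud-config "output" directive, roughly:

    #cloud-config
    output:
      all: "| tee -a /var/log/cloud-init-output.log"   # capture stdout/stderr of user scripts in the log

Without something like this, the startup scripts still run, but their output never lands in a log file the job can collect.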
G: Okay, yeah, okay, I'll go back to the doc. Let's see, CRI-O: it's the same as last week, and also I want to call out that there's a new failure for the serial CRI-O job. It has been failing, I think, for a while, but we lost track of it. Yeah, let me create a new issue for that.
G: It's also the same as last week, and I don't know, I couldn't find the actual issue for the COS flaky one. The previous issue seems closed and a little unrelated.
C: Create a new one, since we are currently in the investigation stage started by Sasha.
G: Yeah, for cAdvisor there is a new test failure since April 15th. I created an issue for that and I'm looking at it. I will work with David on debugging the failure. Yeah, there are a few failed test cases.
C: Okay, and do we want to go to the bugs? I do want to take the last bit of the hour for that.
A: I feel like we had some sort of change for this behavior that went in this release, with a possible configuration thing for what the terminal state should be, but I don't think that applies to... oh, these are... looking at those, those are DaemonSet pods. Does anybody know if DaemonSet pods actually get, like, evicted or whatever?
A: If you could send me a link, then I can just close this.
A: Do you have the link, or should I just CC you to confirm? No? Okay.
A: Great, so noting it's not a bug.
A: Thanks for having that handy. Okay: when the node is under memory pressure, a static pod enters Error status after node reboot. Oh, that sounds like a bug that I've fixed. What version is this?
A: Okay, next: static pod manifest path has been deprecated.
A: What do we do about this one? Because we haven't really made a decision as a SIG as to what we're doing with deprecation of command line flags. Like, we've discussed it, but we just really haven't taken a stance on it, because we don't really have the resources to migrate everything.
A: I don't know. I mean, it's a config API, so it matters a little bit less, because config APIs aren't, like, serialized and are never going to support multiple formats simultaneously, usually. Okay, so: this was previously discussed in the SIG; migrating off of command line flags is a backlog item.
A
Other
components
have
a
similar
problem
for
the
foreseeable
future.
The
command
line
flags
will
definitely
work
and
continue
to
be
supported.
A: We are ensuring any new command line flags are added as config options, and attempting to migrate over any flags that are not currently.
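As a concrete illustration of that flag-to-config migration, using the static pod path as the example (a minimal sketch, not a statement about any particular job's setup):

    # Deprecated command line flag:
    kubelet --pod-manifest-path=/etc/kubernetes/manifests ...

    # Equivalent field in the kubelet config file passed via --config:
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    staticPodPath: /etc/kubernetes/manifests

Both forms configure the same thing; the config file form is where new options are supposed to land.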
A: Okay, last one: inconsistent pod status reporting with kubectl get and describe. Deploy a pod in the cluster where the pod image download takes some time. If I check the pod with kubectl get, the pod status is ContainerCreating, sure, yeah. When I check the pod with describe, the pod status is Pending.
A
Oh,
I
see
what
this
is.
This
is
like
a
six
cli
thing.
A: We've previously had some similar issues with this display for get pod; see also...
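For reference, the two views being compared (pod name and image pull timing are illustrative): kubectl get derives its STATUS column from the container waiting reason, while kubectl describe shows the pod phase, so they can legitimately differ while an image is still pulling:

    $ kubectl get pod slow-image-pod
    NAME             READY   STATUS              RESTARTS   AGE
    slow-image-pod   0/1     ContainerCreating   0          30s

    $ kubectl describe pod slow-image-pod | grep Status:
    Status:        Pending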
A: Okay, that's all, folks, I think. Oh...
A: That one looks like a form I can't open, so I don't want to open it. Stale bot, stale bot... and the email.
A: Like a weird use case. Okay, anything else we want to go over?
A: Yeah, we're delighted to have you, and there are no dumb questions. So anything you want to ask, we are happy to help you out with.
A: I'll still come to meetings, Eric, often, just probably not over the next release, because I am taking a well-earned vacation from Kubernetes, I think. But you know, the project's gonna go on. I've been doing this for four years or something like that now; it's been a long time, not with node specifically, but upstream. So yeah, and I'm very excited to see Danielle step up.
C: Okay then, let's end the meeting.
A: Sounds like that's a wrap. I will see y'all sometime, but I hope you have a great rest of your week. Peace.