From YouTube: Kubernetes SIG Node 20220112
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Good day, or whatever time of the day you have. It's January 12th, 2022, and I think it's the first meeting this year. So, hi, and a second welcome, everybody. This is the SIG Node CI subgroup meeting; we're here, we have an agenda, so let's kick it off. I don't know who put this first naming item on the agenda.
B
Yeah, I know that Ben had talked about this. He definitely did not want to see us include the OS name in the test job names, and I think that's fine; that makes sense to me. But I don't know about taking it as far as not including the container runtimes. I think that'll just be really confusing.
B
Like, you know, I can definitely say we don't need to say "Fedora systemd-driver CRI-O test", but maybe at least put "CRI-O" in there, so it's clear it's not containerd, because at a glance that sort of thing is going to be useful to see. And I think we're only running tests on containerd for the majority of our coverage, and then CRI-O to ensure we get systemd-driver coverage. So our goal is not to test every single container runtime and every single platform; it's that we want at least two container runtimes, we want to ensure that the standard cgroup driver and the systemd cgroup driver both get tested, and we're kind of splitting that across containerd and CRI-O right now.
A
Yeah, my only reservation is: if we have two, then what do we do about the rest? Do we need to test them as the open source community, or...
B
That's almost certainly not in our purview, right? In theory they could do the fun thing where there's a way for them to get a bucket, add the bucket to the testgrid config, yeet test results into the bucket, and then our testgrid will display them. But we're not responsible for running those tests; those are the responsibility of the downstream project. There's, I guess, a little bit of weirdness over time.
B
For example, cAdvisor, which is a Google project: it's not a SIG Node project, although it has a lot of involvement from SIG Node. I believe cAdvisor is using Kubernetes CI right now, but I think SIG Testing wanted that to change. So there are some weird cases like that, but generally Kubernetes CI is for Kubernetes projects only, and for anything else there are ways to get results into our testgrids, like via that weird bucket-y approach I mentioned.
A
I mean, obviously, all the tests are defined by people willing to contribute and maintain them. So if some other runtime comes and says they want to be tested, and there is enough justification and enough contribution, maybe it's a good thing to do. The worst case, and I think it would happen with some OS, is that if nobody supports it, it goes red and nobody pays attention; then we just need to remove it and assume that nobody cares about this platform.
A
I think CRI-O and containerd are good enough. One question would be the release-blocking tests. I don't know whether we need to include the container runtime in the release-blocking names; I think the release team may just want the generic names, and maybe on the SIG Node tab we can declare the container runtime name in a test.
B
Yeah, maybe we should talk about it so that, in those cases, it's not so much "this is the specific runtime we're testing", but rather "this one tests the cgroup driver, this one tests the systemd cgroup driver", or things like that. I assume SIG Release might care more about the coverage than about the specific container runtime. Another question I would have: I think we have some data on this, right, Sergey, where we asked...
B
...how do we decide which two to test? We have some data in terms of what runtimes people are using, and I think it's majority containerd (not counting dockershim), followed by CRI-O, and then everything else is teeny-tiny buckets. So that's potentially also a data point to consider there. We don't want to test a runtime that a teeny-tiny one percent of our users use in upstream Kubernetes CI, because it's not going to come up in the majority of CI scenarios. I would assume a project like Kata, for example, or like Knative, has got their own CI and they're going to be testing on their own; that's not going to be happening as part of the upstream Kubernetes repos.
A
I mean, when we were migrating from the custom jobs, it was always a challenge: if you see a name over there that is generic enough, you never know which category it falls into.
A
I think it may have been a temporary issue, because we were removing one of the runtimes. Okay, so the other thing is that we need to change names.
A
One question would be: do we need to distinguish the SIG Node end-to-end tests, I mean the folders e2e/node and e2e_node? Would that be something we need to do in the test names?
B
One of the things that actually brought this up last meeting was just the scheme of the names for each of the jobs in node CI: they're all kind of subtly different, and I think that was something we wanted to reconsider, like what the format of each test name is supposed to look like and how to make it consistent.
B
Because right now, you know, there's containerd on Core OS, sorry, Container-Optimized OS, the COS stuff, and they have one naming scheme; then there are all the different versions of containerd, which have a different naming scheme; and then there are the CRI-O tests, which have a different naming scheme; and so on and so forth.
A
I think we need to list all the small individual pieces of the name, and then we can come up with a schema. So we have the OS, the container runtime, and then the runtime version; and we have what's being tested, like Features or Serial.
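(For illustration only, here is one hypothetical way those pieces could compose into a single consistent scheme; none of these names were decided in the meeting:)

    <prefix>-node-e2e-<os>-<runtime>-<runtime-version>-<what-is-tested>
    e.g. ci-node-e2e-fedora-crio-1-23-serial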
B
So I guess here's a question. I feel like, at least right now, only containerd is doing the thing where, in the upstream Kubernetes tests, we're testing a big version skew of containerd. Is that something that we want to do? Because the point is not "does every single matrix config work"; it's just that we need to make sure it works on some version.
A
It's an interesting question, because I think if you look at the statistics of usage of different container runtimes, individual containerd versions may beat CRI-O, so it may make sense to cover some of them for our end users, since it generally doesn't take too much effort. But we definitely need to limit it to supported versions only, because containerd is also quite aggressive in dropping support for versions, and we don't need to go beyond what's supported by containerd itself.
B
I mean, to be clear, from my perspective the percentage of users that we have on a given platform is not the only signal we're looking at, right? Containerd 1.4 versus 1.5 are not very different compared to, say, containerd versus a version of CRI-O, or something like that.
B
If
we're
focused
on
coverage
as
opposed
to
like
not
trying
to
anticipate
every
single
version,
skew
kind
of
thing
then
like
I
would
expect
that
we
wouldn't
be
doing
that
in
the
upstream
kubernetes
tests.
So
that's
mostly
why
I
bring
that
up.
B
Super good to have those examples, because I did not know that. As far as I'm aware, I have not seen those tests catch anything that was Kubernetes-specific, like a Kubernetes bug.
A
Yeah, again, there is a fine line between whether the containerd shim crashing in containerd is something that you want to test on your own or something containerd owns. Because I think, philosophically, when everything started here, the CRI streaming code in containerd was owned by SIG Node; now that's changed, because we have fewer intersecting contributors.
A
Yeah, me too; that's the problem. I wonder whether we need to include it in the test name, so it will be easier to understand what's going on.
B
And honestly, the test naming... obviously "node e2e" is very confusing as well. I would maybe recommend that we call it "node only" or something like that, given that the node e2e tests don't spin up a full cluster.
B
Some of them might be... it might just be a display thing. I think we typically use "ci-" in the actual spec: when we set up a periodic job, we normally prefix it with "ci-" for periodics, and then "pull-" or sometimes "pr-" for the pre-submits. But sometimes in testgrid we don't actually display that, because it gets super long, so we'll rename the tab.
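(A minimal sketch of what that looks like in a kubernetes/test-infra prow job definition, assuming the standard testgrid annotation keys; the job name, interval, and dashboard below are made up for illustration:)

    periodics:
    - name: ci-node-e2e-crio-serial          # hypothetical; the "ci-" prefix marks a periodic job
      interval: 2h
      annotations:
        testgrid-dashboards: sig-node-cri-o  # dashboard the results appear on
        testgrid-tab-name: crio-serial       # shorter tab name shown instead of the full job name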
B
And I think this is a really good discussion. This might be something where we want wider feedback than just the folks who attend this meeting. Do we want to send an email to the whole SIG Node mailing list, or maybe just the SIG Node test mailing list? I don't want to make big changes.
A
If we do that, how do we distinguish these two tests? So the OS name would just be a feature then, right?
B
Like, I know that there are at least two release tabs that I need to delete, because they're just not...
B
It's not that one, but yeah, it's something else.
B
"Unlabeled": they're not "abandoned", they're not "orphans", they have parents. Good god, that's such a depressing thing to call a test, very insensitive; I don't know who came up with that naming scheme. Anyway, I called them "unlabeled" because that is the most accurate and descriptive way to describe the state of those tests: they're missing labels, so they're not getting pulled in by other dashboards. So yeah, I call them unlabeled. There they are, beside the containerd eviction ones.
A
Up next: Tao, do you want to speak up and tell us about this?
E
Yes. The way we are using the doc, with a lot of bullet points, to check those testgrid tabs is, I think, a little bit of toil, so I created this spreadsheet to help us check those test results. The instructions for using it: every week we create a new tab, then we go through each row of those testgrids and confirm the test is passing; if not, we check whether it's caused by a known issue or not, and if it's not caused by a known issue, we create an issue to track it. That's the proposal.
A
Yeah, I think it's a really, really good idea, and I really want to see a continuation of this effort, because we look at testgrid every week now. I mean, we didn't look at testgrid during the meeting before, but now we've started looking at it more often to track the status, and I want to see that continue. What do you think of merging the tabs into a single one, so we can see green dots accumulating over time? So the dates would go here.
B
One thing I would add to this: in theory, we shouldn't have to be manually chasing down all of these reds, right? The main reason we have to do that right now is just that we have so many test failures. But in theory, testgrid can alert us; it sends emails to the SIG Node test failures mailing list.
B
I don't know that everything is currently set up to properly alert right now, but in theory, I think that is the goal of those alerts.
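(That per-job alerting is configured through the same annotation mechanism; a minimal sketch, assuming the standard kubernetes/test-infra keys, with a placeholder address and threshold:)

    annotations:
      testgrid-dashboards: sig-node-cri-o
      # address testgrid mails after consecutive failures (placeholder, not the real list)
      testgrid-alert-email: sig-node-test-failures@example.com
      # only alert after three consecutive failing runs
      testgrid-num-failures-to-alert: "3"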
B
So there may also be some work here with testgrid and prow; we may want to reach out to SIG Testing and ask them for help, because we just don't necessarily know how to use this software to the best of its abilities, and there's so much documentation for it. I know it's definitely sending us emails right now about some failing tests. And if we're already saying "let's write some automation to help us", or "let's put this in a spreadsheet to help us track this, because there's a lot of toil", I would say: hold on, the platform in theory also has some stuff to reduce that toil.
F
This tracker would be... we can keep track of the issues that we create for these failures, and we...
A
Yeah, one of the points here was continuation over the year, like how long it took us to resolve issues. Once green became red and we have an issue created, how long do we keep this issue around? This will help us understand whether a job is abandoned: if nobody cares about its failure, then we can make a decision. And especially before a release, we can understand whether it was broken all along or only recently started failing.
H
Oh yeah, I just want to say it's also something very valuable, because even if we get alerts or something like that, the problem is not necessarily knowing whether the test is failing or not, but rather whether someone is doing something about it, or what its status is right now. That kind of thing requires going into testgrid, searching the job name, and hoping that the GitHub issue title kind of matches the job name, right? So it helps if we just have a single place we can look at.
H
Not always; I don't know. I think if you could tie it back to testgrid and exact job names, that would be helpful. And I don't think it matters as much whether it's a single sheet or, you know, a board we have on GitHub, as long as everyone's using the same thing and we have a single place to go. I think that's the most important thing.
B
Okay, yep, sounds good. I just want to encourage us to use the things that people so painstakingly built, and I know that there are a lot of folks at Google, I think, working on that stuff. So, yeah.
E
Do you think, actually...

B
It certainly is, yeah. I guess the only thing I would recommend here is to engage a little bit with SIG Testing on this, rather than just kind of setting out in our own direction.
A
There is a working group on reliability that Wojtek is driving, and this working group has a document describing all the improvements we could make across CI that would be great to help us; the mapping of issues to testgrid, plus creating issues automatically, is one of the ideas they pursued. I think somebody started working on that, I believe somebody from Intel, but then they stopped. So, ideally, it should be integrated into testgrid, and I know that testgrid has these capabilities.
A
Otherwise, like, I posted this list of issues from last time, and it would be interesting... I mean, it would be much nicer if, instead of checking these grids after an issue is resolved, we just see it's green now, and we see the period of time when it was red; maybe it was red for one week, and the next week it's already fixed. That would be a good indication that the failure created the issue, the issue got resolved, and the grid went back to green.
I
To update this document: will this work for someone? Because I know that for a lot of jobs we just get the email notifications; at least I'm looking at the ones related to resource managers and huge pages. If a job has more than three failures in a row, it will notify you via email. Do you have such a thing?
B
I think this is a really good discussion, and I want to thank Tao for taking the initiative to put together something better to track this stuff, this testgrid monitoring, to be clear.
B
I think this is a new initiative this year, and I also want to thank Sergey for kicking it off and putting it together, because previously most of our testgrid monitoring has been very much "oh, Elana looked at testgrid today and found something failing", or "Matthias looked at testgrid today", or "Mike looked at testgrid today". So this is definitely better, and I appreciate everyone coming together and trying to work on this as a group, and I definitely hope we can get more of our tests green.
A
Anyway, okay, the next one: somebody reported two more testgrid jobs that are still using Docker. They're in release-master-informing.
A
One is this GPU device-plugin one. I'm not sure what it does, because we supposedly removed the GPU tests; I think that's this entire job.
A
Maybe, instead of fixing it to use containerd, we can just remove this entire thing altogether; someone just needs to look at the details. And this one, I think, just needs to be changed to containerd, but I don't know. Does anybody know the reason this test is here? Is it end-to-end, like a node test?
B
Reach out to Lumiere about it or something; I assume that might be something in their SIG, yeah, but...
A
Okay, does anybody want to take it?
A
Okay, let's see if Cartwright will take it, and we're moving to the next item. Mike, you're up.
F
Yeah, so there was another failure, which was related to a change somebody submitted without a proper testing update, but this one is different. This is... yeah, that's the same one. Well, that's for Fedora; this is for Ubuntu.
B
I ask this because that test, for whatever reason, is very flaky, and I think David can attest that we will often fix it when it's failing on one job for one particular reason, then we look at another job and it's failing for a different reason, and so on and so forth. So I just want to make sure we confirm that the reason it's failing is the same.
A
Do you have any more details on what needs to happen?
H
Yeah, for the stats summary stuff, we've been hunting down those errors for a while now, but I think it's been more frequent, for some reason, on the swap job. I have seen it on other jobs, but I think we made some improvements to the test and I haven't seen it as much since; the swap job, for some reason, is especially sensitive to it.
H
Perhaps, yeah. Maybe a good thing to try would be to bump the timeout and run this test locally, I don't know, say 10-15 times or something, and see if it flakes. I mean, if that helps, it's worth doing; that's just an experiment.
H
Yeah, it makes sense, makes sense. I guess my concern was just that we would be raising the timeout for all tests everywhere when the timeout isn't actually the issue; you know, what's the point of increasing the timeout then? But that's another solution we can track, yeah; I'm not against it.
A
So
you
think
that
what
will
help
us
to
repair
locally
and
have
some
do
we
need
do
we
need
more
logs.
So
what
do
we
need
to
investigate.
H
Yeah, I mean, it would be good to understand why. Maybe we can take a look at the cAdvisor logs, like in the kubelet; I think the real issue is that the verbosity is kind of low in the logs. So if somebody could read through it, if we could run it locally with higher-verbosity logs, maybe we could actually see what the issue is: whether it's actually a timeout, or whether there's actually something related to swap that's causing an issue.
B
If you've been unable to reproduce it locally and it passes on everything else, I have a sneaking suspicion that it may just be a performance thing, because locally you probably have pretty fast storage.
A
SSDs, and now people are just running on different machines.
A
Okay. My guess is it's still assigned to you; do you want it to stay assigned to you?
G
Yeah, I think this one is quick. Basically, it's about removing the Docker-based jobs from the pre-submits, and I realized that Danielle had already moved all of them into just one job, so I removed it; then I put Elena in CC, and suddenly it was merged before anybody could even look. So...
B
That one's not removed, basically, because it'll default to Docker. Yeah, those were specifically... we had split out some of the serial tests that were Docker-specific last release so that we could just get rid of them, so for that one there's definitely no issue with it being merged. But I think that's not all of them.
A
And what you can do is just run all the pre-submits, like sometime in a PR, run all the pre-submits and see how they do.
A
Okay, that's great. So the AppArmor issue was fixed; you can check a couple of...
B
Yes, I think I had a PR open for this, but I had put a hold on it, just to make sure it didn't get merged prematurely, because I was still working on it going into the break. I haven't had a chance to look at it since last week; sorry, I was out sick the last two days.
A
Was
crazy
for
me
see
group
we
do
what
was
happening
there.
H
I'm taking a look into this one. I have a possible idea of what's wrong, but I'm still debugging it.
A
Yeah, I looked at this comment and I didn't dig deeper. Does it mean that we detect containerd improperly on all tests, or is it just specific to this one?
A
And this eviction one...
A
Okay,
we
probably
need
to
go
to
the
port
triage.
So
mike
you
said
you
when
you
looked
at
this
document,
you
said
that
you
looked
at
all
the
test
grid
and
created
this
new
issue.
Is
there
anything
else
that
needs
to
be
taken
care
of.
A
Okay, just the new ones, and I won't go through the board again; thank you for watching it for us. Oh, this one, yeah, I haven't seen updates on these issues either.
A
Okay, so, like, Ben triaged it back; it needs more investigation.
B
I'm not sure; I think there was also a PR open in test-infra for this one. I think it's good to do both: it's good to skip it if it's not supported, but also the Fedora test should just support this. So I think Peter maybe had a PR open to fix this in test-infra, and I'm not sure whether we want to take both fixes or just the one, because I'm not sure why these are failing.
D
Yeah, I did have a PR open a while ago for some... well, it was a fix to turn on this systemd cgroup test with CRI-O.
D
I don't know whether that would Venn-diagram with this, because I think they're still not started.
A
I think it's a good idea to at least keep it for now, so they're not red. Is there a bug that it fixes? Yeah, especially when there is a bug; and if you want to improve it, we can do that later.
A
Do we want to keep it, or cut a new bug to fix it and switch it over, as a tool to send a signal?
B
We have two minutes left, which I suspect means we probably won't have time to get to bugs. There was that fellow who brought up a bug yesterday in the node meeting; do we want to look at that one real quick?
B
I'm a little confused, because the directory listing said total 24K, but the actual files themselves didn't add up to anything close to that.
A
Okay, so it's a CRI-O-specific calculation that is incorrect, or let's say cAdvisor.
A
Okay, thank you for bringing it up, and I'm really glad that we have such good attendance. Thank you, everybody, for your attention, and happy rest of the week.