From YouTube: Kubernetes SIG Node 20210721
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Okay, I started recording. It's the July 21st, 2021 SIG Node CI subgroup meeting. Welcome, everybody. Today, yeah, let's go through the agenda first. Peter?
B: Hey. So, for a bit of context, we've been working on trying to add some pre-submit and release-blocking jobs to cover some of the node conformance suite that isn't currently covered by the dockershim and containerd tests, specifically for the systemd cgroup driver and then also, eventually, cgroup v2.
B: So I just wanted to go over some goals that we have and see what people think about our plan. I have this HackMD here; I'll put that in the notes as well. I don't have them right now.
B: Let me see. Basically, our goals are to cover more of the cgroup configurations, like with the systemd driver and then also cgroup v2, to prevent some regressions due to the lack of pre-submit coverage, and also to expand the number of CRI implementations that we're testing against, so we're not writing containerd-specific code, especially as dockershim support has been dropped.
B: So the proposal is basically: add some periodic node conformance tests that use cgroup v1 and the systemd cgroup driver to the sig-release master-blocking testgrid tab, so that SIG Release uses that to inform blocking on our changes, and then, after that, add a pre-submit test for cgroup v1.
B: Once we verify that it's a fairly stable test, add it to the pre-submits kubernetes-blocking testgrid tab so that it blocks on PRs, and then eventually do the same thing with cgroup v2.
B: But, you know, only once cgroup v2 support is a little bit fuller. So yeah, I just wanted to bring that up and see what folks think, or if there are any issues with that.
B: Yeah, pretty much. Harshal has also been working through some of this, but we've had some confusion about what the correct route is supposed to be, and I think this document is just a little bit clearer about the sequence. Like before, we were trying to get a periodic job to be release-blocking, which is actually not even allowed.
C: Or, you know, you were trying to make a PR job release-blocking, which isn't allowed; periodics are the only ones that can be release-blocking.
C: Well, I mean, I'm in support of this. I think the plan is reasonable.
C: So there are sort of two things in flight. One is that Ben had asked us if, maybe rather than adding a separate pre-submit job, we could just modify the existing node e2e to add a bunch of CRI-O nodes, and so that might be something we should look into rather than adding a separate job, just because I understand there are some cost concerns associated with making new jobs, and possible flakes, and that kind of thing.
C: So that was one thing that was suggested. Another is that right now there basically isn't a process, at least from SIG Testing's perspective, for turning a job into a blocking job. So Aaron has said he sort of wants to pilot a thing for 1.23 and actually get some docs written and whatnot, and so this might be a good test case for that sort of work.
C: If we go through it, figure out the process, and write it down, then everybody else knows what the expectations are. So I think that's the only other stuff maybe in flight here. For SIG Release, I think it's a little bit similar; it's not super clear, but we'd need to work with SIG Release for the release-blocking ones.
B: Another thing that's worth mentioning on that point: originally, our plan was to use CRI-O, like the rolling version of CRI-O, with Fedora CoreOS, because Fedora has had first-class cgroup v2 support before other distributions did, even though support on CoreOS kind of lags behind that. And the rolling CRI-O meant we were kind of overloading the job to also, you know, catch...
C: Yeah, I think that was the correct idea. It's not necessarily as critical for pre-submits, although if it's a blocking pre-submit, maybe. But all of the containerd jobs right now pin the version of Ubuntu and the version of containerd. None of the CRI-O tests do this, so at least from a SIG Release perspective, if we want it to be release-blocking, we can't have upstream releases change out from underneath us, so we'd have to pin those and then start doing the bumps on a regular cycle in test-infra.
C: But I think that's pretty straightforward, and it's one of those sort of requirements that is there but very unstated. So that's the sort of thing that would be useful to have called out in some sort of documentation.
C: Yeah, I think I was chatting with my team in a retro today, and we are hoping to write up an epic and have somebody specifically driving down some of the tech debt upstream, in terms of the upstream tests, because I think things are a lot better now than they were, say, a year ago, but we still have a long way to go, and there's a lot of work that needs to be done in order to improve the signal that we're getting. Especially...
C: ...what's going on with the kubelet serial tests, where, in theory, I think they're supposed to be release-informing or something like that, but they're so broken that we can't get any signal out of them. So I think once we get those into a better, green, well-maintained state, then I think we'll be on a better track for that.
A: And I think most of the questions should be for SIG Testing: how to run it, what kind of constraints and resources we may have, and why we want to combine them together. Those kinds of questions I don't know whether we can answer in this group, but I mean, I'm all for covering more CRIs. This will be great. Awesome.
A: I also wonder... you remember this document that I brought last time, the one that talks about CRI node conformance test result uploads?
A: I wonder what the plan was then, where the plan was to just make every runtime run the node conformance tests and upload the results, so Kubernetes is not responsible for that. Yeah, I need to dig into the history, ask around about what the history was and what the idea behind it was, because now it seems like we're putting more runtimes into Kubernetes CI, which is the opposite direction from what this document suggested.
A: Yeah, I mean, you don't want to repeat the mistakes of the past, right? But it may well be that they suggested doing node conformance as part of, like, runtime validation at head rather than pinned-version regression testing. Anyway, Francesco, you're up.
E: Yeah, that's me. So I added a few notes about what I'm about to explain, about what this device plugin thing is about. Let me start from the beginning; I'm going to talk for a little while, so just interrupt me if you have questions or anything. Right now in the end-to-end tests for node, of course, we have quite a few tests which want to consume the device manager, and this needs device plugins.
E: So what is needed is the device manager, either to test the device manager itself, but most likely to test other things which depend on the device manager, like the topology manager, pod resources, you name it. So we need some device plugins, and this actually started when writing the end-to-end tests for the topology manager, and the decision kind of stuck. The decision was: hey, let's use a real device plugin. Real device plugin means not a mock, not a fake, but a real component.
E: The real question is: what do we do about that? Do we keep it this way, do we revisit this decision, or maybe, for example, is the sample device plugin good enough? I just don't know, and I just feel it's time to ask this question, because more and more tests are coming and a bunch of them are skipping. So it's time to discuss this state of things again. So again, a few options are: do nothing, use the sample device plugin, or do we want a more real thing, like...
E: We can make changes. Well, this is far-fetched, but the device plugin could be extended in some far future to still fake the devices. So the selling point is that you consume a real device plugin; sorry, it's not like the sample plugin. The distinction I'm trying to make is between a fully fake, purely-for-testing device plugin, or a real device plugin but with, let's say, fake data.
E: So, you see, it's a layer of faking, or we just need real things. So maybe, for example, use GPUs, because we have a machine which exposes GPUs, or even bump the spec of that same machine. I expect not, but I'm still mentioning it for completeness' sake. So, wrapping up: we need device plugins. What do we do about that? Do we have any opinions in this group? I'm going to raise this point to the larger SIG Node team, but I wanted to talk with you folks first. Thanks.
E: Right now we would need a bunch of fixes to some tests to actually consume the GPUs, because, yeah, we explicitly trigger the SR-IOV device plugin. That said, I just don't know how much we can use the GPU stuff. It's something I was exploring, but I don't know.
C: Yeah, I guess my worry would be if we use a real device and then we fix things for that device. Well, first of all, there's a cost associated, but then, second, who knows if we end up just sort of writing tests to those devices and missing other potential use cases.
C: This might be a good question for SIG Testing, because they're the ones who can tell us: you can use these things, or you can't use these things, in terms of our infrastructure.
G: What is really lying under the device plugin... I think, again, you want to test the device manager, whether it is working correctly from the manager's perspective, from the kubelet's perspective, and stuff like this. I don't think that we are really testing every device plugin, so it shouldn't...
E: ...be a problem. I agree with you. The only point that I think is relevant here is that, for example, the SR-IOV device plugin, or the GPU plugin for that matter, is actively developed, so it's trustworthy. On the other hand, if we use a fake device plugin and you hit a bug, is it because the fake device plugin is out of date, or is it a real bug? So you have another layer of uncertainty. On the other hand, if you consume a real device plugin, something people deploy in their clusters, you can trust that part: okay, I'm going to check the device plugin last, because I'm pretty confident it's doing the correct thing. So this is the only reason why I think we benefit from a real device plugin and not, say, a fake, custom-built device plugin.
D: Part of this is also that if we're testing GPU plugins or whatever, that is something that people probably reasonably assume just works when they upgrade their cluster. If someone is deploying some other weird device, if they care about it working, they're probably doing a lot more preemptive testing for upgrading clusters and can report stuff that way. Having good, solid testing for 99% of use cases seems pretty good.
D: Fake device plugins just always end up causing issues long term. They did for us in Nomad, quite a lot. Yeah, I agree.
E: Just a note: while it would still be a net benefit, I'm not sure that GPUs are actually more widespread than SR-IOV, because, you know, it's a very specific workload, and SR-IOV is more about high-performance networking. But yeah, the point still totally makes sense. So yeah, I will get the numbers and prepare the numbers.
E: Okay, so I guess we need to bring this conversation to involve other SIGs; that's the takeaway.
E: No, no, no, no, I don't really... well, I care about SR-IOV, but from the project perspective, GPUs are totally better than what we have now, and that means we could have a separate test suite. So GPUs are okay from that perspective, and okay, I guess, yeah, I'm okay; that's an improvement from my perspective.
A: Yeah, I think, if you can... I mean, it may not be part of every pre-submit; we can just have it explicitly called, but yeah, I mean, I think if it's a reasonable amount of tests, then maybe just go with GPUs.
F: Okay, this item is already solved; I'll just go quickly through it, just to give you the context. Basically, the sysctl test is marked as a conformance test; however, it does not respect two of the requirements. But I realized that Paco is already working on this test.
C: I had some questions about this, so I don't know if they have been answered.
C: Oh, actually, this sort of goes into some of the burndown stuff. This one, I think, is marked as part of the 1.22 milestone, and I think that we should, as a group today, try to go through everything and make a call on whether it's in or out, because there's some stuff in here that I think just isn't going to get done, and I think that's okay.
A: It's a good thing that it's on this agenda, I don't know. So, do you want to go through it?
C: That's fine, it works for me. So this one we definitely need to get fixed, because there's a release-blocking regression. So I don't think there's any discussion on this one; I think we're just waiting on Dims to take a look.
D: I took a look at the failing tests earlier. It doesn't seem like anything's new.
D: But yeah, I went through, I think, two of the failing serial jobs, and there was nothing in there that I didn't expect to fail. So I think that's about ready for at least an LGTM.
C: Feel free to LGTM it; it'll need an approver, though.
C: So it sounds like that one's under control. This one is very new; this is the one that I mentioned Michelle was poking me about right when this meeting was starting. So I haven't had a chance to look at it, but I think this one is legit; it possibly has to do with Clayton's refactor, so somebody needs to dig into this a little bit more today.
C: Yeah, I think this one we don't need anymore, I don't know, because we merged something that fixed the actual failing test. So it's unclear to me what we need this one for.
D: Yeah, I don't think this needs to make it into 1.22. It's ready to go, it just doesn't seem...
C: That's what I thought too, so I think we can pull the milestone from that one, and I will do that now, but I wanted...
C: Yeah, I don't think that one's release-critical. I mean, well, I guess here's the problem: if something is failing the conformance test, it means that it's not Kubernetes-conformant, but this is a non-trivial change this late in the cycle, and it will need a conformance reviewer/approver to look at it. So maybe what we should do is ask for feedback.
C: Okay, yeah, I think that makes sense. Mike, do you maybe want to do that, since I know you've been looking at this one? I think all we really need to do is remove this one from the conformance suite.
A: David, on that one, said that he needs more time to investigate; it's not that easy to reproduce.
C: None of these are regressions, so that's maybe something to keep in mind, I don't know. This one in particular I'm worried about, just because I'm seeing it flaking so much, and I think I've also been seeing bug reports of it. But since it's not a regression, I don't think it's release-critical; I think we can get away with not fixing this.
C: I would almost be tempted to punt every single one of our flakes at this point, because we're, what, like two weeks out from release, basically. I don't know; if we managed to fix one, that would be fine. It's weird that the freeze is so long; I would expect us to be cutting a release next week, but we're not.
C: Yeah, I really don't think it's release-blocking, and also structured logging is still in alpha; it's not in beta, as far as I know. But the structured logging folks need to make the call on this one, not us.
C: No, Matisse is on vacation, so...
C: I just think that this test wasn't written very well, and that's part of... I was inspired by seeing these flakes when I went and added the end-to-ends for the termination.
D: This is the problem with a lot of the kubelet tests, like trying to find the eviction bugs. I've been bisecting the code base, and one time out of three it'll fail for some random reason, or the kubelet will fail to start.
C: It's because it's all super racy; the kubelet is literally written as one giant race condition, and we can't use fake clocks because there's no coordination. So it's not great. If we end up rewriting the kubelet, it's something we could fix in the architecture of kubelet 2.0, but barring that...
A: Yeah, okay, maybe one day. So this is what we just discussed, right?
A: And the next one... oh yeah, Artem is just removing it, so maybe apply the milestone here and take it, because this is straightforward.
C: Yes, that makes sense. I can take a look at that. I don't think I have approver rights. I don't think we need the milestone for test-infra PRs.
C: And also, yeah, Mrunal recently got it added.
C: Yeah, that makes sense to me. Most of those PRs all merged, so I'm really not sure what's going on with it.
A: Okay, we're done with this item. We went through everything, great.
A: And yeah, the last one I wanted to highlight: this performance degradation. The runc bump fixed a little bit of the degradation between 1.21 and 1.20, so we used to see a 25 percent increase on CPU.
A: Now we see, on the test, 10-15 percent degradation, same on CPU and memory, and I haven't run any pprof dumps yet, so that's something I may need to do in the future, or if somebody wants to take a look and sees the same degradation, please feel welcome to.
A: Yeah, it is an interesting thing that the degradation is higher with a bigger number of pods, so you need to run some tests with a high number of pods to discover that.
C: Yeah, so for the commands that I wrote there, presumably you'll need to proxy to the clusters, unless you're running it locally, in which case you are a wizard; I have not been able to figure out how to do that very well. And then you can use this magical go pprof tool. If you pass the -http flag, it will give you an interactive browser; it'll launch on the port that you specify, so in this case I said localhost:8888 or something like that. And then you just give it the binary...
C: ...you want to profile and the endpoint with the pprof, and it will go. And I strongly suggest: the default is 60 seconds, which is not a long enough sample to figure out kubelet performance degradation, so the command that I put in there has an 1800-second window, which is 30 minutes. So you'll have to sit there and let it run; go and run some end-to-end tests, fire up a bunch of pods on the kubelet or whatever, but then you'll get a bunch of data.
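(For reference, a minimal sketch of the kind of profiling invocation described above, assuming the kubelet's debug handlers are enabled and the cluster is reached through kubectl proxy; the node name, local ports, and kubelet binary path are placeholder assumptions, not the exact commands from the meeting notes.)

```sh
# Make the kubelet's pprof endpoint reachable via the API server's node proxy
# (local port and node name are placeholders).
kubectl proxy --port=8001 &

# Collect a long CPU profile (1800s = 30 minutes, not the 60s default) and open
# the interactive pprof web UI on the port given to -http.
go tool pprof -http=localhost:8888 -seconds=1800 ./kubelet \
  http://localhost:8001/api/v1/nodes/<node-name>/proxy/debug/pprof/profile
```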
C: And if you do this on a 1.20 cluster and a 1.21 cluster, then you can look at the flame graphs; I have linked some screenshots of what this looks like in the comment on that Bugzilla, and basically, wide is bad. Yeah, if you click on the attachment there...
C: It should, if it's a runc thing. But yes, it definitely does, because the scalability tests run with containerd, and we are seeing the regression in the upstream scalability tests. So we saw it in both.
C: ...and containerd, yeah. If you go and look at the pictures... did they launch, Sergey? They weren't showing in your browser, so we couldn't see it.
A: Yeah, because they went to my downloads folder. I mean, everybody can do the exercise of going and clicking on this themselves.
C: That's... I use Firefox, it'll show you inline images instead of just downloading.
C: But yeah, basically, you will see: tall on the flame chart means that there are a lot of nested calls; wide means lots of CPU time. So the wide bars are, I don't know... I got to learn all of this when I was debugging this the last time; there aren't a lot of great resources on how to read flame graphs. But basically, the low-hanging fruit is when the calls are very wide; that tells you that a lot of CPU time is being spent there.
A: Okay, and maybe we'll use your offer. Okay, so we had a lot of agenda items and we had 46 minutes. We have a little bit of time to look at the triage board, but I don't think there is anything notable here, so yeah. I was surprised by this cherry-pick; are we already closing out master, like opening it back up for non-1.22 work?
A: Okay, let's review here and assign it, you know.
A: Yeah, this is interesting. Somebody made some analysis, and I don't know which tests are actually failing, but there are some races that were fixed, yeah. Does anybody want to take a look? It's a unit test, it should be quite straightforward.
A: Thank you, okay, all right. So one thing we can start doing is looking at issues that are not assigned to anybody. I think they are all assigned, so...
A: The sock test, yeah, I remember commenting on that. No.
A: Yeah, this is interesting. I found a blog post about this dashboard; it was introduced in 1.10 or something, and now it's not working. So that is an interesting dashboard, if only it had data.
C: We should just get rid of it. I have no idea what the history is. I think that at some point we used to use this, but now SIG Scalability... we didn't used to have kubelet stats in the SIG Scalability dashboard, and now we do. So we can probably just get rid of this thing, but I have no idea who or what runs this; clearly there's no data populating it.
A: Oh yeah, this is what I'm working on. Oh, it's node conformance. I have an action item from last week: I had this document last week, and I didn't update it to send to the conformance and SIG Node people, so I will do it now. I'll take this one as well.
D: So the eviction tests were failing for a few different reasons. One was the file handles issue; the other one was an actual bug as part of the lifecycle refactoring. Fixes for both have now merged. There are a bunch still failing for as-of-yet unknown issues; I'm planning on looking at them tomorrow.
A: Great, and we only have five minutes left, good thing. Elana, you reminded me about this performance comparison that we now have. I just want to... I added the link into the triage section of the document.
A: So the introduction of this 1.20 chart helped me compare 1.20 and 1.21 in this environment. I mean, we have some tests internally, but only at a very late stage, so yeah, we can compare these results. Yeah.
C: I'm really glad that Antonio added support for this, because previously it was just like, well, we've got this upstream in OpenShift, sort of, but I can't really help with downstream tests, because the kubelet data was missing.