From YouTube: Kubernetes SIG Node 20210929
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Good morning. It's a good day — it's September 29th, 2021, and it's the subgroup meeting. Let me share my screen so we can go through the agenda.
A: But wait, I will need to — okay, it paused for some time for some reason. Anyway, the node conformance update: I started writing everything down, and then I realized there are just too many edge cases. So I wrote a small program that takes all the test definitions, and I'm analyzing it now.
A: I was surprised by some of the tests I found. Like, I'm trying to run a test, and each test is marked as node conformance but doesn't show up in the conformance queries — that kind of thing — and it feels like we need to clean everything up. It may require some effort, so I will just create a list, yeah.
A: I will, yeah — I want to make it available. Right now it's just a little Go program that I run. Let me show.
A: Then it gives some output, and then I wrote a small Go program that parses out all the tests. I put it in CSV, but then I didn't like how it came out, because I didn't put all the fields separately. So now I will change it and just make a column for every tag, and then it will be easy to analyze. Yeah — so, still ongoing.
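A rough sketch of what such a tag-to-CSV analyzer could look like — the test names and tags below are hypothetical stand-ins, not the actual SIG Node test definitions:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"os"
	"regexp"
	"sort"
)

// tagRe matches bracketed tags such as [Feature:NodeSwap] or [Serial]
// inside a Ginkgo-style test name.
var tagRe = regexp.MustCompile(`\[([^\]]+)\]`)

// extractTags returns every bracketed tag found in the test name.
func extractTags(name string) []string {
	var tags []string
	for _, m := range tagRe.FindAllStringSubmatch(name, -1) {
		tags = append(tags, m[1])
	}
	return tags
}

func main() {
	// Hypothetical test names standing in for real test definitions.
	tests := []string{
		"[sig-node] Swap [Feature:NodeSwap] [Serial] should tolerate swap",
		"[sig-node] Probing [NodeConformance] should respect timeouts",
	}

	// Collect the set of all tags so each one becomes its own CSV column.
	tagSet := map[string]bool{}
	for _, t := range tests {
		for _, tag := range extractTags(t) {
			tagSet[tag] = true
		}
	}
	var columns []string
	for tag := range tagSet {
		columns = append(columns, tag)
	}
	sort.Strings(columns)

	// One row per test, one column per tag: easy to filter in a spreadsheet.
	w := csv.NewWriter(os.Stdout)
	w.Write(append([]string{"test"}, columns...))
	for _, t := range tests {
		has := map[string]bool{}
		for _, tag := range extractTags(t) {
			has[tag] = true
		}
		row := []string{t}
		for _, c := range columns {
			row = append(row, fmt.Sprintf("%v", has[c]))
		}
		w.Write(row)
	}
	w.Flush()
}
```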
A: I didn't realize how many permutations we have — it's a little bit more involved than I expected, so yeah. I'll have updates next time, and I will share the spreadsheet as soon as I have it, so everybody can play around.
A: Yeah, that's all I have today for node conformance, unfortunately. Hopefully next time it will already be done, but I mean, it's looking better now.
C: Yeah, I mean, I would say at least having the sort of test runner scripts for finding all of the things — surely that's going to be very helpful.
A: Yeah. My second approach — first I started that, and then the second approach is I'm trying to understand what we need, what kind of use cases we want to satisfy. I started writing those down, and then I got a little bit stuck, because we have so many test jobs, so looking at the jobs is taking some time. One thing I realized is that we use Feature inconsistently with the feature gate, so I'm thinking — I don't know, maybe I can get quick feedback here.
A: I think that Feature may stay forever, as an indication that this functionality needs a special setup. But then, besides Feature, maybe we need to have a feature-gate flag, and this feature-gate flag will indicate what exact feature gate needs to be enabled — and it may even be used for beta features.
C: It often corresponds with a feature gate, so I think that's accurate. As I think Jordan said on a PR, Feature has a special meaning: it basically means this thing is an alpha feature. When things cease to be alpha features, those tags are supposed to be taken off of the tests, to ensure that they're running at all in the end-to-end runs, because presumably the feature gate is defaulting to on. Some of them do include some logic to say —
C: — if this feature gate is not set, then skip this test, because, you know, in theory the feature gate could be disabled even though it's on by default. But generally, I think the thing is, all of these —
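A minimal sketch of that skip-if-the-gate-is-not-set guard — the gate names and the `enabledGates` map here are hypothetical; the real e2e framework would read the cluster's actual feature-gate configuration instead:

```go
package main

import "fmt"

// enabledGates stands in for the cluster's --feature-gates configuration;
// in a real e2e run this would come from the test context, not a literal.
var enabledGates = map[string]bool{
	"NodeSwap": true,
}

// skipUnlessGateEnabled models the guard discussed above: even a gate that
// is on by default could be disabled, so the test checks before running.
// It returns false (and announces the skip) when the gate is not enabled.
func skipUnlessGateEnabled(gate string) bool {
	if !enabledGates[gate] {
		fmt.Printf("SKIP: feature gate %s is not enabled\n", gate)
		return false
	}
	return true
}

func main() {
	if skipUnlessGateEnabled("NodeSwap") {
		fmt.Println("running NodeSwap test")
	}
	if skipUnlessGateEnabled("ExecProbeTimeout") {
		fmt.Println("running ExecProbeTimeout test")
	}
}
```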
C
You
know
sort
of
test
selectors
or
whatever
we
should
be
careful
about
which
ones
that
we
use
and
do
not
use
because,
like
they
mean
things
to
other
people,
I
don't
know
that
this
is
very
well
documented
and
maybe
that's
something
that
we
should
like
have
a
conversation
with
sig
testing
about,
but,
like
generally,
I
would
say,
like
you
can,
for
example
like
if
you
want
to
just
filter
on
like
I
want
to
test
these
things
for
this
feature.
C
You
don't
even
need
a
schmancy
test,
selector
like
it's
literally
just
a
regex
match
so
like
you
could,
like
type.
I
don't
know
node
swap
and
as
long
as
all
of
the
test
names
have
node
swap
in
them,
then,
like
that's,
going
to
show
up
when
we
query
for
that,
when
we
pass
a
focus
to
jinko.
So.
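To illustrate the point that a focus is a plain regular-expression match against full test names — the test names below are made up for the example:

```go
package main

import (
	"fmt"
	"regexp"
)

// matchesFocus reports whether a test name would be selected by a
// Ginkgo-style --focus value, which is just a regular expression
// matched against the full test name.
func matchesFocus(focus, name string) bool {
	return regexp.MustCompile(focus).MatchString(name)
}

func main() {
	focus := "NodeSwap"
	tests := []string{
		"[sig-node] Swap [Feature:NodeSwap] should start pods with swap",
		"[sig-node] Probing should respect exec probe timeouts",
	}
	for _, name := range tests {
		if matchesFocus(focus, name) {
			fmt.Println("selected:", name)
		}
	}
}
```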
A: Just — I mean, we can keep Feature and add a feature gate. The problem is that we use Feature for two scenarios. The first scenario is to indicate the alpha and special meaning — and it may be a forever-special meaning, a forever-special environment — and the second, I think, may be —
C: I think we need a better way of communicating these things. So: we've talked to SIG Arch about this, we've talked amongst ourselves about this, and we've talked to the wider SIG Node about this — we haven't talked to SIG Testing. I think that they have opinions, and we probably should talk to them too. So maybe that's —
C: — to follow up with them on that. Maybe Mike, if you're already working on this, or yourself, Sergey.
A: Yeah, definitely. The only thing I wanted to highlight is that we have this use case where we want to disable a feature gate. For instance, we have this exec probe timeout that is causing a lot of problems for customers, and it's disabled in GKE for sure — and I think it's also disabled for Azure for now. So there are cloud providers who don't want to test with this feature on, and we need an easy way to disable it and test —
A: — without it. I mean, it may be a one-off, I don't know, but it doesn't feel like a one-off. Well, that's —
C: — what I'm saying. Specifically, the idea is: you can add the end-to-end skipper to these end-to-end tests. The things tagged Feature mean that it's an alpha feature; once that goes away, they should still have this thing —
C: — that says, you know, "if the feature gate is not set, skip it", that kind of thing. I think there are a few tests that will actually go and turn feature gates on, and maybe we want to kind of avoid that in SIG Node, because I don't know that that would be very helpful.
A: Yeah. And also, I wonder if we need a requirement for people to test that the feature is disabled when the feature gate is not set, because I don't think we ever test that. We solely rely on code review to review that kind of behavior, which — I don't know. And to test that behavior —
A: — we would need to — it's challenging, because you need to disable a feature that is already enabled by default, so you need a special environment and, presumably, a special execution, which is —
C: But, I don't know — part of the problem that we're dealing with right now is there's just been this organic growth of tests over the past seven years, and I don't think anybody has sat down and said, "oh yeah, we should have this one unified strategy." They've kind of just grown piecemeal in many directions. I think it's good to have guidelines, it's good to be able to say "x, y, z", and it's even better to be able to enforce it with code. But generally, there are some things that, you know —
C
Does
this
code
need?
This
infra
is
like
going
to
be
a
hard
thing
to
enforce
in
code,
it's
very
hard
to
tell
the
linter
that
so.
A
Yeah
and
it's
an
another
use
case,
I'm
thinking
do
we
need
to
run
all
the
conformance
tests
with
all
the
beta
features
disabled
because
we
don't
do
it
today
and
it
sounds
like
a
good
idea
and
right
thing
to
do,
but
I
don't
know
whether
anybody
ever
run
kubernetes
with
beta
features
disabled.
It's
just
like
I
mean
we.
C
Are
in
a
world
where
everybody
like
a
question
for
sig
testing
and
possibly
a
question
for
production
readiness,
because
we
talk
a
lot
about
like
being
able
to
turn
off
beta
features.
But
I
don't
know
that
we
do
a
lot
of
testing
with
it.
A
Yeah
exactly
and
to
enable
this,
we
we
clearly
need
to
have
a
way
to
express
that
this
is
a
beta
feature,
and
this
is
a
like
feature
which
is
beta,
so
I
don't
think
just
beta
flag
will
be
enough
in
this
case,
maybe
just
by
the
fact
will
be
now,
but
anyway
yeah.
That
is
my
I'm
for
unfortunately,
I
don't
have
like
well
systemized,
but
I
I
spent
quite
some
time
to
like
try
to
put
it
in
in
the
writing
and
yeah.
E: There's another thing I want to mention related to this. I'm not sure if this is a valid approach, but wouldn't it make sense to add this as a requirement for every KEP: once it migrates from alpha to beta, or from beta to GA, to update the proper tests with the labels, or remove the Feature labels if required?
A: Okay, yeah — and whenever I reviewed, I asked for that. But I mean, there are way more features than we review amongst this group.
A: So yeah, retroactively we will need to fix a lot of things, but hopefully we can do that. I mean, what we need is a definition of what we want to achieve, and then it will be easier to rename everything; it shouldn't be complicated.
F: For example, all these Feature-tagged but not-alpha features, and, I don't know, the Serial tag — and I believe we have much more under our umbrella. It could just be useful to see under which lens your tests will run if you specify some set of tags. Oh —
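A toy model of that "which tests run under these tags" lens — the selection below is simple substring matching over hypothetical test names, not the actual Ginkgo focus/skip semantics:

```go
package main

import (
	"fmt"
	"strings"
)

// selected reports whether a test would run given a set of required
// (focus) tags and excluded (skip) tags — a rough model of combining
// test selectors, using plain substring containment.
func selected(name string, focus, skip []string) bool {
	for _, f := range focus {
		if !strings.Contains(name, f) {
			return false
		}
	}
	for _, s := range skip {
		if strings.Contains(name, s) {
			return false
		}
	}
	return true
}

func main() {
	// Made-up test names illustrating different tag combinations.
	tests := []string{
		"[sig-node] Swap [Feature:NodeSwap] [Serial] swap on",
		"[sig-node] Density [Serial] [Slow] many pods",
		"[sig-node] Probing [NodeConformance] timeouts",
	}
	focus := []string{"[Serial]"}
	skip := []string{"[Slow]"}
	for _, t := range tests {
		if selected(t, focus, skip) {
			fmt.Println("would run:", t)
		}
	}
}
```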
A: Yeah, that would be even better, so I'll see what I can do. Actually, it's not that complicated — okay! Well, let me see. Yeah, I like writing tools, so maybe I'll just pick that one up as well.
G: Yeah, I reached out to Amit, who's more familiar with the kubetest migration. They had some conflict today, but next meeting they'll come in and give us a little bit more information about that, and —
G: Oh, okay — then we can either reschedule or, I don't know, whatever works better. I don't know, let —
A: — me check, maybe I'm wrong. Oh, nice — no, it's not; I'm checking.
D: — once, because I'm not sure of the impact. I'm not sure if some of them are master-blocking; I need to check. But that —
C: — was a great question. We have like one or two that are master-blocking, but I don't think it's the majority. Do you need one of us to check that? Yeah.
C: There was a PR to move some stuff off of one project, but these are apparently other random projects that we have. Yeah, I know — so is the advice to just remove setting a project specifically, unless we need it for some special reason?
D: We basically need to reach out to see how we can work to create a new project with a special configuration. But the idea is to basically —
C: Yeah, I think that hopefully should be fine. Let me — what I will do is take an action item to double-check whether any of these jobs are master-blocking —
C: — just so we know the things that are possibly at risk of breaking. And one of us — I mean, presumably you will want one of us from SIG Node to either lgtm or approve the changes.
C
So
as
long
as
somebody's
aware
of
it-
and
you
know,
maybe
makes
a
note
in
the
channel-
it
should
be
okay,
but
I
just
I
I
think
it's
fair
to
at
least
identify
which
ones
could
possibly
break
and
when
so
we're
not
like
having
a
mystery
about
it,
and
then
we
just
revert
if
it
breaks.
It's
fine.
C
Don't
think
we
need
to
wait,
we
have
lots
of
people
on
the
call.
This
is
a
very
friendly
thing
for
someone
who
hasn't
made
a
contribution
to
jump
into.
Would
anybody
be
interested
in
dipping
their
toes
into
test
infra.
C
Sure,
okay,
I'll
do
a
I'll
do
a.
Let
me
change
this
action
item.
Arnold,
should
we
use
this
like
master
tracking
issue
or.
A: I know I have a question on this pull request from Aaron that moved the pre-submits.
A: I remember there was a change to remove the project from a test definition, but there is also some central file in some other folder where he removed the projects from, or made some cleanup. First, I think we don't have permissions — we cannot approve changes in this file ourselves, so we will need somebody from SIG Testing. And second, do you know — I just don't know the context for this file. Is it important to remove definitions from there? Is —
D: Okay, do you have the link to the file you're talking about? Because some of them I can approve; if not, Aaron will approve, because we need to do the migration. So the priority for us is to migrate everything that's master-blocking and master-informing.
C: Okay, so you want to do this first. That sounds good.
A: Yeah, just trying to minimize the amount of synchronization you need to do. Because if you can approve, then — I don't know, Ryan signed up, right? Ryan will make a change, we'll approve, and we can watch that it's applied and that the tests are actually running. If we cannot approve really quickly, then we may need to have somebody else around.
D: It should not be an issue, because I think SIG Testing asked that everyone migrate those jobs during this milestone, and we end before that timeframe, so approvals should not be a problem for us. It's just about how many people are in front of the GitHub notifications or Slack to see if they need approval.
A: Okay, then — I mean, yeah, I will try to find this file I meant, but sounds good to me.
A: Yeah, I found this pull request. So this pull request has this change — it basically removes the project specification explicitly — but then this file as well, which removes this. I'm not sure what the context for this change is, and whether we need to do it when migrating these jobs.
D: You can just, yeah, remove this flag, but we have existing — I think we have some predefined —
C: I already had that PR; I linked to the specific file. I've created an issue; you can take a look at that.
A: So yeah, get-allocatable-resources goes to beta, and yeah, we can approve for test.
A: No more feature KEPs? Anybody interested? No — Francesca is already here.
I: Yep, I can help in this area, but this is basically very simple: there are two fixes that we need, and once those are in, this PR is easy peasy.
F: I'm working on it — to drop some dynamic configuration and to provide static configuration — but yeah, we can discuss it offline.
F: Yeah, it's one of the problems. The second problem is that we have this pressure test: we wait in AfterEach for the pressure condition to be gone, and we start the pod after it, but the pod still fails because of the pressure condition — or it was added after the test finished running — because, again, there is some asynchronization between the kubelet condition and the API condition. I realized we have a topic, probably for the next meeting or even the one after: we have other fixes we are discussing for the memory manager test, where a possible cause — maybe not the only cause — seems to be that the memory is fragmented. And this test, as I mentioned, seems to really depend on the node state. So on one end we want those tests —
I: — on the other end, they have an implicit dependency on the node state, because if you have, for example, too-fragmented memory, this will fail, and it's actually a false negative. So I guess I'll file a topic for the next meeting or so. Basically, it boils down to how we should handle those tests that have a really strong dependency on the node — an implicit precondition on the node state.
B: Okay.

A: So, let's go from there. And since we were discussing it — you are looking into that, right?
F: — kind of errors, but it just excludes some Docker-related tests, so yeah.
C: Yeah, the other thing that happened is, yesterday we were getting pings from the CI signal lead on that. I did not realize that it got added as a release-informing test. It looks like, basically, there were two green runs, and Dims was like, "let's add it as release-informing," and someone did this while I was out of office. I checked the notes, and I don't think it was discussed, and we had like a billion —
C: — pings about it failing. And I think typically it needs to be green for like two weeks, not two runs, before we add it as release-informing. So yeah, I just reverted that yesterday, and now hopefully we'll all stop getting pinged about it, because we know it's broken — or at least we know it's flaky.
A: Yeah — and do you need to revert the ones that currently run on pull requests? Because it's also —
C: Is that true? Where would that be set? Because this has changed semi-recently; I wasn't pulled in to do any approvals for it, and I'm not sure why it's happening. It's kind of wasteful, because we're basically ignoring the results, so we shouldn't be doing that until those tests are green.
B: Let's put it here for next week.
C: I don't think she is, sorry. I found the change. I think we should probably also revert this one; in fact, it does go and run the jobs if anything in test/e2e_node (e2e node) gets touched. Unfortunately, these jobs are like never passing, so we probably shouldn't be running these tests until they are —
C: — reverted. I couldn't figure out why this was happening, because always_run wasn't specified, and it turns out there's some other thing that does a regex match on the files. So I'll create an issue for — or, I don't know if I need an issue; I'm going to just revert the PR. Okay. I think Matthias isn't here today. It looks like Dims submitted both of these PRs and Matthias lgtm'd both of them, and neither was discussed in the subgroup, and I really don't think we should have done that. So —
A: Yeah, it just happened. I don't know why it happens, and I was really surprised and confused about it. So, coming back to this: this is the flaking CRI-O p1 conformance job, and there's this comment on it — are you on the call to comment?
A: And then the task — still nobody signed up to do that; I'll just keep it around.
A: Great. So, anything else about this? If not, we can go back to the scrub.
C: Oh, I see you added that to the agenda. I just submitted a PR to revert the thing.
C: Oh, for 10.06 — I think we can move that to this meeting. I have found out why.
C: Assign this one to me and triage-accept it. I think it's fixed already.
C: Ryan, I think, managed to bisect this. I think there's a comment.
C: It's just that the code in memory needs more space, but it's not a trivial increase — it's like 30 percent.
C: Yeah, there's no one currently assigned. Ryan at least was helpful enough to figure out when it happened, but I'm not sure what we want to do about it.
C: It was bisected to — they changed the way that they build the kubelet binaries, which I think is causing the memory footprint of the kubelet to increase. Let me take a quick look at the issue again.
C
Like
it
makes
sense
why
the
memory
footprint
increases
just
that
we
didn't
really
talk
about
it
to
anyone
and,
like
the
release,
note,
doesn't
really
explain
why
it
increased
okay.
So
there's
a
new
build
flag
which
enables
aslr
aslr
is
address-based
layout
randomization.
C: I think it requires slightly more memory, but then it can prevent certain classes of exploits.
A: Yeah, 25 megabytes. I guess it's not that much — that's physical, for sure. I think it's not that much, because the runtime takes — what are the actual numbers? You can take a look.
A: Yeah, compared to the runtimes, which take like 200, or maybe — yeah, that doesn't seem too — I mean, I understand why it wasn't flagged; it's just — yeah, clearly it's a degradation.
A: Okay, so it's triage-accepted. But do we want to do anything beyond this fix? I think we can —
A: — beyond this, revert back, or change the compiler flag?
F: In general, should we discuss it with security? I don't know, because I cannot evaluate how risky the security problem is.
C: That needs to be communicated, because we're catching this now, because we're looking at 1.22, but the vast majority of Kubernetes users are not going to see this for some amount of time, since they are often many versions behind the latest in development. So there may be nothing to do, but the comms around this need to be better — and that's come up more than once with SIG Release, where they've made a change and then nobody knew about it.
C: I don't know that there's a lot of appetite within this group to necessarily go and chase that down, so I'll see if it's something that I can bring up. I know that we're not going to have a chairs-and-leads meeting next month because of KubeCon, but I can see if I can start a Slack thread or something.
G: And for the future, I'm just curious that that dashboard doesn't contain memory profiles — like, if we need to chase down future memory regressions like that, yeah.
C: I'm not sure what happened there. I know that, I think, Antonio added support for kubelet resources to the — what do you call it — the perf tests, but it doesn't look like there are necessarily memory dumps showing up in the artifacts, so that might have just been an accidental oversight. If somebody wants to file a bug and follow up there, I think that would be the right thing to do, in the perf-tests repo.
C: I can just do that one real quick.
A: Thank you. All right — have a good day, bye.