From YouTube: Kubernetes SIG Node 20210909
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Good morning. It's September 9, 2021, and this is the SIG Node CI subgroup meeting — welcome, everybody. Let's go through the agenda. The first item is Mike's; I added it from your PR. Do you want to talk about it a little bit?
C
So, just looking at some of these things, I think we're in the unfortunate position that I don't know how many of these were actually running in a separate test suite — like selecting just on fsGroup or something like that — at which point there's a question of...
C
I think that's the problem. At least one of the things I know, scrolling through here, is definitely a beta feature and not an alpha feature, which is why we probably shouldn't tag it with Feature. Basically, beta things should run in the default end-to-ends, because they're on by default, and a Feature: tag will cause things to basically only get picked up in the alpha runs. So we may want to actually go through and...
C
We need to go and look at these and see: okay, what is the maturity state of this thing? Because, for example, if it's completely graduated — I know we're just trying to get through this sort of transitional state — but if it's graduated, I don't know why it would have a NodeFeature tag on it, right? Wouldn't it just be a sig-node-tagged test?
A
For API, for sure. I mean, for API, we're in Kubernetes with GA only, to make sure that no—
A
Yeah, but philosophically I think people will push for being able to distinguish beta from GA.
A
Okay, but in any case, this comment — it's about a different thing. It's not what he's writing here, so...
C
Yeah, the Feature tags that we're putting on have a special meaning. It means it's a non-default thing — the flag is not on by default. So if it's an alpha feature, you have to go and turn that flag on; you have to put it in a special test run; it can't be run in a standard, average test run. I don't know that any of these things are currently being run in the standard end-to-end suite. We should probably check that.
C
So part of this, I think, is just confusion: we haven't really had someone dedicated staring at this stuff in the past, going through and making sure all of the tests correspond to the correct maturity phase of each feature. There are things that are GA that I think still have tests tagged otherwise — they'll go GA, somebody will remove the feature gate, and they'll still have the test tagged as a beta feature or an alpha feature.
C
Going to a SIG Architecture meeting and asking about this — how we should deal with test notations for alpha-through-GA maturity... I've done this before, but not to ask about the history.
C
Yeah — specifically about the node conformance stuff and the history of that and whatnot. So I'm not sure, but that's, I think, the right group of people to take it to; SIG Testing wouldn't necessarily be the right group to take it to. They might not know, or there might not be consensus.
A
Yeah — when I wrote this document, I meant for beta features to also have a Feature tag. We may want to go forward with that; maybe we need to change the plan a little bit.
C
...going to get excluded from test runs. And for what it's worth, if you're curious about where it will get excluded from the test runs, you can go poke around in test-infra and look at the configs of the various things that run end-to-end tests, and you'll see that Feature is often skipped explicitly.
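As a rough illustration of the exclusion being described — default e2e jobs pass a skip regular expression that filters out anything tagged `[Feature:...]` (and typically `[Serial]`/`[Slow]` as well) — this sketch mimics that filtering. The regex and the test names below are illustrative, not copied from test-infra:

```python
import re

# Illustrative sketch (not actual test-infra config): many default e2e jobs
# pass a --ginkgo.skip regex so that non-default (feature-gated) tests only
# run in dedicated jobs. This pattern mimics that kind of skip expression.
SKIP = re.compile(r"\[Feature:[^\]]+\]|\[Serial\]|\[Slow\]")

def runs_in_default_suite(test_name: str) -> bool:
    """Return True if the test name would survive the skip regex above."""
    return SKIP.search(test_name) is None

tests = [
    "[sig-node] Pods should be restarted",                        # untagged: runs by default
    "[sig-node] MemoryManager [Serial] [Feature:MemoryManager]",  # skipped in default jobs
    "[sig-storage] fsGroup behavior [Feature:SomeAlphaGate]",     # hypothetical tag: skipped
]

for name in tests:
    print(runs_in_default_suite(name), name)
```

This is why a test carrying a Feature: tag for a beta, on-by-default feature effectively disappears from the default end-to-end runs.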
A
Okay, then — okay, I have an action item here. Aditya, do you want to talk about the perf ones?
E
There are some kubelet-performance-related tests that — I have checked — are not running anywhere, and we want to deprecate them. So I want to know the context around deprecating them, whether we have any replacement available for them, and, in general, what we want to do with that. That we also need to figure out.
C
Well, the thing is, this mentioned that those tests were deprecated in favor of — I mean, I don't know anything about the history of these tests, but the node-perf dashboard is gone. I removed it from the k8s infra, so it hasn't been running for at least a year or two.
E
I saw Saraki recently added some performance dashboard — there's a link at the top of the meeting doc as well — but I found that to be more of a scale-testing thing, not exactly resource utilization, yeah.
C
So this perf dashboard is different — this is cluster scalability. It does include some kubelet scalability stats, and we did pull in kubelet memory and CPU utilization recently. So that's new as of, I think, 1.20 — or it's been backported, I think, all the way to 1.20. But yeah, there also used to be a node-perf-dash.k8s.io, and whatever data source was feeding it was not running.
E
For example, the recent performance dashboard that we have added — can we run it locally, or in our dev environment, or somewhere? Because these tests are possible to run locally as well, and I don't know how this dashboard works.
E
No, they are not running anywhere, but they can be run locally — unlike the dashboard, the scale dashboard.
C
Given that they're not running anywhere, is there any reason to... and that's the only thing we can run locally... I guess: what is the urgency here? Do we need to deprecate these? I'm not sure why we're spending time on this super-old issue. I guess my question is: is this a priority?
E
No, actually, I was looking for some way to determine node performance, so I was looking at exactly how the node perf tests work, what the coverage is, and all of that — so I landed here. That's it.
C
Well, that makes sense to me, because typically — at least in my experience — when trying to profile CPU utilization of the kubelet, you can't just run a test for five minutes; you want at least a 30-minute CPU sample. So there's no way to make a performance test fast, just because you won't get a very accurate picture of what actual performance under load looks like. I'm looking at this documentation, and it was written pre-2016 and has been migrated three or four times.
C
So all of this, I think, is quite old. I think we can probably just let it chill if we want to. I guess I'm excited that I discovered this exists now, because I've never tried running these tests before, but...
A
I remember recently we've been migrating some performance tests from — oh yeah, actually, we have them running. Let me show you; let me just get it. So we have them.
A
And in test-infra it even has a definition of—
C
That's just in the SIG describe. So, I guess, where the individual tests are declared, they have the regular-resource-usage-tracking Feature tag on them. Because they also have that Feature on them, plus Serial and Slow, they're going to be excluded from basically everything except, apparently, this job.
C
...I think that's just getting excluded. I mean, we can see — you could search. I guess the problem with Gubernator is that it only shows you failures; it doesn't show you the stuff that's running.
C
We skip the slow ones, so I think those — the memory manager Feature tests, for example — I'm not sure why those ones are getting pulled in. I think it's just because they have Serial on them; at least for the serial tests, we don't exclude anything that's slow, or...
C
Yeah, I mean, you can go and look at that job. If you see, there's a prow job config URL at the very top — the gray thing... no, above the bold title, in the Testgrid thing; it's very sketchy and you can't click it. Down... no, right above the title in bold text... no.
C
And so you could look for where that job is defined — which I think is the... yeah — so you can see which things are skipped and which ones are not.
A
And these performance tests here that are failing all the time — will they give us some information? Did you look at what kind of performance they measure?
A
Clearly, we don't want to — I mean, I don't think we want to restore this node-perf dashboard that we used to have.
F
Hi, hi everyone. So, yeah, I wanted to join a couple of times, but unfortunately I couldn't — it's quite late here. I just want to know what the further steps are for the lock contention tests. Last time we talked about it, we wanted to remove them from having their own separate tab in the Testgrid dashboard.
F
So we moved them into the serial test suite; the serial tests failed because the suite does not have those flags as part of it. So I created a PR to add those flags to the serial test suite. Now, the question here is: if we add those flags, it will run all the serial tests with those flags. Do we want that or not?
F
That was the question. And I wanted to test that — say, by running all the serial tests locally or somewhere else with those flags — to see whether it has any effect or not, but I wasn't sure how to do that. I asked a question around that as well: if I create a PR for the test and for the job config, I want to use the corresponding test-infra PR, not what's in master. Can I do that?
F
These are the lock contention tests — the two flags, lock-file and exit-on-lock-contention. We want them to be moved from kubelet flags to being part of the kubelet configuration, and one of the prerequisites for doing that is to add end-to-end tests for these.
F
So they were running in a separate tab before, but then we merged it, and Elana found out that it was also being picked up by the serial test suite, where it was failing — so the end-to-end test was reverted. I re-reverted it based on the call, based on the discussion that we had last time: that we need to put it into the serial test suite and not run it as a separate tab.
F
Yes — so how are we supposed to make it part of the serial test suite, then?
C
Well, I mean, in any case we would still need to restart kubelets and whatnot, I think. I think we may be getting a little bit stuck in the weeds. So: we added these tests, and I can't remember exactly what was causing them to fail, but I reverted them because they were causing some existing test suite to fail.
F
So I added it as part of the serial tests, but we don't want it to be a serial test. Yes — earlier, if you look at the old PR which was reverted, you can see that they were not part of the serial test suite; they were in a separate tab in the CI. Oh, wait, can...
C
Yeah, just so I can read which — yeah, they weren't getting picked up by serial; they were getting picked up by kubelet-features, which was causing our blocking test suite to fail. So there was no issue with — so we suggested, I think, maybe putting them in serial instead of having them in their own separate thing, assuming that they didn't need to run separately because they had their own separate thing. And now we have discovered that, in fact, we need to launch the kubelet with special flags.
F
So which flags do I need to add in order to not get picked up by this kubelet-features suite of tests?
C
Well, I think you'd probably have to add a thing to exclude them from those suites. The other thing, too, is that we're adding a NodeFeature thing, and we're trying to get rid of NodeFeature — so I don't think we want that. I'm just...
F
The thing is — no, it's not GA, but there was a choice: either deprecate it, or someone picks it up and provides an end-to-end test so that you can move it to the kubelet configuration.
C
I see. I would suggest, then, that rather than using a NodeFeature thing, you put a Feature thing on there, and that you exclude that particular thing from the node-feature suite until you can figure it out. We know why it's failing there: it's because the command-line flags weren't being passed, right? So...
F
Okay, but since we want to remove this NodeFeature and these extra tags, adding another node special-feature tag only causes extra stuff.
C
...flaky and serial. So that's why we were like: oh, we could just stick it in the serial suite and it'll get skipped — not realizing that the reason it was failing was that it was missing that command-line thing. So that makes sense, I mean.
A
Then you just need to add some flags to make sure that they're not picked up by any of them — and maybe we don't have enough flags to express that, so we need to change the job definition, specifically for NodeFeature. Maybe we can add something like an exclude for a node special-feature or special tag, and that will solve it.
C
Looking at the chat — Matthias. So there's a question: for the Feature notation, do we name it according to the name of the feature gate? I think that was the intent. I don't know if people have been good about that, though.
C
So I don't know if, in all cases, it 100% corresponds to that.
C
Yeah, maybe that's something we can think about doing, because then at least we can go through the list of feature gates. I think there is some historical issue with the fact that not everything was actually tracked with a feature gate; we had some things that, I think, were not fully graduated, necessarily, that were being sort of haphazardly tracked; and I think there may also be some features that didn't have a feature gate but were still tracked with those tags in the tests.
C
So really, it's just a lot of cleanup and inventory that we need to do — go and dump all of the tests and then figure out what belongs to what. There's a lot of spelunking and, you know, digital-archive work to be done there. It's like being a librarian of SIG Node.
F
I have another question, which is: which set of test suites are the ones for kubelet configuration?
F
What do you mean? So, since the task is — eventually we want to move them from kubelet flags to kubelet configuration — there would be a need for an end-to-end test so that these flags...
C
Once you — if you go through the... there are some docs on how to make an API change for a configuration. If you go and add that to the configuration, I wouldn't necessarily move or remove the flag right off the bat; I think the flags need to be deprecated before they can be removed. I saw somebody asking for a review of something where flags got removed, which we shouldn't do, so...
C
There needs to be some sort of deprecation period. But typically, for most of the flags, we will just add them to the kubelet config so that they exist there in parallel, and those can then be set in the kubelet config file. When you specify the config in YAML form, you can use that to set them — you don't have to use only the command-line flag.
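As a sketch of what that parallel kubelet configuration could look like for the lock-contention flags under discussion — note that the two new field names here are hypothetical, since the actual names would come out of the API review for the PR being discussed:

```yaml
# Hypothetical sketch: the lock-contention flags mirrored into the
# KubeletConfiguration file. The apiVersion/kind header is the real
# kubelet config format; lockFilePath and exitOnLockContention are
# illustrative names, not merged API fields.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
lockFilePath: /var/run/kubelet.lock   # today: the --lock-file flag
exitOnLockContention: true            # today: the --exit-on-lock-contention flag
```

During the deprecation period, both the command-line flags and the config fields would be accepted, with the file-based form eventually becoming the only supported one.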
F
No — so I have created a PR for that. Okay, so what I wanted was: would there be a need for an end-to-end test, in which currently the way — the way I am...
F
Okay, cool, yeah. So I do have the PR for that as well, but we first want to get this end-to-end test merged so that that PR can be reviewed later.
A
Okay, so I added an item here. I think I like this idea more and more: remove the special tag from the feature — or, in this tab, just make sure that it doesn't pick up special-tag tests, and then your test will only appear in this tab. Right — lock contention. Yeah, yeah.
A
Cool, okay. Do you want to discuss the time before we go to dashboards?
C
Yeah, I mostly just wanted to ask: we've been meeting at this time a few times now; we were hoping to get more contributors from Asia — specifically China and India.
F
I have a question. One: what was the earlier time? And second: what is the time for the weekly meetings?
C
Not the alternate — the previous time... there was no previous time; they were all just at the same time as the other meeting, and so we added this one in the hopes of getting more contributors attending. For the most part, we seem to have basically the same group attending both calls. So: is this not a time that works for people? We're certainly not getting folks attending from China, which I think was one of our hopes in moving this meeting.
A
So, just to rehash: we have the main SIG Node meeting, with mostly feature discussions — I think it's at 10 a.m. Pacific, so two hours later than this one. And then this one: on second Tuesdays we have this time, and all other weeks we have Wednesday at 10.
F
Sorry — I don't want to provide a lot of input, because I don't attend a lot of the calls, nor have I been active. But just in general: the 10 p.m. — sorry, the alternate time that we had earlier — seems quite late for Indian folks. This time does work; I'm not sure if that would be the case for the weekly calls as well.
C
Yeah, I don't think we could move the call to this time all the time — I have a conflict in this slot, so I skip another meeting in order to attend. So that's mostly what I was asking: if this time is generally less attended than the other time, and we're not necessarily getting a different group of people attending, then I think it adds confusion and doesn't necessarily make sense. I wonder if — would it help...
C
So I can't just assume: oh, if I add the reminder on the second Tuesday, it'll actually work. Okay, I will add a reminder on Slack, given the number of plus-ones in the chat, and maybe that will help with attendance. And I was going to say also: we advertised this meeting in our KubeCon talk, so I guess we have to hold it at least through November; otherwise people will be very sad when we tell them to show up and then it doesn't exist.
A
You can advertise it during the regular SIG Node meeting — this week it's at 8 a.m., that week it's at 10.
C
I'll set up a reminder on the Tuesday before the next one, and we'll see if we get — I'll take an action item to do that.
A
Okay — surprisingly long time we spent on the agenda today, which is very good. I like it when it's full.
C
Well, I got approver, and then I proceeded to go on vacation for the rest of August, and I am now kind of in and out of the office for the next two weeks — and also it's KEP freeze. So I think we'll find out after we stop having deadlines and vacations, because I have not been approving very many things — not because I don't want to, just because I have other stuff going on.
A
Yeah, I already approved a couple of PRs, I think, which were not...
D
...like, not major. Yeah, it seems like during the first two weeks it was like: yes, yes, let's approve — and then it slowed down. But probably that was just during the vacations, because Sergey was on vacation and you were on vacation several weeks, so yeah. I...
C
I think it's a combination of a bunch of things, too. When we get between releases, I think a lot of people — myself included — really want some time off, because especially last release, the crunch going into the end of the release, and all of the test burn-down and whatnot... and then it's just like: finally, I can take a break. Especially with all of the bugs and regressions that came out of 1.22 and the pod lifecycle refactor — that was brutal.
C
So I was glad to take some time off. I think we're definitely seeing that we're bandwidth-constrained, right? We still don't have a lot of people, and when the people we do have — now with an approver on vacation — we still feel the pain. So we need more people. That's, I think, really the... we don't want single points of failure.
A
Yeah — and I realized now that being an approver is a little bit harsh, because people can just go: oh, it's the approver's job to look at this test, right?
A
You already used them, so — yeah. Right now it's after-vacation catch-up, and at Google there is perf going on, which also consumes a lot of time.
C
There's a critical, urgent regression with static pods going into an error state in 1.22.
A
Yeah, I think for the review — you can review it as a test; it's a minimal change here. So I would archive it from here, leaving it to the PR review.
A
Okay, Imran — this is yours. My internet dropped.
A
Yeah, I reviewed it before, so I can review it again, because I approved the original — I mean, the original PR.
A
I mean, we needed somebody for the assignee section, so we wouldn't look at it every time.
A
Anybody want to sponsor Mike? I'm totally sponsoring him, but — and he has a few PRs already merged.
C
Oh, sorry — is this for Kubernetes membership? Yeah? Yes, I've...
C
Send me your list and I'll take a look, and I'll let you know — but I would certainly be happy to.
C
Let's punt to next week, because I've got a bunch of PR reviews and KEP approvals to finish up today anyway. It's going to be a long day.