From YouTube: Kubernetes SIG Node 20210216
Description
Meeting Agenda: https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
C: This, I think, will be pretty short. I think we last spoke in November, and then we figured we'd work on modifying the KEP design. Tim Hockin had some concerns about moving the resources allocated from spec to status. We agreed on what that should look like, and we added it. I think that's the summary of the changes we made over the last month or so; he was away in November and December.
C: We didn't make much progress, but towards the end of January we got traction and were able to work together on this one, and Tim has reviewed it; this is the first link, the KEP PR, #1883.
C: So the summary of that is that we have moved resources allocated from pod spec to pod status. I'm just going to make an update in the KEP here... that doesn't work well... so: moved the resources from spec to status. And we had previously agreed on checkpointing the resources allocated in the kubelet.
C: I have not started working on that, but it's the first thing I plan to work on, so that we get it squared away. And the change that we discussed, going back and forth, is to add a podStatus.resize field. This is the signal that the system, the cluster, would send to the user to say: okay, your resize has been accepted.
C
It's
in
progress
or
it
is
temporarily
not
it's
deferred,
which
is
temporarily
not
possible,
but
we'll
retry
later
or
it
is
not
feasible
like
if
you
request
that
a
node
has
four
cpu
and
you
request
five,
that's
the
case
where
it
will
become
infeasible,
so
that
I
think,
is
a
fairly
minimalistic
set
of
signals
that
we
can
send
to
the
two
vpa
in
this
case
and
the
proposal
to
add
resize
subresource.
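For reference, a minimal Go sketch of the resize status signal described above; the value names follow the discussion (in progress, deferred, infeasible), but the exact API shape and naming were still under review at the time of this meeting, so treat these identifiers as illustrative.

```go
// Illustrative only: a hypothetical pod resize status field as discussed,
// not the final API.
type PodResizeStatus string

const (
	// The resize has been accepted and is being actuated.
	PodResizeInProgress PodResizeStatus = "InProgress"
	// Temporarily not possible (e.g. the node is full); will be retried later.
	PodResizeDeferred PodResizeStatus = "Deferred"
	// Can never be satisfied, e.g. requesting 5 CPUs on a 4-CPU node.
	PodResizeInfeasible PodResizeStatus = "Infeasible"
)
```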
C: Currently we're still going to use patch: in the alpha version we're going to use patch to update it, but in the beta time frame we're looking to add a /resize subresource, which is going to be used to set the desired resources by the user or the VPA. And Tim has mentioned that he's going to go and check; he has a weekly meeting with the folks in Poland, the VPA team, and he's going to check with them and make sure that this is okay from their side.
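A minimal client-go sketch of the alpha-phase flow described above, patching the pod's resource requests directly; the pod name, container name, and namespace are placeholders, and the beta-phase /resize subresource would change only where the patch is sent.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Strategic merge patch raising the CPU request of container "app"
	// (hypothetical names; containers are merged by name).
	patch := []byte(`{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"2"}}}]}}`)

	pod, err := clientset.CoreV1().Pods("default").Patch(
		context.TODO(), "my-pod", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("patched:", pod.Name)
}
```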
C: I think we just want you to look at this, give it a once-over, you and Derek, if you could take a look at some point. We're targeting this for 1.22 at this point, right; I see Derek has said no to 1.21, which is too late anyway. So that's all right.
C: So that PR; and then there is a second PR that I have, which is the changes to the CRI, plus mainly adding the template changes that were required, the PRR requirements, to make it ready. I didn't know of this new process that you have to go through with the sponsoring SIG; I got to know of it just a week before, and I thought it was being tracked for 1.21, but that's fine. We can track it for 1.22.
C
That's
my
plan.
So
I
think
what
the
my
ask
force
ignored
is
to
just
look
at
these
changes
that
mainly
tim
hawkins
changes
and
if
they
look
good,
if
you
have
any
concerns,
I'll
also
run
this
by
the
6
scheduling
this
week
and
see
if
they
have
any
issues,
but
the
change
to
scheduler
is
pretty
minimal.
Instead
of
for
looking
at
resources
allocated
in
the
spec,
it's
going
to
look
at
it
in
the
status
and
do
a
max.
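A hedged sketch of that scheduler-side change: take the max of the requested value from the spec and the allocated value from the status when accounting for a pod. The function and variable names here are illustrative, not the actual scheduler code.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// maxQuantity returns the larger of two resource quantities.
func maxQuantity(a, b resource.Quantity) resource.Quantity {
	if a.Cmp(b) >= 0 {
		return a
	}
	return b
}

func main() {
	requested := resource.MustParse("500m") // desired CPU from the pod spec
	allocated := resource.MustParse("1")    // allocated CPU from the pod status
	// Account for the pod using the max of the two, per the discussion above.
	effective := maxQuantity(requested, allocated)
	fmt.Println("schedule against:", effective.String()) // prints "1"
}
```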
A: Thanks, Vinay, for the updates. I believe we talked about this in the past, and we agreed, okay, to move resources allocated for the pod back to the status. Even though we had some concerns, we did agree on that one, because of the principle based on the pod spec and the status: the general Kubernetes API model and the spec-versus-status principle.
A
So
we
kind
of
compromised
that
one-
and
we
also
agree
in
the
past
to
do
the
no
code
checkpoint
for
this
feature
and
then
there's
the
new.
We
have
one
concern.
It
is
basically
it
is
how
to
notify,
there's
the
vpa
in
progress.
So
we
are
going
to
take
a
look
at
that.
The
the
results,
the
new
object
I
did
and
the
new
service
resource
idea.
C: Okay, if there are no other concerns... I had a question. Last week, I think, I dialed into the meeting, and there was a checkpoint/restore KEP discussion. I haven't yet had a chance to look into that change; I believe it's Adrian's. Is this something... I know I have not been in the meetings for a little bit; have I missed something that I could use for the checkpoint, any new developments that I should look at while implementing the checkpoint/restore, or do I follow what Derek last told me?
E
So
there's
no,
I
don't
think
that
checkpoint
restore
discussion
was
related
to
your
work.
That
was
more
about
entirely
freezing
a
pod
and
at
the
c
group
level
and
then
potentially
restoring
it.
So
adrienne
was.
E: I'll look at your enhancement. The one thing I was wondering, just for my memory: we're not allowing you to resize the pod overhead associated with a pod, correct? Just the container itself?
A: Derek, can I ask you where that question came from? Is there any use case, or are you concerned about allowing this?
F: Yeah, hi. So last week, one of the results of the discussion was that the checkpoint/restore CRI changes should not be part of the existing CRI API, and that an experimental CRI API should exist. I just wanted to mention here that I ported all my pull requests to an experimental CRI.
F
I
I'm
not
sure
if
that's
the
right
thing,
but
if
someone
could
review
the
pull
requests
related
to
the
changes,
I
would
be
happy
to
to
work
with
anybody
here
to
get
get
this
forward,
but
it's
now
all
in
a
api.
I
called
it
the
experimental.
Maybe
I
misunderstood
how
it's
supposed
to
be,
but
yeah.
Please,
please.
Let
me
know
in
the
pull
request.
I
just
wanted
to
highlight
it
here
that
it
has
been
moved
forward.
G: Hey Adrian, I can take a look at it. Okay.
A: Thank you, Andrew. And also, Adrian, I saw your post from last week; you posted a simple demo there. Please, everyone, follow up on the demo and take a look, and we can follow up in more detail in the future. Okay, yeah, thanks. Let's move to the next topic: so, Andrew, about the cgroup version one subsystem.
D: Review PRs on node meetings; we can get to that after the meeting.
H: Yeah, it's broken. It currently takes a random mount point from the file which has the name... from mounts; let me check.
H
Yeah,
basically
for
from
prog
file
system,
it
takes
random
mount
point
for
c
group,
and
there
is
also
some
issue
with
kind.
If
you
check
comment,
you
can
see
that
we
found
that
kind
tests
are
also
randomly
passed
past
before.
A: To me it is a little unclear; I didn't look at it here this morning. Like you, I like the original author's proposal to have a cleaner definition of where to find the mount, but that for sure will break kind; and even with the kind fix, there's still the existing kind issue.
A
So
I'm
not
sure
that
it
is
so
that's
the
two
kind
of
problem
one
it
is
like
the
previous
problem
is
they
could
random
pick
up
the
month
point
if
on
the
system
denote,
but
the
that
one
is
not.
Let's
just
really
fix
those
problem
for
all
the
cases,
but
on
other
hand
for
those
kind,
cluster
and
the
steel.
This
one
is
still
running
the
problem
because
it
is
still
fair
because
we
always
so
that's.
A
Why
I'm
not
sure
I
like
what
he
proposed
to
start
from
the
root
more
clean,
and
but
I
can
I
can
see
that
legacy
region,
maybe
it
will
be
broken
more
for
the
existing
cluster
and
existing
production
could
be,
and
so
that's
why
this
is
why
I'm
not
sure
how
to
move
forward
with
that
one.
So,
in
your
cases,
I
underst-
and
I
think
I
got
I
haven't
clearly
heard
so
you
run
into
some
problem
with
the
css
driver.
E: ...the best solution, I guess; I'm trying to figure it out. It seems like there's a failing test in kind, which is good, but if we don't have a test that's testing, like, exotic mount point setups, then I'm wondering whether we will.
H
Currently,
the
couplet
choose
randomly
mount
point
of
system
dc
group
mount.
So
there
is
no
any
request
for
some
specific
support.
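As an aside, the deterministic behavior being argued for might look like the sketch below: scan /proc/mounts in file order and prefer the canonical /sys/fs/cgroup root, rather than taking whichever entry an unordered map happens to yield first. This is an illustration of the idea, not the kubelet's actual code.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findCgroupMount returns the first cgroup mount under the canonical root,
// scanning /proc/mounts in file order so the choice is stable across runs.
func findCgroupMount(scanner *bufio.Scanner) (string, bool) {
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) < 3 {
			continue
		}
		mountPoint, fsType := fields[1], fields[2]
		if (fsType == "cgroup" || fsType == "cgroup2") &&
			strings.HasPrefix(mountPoint, "/sys/fs/cgroup") {
			return mountPoint, true
		}
	}
	return "", false
}

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if mp, ok := findCgroupMount(bufio.NewScanner(f)); ok {
		fmt.Println("cgroup mount:", mp)
	}
}
```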
D: We currently don't have any unit tests actually checking this behavior, which is presumably why this is kind of buggy; there are no test changes associated with this PR. So that was one of the things that made me not super confident when reviewing.
A: Thanks. So the next one, Andrew, I think also came from you, but I think Derek already responded about the video recordings in the past. And thanks, Derek, actually: for the last couple of years I just kick off the record button, and Derek does all the hard work and uploads.
E
Derek
yeah
I
gotta
download
all
the
stuff
and
get
it
uploaded,
but
all
the
recordings
are
there,
so
I
will
get
as
many
as
I
can
this
afternoon.
I: Thanks. Yeah, for that one I just put up a PR. This has been kind of a long-standing PR, I think since October, to fix an orphaned pod issue. Basically, when pods are orphaned, sometimes the volumes are not cleaned up properly. There's another issue, and I think it's been a long-standing one, but this seems to be a good initial fix, and we did see, at least for us in production, some customers actually hit this case.
I
So
that's
kind
of
why
I
brought
it
up
but
yeah.
It
looks
like
there.
This
kind
of
mixed
sig,
node,
six
storage,
but
jing
from
six
storage,
actually
took
a
look
at
it
and
looked
like
it
was
a
good
fix.
So
yeah
thanks
don
for
taking
a
look-
and
I
don't
know
if
anyone
else
has
seen
a
similar
issue
with
orphan
pod
volume-
cleanup-
hopefully
it'll
it'll
help
out
and
yeah.
We.
A
Thanks,
actually,
we
saw
this
falling
apart
in
the
production
once
a
while
yeah,
so
there's
the
several
attempts
to
fix
this
from
the
sig
storage
in
the
past
yeah.
So
thanks
for
brought
this
up
so
next,
I
think
the
steel
david
and
the
higher
commander
and
for
the
sad
weather,
really
the
topic.
I
Yeah
yeah,
so
I
was
thinking
since
we
have
a
little
time
we
can
maybe
go
through
this
cap.
Just
get
some
initial
thoughts
on
it
and
a
lot
of
people
have
already
provided
some
comments.
I
So
I
was
thinking
we
just
maybe
can
go
over
it
very
briefly
and
kind
of
get
some
get
get
some
more
eyes
on
it.
Yeah.
Maybe
if
you
can
share,
if
I
can
share
my
screen.
I: Perfect, all right, I think that should be sharing. Yeah, so anyway, this is a KEP that Peter and I have been working on. By the way, we already kind of decided, for 1.21, that we just want to get some more thoughts around this, and agreement and so forth; so that's our goal for 1.21, just to close on the future direction.
I: So, thanks to everyone who has already taken a look, but I'll provide a little background. The whole idea here is that today the kubelet collects a lot of container- and pod-level metrics, and today those come mostly from cAdvisor. And we've seen, and we've had, I think, previous discussions in SIG Node, that that's something we want to eventually move away from, for various reasons.
I: Some of the reasons are that, right now, CRI is actually collecting some statistics as well, and some are being collected from cAdvisor; so ideally we want to center around one source of truth for metrics. The other thing is that every container runtime needs to integrate with cAdvisor today, so cAdvisor has special code for Docker, for containerd, for CRI-O, et cetera.
I
So
ideally
that
would
be
in
the
cri
implementation
and
then
also
we've
seen
some
perf
issues
like
I
think
peter
and
red
head
folks
can
mention,
because
some
of
the
metrics
are
collected
twice,
for
example,
by
c
advisor
and
also
by
the
runtime
itself,
so
I'll
just
going
to
go
over
briefly
briefly
to
kept
here.
So
there's
multiple
kubelet
metric
endpoints
like
that,
have
been
built
up
built
out
over
the
years.
There's
the
summary
api
there's
the
metric
c
advisor
endpoint
there's
also
some
other
endpoints.
I
So
our
kind
of
goal
here
is
basically
around
focus
around
the
summary
endpoint
and
david
ashford
did
a
lot
of
work
here
and
I
think
we
more
or
less-
and
maybe
this
is
a
good
question
for
everyone,
but
I
feel
like
we
came
to
some
agreement
as
a
community.
That
summary
api
is
something
that
we
can't
really
break
that
a
lot
of
people
rely
on
today,
so
we
need
to
kind
of
whatever
we
do.
We
need
to
somehow
support
the
summer
api
moving
forward
right.
So
that's
kind
of
the.
I
The
idea
that
we
went
with
in
this
cap
is
that
you
know
summary
api.
We
need
to
support
so
basically,
the
the
problem
today
is
that
there's
there's
two:
when
you
ask
the
summary
api
for
metrics,
it
can
collect
metrics,
either
from
c
advisor
directly
if
you're,
not
using
cri
and
or
if
you
are
using
cri.
It
collects
some
metrics
from
from
cri,
but
most
of
the
metrics
are
actually
coming
from
c
advisor
today.
I
So
peter
did
a
lot
of
awesome
work
here
and
put
together
this
table
that
basically
shows
the
summary
field
and
then
the
the
corresponding,
basically
where
it's
provided
from
today,
which
is
c
advisor
for
most
of
the
stuff.
You
can
see
in
this
column
and
then
this
field
is
the
corresponding
metric
on
the
c
advisor
prometheus
endpoint.
If
there
is
one
and
so
basically
what
we
kind
of
went
through
is
we
just
went
through
each
metric
and
kind
of
said
that
more
or
less
for
every
metric?
That
is
in
the
summary
api?
I
That's
not
present
in
cri
today
we
want
to
add
it
to
cri
and
eventually
have
the
crx
collect
that
metric
and
report
it
so
that
the
future
goal
is
that
summary.
Api
can
basically
move
completely
off
of
c
advisor
and
and
for
a
new
cri
first
for
metrics,
so
that's
kind
of
what
what
the
the
cap
focuses
on
it
goes
into
detail
into
all
the
various
metrics
here.
I
So
this
is
for
memory
so
for
memory
today
I
think
it's
very
there's
only
one
metric,
I
think
that
is
provided
by
cri
container
memory,
work
and
set,
but
summary
api
provides
a
lot
more.
So
that's
kind
of
the
the
very
high
level
proposal
here
is
to
kind
of
extend
the
cri
api
with
with
these
metrics
such
that
summary.
Api
can
migrate
off
of
off
of
that,
and
so,
let's
see
here,
you
can
look
at
the
cap
more
detail.
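To make the memory example concrete: at the time, CRI's memory stats carried essentially just the working set, while the summary API exposes several more fields. Filling that gap would mean extending the CRI message along the lines sketched below; the field names mirror the summary API and are illustrative of the proposal, not the final CRI definition.

```go
// Illustrative Go mirror of a CRI memory-stats message. WorkingSetBytes is
// roughly what CRI reported at the time; the remaining fields are the kind
// of summary-API values the KEP proposes adding so cAdvisor can be dropped.
type MemoryUsage struct {
	Timestamp       int64  // when the sample was taken, in nanoseconds
	WorkingSetBytes uint64 // the one memory metric CRI already provides
	AvailableBytes  uint64 // candidate addition: memory available for use
	UsageBytes      uint64 // candidate addition: total memory in use
	RSSBytes        uint64 // candidate addition: resident set size
	PageFaults      uint64 // candidate addition: minor page faults
	MajorPageFaults uint64 // candidate addition: major page faults
}
```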
I: I won't go over all of it, but we provide some more motivation around why we want to do this, why we want to move toward the CRI and off of cAdvisor here. We also went through the summary API, and some things probably don't make sense to carry over.
I
For
example,
accelerator
metrics
are
in
the
summary
api
today,
for
example,
but
they're
deprecated
there
was
a
separate
kept
to
deprecate
them
in
the
summer
api,
so
we
kind
of
only
moved
or
proposed,
adding
stuff
to
cri
that
that's
not
that's,
actually
being
used
for
the
most
part.
So
that's
kind
of
the
the
high
level
idea
here.
The
other
kind
of
big
questions
we
have
is
around
the
metric
c
advisor
endpoint.
So
the
problem
is
that
some
people
collect
the
metrics
from
the
summary
api.
I
Some
people
collect
the
metrics
from
the
c
advisor
endpoint.
If
c
advisor
will
not
be
collecting
those
metrics
in
the
future,
we
need
to
have
some
story
for
people
who
collect
metrics
from
this
end
point
so
we
we
saw
some
early
discussions.
I
think
around
how
best
to
support
that
endpoint
one
one
idea
we
had
is,
for
example,
to
convert,
convert
the
the
metrics
that
we'll
collect
in
the
cri
implementation
into
prometheus
format
and
that'll
provide
at
least
some
some
story
for
people
who
need
the
metrics
in
prometheus
format.
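A small sketch of that conversion idea, assuming the values come from the CRI implementation: republish a CRI-sourced stat as a Prometheus gauge so consumers of the Prometheus-format endpoint keep working. The metric and label names here are illustrative.

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// A gauge mirroring a cAdvisor-style metric name, fed from CRI stats.
var containerWorkingSet = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "container_memory_working_set_bytes",
		Help: "Working set republished from CRI stats (illustrative).",
	},
	[]string{"container"},
)

func main() {
	prometheus.MustRegister(containerWorkingSet)
	// In a real kubelet this value would come from the runtime's
	// container stats response; a fixed number stands in here.
	containerWorkingSet.WithLabelValues("app").Set(123456789)
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":9090", nil)
}
```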
I
So
there's
there's
some
section
that
that
talks
about
specifically
the
migration
path
there.
The
whole
idea,
I
think,
from
a
high
level-
is
to
make
it
not
make
it,
as
least
of
a
breaking
change
for
users
as
possible,
so
that
they're
using
the
summary
api
they
should
have
minimal
changes.
If
they're
using
the
metric
c
advisor
endpoint,
I
think
long
term.
We
want
to
make
sure
there's
minimal
breaking
changes
there,
but
of
course,
c
advisor
does
collect
a
lot
of
metrics
today
that
are
present
in
this
endpoint.
I
That's
probably
we
won't
doesn't
make
sense
to
add
to
cri,
so
some
metrics
will
probably
have
to
have
to
be
skipped,
so
there
will
be
some
type
of
you
know
migration
here,
but
we're
hoping
to
kind
of
keep
it
minimal
yeah.
So
then,
the
cap
talks
more
in
detail
around
the
the
cri
changes
proposed.
So,
basically
adding
we
already
have
some
initial
stat
fields
in
the
cri,
but
it's
kind
of
building
those
out
and
and
including
some
more
some
more
information,
basically
so
that
we
can
fulfill
the
whole
summary
api.
I
G
No,
that
was
a
that
was
a
pretty
good
summary
yeah.
I
think
I
think
the
biggest
couple
of
open
questions
that
we
have
now
are
like
yeah
how
to
handle
the
metric
c
advisor
because,
like
yeah
most
prometheus,
those
things
in
the
prometheus
world,
the
permutations,
adapter
and
even
now,
metric
server,
though
they're
moving
off
of
it,
rely
on
that
endpoint
and
the
also
the
other
thing
is
now
that
we're
moving
this
implementation
to
the
cri.
There's
the
opportunity
for
us
to
customize,
based
on
that.
So
like
there
was
some.
G
You
know
some
folks
who
chimed
in
from
microsoft
or
windows
container
world,
and
you
know
it's
it's
possible
that
you
know.
We
may
also
want
to
define
different
fields
for
different
operating
system
types,
but.
J
Yeah,
I
can
briefly.
J
Well,
yeah,
so
yeah,
I'm
still
looking
into
that
that
most
of
the
folks
who
were
looking
to
that,
including
me,
were
out
in
the
last
week
and
tomorrow.
So
I'll
keep
looking
at
that.
J
But
I
know
that
with
windows
containers,
we
kind
of
drastically
changed
how
a
lot
of
those
stats
are
given
between
the
host
compute
subsystem
version,
one
which
is
what
docker
is
based
on,
and
the
host
compute
subsystem
version
two,
which
is
what
container
d
is
based
on
for
windows
and
I'm
still
trying
to
reconcile
what
all
of
the
differences
with
that
are.
I
don't
have
all
the
history.
I
know
that
for
the
docker-based
implementation,
though,
the
windows
subsystem
would
return
metrics
in
a
completely
different
format,
and
then
it
was
either
docker
or
docker.
J
Shim
would
try
and
convert
those
to
the
kind
of
what
kubernetes
would
expect
and
expose,
but
since
docker
is
being
deprecated,
I'm
focusing
on
the
container
d
versions-
and
I
don't
have
as
much
history
there.
So
I'm
still
digging
into
that.
But
we'll
do
a
nice
review
on
this
in
the
next
coming
days.
I
Yeah,
so
I
think
our
our
biggest
ask
for
people
will
probably
be
around
if
the
the
metric
additions
make
sense
and
what
metrics
they're
relying
on.
How?
If
I
think
a
couple
of
interesting
questions
would
be
how
much
people
are
relying
on
the
prometheus
endpoint
and
specifically,
if
they
are
what
metrics
they
are.
Relying
on
that'd
be
an
interesting
question.
I: ...that I would like to get some feedback on. And the other thing is probably just, in general, whether this direction makes sense and whether people are on board with moving more of this metric collection into the CRI layer; I think that's another big question. Yeah, so if anyone has any thoughts, feel free to jump on the PR; there's already been a good amount of discussion there. So I think that's maybe the ask. Yeah.
A
Thanks
peter
and
david
and
for
this
effort
yeah,
so
we
want
to
have
this
one
for
years
and
then
we
get.
Finally,
people
take
action
thanks
and
please
join
their
cap
and
the
pia
for
discussing
and
the
next
one.
D
Yeah,
I
just
put
this
on
the
agenda
because
it
came
up
in
some
internal
discussions.
So,
according
to
the
documentation
in
theory,
the
cubelet
is
supposed
to
support.
Like
say
you
have
an
api
server
on
version
n,
you
can
have
a
cubelet
on
version
n
minus
two.
So,
for
example,
you
have
an
api
server
on
119..
D
In
theory,
a
cubelet
on
117
can
still
work
with
that,
and
so
I
had
a
question
in
terms
of
if
anybody
knows
what
the
state
of
tests
for
that
are,
I
haven't
been
able
to
find
any.
I
also
don't
know
if
it
is
covered
in
conformance
testing,
so
I
reached
out
to
hippie
hacker
who
sent
me
to
the
kate's
conformance
channel
and
I
started
a
thread
there.
So
if
anybody
knows,
I
guess
please
post
there.
If
you
want
to
just
know,
that
would
be
good
too.
A
I
can
feel
you
some
context,
but
I
don't
know
the
current
status,
so
I
remember
back
in
the
1.6
or
197,
I'm
the
release
manager,
and
so
so
the
api
machinery
and
also
component
of
the
working
school
is
n.
Minus
two
is
defined
even
before
first
release,
so
we
decided
that
one,
but
where
I
was
like
the
release
manager,
I
noticed
that
we
never
test
so
there's
the
only
have
like
the
render
every
release
and
we
find
some
people
random
in
google
helpful
community
do
that
test.
A
So
I,
if
you
notice
that,
in
the
test
infrastructure,
I
found
the
issue.
I
want
to
build
the
infrastructure
for
the
system,
there's
the
two
things
where
it
is
menu,
but
it
is
the
procedure
you
have
to
have
the
auto
procedure
for
every
release.
So
so
we
do
have
like
a
test
grade
so
which
it
is
on.
I
hope
on
by
the
sig
class
life
cycle.
I
believe,
that's
that
time
seek
a
class
life
cycle
just
first
performed,
and
so
they
want
to
have
the
job
work.
A: So there was an engineer working on it, but I never followed up, because after doing this I think I only followed up for the release after that, as a consultant, and I don't know the current status; even testgrid has changed, and SIG Testing has also changed dramatically after several years. So I don't know, but in the past we did this every release as the release manager.
A
We
document
okay,
what
kind
of
things
you
have
to
checkpoint
and
so,
besides
the
confirm
that
time,
there's
no
confirmed
test
here
right
so,
but
we
we
actually
signal
to
have
some
like
the
term
about
the
conform
time.
So
we
said,
oh,
you
have
to
check
box
those
kind
of
things
and
then
there's
the
certain
act
of
the
new
feature.
You
have
to
make
sure
the
alpha
feature
certain
things
you
have
to
test
there
and
then
there's
the
upgrade
test
just
share
here.
D: Yeah, if you click on the Slack thread that I linked in the agenda, I found a bunch of different issues, all documenting that we need tests for this, since it's, like, a documented capability for clients to be able to do the N minus 2 skew, but I can't seem to find any actual tests for it. So I will continue the discussion there and hopefully get everybody involved that needs to get involved, whether it's cluster lifecycle folks or folks working on conformance testing, that kind of thing.
D
I
think
this
needs
to
be
a
bigger
conversation.
So
I
also
I
don't
know
how
much
interest
there
actually
is
in
doing
this,
since
it
increases
a
bunch
of
like
you
know,
test
cases
that
we'd
have
to
worry
about.
Given
that
we
already
support
three
versions,
that's
like
a
lot
of
skew
that
we
would
need
to
cover
so.
A
Yeah
david
looks
like
that.
We
stopped
the
three
distancing
in
the
1.15,
so
you
can
see
that
one
data
three
and
the
mast-
that's
basically
n
minus
two,
so
mass
master
branch
actually
is
1.50,
that's
exactly
what
I
missed
it
in
the
1.6
or
107.
A
Thanks
a
lot
to
brought
this
up
and
I
I
believe,
19.
We
also
have
the
api
change
right.
So
maybe
that's
the
another
reason
we
didn't.
Nobody
really
follow
that
and
and
that
issue
here.
D
Yeah,
this
is
just
a
reminder
for
the
group
I
know.
For
the
past
couple
of
weeks,
we've
been
super
busy
with
the
121
cap
freeze
and
121's
focus
stuff,
so
I
certainly
wasn't
going
to
pester
anybody.
While
we
were
all
busy
working
on
that,
but
I
am
hoping
to
get
sufficient
feedback,
maybe
start
narrowing
down
the
scope
in
the
next
couple
of
weeks
of
what
we
want
to
accomplish
for
alpha
swap
support
in
122..
D
So
if
you
haven't
had
a
chance
to
look
at
that
document,
please
go
ahead
and
take
a
look.
I've
linked
it
in
the
agenda.
Please
feel
free
to
send
me
a
note.
I'm
going
to
probably
start
paring
down
scope
soon
and
like
coming
together
with
an
actual
proposal
that
we
can
discuss
as
a
sig.
Hopefully
in
the
next
few
weeks,.
A: Any questions on this topic? And does anyone want to partner on this one as well?
D
I
think
karen
has
been
helping.
I
don't
know
if
he's
on
the
call
think
so
yeah,
I'm
very
happy
like
if
you
have
use
cases
if
you're
interested
in
this
feature.
Please
please
reach
out
to
me.
We're
gonna
need
lots
of
testing,
helping
hands
so
and
and
feedback.
So
please
do
reach
out.
I
would
appreciate
it.
B: Yeah, I pasted the statistics, and you're welcome to look. Typically what I do is go into the created PRs and see whether there are any trends, but this time there were no trends; just a lot of PRs were opened.
B
Also,
I
didn't
find
anything
screaming
and
closed
ones,
so
I
think
it's
it's
just
regular
week,
but
I
didn't
look
very
deep
because
typically
I
spend
more
time
on
that
not
just
doing
it.
During
the.
B
A
Thanks
seki,
thank
you
everyone
and
for
attending
today's
meeting
and
looking
forward
for
next
week
and
please
use
your
question
and
feedback
share.
Your
feedback.
Add
the
pr
and
the
cap
thanks.