From YouTube: Kubernetes SIG Node 20210810
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
C
Yeah, so welcome everyone to the August 10th SIG Node meeting. Meetings are recorded, so be kind, everybody, and are uploaded later to YouTube for those who can't join.
C
I think we have a relatively packed agenda today, so I don't know if we need to go over pull request stats this week versus just diving right into the 1.23 planning discussion.
C
I have a bias to do the 1.23 planning discussion. Everyone okay with that? And I can read the table. I guess with that in mind: Mrunal and Elana, are you guys ready? Yeah.
C
D
All right, do you see my screen?
F
D
All right, so Elana and I went through all the outstanding items from 1.22 and the new items that are already in 1.23 and put them into this table. So we can go through the table today, try to add priorities, maybe some sizing, and make sure we have owners assigned. The first one on the list here is ambient capabilities.
D

C
I mean, it does impact the pod spec pretty strongly. I guess, did...
D
So I know Tim Allclair pinged a bunch of us from the runtime side, and I think I did mention SIG Security. But we can make sure.
E
H
F
A
I think it's not just on CI; they also have an initial goal to make sure we collaborate across all the content around that. This is why they came to SIG Node. But I agree: most of the implementation and design, and most of the driving, will be SIG Security, but a lot of the logistics will actually land in SIG Node here. That's why we are a stakeholder.
D
All right, so do we have any objections from the SIG Node side, Derek? Or do you think we want to review this in one of the upcoming meetings?
C
A
So we know you're going to cover the CRI side, right? So CRI and containerd and CRI-O. Of course it's not all your review, yeah; there are people there, but they just at least need someone to coordinate the effort. That's it.
D
Awesome, all right. So the next one is container notifier; maybe, Dawn, you can speak to that. Yeah.
A
So this one is the KEP... We have been discussing this since last year, and the initial scope was expanded per Tim Hockin's request. So it kind of wanted to solve a lot of other notifier issues for containers, and we've been discussing the scope since we founded the SIG. Over many discussions we actually significantly reduced the scope; right now it's more... it's only limited to how to solve the storage case...
A
...for one container: it is how to notify the container that it is going to die, and that is the kind of limited, smaller scope. So currently we agree about the API from the high-level design, and the latest update actually came just right after we accepted the KEP, at the deadline, so we suggested they punt to this release. But I think a lot of things are already ready.
A
Even the implementation is ready, so I will treat this as large and relatively high priority, even though it's been so long. The problem, actually, of giving the signal and safely detaching the storage has been a production problem for such a long time. So we should do more work here. So I will treat that, yeah, as high priority, relatively high priority.
C
I don't... I guess I remember a few of us had met as a small group on this, and, yes, I didn't think we had reached an API agreement on this; it still felt in an awkward spot. Maybe, Seth, I know you were looking at this some: do we actually feel good about what was presented here? I didn't.
I
But it was one of those things that, unless I actively resisted, it seemed like it passed; I believe it went forward.
A
Actually, let me clarify. I think we have reached agreement on the API. It's not that we disagree; we strongly disagreed with the API initially just because the scope was too big, and right now we agree about a much, much smaller scope, and we also agree about the real use cases. We are doing those kinds of things, but for the hypothetical use cases, it would be new.
A
We did agree before, but then Tim Hockin weighed in and wanted to expand the scope and have a new API. We totally disagreed with that, because we feel it would take a much, much longer effort and we don't see clear use cases.
A
So this is why I say we kind of agree about the API: before it was expanded, we discussed it at the SIG and we thought the API was okay. But the API has since been expanded and redesigned per the expanded scope, and with that we totally disagree. But anyway, I think we should play it safe, because reliability is the top concern, and I still think this is high priority. But if we still don't agree about the API, we just...
A
C
C
I
Yes, it's possible that this KEP has changed since last time. It's actually likely that it's changed since the last time I read it, so I will read back over it and try to get a better handle on what the current scope is.
J
I can talk to that. So this KEP kind of is a follow-up KEP to the pod security policy replacement. Basically, there's an issue where, because there's no standardized or definitive way of saying whether a pod is targeted for a Windows or Linux node, there can be issues with people, or operators, deploying policy to their clusters...
J
...distinguishing between Windows and Linux pods. There are a couple of different recommended ways to identify that, with OS node selectors or runtime classes, but there isn't anything standardized that things like, you know, API server validation can key off of to decide if it wants to enforce certain rules. So this KEP is to standardize on that. We're still working with Jordan Liggitt to finalize it, leaning towards using runtime classes...
J
...to identify a Windows pod, but there are a couple of caveats there that we're working through. The main intersection with SIG Node is that during a lot of these discussions it came up that it doesn't really make sense to apply... like, I think there was a suggestion to strip, at the kubelet, a lot of the fields that don't apply to either Windows or Linux.
J

D
So this will be mainly with the API, and the SIG Node involvement is on the last part of it, right? Yeah.
J
Yeah, the main involvement is with SIG Auth and all the stakeholders from the pod security policy replacement, and the API changes, since we are planning on doing some validation at pod admission time to see if the pods should be admitted and if the fields are all set up correctly.
J
Historically, people have authored policies that require things like, you know, the container users being within a certain range for all pods and all containers in their cluster, and that doesn't apply to Windows. So when they enabled policies like that, it actually prevented Windows containers or pods from getting scheduled into the system. So we wanted a way to be able to check and say: is this for Windows? If yes, we can ignore a bunch of the Linux-specific fields. Or is this going...
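To make the problem concrete, here is a minimal sketch of the kind of OS-aware policy check being described. This is not the actual API server or KEP code: the pod shapes, the use of the `kubernetes.io/os` node-selector convention as the OS signal, and the UID-range policy are all illustrative assumptions.

```python
# Hypothetical sketch of OS-aware policy validation: skip Linux-only
# checks (like runAsUser ranges) for pods that target Windows nodes.
# The pod dict shape and the OS-detection convention are assumptions,
# not the finalized Kubernetes API discussed in the KEP.

def pod_target_os(pod: dict) -> str:
    """Guess the target OS from the kubernetes.io/os node selector."""
    selector = pod.get("spec", {}).get("nodeSelector", {})
    return selector.get("kubernetes.io/os", "linux")

def violations(pod: dict, uid_range=(1000, 65535)) -> list:
    """Return UID-range policy violations, ignoring Windows pods."""
    if pod_target_os(pod) == "windows":
        return []  # Linux-specific UID policy does not apply
    out = []
    for c in pod.get("spec", {}).get("containers", []):
        uid = c.get("securityContext", {}).get("runAsUser")
        if uid is not None and not (uid_range[0] <= uid <= uid_range[1]):
            out.append(f"{c['name']}: runAsUser {uid} outside {uid_range}")
    return out

linux_pod = {"spec": {"containers": [
    {"name": "app", "securityContext": {"runAsUser": 0}}]}}
windows_pod = {"spec": {"nodeSelector": {"kubernetes.io/os": "windows"},
                        "containers": [{"name": "app"}]}}

print(violations(linux_pod))    # flags runAsUser 0
print(violations(windows_pod))  # empty: check skipped for Windows
```

Without the Windows short-circuit, the Windows pod would be rejected for never setting `runAsUser`, which is exactly the scheduling failure described above.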
C
J
This came up. I believe we have updated... we did a bunch of updates to the pod security policy and pod security standards pages in 1.22 for the docs, and that's more or less up to date. I don't know if there's anything missing just in this PR; I'll have to check, but if necessary we can add that to the KEP.
F
So, not to discuss this one in too much detail: do we have a sizing slash priority, and who needs to review and approve this from node?
J
C
I asked that question because I don't know why we can't remove those fields at persistence time versus runtime, and so, if we can get that detail, I think it would help us do a sizing activity.
J
Okay, I'll make sure I bring up that it might make sense to just remove them, yeah, at admission time instead.
D
Okay, all right. So maybe we can add, yeah, a follow-on for you, Mark, on that item, and then we can come back. Should...
F
D
C
I mean, I know Dawn, myself, and Lantao have all helped SIG Windows as it's evolved, so I mean, I'm fine to do that. It's just hard to ack anything.
J
C
J
J
C
K
C
C
J
Yeah, and that's why we're leaning towards runtime classes. If we do end up supporting, you know, Linux containers on Windows, that would all be configured through runtime classes, and so that would be a good way to identify some of those specifics. But yeah, I'll follow up next week after we get some more answers.
D
Great. So, moving on to the next one, swap: Elana, do you want to talk to it?
F
Yeah, I think the good news is that we hammered out most of what needed to be done for beta when we put together the alpha-level KEP. So I don't think there will be many changes that need to happen now based on the alpha implementation.
F
Hopefully we can just say "let's target beta" and go ahead and do the thing. I think I'll have to fill out the PRR stuff, but other than that, yeah. Do we have a reviewer approver for that one already? I don't think so. It was definitely... I don't know if it would count as large at this point; maybe medium, I guess. It depends on how much work needs to be done on the e2e side and tests.
F
There should not be a ton of code. I mean, we'll see, because I don't think that we're currently doing anything in the kubelet right now where we're touching cgroups, other than maybe in the container manager stuff, like device manager and CPU, that sort of thing, but not through the normal path. So I don't know; I need to look into it.
C
C
...whether system-reserved and eviction would actually work properly, or could ever work properly, on a host with swap on. So I still feel we need to answer that question, and I thought that we had that captured in the beta criteria. So I think large is a fair assessment on this. I think the pod-level swap setting is relatively small. Yes.
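A toy illustration of why swap muddies the eviction question raised here: the same host can look like it is under or over a memory threshold depending on whether free swap is counted. This is not the kubelet's actual eviction formula; the numbers and the naive "available" calculation are assumptions for illustration only.

```python
# Illustrative sketch (not kubelet code): parse /proc/meminfo-style text
# and compare a naive available-memory signal with and without swap
# counted, to show why eviction thresholds get ambiguous with swap on.

def parse_meminfo(text: str) -> dict:
    """Parse 'Key:  value kB' lines into a dict of kibibyte values."""
    out = {}
    for line in text.strip().splitlines():
        key, rest = line.split(":", 1)
        out[key.strip()] = int(rest.strip().split()[0])
    return out

def available_kib(info: dict, count_swap: bool = False) -> int:
    """Naive available-memory signal; optionally include free swap."""
    avail = info["MemAvailable"]
    if count_swap:
        avail += info["SwapFree"]
    return avail

# Made-up sample values, not real measurements.
SAMPLE = """\
MemTotal:        8000000 kB
MemAvailable:     500000 kB
SwapTotal:       2000000 kB
SwapFree:        1500000 kB
"""

info = parse_meminfo(SAMPLE)
print(available_kib(info))                   # 500000: below a ~1 GiB threshold
print(available_kib(info, count_swap=True))  # 2000000: comfortably above it
```

The same host would be evicting pods under one interpretation and healthy under the other, which is the ambiguity the beta criteria need to pin down.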
F
Yeah, I think you're right; that's definitely in the KEP. I didn't include it in the short notes here, so yeah, I think it's fair to leave that as large.
D
Probably we need some real-world testing data as well.
D
L
Okay, we're good now... oh, sorry, I mean, yeah, just a quick update. For this one, we did a lot of work in the last release, and I think some of the stuff is already merged after the branches opened.
L
So I think it's basically continuing that work and then doing some of the actual implementation within the container runtimes, so within containerd and within CRI-O. Also, I think testing will be a big focus, and making sure that all the metrics are there. I think that's going to be our main focus this cycle. So...
M
No... nope, nope, that's correct, yeah. So, okay, we haven't updated the KEP to point it to the new version, but that's basically the only thing that probably needs to be done, because we missed last release, really, so it's pretty much just carrying it over from the last one. Okay.
D
And we have folks identified to work on the CRI-O and containerd changes, right?
D
Okay, so the next one is in-place pod vertical scaling. Okay, so I think we need reviews early on this one, based on what happened in the last cycle.
B
Do you want to switch to sidecar containers? Matthias commented that he wants to leave soon, if you need to do something.
D
Okay, we can add it if needed; go ahead.
B
E
Last page. So actually it's a very small subset of the KEP that was started by Rodrigo almost a year ago, and we only focus on one particular problem: the job completion issue. When you have a sidecar in a job, the job never finishes, and the goal is to just go alpha this time and use annotations.
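The job-completion problem just described can be modeled in a few lines. This is a toy model, not the proposal's code: the annotation key and the pod/container shapes are made-up placeholders.

```python
# Toy model of the sidecar job-completion problem. A job that waits for
# ALL containers to exit never completes if a sidecar runs forever; a
# sidecar-aware check only waits on the non-sidecar containers.
# The annotation key below is a hypothetical placeholder.

SIDECAR_ANNOTATION = "example.com/sidecar-containers"  # made-up key

def job_complete_naive(pod: dict) -> bool:
    """Naive rule: the job finishes only when every container exited."""
    return all(c["state"] == "terminated" for c in pod["containers"])

def job_complete_sidecar_aware(pod: dict) -> bool:
    """Ignore declared sidecars; only non-sidecars must exit."""
    sidecars = set(pod["annotations"].get(SIDECAR_ANNOTATION, "").split(","))
    return all(c["state"] == "terminated"
               for c in pod["containers"] if c["name"] not in sidecars)

pod = {
    "annotations": {SIDECAR_ANNOTATION: "proxy"},
    "containers": [
        {"name": "job", "state": "terminated"},  # main work is done
        {"name": "proxy", "state": "running"},   # sidecar never exits
    ],
}

print(job_complete_naive(pod))          # False: job appears stuck forever
print(job_complete_sidecar_aware(pod))  # True: sidecar is excluded
```

The alpha scope described above amounts to teaching the system which containers are sidecars (via annotations) so the second rule can apply.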
E
E
D
E
F
I think that, like, it's not yet a KEP, right? It's just a draft in HackMD. This might be...
G
F
C
C
As we go through the list (and this isn't a knock on this item in particular), if we could annotate which things actually impact pod lifecycle, and if we could set a budget for the number of pod-lifecycle-impacting changes we want to make... I'm just seeing container notifier and sidecar containers, and maybe, as we get through the list, we'll find out there are like six fundamental changes to pods. That might be something we want to budget within, but...
E
Okay, and then, if we have enough budget, we will come to another meeting soon to present.
C
Yeah, I mean, I definitely want to hear about any changes in perspective that have happened, so a big plus-one to that concept. I'll still openly say I'm biased towards evaluating whether we're meeting the needs of correlated communities. So, like, the big issue I recall with the primary approach was that we had no way to inform the Istio community, for example, on how you could actually do end-to-end TLS without limiting the pod spec. So maybe sidecar containers isn't trying to solve that problem.
E
But it will. It will, actually.
E
The first step... and we want it to be less ambitious than Rodrigo's attempt, to make it easier to review. Yeah, that.
C
That all sounds positive, so yeah, please set up time to present when you're ready. Okay.
A
E
K
E
D
Thanks. So, on the pod vertical scaling, we have reviewers and approvers assigned, right? Is there anything more to discuss there besides, like...
D
D
Okay, all right. Same thing for seccomp by default: we did the alpha in the previous release; we can target beta in the next one.
D
Okay, all right. The next one, Mike Brown's ensure secret pulled images: this one missed the KEP freeze last time, so we put it back here to see if we can attempt to get it in now.
A
C
B
You can put me, and I think Ruben is gonna... Ruben, do you wanna take it?
B
Yeah, I mean, the PR is out and it's a well-scoped PR for alpha. Unfortunately, we had so many iterations that we didn't fit into the previous release. Hopefully this release will actually be okay, so I'll keep reviewing the PR. Okay.
D
All right. So, CRI graduation: we're waiting for an update of containerd in the CI with the v1 protos. So, if anyone can talk to that... I'll check with Mike Brown. I know they did the work to merge those changes, or they were in flight, so hopefully they are merged and it's just a matter of updating containerd in CI.
D
Yeah, so the next one is container checkpointing. We discussed this one last week, and I know Adrian has an updated KEP, and that KEP needs review, basically.
O
O
Yes, the KEP needs review. And do I need to open pull requests already for the actual changes? Because I still have the pull request open for the proof of concept, and we discussed that we want to do it differently, in smaller pull requests. So do I wait until the KEP is merged, or do I already open a corresponding pull request for the code changes?
A
I think you can parallelize that, since, at least in this meeting... I think last time we agreed about the scope, right? Only do the checkpoint part. We agree about the value, and so you can update the KEP and at the same time start to do the prototyping.
A
I'd love to review, but I don't think I can personally handle that; I have so many other things. So I definitely will take a last look, but who wants to volunteer to review this one?
C
No, no, I know that I can spend time with Adrian, and so did Mrunal, so it'd be great if we at least have Dawn or someone else give it a look who's not necessarily, yeah, biased by our own previous discussions.
A
I think, as Derek just mentioned... Mrunal, please also take a look. Even though we cut the checkpoint scope to only checkpointing, I think this is a really good feature, so maybe, Mrunal, you can help be the second pair of eyes and look at this one. Okay.
D
A
D
D
Okay, sizing: maybe large? What do you think? Yeah, I would just... I think...
O
I think medium, because we discussed that it's basically going to be changes to the CRI API, and I expect a few small changes to the kubelet to trigger the checkpointing. But that's only minimal, and most of the work is done by the container engine. So I expect medium, but maybe it's large. But I guess we can adapt.
D
Okay, no worries, thanks, yeah. I believe you can always change it later, yeah. Yeah, so the next one is cgroups v2, and I think, over there, I know at least one item that needs to be fixed. So Clayton, while testing, pointed out that there were issues with eviction tests: one knob, memory.force_empty, that is in cgroups v1 isn't in cgroups v2.
D
So
we
need
to
figure
out
what
we
need
to
do
for
that
one,
and
the
second
thing
is
getting
the
ci
jobs
running
and
green
periodically.
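For readers unfamiliar with the knob: cgroup v1's memory controller exposes a per-cgroup `memory.force_empty` file (writing to it asks the kernel to reclaim the cgroup's page cache), while the v2 unified hierarchy has no such file, which is why the write fails there. A small sketch of the difference, with illustrative file listings; the path layout assumes a standard v1 mount:

```python
# Sketch of the v1-vs-v2 difference discussed above. The file sets are
# illustrative subsets of each hierarchy's memory-controller interface.

V1_MEMORY_FILES = {"memory.limit_in_bytes", "memory.usage_in_bytes",
                   "memory.force_empty"}
V2_MEMORY_FILES = {"memory.max", "memory.current", "memory.stat"}

def force_empty_path(cgroup, files):
    """Return the knob's path if this hierarchy exposes it, else None."""
    if "memory.force_empty" in files:
        return f"/sys/fs/cgroup/memory/{cgroup}/memory.force_empty"
    return None  # on v2 the file simply does not exist, so writes fail

print(force_empty_path("kubepods/test", V1_MEMORY_FILES))
print(force_empty_path("kubepods/test", V2_MEMORY_FILES))  # None
```

Later kernels added a `memory.reclaim` file to cgroup v2 with related but different semantics, so any fix for the tests has to decide whether to emulate the knob or drop the dependency on it, as discussed below.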
F
P
Were you looking into those? Yeah, so all of the memory pressure tests are now fixed and working. The ones that remain broken are storage, which I'm currently testing a fix for, and then, after that, PID reuse. But the PID one seems to be flaky, not entirely broken, which is a step up over the rest of them. So, okay.
D
D
C
Q
It's not in the code path that I saw in the kubelet; it's just the eviction tests. I happened to be running this on Fedora 34 and just noticed it, so I brought it up with Mrunal. But yeah, it just doesn't do anything, and you can't even set it; you get a permission denied because it's a sysfs file that doesn't exist.
D
We can open an issue, yeah.
A
Right, oh, cool, thanks, Daniel. I want to understand why we're using that knob. It sounds like that knob is just for tests; maybe we just trigger it after that one, and then we measure whether it was successful or whatever, I don't know. One is the reclaim and the other is the forced reclaim; I don't understand this, yeah.
D
Right, I mean, when I look at it, it's trying to free up all the memory assigned to a cgroup, so if it just does that, then it's not... I think...
Q
It's trying to get an accurate... it's trying to basically get to an accurate value that it can use for some of the rest of the tests. And so this is another problem I noticed with a lot of the eviction tests: they're actually very, very hard to run outside of a particular environment. I don't really think it's a requirement for this, and... it was definitely a lot of them. They set up very precise conditions that are not necessarily, like, you know...
Q
..."I want to use exactly this value," versus just looking at what the current status is. Some of that's just trying to get the tests reliable. So, as I understood it, this was a place where it was trying to get a precise value, to then use that precise value over time, which is somewhat fragile; not ridiculously fragile in a controlled environment, but...
P
Q
I noticed that as well, so that's great, and I think that if you're planning on doing that, it would address, probably would address, the need for it. It did not look as if the tests actually fundamentally depended on it. So we'll get that bug opened.
A
I saw David... David-Paul was here earlier, and he may have some... oh, maybe he left, so I'm still...
N
N
A
Okay, can you share? Because I remember those eviction tests were introduced by you, and I remember there was a problem we tried to address, because the eviction tests, especially memory and disk, used to be really flaky, and I believe that's why we did something in the tests.
N
I mean, for force_empty, it's because there were previous tests that would cause high memory usage and then the eviction tests wouldn't work well. So it was something that was practical at the time; it may or may not actually be required, but it was something that we did to deflate the test environment.
A
P
That's part of why I want to bring them to stable first, because then we have a bit more leeway to figure out whether something is going to introduce new breakages. We should be able to get away with not using it, given that I'm not sure it always works today, but we'll see.
L
Yeah, and just one thing I want to add regarding CI jobs: we probably want to get CI job parity for cgroup v2 as part of beta, I'm guessing. So, for, like, you know, the serial tests and conformance, we've already been working on, for example, containerd conformance tests and node tests, etc. But I think we also want serial test parity for cgroup v2, so that's something else we probably need to do.
F
D
F
Yeah, so when Mrunal and I were going through and starting the planning stuff yesterday, putting stuff into the doc, one thing that sort of came up last release... so, we have a more extended release cycle.
F
The
release
is
longer
and
an
issue
that
we
had
in
a
large
number
of
cases
was
that
things
were
not
ready
to
review
until,
like
a
week
before
code
freeze,
despite
having
a
lot
more
dev
time
and
so,
and
that
caused
a
bunch
of
problems
with
trying
to
deal
with
ci
and
whatnot,
and
I
basically
spent
the
last
month
scrambling
to
get
like
tests
working
properly
and
unbroken
because
they
all
broke
right
during
code
freeze,
and
so
my
suggestion
would
be
maybe
to
try
to
make
that
a
little
bit
easier
on
people
and
to
spread
out
the
load
a
little
bit.
F
That still gives a lot of time for development; we're looking at mid-October. But I think it will make things a little bit smoother, since things are very deadline-driven right now. So, just imposing upon us, as SIG Node, a sort of interim deadline: if there are any deprecations happening, or new features, it can't be ready for review one week before code freeze; it's got to be ready much earlier.
C
Yeah, I think there are two things. So, like, work that spans SIGs: I could see that being difficult to tease our way through...
C
C
...in time. I think the other thing is, I'm seeing we could approach this as just "too much stuff," right? And so, if I think back to the list we just reviewed, it was a...
C
It was a really big list. So we could either look at this as saying we have big appetites, but then time-bound ourselves and try to scramble to see what can get in in that time (and I think we'd still end up having a crunch on that 15th, and getting pushed by others to say "well, come on, let my stuff through," that type of thing), or we can just try to maybe establish a change budget and give people more leeway. So I...
A
A
So, sorry, and the message is... I like this idea, but I think for alpha there is maybe a little bit of difficulty. An alpha feature normally always has an API approver involved, and also some other cross-SIG effort, so it's much harder for them to coordinate on that. And also, normally, an alpha feature always has a flag and is disabled by default.
A
I
I
will
suggest
like
a
vital
feature
and
also
deprecation,
like
the
promote
from
alpha
to
better.
Maybe
can
because
I
think,
a
lot
of
his
driving
better
feature.
Promotion
to
beta
or
ga
is
driving
by
our
org
or
stick
and
and
also
the
deprecation
driven
by
us.
So
that's
easy
to
collaborate,
but
the
alpha,
maybe
we
could,
because
anyway,
if
it's
not
good,
we
could
reject
it
and
all
we
should
did
this
anyways
disabled.
So
let's
do
that.
I
think
that
would
be
easier.
F
That sounds reasonable to me. The only reason I was picking on alpha features a little bit is in part because of the difficulty with API review and whatnot: if a PR is in a work-in-progress state until a week before code freeze, the likelihood that it gets approved is much lower. So I think that if we try to push things a little bit earlier, maybe we'll say "recommended for alpha, but not required," but required for the betas and deprecations. I think that would make it easy.
A
Yeah, that would... I think that's easier. And also, if we want to suggest this one, we should try to call it out as not mandatory, but we should announce it at whatever opportunity, like at the SIG meetings with the community, and also send an email to the SIG Node list, because not everyone actually attended today's meeting; they may not know.
C
I think it's a fair ask for the assigned approver on an item that, as a part of the negotiation between accepting the work and building the work, a schedule that's mutually acceptable to both parties gets worked out. And I think it's nice to have that conversation. I know I would find it useful if we could smooth things out, but maybe each approver can figure out if that time works for them or not, that type of deal. I know...
B
A
That's good, yeah. Okay, Elana, can you... can you... yeah.
F
I'll take an action to send an email summarizing this, and then people can take a look and let me know if it looks good.
A
A
I think both Derek and I agree, and we wrote, in the past, documentation about membership and approval and the like, even before 1.0 was out. And, at least for SIG Node... I believe a lot of the SIGs (I asked SIG API Machinery and SIG Architecture) don't follow that one; they have a relatively higher bar. But on the other hand, both Derek and I discussed it, and we thought about it.
A
We both acknowledge that our previous bar was too high, and the community is different: Kubernetes is different, SIG Node is different. So right now we are more focused on reliability and predictable releases, and others can own the other work. So we should make the change, adapting our SIG reviewer and approver requirements and the references, those kinds of things. We also have a draft written out, and we'll send it; Derek...
A
Maybe you want to add more; we asked Derek to review, and it's almost done. We are going to send it to the existing approvers, plus the sub-project leads, for example, not just for review. And we also still have people holding sub-project lead positions, so they can share their experience, so we have a better balance. And we will share it with our SIG Node community for opinions. So that's kind of the status, Derek.
C
Yeah, I mean, I think we said we were going to try to get something sent out based on the draft, but I don't think what we ended up settling on was... I don't want to say "settled."
C
If you want to come forward to be an approver in a SIG, we need to, like, know you on a human level: see your face, hear your voice...
C
...beyond a GitHub level, if that makes sense. And I think that's important for us to have both trust and a sense of security and real community. And then the other major theme that I think we were looking at was the pathway that other approvers had gone through, and we felt like, in some cases, maybe some folks were held back too long, but there was always an intermediate step. So I think what you'll see written there is, like, new top-level approvers in the kubelet...
C
C
...would have rights in more than one subfolder. And so I just want to recognize that, in the community, we've gone and made progress on that in the last meeting, where we've elevated some rights for some members therein, and I think that's a big next step to moving further on that.
C
The big debate I have in my head, and I think I want to hear the community's view on this, is just: what are the pathways for people to grow that trust? I kind of captured it in, like, two notes, and they come with different time frames. So, like, one would be demonstrating deep insight...
C
...into the kubelet, via, like, submitting contributions, refactoring, fixes, basically chopping wood. And that would include a lot of low-level analysis for things like bottlenecks in pod startup time or latency, or kubelet object allocations; basically, us looking at people spelunking pprof dumps as an example of a very clear way you can demonstrate inside-out expertise. And then we have the more traditional path that we've looked at in the project, which was: did you drive more than one KEP across stages?
I
I
C
Folks who come forward can try to optimize for either path. Given the extended release cadence of Kubernetes right now, that KEP process might be a little longer, and so I think we need to find the right balance on that, and I think, as we get the document out to the existing set of approvers and sub-project owners, we can find the right balance on that.
C
But that generally captured, I think, the discussions that we've had thus far, and hopefully we can reach consensus on that more broadly. But I think that kind of summarized my notes here, so yeah: look forward to getting that out to the mailing list and hearing feedback. And I think both Dawn and I, and all the other existing maintainers, are committed to trying to find the right balance of growth, trust, and overall project security posture.
A
C
Oh, thanks, Dawn, for putting this on the agenda, and hopefully we can get this out in the next week or soon.
A
K
A
That... okay. The one thing that I want to note: I did note that we all agreed Dockershim is going to be deprecated; we are not going to change that timeline. For Kubernetes SIG Node, I think most of this problem was on the Dockershim side in the past, and for the rest, actually, we don't have much of that exposure. I think maybe there are some tools we build that may have that problem, but mostly we don't, so we can find a reviewer, yeah.
C
Okay, I have a hard stop now, so I apologize that I can't go over, but a big thank you to Elana and Mrunal and everyone else who spoke to the items I put forward. Have a great rest of the day, but I have to drop.