From YouTube: Kubernetes SIG Node 20220426
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Good morning, everyone. Today is April 26th, and we just had the 1.24 release. As we regularly do, we will run the retrospective and the planning for the next release. Ruben, Mrunal, and Sergey proposed to volunteer to run this cycle. So — maybe you want to take over and start talking about the retro for the last release and quickly summarize, because I know that Sergey did a lot of work.
B
But I saw there are a couple of topics on the agenda today, so should we go over those first before we dive into the retro and planning? I saw Brada, Adrian, and Vinay put some topics on today's agenda — should we just go over those topics real quick first?

A
Probably a good idea.

C
We wanted to ask whether having a review from the Windows or VM runtime maintainers is needed for the alpha phase, or is needed for beta — just to know what the blocking things are that we need to unlock.
D
I'm not sure what we would do on the Windows side. Maybe someone from Windows — maybe Mark or someone — can say.
E
Yeah, I'll take that one. I'd like to do a review, just to make sure — if the feature is even applicable to Windows — that the implementation isn't going to make it hard to split up the functionality. But in terms of the actual implementation, I'd say Windows support is usually not a requirement.
A
Yeah, the only blocker for this one is that if a customer enables this feature, it must not cause a regression on Windows nodes. That's the only blocker, at least for alpha. Then later, for beta and GA, we need to figure out how best to handle the case where this is not applicable to Windows nodes, and whether there are any other things like that — how we are able to handle those kinds of things. Beyond that, I don't think there's any blocker here.
A
I think so, but that's just a starting point. Ruben, Mrunal, and Sergey just put something there, so we are all free to edit it. The goal for today's planning is to find, for each feature, the owner who will drive it end to end, plus a reviewer and an approver — maybe we need a follow-up for that. This is kind of how we did it in the past, so please feel free to add things there, yeah.
F
Yeah, I just wanted to mention that I implemented the changes we discussed last week. I already had a quick review from Ronald on it, and I just wanted to make sure to get the reviews in early this time, so that we're not scrambling on the last day — last night — like last time. So I'm just highlighting that it's ready for review.
D
Yeah, sure. I think, Adrian — yeah, this will be near the top, so we'll try to get to it once 1.25 opens. And on the KEP side, I think Dawn or Derek needs to approve it for merge; I don't have approval rights there, so we can poke them. Stephanie?
G
Oh hi, yeah. This is about the in-place pod vertical scaling, and I'm hoping that 1.25 is the magic number for us. It looks like we're waiting on Derek; Tim Hockin looked at the changes and he still stands by his LGTM on it. And there were a bunch of to-dos and some issues that, during the KEP review, we figured we would address during beta — for example, designing a sub-resource called "resize" that can be applicable to more than just pods.
G
You could apply it to deployments, to jobs, which is a useful thing. So there are all these things that I've captured on a wiki page on my GitHub, and that link is there in the document. I'm hoping that we'll find volunteers to help drive this. I want to build out the expertise and share the knowledge with more people than just me and you and Derek and Tim. So we want to have some more owners for this — and, of course, Lantao knows a lot about this too.
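To make the sub-resource idea concrete, here is a rough sketch of what such a resize request might look like. This is only an illustration of the direction being discussed — the sub-resource was still being designed at the time, so the path and field shape below are assumptions, not a finalized API.

```yaml
# Hypothetical body for a PATCH against a pod's "resize" sub-resource,
# e.g. /api/v1/namespaces/default/pods/web-0/resize.
# Illustrative only: the actual sub-resource design was not final.
spec:
  containers:
    - name: app
      resources:
        requests:
          cpu: "750m"      # raise the CPU request in place,
          memory: "512Mi"  # without recreating the pod
```

The appeal mentioned above is that the same sub-resource verb could later be scoped to workload objects such as Deployments or Jobs, much as the existing `scale` and `pods/status` sub-resources are scoped today.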
G
But we want to have some more people able to step in, in case I'm on vacation or something — so that's a goal. And it's going to be a year between alpha and GA, because we need to have the two-release gap so that we don't have to deal with backward-compatibility support for kubectl, the client side. For that reason we'll have plenty of time to, you know, design this and do it.
A
Yeah, I understand this one is taking a really long time — I'm sorry for that; it's just such a complicated feature. And I also understand the part that came from Derek; of course Derek was also busy. But another thing is that there are so many things involved, like node resource management, that lead to a lot of long-term considerations — you know how he always tries to think deeper for the longer term. Derek is not here, so I just want to represent that.
G
The v2 — he also mentioned we can do that during the beta timeframe, and I've captured it as an issue to work on, and I think we have a volunteer for that already, so that's great. I'm going to update the KEP — I think the KEP needs updating to 1.25 targets — and send it across today or tomorrow, and I think you or Derek can approve that; it's just a rubber stamp.
G
And early in 1.25, so in case there are any issues we have enough runway to, you know, address them. I'm hoping that most of the to-dos can also be addressed before we even get into the 1.25 release, and then we only have the design issues left. So yeah, that's it for me. Raven already has the containerd side of the change ready — it's sitting in draft — so we could even get that in.
B
Okay, let me present. I'm thinking we can just go through the retro first; we'll go through what we've done in the last cycle. Sorry, let me just find how to share.
B
Okay, cool — can you all see the shared screen?
B
Okay, cool, yeah. So for the last cycle, Sergey wrote up this doc. In the last cycle we had, I think, six KEPs done: dynamic kubelet config removal; PodOverhead graduated to stable; kubelet credential provider graduated to beta; dockershim removal graduated to stable; priority-class-value-based graceful node shutdown to beta; and gRPC probes to beta. And we had 17 KEPs removed from the 1.24 milestone: user namespaces; in-place pod vertical scaling; exec probe timeout; container checkpointing; resource assignments; cgroup v2; cAdvisor-less CRI stats; swap on by default; ensure secret pulled images; node-level pod admission handlers; pod-level resource limits; sidecar containers; the new CPU manager policy "distribute across NUMA"; dynamic resource allocation; pod conditions around starting and completion of pod sandbox creation; and adding the allocated pod and container to the device plugin API. Sorry — the reason I want to go through all of this is just to, you know, help people think about what we have done well and what we could have improved.
B
Hopefully this reminds people of things. So in the 1.24 cycle we had 23 KEPs tracked and six merged. We also have some history data here: in 1.22 we tracked 24 and 13 were merged; in 1.23, 14 were tracked and eight merged. And this is the retro summary from the 1.23 cycle.
B
Yeah — I think now we can talk about what we did well in the 1.24 cycle, and I'll just make some notes here.
A
There were also some things that I think were misunderstandings. For example, cgroup v2: in the planning we clearly said that we didn't want to promote it to GA yet. I think some people misread the recent planning, and there are a couple of things like that. We were missing customer and user inputs, so we wanted to hold that one — but we still wanted to plan it, and in the planning doc we clearly see this.
A
We
are
going
to
improve
in
the
1.24,
but
we
want
also
customer
using
data
inputs
to
promote
so
that
the
planning
dock
is
clear.
At
least
I
know,
there's
many
commun
misc
understanding
on
that
one,
but
I
look
at
the
planning
docker
where
clearly
stayed
there
even
for
swag.
I
think
that
in
the
earlier
we
did
see
that
we
will
try,
but
we
also
want
more
inputs.
So
it's
not
negative
for
sure
we
are
going
to
the
next
stage.
A
So I think a couple of things are there. But the other thing is, for example, the in-place pod resource update one — it has been so many releases. I understand this is because it's a really complicated feature; even the design changed based on the feedback we got. I think this is also partly the missing-reviewer problem, but even when we have a reviewer, I think it will take a really long time, because it's a large feature — it's the biggest feature we have in a KEP, and it involves not just SIG Node. It's also one where we changed the API; we changed the API because the API review went a couple of rounds, and because the API changed, the implementation also had to change. So there are a couple of things here — I just want to make sure one size doesn't fit all, and that we don't oversimplify the problem.
G
With in-place pod vertical scaling, I think it's good to move cautiously — the more eyeballs the better — because it's got the potential to, you know, do boo-boos in a bunch of places; it's not just going to be contained to one place.
A
And another thing — I know this came up — is that the original author moved on, right? The original author moved on at the last minute, so the reviewers and approvers had to redistribute their bandwidth at exactly that time, and everyone has a lot of other work. So there are many things I just wanted to mention here.
B
I'm just trying to capture what we just talked about, Dawn. You mentioned the original author moving on — which KEP were you talking about? I'm just putting that down as one of the things that didn't go well.
A
Oh — I checked before the meeting, but I forgot exactly which KEP it was. There was feedback, right — the reviewer left comments, but they weren't addressed and there was no reply. If we want, we can audit which one that was. So then we basically decided just to remove it from the milestone. I think we even discussed it in this meeting, because of the original author's progress.
A
So we had to redistribute the review bandwidth before the deadline. We just decided — maybe we can search here for 1.24 — because we were really tracking that type of progress very well; we had several meetings talking about those things.
B
Yeah, I think I can go find that later and figure out which KEP we're talking about. So I've just captured two things that we could have done better. Is there anything that we did well that we want to write down, in case we look at this later?
D
Yeah — I mean, that one just came in late, and we are still discussing and working through it.
A
So, honestly, when there's a new idea — a really new idea — before we understand it, we have to encourage proposals and designs. And this is kind of a dilemma we have: we try to balance getting more contribution, but at the same time we, as a community team, also have to manage what we deliver — the reliability of the product for our customers, and even for vendors. So this is why we have to be careful when we make a decision.
A
We
have
to
say
this
is
the
common
design.
It's
is.
This
is
benefit
for
largest
of
the
vendor.
This
is
any
problem
for
the
user
regression.
Others
cannot
perspective,
but
at
the
same
time
we
how
we
guided
certain
things,
maybe
only
benefit
one
vendor.
We
still
need
to
think
about.
Oh,
this
is
benefit
to
that
winter,
but
benefited
user.
How
we
make
that
is
generic,
so
this
is
why
a
lot
of
time
take
longer
time.
It's
not
like
winter
offer.
You
build
a
certain
things.
I
I think in the next cycle or two we need to preemptively allocate basically multiple KEPs' worth of work towards reliability and maintainability improvements, especially with some of the fallout from the refactoring that Clayton did, where we still keep getting random escaped bugs. We need to make some proper investment in testability and reliability.
I
Yeah — part of my plan for the CI subgroup for the next cycle is to try and push us into doing more of that. We have tests running now, but I think we also need to be a little bit stricter about what we accept: not accepting changes that don't come with some kind of test change, without a very good reason — especially for new features. If something doesn't have decent unit and e2e testing, we can't support it. Merging code in open source is when someone's code becomes all of our code, and then it becomes something that all of us need to be able to support — and part of that involves documenting a lot of those expectations.
A
Oh, that's a good thing, but basically — I believe in the process you have to have the tests. The problem is that the reviewer and the approver have to really carefully review the tests too, to make sure there's coverage. I've spotted many cases in the past where we still had to say: oh, you have to have tests.
I
Like, we have entire packages in the kubelet that don't have a single unit test.
A
But yeah, that's another kind of thing. Whether every unit in Kubernetes overall should have unit tests has been talked about a lot in the past, and it would require a lot of work. I do think all the components could do their own component tests, and for SIG Node, if we could start on component tests, that would be wonderful.
A
So at least we spent the time to build the SIG Node e2e tests, and later the majority of the test cases that came from this community became Kubernetes conformance tests, from SIG Node. But I do believe that for something like the scheduler it's really easy to come up with component tests and even mocks — for the controllers too. For SIG Node it's a little bit tougher, because even today we couldn't guard, say, the resource management tests unless we mock more kernel behavior.
A
At an earlier time I did ask a co-op to help with this. The hope was to have a roadmap: before we build the SIG Node e2e, we could mock the Linux kernel behavior and then build some reasonable tests — unit tests and all those kinds of things. But that needs people, and we don't have a volunteer to help on those things, yeah. So those are in the planning, and in the community this kind of thing needs more volunteers and contributors.
G
I have one suggestion from my experience with this pod vertical scaling feature. We have OWNERS files in each directory, which tell you who owns it and, if there are issues, who to reach out to. I'm wondering if it might be helpful to have a small map of: okay, if you're making changes in this code, here are the places to potentially add tests. Because what I found was — okay, I was adding some tests, for example in the resource quota handling, and I looked quickly in the same folder for a test file.
G
It wasn't there. And since you're doing your development, your context is elsewhere — you want to get the thing working first, and you're already planning your e2e test. So: okay, I'm going to cover this in e2e; I don't know where the unit test is, so I'm going to have to locate it, and I don't have time to do it right now. And then you move on to something else, the context goes away, and you forget about it. I have myself dropped the ball a few times.
G
Thanks to Elena, we caught a bunch of them. So I'm wondering if it would be helpful — it could even be automated as a tool: "hey, this feature has been added; would you consider adding tests in these locations?" Something like that, so that it's right in your face as you're building. You have the context, and: oh yeah, I can add it here.
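For reference, the OWNERS convention mentioned here is a small YAML file per directory, and the test-location map being proposed could follow the same pattern. A minimal sketch — the alias names below are made up for illustration, not real SIG Node aliases:

```yaml
# OWNERS — sketch of the per-directory metadata Kubernetes already uses.
# The alias names are hypothetical examples.
approvers:
  - sig-node-approvers     # alias expanded from the repo's OWNERS_ALIASES
reviewers:
  - sig-node-reviewers
labels:
  - sig/node
```

A tooling hint like the one proposed could live alongside this: a mapping from source directories to their unit-test locations that a presubmit bot reads and surfaces on new pull requests.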
A
When I come across a certain method or function and think "this one needs some tests," but nobody gets to it — our engineers actually have the good habit of adding a TODO. Then we can really say: oh, this one maybe needs some tests, or something like that. Later we do a code search, find the TODOs, and add more tests. So that's another way to fix today's problem.
A
We need to really carefully require this of the reviewer and the approver: to say, "oh, please — you are missing this test." Honestly, my habit in the past when I review code, even internal code: the first thing I always review is the test code, because the test code reflects what they want to build, right? If there's a change that doesn't match the PR description, that's the problem I try to address.
B
I'm just trying to capture the silver linings in all of the stuff we just talked about. So, Vinay, you mentioned that during the review process of the in-place vertical scaling, we found some missing tests, right? During the review process.
G
Yes. And for some of those, the unit test isn't in the same place where you're making the code changes; they are somewhere else, in a completely different folder and directory structure, so that's not really very obvious. The obvious ones — I got most of them, and I added to those where extra test cases were needed. And I think a reviewer like Elena already did a good job going through the tests in terms of: okay, you have these, but this one is missing.
G
In our case, what happens is we need a test at that level. As a dev, when I review code, I don't go that deep either. So it's a good habit, and I've encouraged it. And of course it would make things easier if we had: okay, you're making this code change here — consider adding tests here, here, and here; or, these are the places where the tests can be added; the unit tests go in these places. It could be a little overkill, but it's something to think about.
G
Especially in a project this large — if we're getting bugs creeping through because of missing tests.
A
The case you shared actually endorses what I said earlier: we really need the reviewer and the approver to check the test situation, right? And no one knows everything in SIG Node. So if you spot something, look at it — because you are in charge of something, or you may be a user of Kubernetes, so you may raise a comment saying: oh, can we have a test here?
L
I can add maybe one thing regarding things that can be improved. I've found that one of the challenges has been features that span Kubernetes and the container runtimes.
L
Like — as Chris knows from the CRI stats work — I think one of the challenges is syncing up changes. If you need to make changes in CRI, you can make those changes, but then you have to wait for them to roll out in a new release cut of, for example, containerd; then it has to come back to Kubernetes to add the corresponding feature; and if you need to add something else back on the container runtime side, that's going to take another cycle. So I think those features are definitely challenging.
G
I did have two separate KEPs with this in mind, but in retrospect I should have weighed in on getting the CRI changes in early, so that the containerd side could make progress and the rest of the feature could come in the next release cycle. But for the most part, all along, it seemed like they belonged together and should go together.
A
Do you want to share your experience? Because in the past — I hope Lantao is also here — they can share how, in the past, they managed to sync changes between Kubernetes and the container runtime.
D
It won't be as hard in the future, but there are always going to be phases where we'll have to do this dance and plan ahead, because it will be very hard to land changes in the container runtime, in CRI, and back in Kubernetes all in one release. So I guess there's no shortcut there; we have to be aware that it's going to take time, along those lines.
D
One more — one positive I want to point out: I feel like there is a better working relationship between runc and the Kubernetes community. We were able to cut releases very fast in 1.24, because some issues were found when we were trying to update runc, or libcontainer, in Kubernetes. So I think that's a positive compared to the past.
A
Do you have thoughts on why runc maybe has better collaboration with Kubernetes than the container runtimes do?
D
I don't think the container runtimes have a worse relationship with Kubernetes. I think we just need more folks from the container runtimes engaged and proactive in cutting releases and so on.
A
Yes, yeah — exactly; that's the vision. We have to, because for a lot of these features, if we want to support Kubernetes users, we have to represent Kubernetes and proactively engage with the container runtime communities. This is at least my perspective, because this has been a tough problem from day one, when we decoupled certain things. This is also, I think, something I shared last time, about the dockershim removal.
A
I shared some of the history of why we built dockershim: at that time we understood this was difficult and we wanted to iterate faster, so that's why we had the built-in dockershim model. Now we have slowed down development of the container runtime interface, and we also have CRI-O and containerd, the latest versions supporting our container runtime interface — but it still requires collaboration on the CRI side, yeah.
B
All right — any other good or bad things we want to mention?
B
Okay, cool, let's go to the planning. So here is a doc that I think Sergey wrote. These are the KEPs that were cut from 1.24, and below are the things that we should consider tracking for 1.25, and also some general proposals and required actions for future releases — and some things that are done but, for some purposes, still open, so we track them at the bottom of the doc.

B
Yeah, let's go through them then. So the first one, in-place pod vertical scaling, I think —
D
Yeah — I'll request that everyone on the call, if they identify their KEP here, please write the author, and whether you already have a reviewer and approver. Otherwise, if you don't have an identified reviewer and approver, I will identify those as part of this planning, yeah.
B
Cool, yeah — because it seems like not many people have added themselves here. So yes, I will probably still go through them and see if there's anything we want to talk about, or anything that we don't want to keep as part of 1.25.
B
Yes — no, it should be something like high/M or high/XL. Yeah, okay. So the first two are both — let me just double-check — both related to the in-place vertical scaling.
G
Yeah, they're both in the same PR. I had made two separate KEPs, and the initial thought was exactly that: maybe we want to get the CRI part done early. But as time went on, as I was doing it, it all got done together, and it didn't seem large enough to justify making a separate change. And the question came up: okay, what if this one went in and then we can't do the other one? That chicken-and-egg has probably always been there.
D
Okay, okay. And for the priority I'm going to say high, and XL is the size. Yeah — alpha is 1.25.
C
But this table is in the steps.
A
If we look at the KEPs sent to us, there are 60 of them. This is why, when Sergey asked me initially, I told him to just carry over what we left from 1.24, because we were going to have this meeting anyway, and even follow-up meetings. So the missing part there — because 60 we definitely cannot take, and a lot of the items are just suggestions: "we should do this, we should do that," and they don't actually even have a KEP open.
B
Okay, sorry — medium/S: is that priority, or size?
K
For priority and size — I guess medium.
D
Yeah, I think it's a medium. I think, Mike, we still owe another round of discussion with SIG Node on where we left it before.
D
As for moving to the next stage, I'm not sure — we will just put this on hold till she's back, yeah. I'll just add a note, yeah.
D
Were you thinking about tests targeting the alpha, or for whatever we add next? For anything — there's no —
B
Yeah, moving on: feature node conformance. I don't see an owner here either, but I think Sergey — yeah, looks like Sergey is the owner here. I'll just put his name here and wait for him to add more details.
B
Okay, thank you. The next one — yeah, this KEP has owners, and the priority and size are here. Anything we want to talk about for this one?
I
He is out for 1.25, and I'm not sure if —
D
Okay — so maybe let me have Mathias come talk about it, because I don't think the KEP changes are finalized with SIG Node yet. So maybe I'll just put "discuss with SIG Node." Oh — yeah, I'll let him know, thanks.
B
Yeah, the next one: supporting node-level user namespace remapping. I think, Rata, this is the one you just talked about, right?
B
Okay, moving on: cgroup v2.
D
Yeah — for that one, I think we're still waiting for feedback from production use, and I know on the Red Hat side we are trying to get more users to use it. So I guess we can continue to do that. David, what do you think? I know y'all were also trying to test it.
L
Yeah, so we're also trying to move to cgroup v2 going forward; that's work we're doing. My thinking is we can try to take this to GA in 1.25 — that's a good effort — and I think we should get a good amount of feedback in 1.24, because — okay.
A
So this one will basically depend on how much user input we get, right — whether it goes GA or not. So yeah, okay.
L
Exactly. And I think also the recent COS M97 release is cgroup v2 by default. Additionally, the latest Ubuntu 22.04 release — which is very popular, and a lot of cloud providers use it — is also cgroup v2 by default. So I imagine that with those two distros moving to cgroup v2, we'll get a lot more usage in 1.24.
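For anyone gathering the feedback discussed here, a quick way to tell which cgroup mode a node runs is to look for the `cgroup.controllers` file at the cgroup mount root: it exists there only on the v2 unified hierarchy. A minimal sketch — the helper name is ours for illustration, not from any Kubernetes code:

```python
from pathlib import Path

def is_cgroup_v2(root: str = "/sys/fs/cgroup") -> bool:
    """Return True when `root` is a cgroup v2 unified-hierarchy mount.

    On cgroup v2 the mount point itself exposes `cgroup.controllers`;
    on v1 or hybrid setups that file is absent at the top level.
    """
    return (Path(root) / "cgroup.controllers").is_file()

if __name__ == "__main__":
    print("cgroup v2:", is_cgroup_v2())
```

The same check is what tools like systemd and the runtimes effectively perform when deciding which cgroup driver behavior to use.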
A
I want to make sure people understand: sometimes folks see "stable," and from the developer perspective what we mean is that it is available and we're encouraging users to try it out — it doesn't immediately mean stable as in GA. So we still have to decide whether the code is GA or not.
F
Oh no, that makes sense — yeah, the code changes are not GA, so you're right, yeah, okay.
B
Okay — fixing exec probe timeouts. This one doesn't have an owner either.
D
Yeah, let's come back to this one once Sergey is back next week.
C
Sorry, one more question regarding cgroup v2: how should feedback be added? Because maybe some teams here use Flatcar — we are also doing Flatcar Container Linux — and maybe they can add some feedback, because they might like —
B
Oh — so, the next one, the ability to list pod resource assignments: it looks like Swati commented that they're not going to pursue this in 1.25.
B
Okay — should we just remove it here, or —
B
Okay: cAdvisor-less CRI container and pod stats. For owners we have Peter Hunt and David Porter. Okay — anything we want to talk about on this one?
L
I'm here — oh, cool. Yeah, I mean, I think the main thing is that we're going to continue this effort. I think we were just a little bit out of bandwidth in the 1.24 release, but it's something we want to pick up. I think we have most of the CRI changes in place now, so the main question is adding some alternative endpoint in the container runtime for the /metrics/cadvisor endpoint — having Prometheus metrics that people can use directly from the runtime's endpoint.
H
Yeah — so my plan, at least for CRI-O, was to try to get an implementation done early in the 1.25 cycle, and it might involve another couple of CRI changes, just because of the way cAdvisor is baked into the kubelet now. Those metrics are broadcast over the kubelet's HTTP API, and the CRI might want something similar to that, yeah.
H
So my plan was to try to get it done early in 1.25 and then start collecting that data and evaluating, you know, the performance changes. Okay — yeah.
A
So, basically, this going to beta is also blocked by the performance data, right?
A
David, I want to make that clear: we are not planning to immediately deprecate the cAdvisor endpoint, because customers have it and it's just better for them. But we do want that data to unblock the beta for this one, right? For the core metrics going to the Kubernetes control plane, including the kubelet, we hope that by using the CI we can actually demonstrate the switch to the community — otherwise we cannot tell the community to switch.
L
Yeah, maybe it makes sense. I think the thing we were discussing is that we don't currently have the ability to turn off the cAdvisor metrics, Dave, and we were thinking to only add that ability after we have the alternative. But I think you bring up a good point: maybe it makes sense to have the ability to turn it off even before we have an alternative, for customers that don't rely on that endpoint — so they can get the performance benefit — and then —
H
I can see there being a use, if we're willing to accept a case where some people don't want /metrics/cadvisor. Then I can see it being useful that we have the cAdvisor collection of these metrics toggled off by this feature gate, and then it's up to the CRI to broadcast whatever Prometheus metrics it wants, yeah.
K
It's a little deeper than that too, right, Peter? Insofar as locking and, you know, sequencing: when can the data be available at all? And what format do you need it in — the kubelet format, the cAdvisor format, or, you know, the current Prometheus format?
H
But for this particular endpoint that we're talking about, we're basically assuming that the current /metrics/cadvisor endpoint is an unofficial API of Kubernetes — because people have assumed that it will be there and treat it as such, even though it isn't official. So we're basically talking about perfectly mirroring that endpoint for the container stats.
A
I just wanted to do a time check — it's already [inaudible]-oh-three. So maybe we should call out to the community who the KEP owner is and update this one, and then we can come back to it next time. Hopefully next week, when we go over this one, it will be faster.