From YouTube: Kubernetes SIG Node 20211026
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
B
Yeah, we are down on PRs. I think many PRs got unassigned from the sig-node label because they are not related to sig-node, so cleanup is happening and we are closing out closed PRs. Mostly it was work in progress or some performance optimizations, like the new performance instrumentation, that people were suggesting; nothing got lost in these PRs, and nothing that I found among the unmerged PRs. We have one of the KEP PRs merged, so this is great, and we will discuss PR status for the soft deadline later.
B
So yeah, nothing very big happened this week, but if you are interested in what is happening, what PRs are being created and such, just click the links in the table.
B
Do you mean the next table, for the enhancement PRs? Oh.
B
A
A
The next agenda item is a review of our soft deadline from last week.
B
So, I took all the KEPs from the spreadsheet that the release team is tracking. I think ephemeral containers goes to beta, and somebody's typing that the PR is already merged, so the status is very good. I think there's some documentation that needs to be updated and such. In-place pod update: there is an agenda item for today about this KEP, so the PR is out, but it's still a little bit in progress. I don't know if we want to talk about it now.
B
Okay, later then. So yeah, the PR is out, so it meets our criteria for the soft deadline. Then, the container identifier one was removed from the milestone; I think they failed to update the KEP. Dual CRI support: the PR is out and it needs to be reviewed. I think this PR introduces support for both versions of CRI.
D
B
Yeah, perfect. I remember we've been discussing that you want the kubelet to only support one CRI version, and I was wondering what the status of that is, and how do you want people to use it anyway?
B
Credential provider: work in progress, the PR is out. I think idt created this PR, and I know that as a result of this PR many unit tests are failing. There is a separate issue tracking that, and I'm not sure how much progress we are making on that in general. It's not a very big KEP, so I wouldn't expect much trouble, but I'm not sure whether it was... oh, it's beta.
B
A
If it's beta and it missed this, then unless it, you know, only missed by a few days, it should be merged already, if it was beta and we want to graduate it. Because the idea is, we don't want all the beta stuff that's going to be on by default to merge at the last second of the release cycle, because then that makes code freeze really miserable for the people maintaining it and trying to get the tests working.
B
Yeah, and it's already failing some tests, so detaching the kubelet credential provider out of tree makes something fail. So I don't know: do we want to cut it? I think idt authored that, and I don't think she's on the call.
A
Yeah, I think this isn't a great time for her. I mean, I would assume at this point that, unless it's very, very close to merging for beta, anything that has not merged for beta at this point is probably going to get cut from the release.
A
A
B
Okay, next one: the CRI changes. I think it's rolled into in-place pod update, so I think we can talk about it in that agenda item for in-place pod update, so yeah. This is just part of the PR that is part of the KEP that is needed for in-place pod update.
A
There's a bunch of reasons swap won't make it. One is that I still don't think we have support in the CRIs. Two is that we have a lot of work to do, and I've spent most of the release fighting with failing tests and bugs and regressions from 1.22. I haven't had any time to work on it and I don't think anybody else has been working on it, so I think it's going to slip.
B
This is good. Ensure secret pulled images: the PR is out, but it still contains a few to-dos inside the PR. I don't know whether that...
F
The text exists already in the PR for what needs to be shown on the kubelet feature gate.
B
Next one: rejecting workloads that are not SMT-aligned. The PR is merged, so this is on track. Next one is the alpha improvement for graceful termination, to support priority classes for graceful termination. The PR is out and it's alpha, so it's on track. And the last one is alpha for gRPC probes. I think the PR is good; we just need a few more eyes to finally merge it, it was out for a very long time. So we need to... I mean, it's...
G
Sergey, can you hear me? Yep, sorry, you just missed the one PR from Kevin, which is alpha, and I'm going to add it, but it's merge-ready. So just for the record, I'm adding it right now to the document.
G
B
In release tracking? But yeah, if it's tracked, we can double-check.
E
I guess we have a couple missed here, but I think almost all are merged, so we are okay, yeah.
B
So, thanks. Remember, we started working on checkpointing as well, but I don't see it in release tracking either.
E
E
There are a couple of things. I think this is missed in the tracking board, and yeah.
H
C
E
E
H
B
B
B
Yeah, I was wondering about it. So, okay, this as well is not tracked by the release team, so we need to ping someone about it.
A
That's interesting, I'll double-check that. Are you sure it's not being tracked by the release team?
A
A
A
I
Hi, yes, so just for some background: I work on an intelligent resource management stack for npt and within Kubernetes, and one of the things that we need to solve for our customers, and there are four different companies currently interested in the solution, is figuring out how to honor isolcpus from the kernel, and basically how to have some CPUs in a pod that are pinned and some that are just floating. Considering the number of requests I've had, this is a real issue. I'm trying to gather a group of people in the community who've already worked on this or are interested in working on this and solving this problem, or even customers that have use cases, so I can get a full picture of what we need to solve together and start working on a KEP related to it. I can put in the chat, it's already in the agenda, two previous stabs at this, but neither of them came to fruition.
C
Is the kubelet pulling the set of CPUs it reserves for the system from a different CPU pool, or are you trying to deal with the end-user workloads the kubelet runs, being able to say container one in this pod is assigned this particular CPU set, but container two is on the shared one?
I
C
I guess what I'm trying to figure out is that some of those use cases are already implemented, so I was just trying to figure out what the gap is. So, for context, you should be able to do that if you had a certain CPU manager policy; and on the system-reserved side, CPU reservations are done by a particular algorithm that picks, like, zero and one first, for example, sure.
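A minimal sketch of the reservation ordering just described, assuming a plain sorted list of online CPUs; the real kubelet logic lives in the CPU manager and also accounts for topology (sockets, cores, NUMA), so this only illustrates the "pick the lowest-numbered CPUs first" behaviour:

```go
// Illustrative only: split the machine's online CPUs into a system-reserved
// set (lowest-numbered CPUs first, e.g. 0 and 1) and the shared pool that
// non-exclusive containers float on.
package main

import "fmt"

// splitReserved returns (reserved, shared) given the sorted online CPU IDs
// and the number of CPUs to reserve for the system.
func splitReserved(online []int, reserve int) (reserved, shared []int) {
	if reserve > len(online) {
		reserve = len(online)
	}
	reserved = append(reserved, online[:reserve]...)
	shared = append(shared, online[reserve:]...)
	return reserved, shared
}

func main() {
	online := []int{0, 1, 2, 3, 4, 5, 6, 7}
	reserved, shared := splitReserved(online, 2)
	fmt.Println("system-reserved CPUs:", reserved) // [0 1]
	fmt.Println("shared pool:", shared)            // [2 3 4 5 6 7]
}
```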
C
C
So that's what I also wanted to clarify, because there are a number of capabilities now within CPU manager and topology manager. So if you had a set of use cases that you wanted to share with whatever group we pull together here, we could probably best reach mutual understanding on what is or is not actually possible now.
I
That would be really, really helpful. Shall I link you in?
C
I
Okay, that sounds good, I'll contact you after to get the actual spelling.
A
The next topic, from Adrian, is looking for approver feedback on a KEP.
A
So that it can be ready for 1.24. Is Adrian here?
E
Yeah, I'm going to review it, it's just that I don't have bandwidth at this moment. So yeah, I know that it is in my queue, but I'm looking for other people also, together with me, to review this one. I think the KEP already provides a first view of it here, so yeah. I will sync up with Tom too, but I'm going to be the approver for this one.
H
I have a question on this one. It says the goal is just for the forensic use case, it's not for migration. I'm just wondering, since we're going to do the checkpoint anyway, whether it's good to consider the migration use case too. Otherwise, when we merge this and later find out, oh, checkpointing can be used for migration, we have to change some of the design.
H
It could happen. I'm just thinking that, with that, we can extend the goal, expand the goals to include them. Yes, yeah, so.
D
E
Cathy, this is also like the in-place update one, right? Because with the in-place update we didn't really break it up initially, and we also broke many things, and until recently it was hard for us to move forward. So that's why, at this moment, it is so hard to break into pieces.
E
So for this one we kind of go from there: when we think about it, we can break it into baby-step milestones and verify the value at the node level. And then we can start to think about migration, because migration is such a big topic; it is maybe even bigger than the in-place update. So that's why, from day one, we wanted a mechanical breakdown, and then have people think about the value, validate the correctness and the functionality, and then think about migration. So.
H
Okay, okay, yeah. I think that makes sense, as long as we keep in mind that there is no restriction on future support for migration, yeah. I haven't taken a look at this yet; I will go through it. I just want to make sure this design or this proposal won't lead us to a more complicated design for migration.
C
I think the other thing is, and Adrian's not here, but at least when we had a lot of discussions on it, it initially started from the migration scenario, and then we got feedback from various other users who were interested in actually doing something that was not in the user-visible data plane first. So that's where this forensic scenario took root, and so I anticipate one will build on top of the other in the future.
C
But this narrow use case was pretty clear and sensible from a security standpoint as well.
E
So, Cathy, since you are actually into these migration use cases, maybe you can also help review this one, and just focus on the extensibility of the design. Yeah, at the beginning we did set some principles to make sure we could stay extensible, but we also don't want, at this moment, to start from a lot of pod-level, user-facing API. At this moment we want to make it more like a node-contained feature, because once you start those API changes and other things, it takes forever, right.
E
So unless you really settle down all the use cases... So, could you help on the KEP and the PR review, keeping in mind that we want to extend it, but we don't want to expose those API changes, at least at a really early stage, until we figure out migration. So that's kind of the original plan we discussed. Yeah, okay, yeah, sure.
H
A
Okay, and I think it sounds like we're ready to move on to our next agenda item. I don't think that we have Vinay, but let's talk a little bit about in-place pod vertical scaling.
A
Looks like Vinay is apologizing for not having a chance to address the remaining action items. Did we also want to talk about splitting the PR?
H
Yeah, for this, I think one of our team members posted a comment about the latency. Vinay has not addressed that yet, but I can see, yeah, he's probably busy. So maybe we will reach out to him over Slack, or maybe he will come to next week's meeting.
H
A
H
Okay, let me think about this. This is a big one; I think there's a lot of code right now. Okay, I'll think about this, how about that, and get back to you.
A
Okay, yeah, that's just my advice, given, you know, that we're getting close to code freeze, and a week's delay at this point means not just a week of delay for you, but also a week of delay for reviewing and whatnot. So personally, if it was me, I would not wait a week just to see if Vinay will come around to it, given that he said he has limited availability already.
H
A
Sounds good. Anything else on that one?
A
Sounds like no, moving right along: Mrunal and Mike Brown, CRI PR version support.
D
Hey, so we have a PR open that supports multiple versions of CRI in the kubelet, and Derek and I were chatting earlier about whether that makes sense, or whether we should always have the kubelet support one version of the CRI at a time, and the runtimes can support multiple versions at a time if needed. So Mike, do you want to share your thoughts on what we've been chatting about in the background?
F
Yeah, sure. So I suppose the container runtimes need to support, you know, N versions of the CRI API and could certainly post that up over gRPC, but the issue we have with moving forward, by adding that to our serviced versions of the container runtimes, is having a version of kubelet that supports,
F
you know, the newer versions of the container runtime interface. So we're running into this, you know, what comes first, the egg or the chicken kind of problem, and we need to bust through that. One option that Sascha had put through was certainly this option of, you know, the kubelet supporting both the current version of CRI and the prior one,
F
you know, without a switch, just automatically going to the current, most updated version, and if that's not exposed on the container runtime side, then falling back to the prior one. But either way, we need the test buckets to have some kind of scoping for testing on a container runtime that has access to these CRIs. So what we're looking for from sig-node here is some recommendations on how we should move forward.
F
I think it makes sense to have the kubelet support either, you know, with a feature gate, or certainly just by checking the latest, and that way we'll be able to move forward with CRI API changes going forward. This isn't just going to be a v1 versus v1alpha problem this release; in the next release we'll have a v1 versus v2 kind of problem that we have to, you know, resolve.
F
So if we can come up with a way to test the, you know, the newer versions, then we can certainly push that back to an older version of containerd, for example. We could do it in a serviced release, but again, we want to be able to test this stuff before we add it to a serviced version of the container runtime.
C
I appreciate the testing challenge. I think what I'm hung up on, just in total transparency, is that having the kubelet support multiple CRI levels feels like a step backwards from the original intent of the CRI itself, which was...
F
What it really is, is: how do you move forward, right? If I upgrade the kubelet but I haven't already updated the container runtime, then how does the kubelet move forward, right?
C
C
And just be patient with me, I'm just letting you know that, okay, I'm dealing with that mental gymnastics right now of saying: hey, the whole intent was for the runtime to be versioned with Kubernetes' needs and not vice versa, and what I'm worried about is that we'll come back to a situation like early Kubernetes, where we were having to deal with particular API versions of the Docker engine at that time, right, whether it was, I forget the versions now, 1.13, 1.16, 1.17, that type of thing, yeah, yeah.
C
That was a huge problem, and so I feel like, if we go down this path, it might be something we need to tread super carefully on, because I feel like it is undoing one of the original goals.
E
Actually, that's not true. The original goal would be... it is just that the Docker API never thought about being compatible with Kubernetes. That's right, that's why we wanted to define our own API, and it's not like we only want to support one. So the original idea is just that we tried to define an API for the container runtime which is naturally compatible with Kubernetes use cases, right. We had the pod concept initially, and the container, and that was the problem back then.
E
Then, no matter whether it was Docker or rkt, they did not really think about our Kubernetes use cases. rkt, for example, only had the pod without the container concept; Docker only thinks about the container without the pod concept. So that's why, and so we ended up, at least myself, after trying to discuss with both sides, unable to reach agreement. So that's why I said: okay, let's define that API, so as to call out who can support that API, with Kubernetes leading. But do we do... this is why we have the alpha.
E
E
The only question is, even though it's in our control, there is the overhead: how much overhead do we want to handle today? Because if we are thinking about all the customers today, most Kubernetes users do the bundled upgrade on the node.
E
Why do we need to carry that overhead? But if it is like out of the box, you already have the container runtime and then you later put Kubernetes in place; if there are a lot of use cases like that, maybe we need to think about it, right. So you may end up having a different version of Kubernetes than of the OS, because that's kind of a trend with container-optimized operating systems.
E
F
D
C
D
D
E
D
F
Right, so how we handle this in containerd: we implemented the v1 API natively, right, and if you're still using the v1alpha API, it does a, you know, a marshal and unmarshal cycle on the APIs as they come in, for both the response and for the call. So there's a little bit of extra overhead on the v1alpha APIs right now, and when the kubelet moves up to v1 it'll be, you know, just native speed, although I wouldn't say it's a whole lot slower, but yeah, there's...
F
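A rough sketch of the kind of cross-version conversion being described, assuming the upstream k8s.io/cri-api packages and their gogo-generated Marshal/Unmarshal methods, and relying on the two CRI versions sharing the same wire format; the helper name is illustrative, not containerd's actual code:

```go
// Convert a CRI v1alpha2 request into its v1 equivalent by round-tripping
// through the shared wire format.
package main

import (
	"fmt"

	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	runtimev1alpha2 "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

// alphaToV1 marshals the v1alpha2 message and unmarshals it into the v1 type;
// this is the "marshal and unmarshal cycle" that adds the extra overhead.
func alphaToV1(in *runtimev1alpha2.VersionRequest) (*runtimev1.VersionRequest, error) {
	data, err := in.Marshal()
	if err != nil {
		return nil, err
	}
	out := &runtimev1.VersionRequest{}
	if err := out.Unmarshal(data); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	req := &runtimev1alpha2.VersionRequest{Version: "0.1.0"}
	v1req, err := alphaToV1(req)
	if err != nil {
		panic(err)
	}
	fmt.Println(v1req.Version)
}
```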
J
I would like to add one more topic to it: it's not only about how we install or update the kubelet and runtime. It's also about what kind of user experience we want to get. Do we want to expose an error message saying you have incompatible components, or do we want components which try to agree on some version? I think this negotiation is a bit better for the user.
F
Yeah, and that's what Sascha's PR has, and I reviewed it, it looks pretty good. Basically, when the kubelet tries to mate up with the container runtime, it's just going to check the latest version of the API. If that's not available, it'll go to the prior one, and then, if there's a reboot of the container runtime, you know, for any particular status-type change or bug-type fix on the node, without quiescing the entire node, then the kubelet will re-establish the connection with the container runtime.
F
It looks pretty good, and then, if you reboot the kubelet, it will attempt to, you know, initially establish the connection on the new version of the container runtime API again. Okay, and the container runtime certainly already supports, you know, both versions of the API at the same time.
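A minimal sketch of the fallback behaviour just described, assuming the upstream k8s.io/cri-api gRPC clients; the endpoint and helper are illustrative, this is not the actual PR:

```go
// Probe the runtime's v1 RuntimeService first; fall back to v1alpha2 if the
// newer service is not implemented on the runtime side.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"

	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	runtimev1alpha2 "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func negotiate(conn *grpc.ClientConn) (string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	v1Client := runtimev1.NewRuntimeServiceClient(conn)
	if _, err := v1Client.Version(ctx, &runtimev1.VersionRequest{}); err == nil {
		return "v1", nil
	} else if status.Code(err) != codes.Unimplemented {
		return "", err // real failure, not just a missing service
	}

	alphaClient := runtimev1alpha2.NewRuntimeServiceClient(conn)
	if _, err := alphaClient.Version(ctx, &runtimev1alpha2.VersionRequest{}); err != nil {
		return "", err
	}
	return "v1alpha2", nil
}

func main() {
	// Conventional containerd CRI endpoint; adjust for other runtimes.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	version, err := negotiate(conn)
	if err != nil {
		panic(err)
	}
	fmt.Println("negotiated CRI API:", version)
}
```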
C
F
To best present them, I guess, you know, from that perspective, because we do need the buckets to be tested, we should have some kind of a feature gate or whatnot to pick which of the two versions of the container runtime API we want to execute against for this bucket, so that we can iterate those buckets once, right, and then again with the other API. And also it would be nice to have a bucket that tested this
F
reboot feature, right: restarting the container runtime and/or restarting the kubelet, and being able to establish a connection with the node still alive, you know, for those types of situations. I can tell you that in containerd, we test the v1 API at the integration level, and then we use cri-tools to test against the v1alpha API, but we also need a change in cri-tools to be able to test both versions of the API.
D
I think we shouldn't have alpha once the runtimes already have one release out with the v1, because practically there is no change between those two versions right now. Like, when we talked about moving to beta, we said we have all this pain of updating the runtime, moving it back into the CI, and basically a long cycle to update everything. So we decided we'll just move the API to v1 as a one-time step, skip the beta, and get rid of the v1alpha one after we do that.
D
F
I
E
But Mike, I want to ask you one thing: okay, I understand that containerd has different customers and users, not everyone is Kubernetes, right. But we are only focused on Kubernetes users here.
E
You can use containerd which has the CRI v1alpha, the old one, and anyway containerd also has its own support policy. But if we talk about it from the Kubernetes perspective, then, from this perspective, we call out right now which CRI version we have moved to. So I think customers, if they are Kubernetes customers, I mean, or the providers, they are going to find that out. If we still think about the bundle, which is our majority use case, they just upgrade.
F
So right now the only version of containerd that supports the v1 API is version 1.6 beta, which we should be able to GA within the next, you know, few weeks to a month at the outside. Okay, so we'll be able to, you know, ship in sync a version of containerd that will, you know, suffice for your packages for Kubernetes users, so I don't think that'll be an issue this cycle. Okay. That said, I probably also need to move the
F
support for the v1 API back to the prior version, which is 1.5.6, I believe, or seven we're up to now. We'll need a 1.5.8 containerd to be available for your customers that don't want to move up to 1.6 of containerd, okay. But I have no way to test it yet; I need a v1-API version of the kubelet, and right now the kubelet is hardcoded to the v1alpha2 API.
F
But, as Mrunal said, the good news for everybody is that all of the major features, you know, are in both APIs right now, so really we're just talking about the, you know, the Golang, gRPC kind of, you know, issue: how do we get the right API, you know, connected. So just trying to understand if...
D
E
C
E
C
I agree, like, I think there's the cognitive overhead, but the performance overhead, I would view it as a blocker if I have to pay any marshaling cost, because many of our environments are very resource constrained, and so I'm very sensitive to that. So I guess, Mrunal, you've been looking at Sascha's PR: is there any marshaling overhead? Yeah.
D
So I just asked Sascha; we'll work on it, and I think between Mike, Sascha and me we'll get those numbers out, and then we can make a decision on whether the overhead is acceptable or not. So we...
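One way those numbers could be gathered is a micro-benchmark of the round-trip conversion; a rough sketch assuming the gogo-generated Marshal/Unmarshal methods on the cri-api types (a realistic measurement would use larger messages than VersionRequest, e.g. pod sandbox or container configs):

```go
// Measures one marshal/unmarshal conversion from v1alpha2 to v1 per iteration.
package cribench

import (
	"testing"

	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	runtimev1alpha2 "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func BenchmarkAlphaToV1(b *testing.B) {
	in := &runtimev1alpha2.VersionRequest{Version: "0.1.0"}
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		data, err := in.Marshal()
		if err != nil {
			b.Fatal(err)
		}
		out := &runtimev1.VersionRequest{}
		if err := out.Unmarshal(data); err != nil {
			b.Fatal(err)
		}
	}
}
```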
C
C
D
E
And here's the one piece of context I want to share here: when we were doing this CRI, we decided to make it kind of like an internal API. I just want to make sure... otherwise we could not even roll this out; we were running the v1alpha one with containerd and CRI-O in production.
E
I just want to make sure it does not affect our user-facing API; it is our job as the data plane owners to maintain that compatibility and the smooth migration, all those kinds of things we just discussed, we can handle that. But for users, it's not exposed to the end user. So I just want to share that with everyone here. That's why, from day one, we removed it; there were many concerns at that time from the API reviewers, and we just said: okay, that's our job, this is our internal API.
F
The only problem is, in the actual implementation there was an internal API, but on the CRI side of it, the internal API ended up bleeding, you know, some of the CRI types. So that's what's going on: we didn't have pure internal types in all cases, and Sascha's PR addresses that issue.
C
I think the worst case would be that, if you're running kubelets in resource-constrained environments, which the world is doing, the idea of every runtime invocation having to do a type marshal to an internal format and then generating the garbage from that is particularly stressing to me. Is there anything we can do?
C
Depending on the kubelet version, would it do the internal marshalling? Because this would significantly...
F
So, just to make sure you caught it: when I said we were doing marshal and unmarshal in containerd, I didn't mean that all of the changes that Sascha was doing were doing marshal and unmarshal across the API. I merely meant that when you connect to the v1alpha2 API in containerd, it will do a marshal and unmarshal cycle to the v1 APIs, which are implemented natively.
C
C
D
D
B
Yeah, I also wanted to comment on Dawn's comment about how we shouldn't spill this over to end users. I think if you only support v1alpha1 for one release, it will be spilled over to end users, and users will be notified next release that their version of containerd is not compatible with their version of Kubernetes if they haven't updated containerd from 1.5. So I think it will be spilled over; it's just a matter of whether we do it this release, or we do it next release and they will see it then.
E
When I talk about the user at this moment, it's more like talking about the developer; I'm not talking about the vendor or provider. So that's why earlier I said: okay, do we want to do the bundling, which is like the admin and the creator of the cluster doing the bundled thing, instead of them having a different version of containerd, right. So I think, for our users running Kubernetes over the past many, many years, Kubernetes was actually doing the bundling, so we did.
E
I know today we didn't do that, but in the past, if you look at all the versions, we didn't say: oh, what version of Docker do we want, what version of the container runtime did we validate. We basically said, okay, here it is, and we only support those limited versions. Later we loosened this, because we have more container runtimes, so we decided not to, but I think, for most of the users, the vendors or the Kubernetes providers are actually doing this node-level bundling.
E
If those are still the cases, then we basically don't need to support more than one version here, right, so you just have to call it out. And then, to make their life easier, we also give them a transition migration, like a one-release call-out that you have to prepare to switch to the v1 version.
E
C
Well, I guess the key action is we'll just get that measurement, and if that measurement is quantified when we come back, maybe at our next meeting, we can figure out both the cost to the consumer, who is having to have the kubelet pay for this overhead of marshalling, and then the cost to the community to cognitively contain it. But at least we'll have some data. So let's, I guess, try to get that for our next meeting.
C
Mike, we actually are measuring node overhead at the lowest granularity you could imagine right now, and I think both Mrunal and I have another hour-long meeting afterwards where we're trying to find more optimization paths, so this is a bit of a personal topic for me, just because of how much time we've been spending trying to figure it out.
B
A
Do we have anything else? I don't think we do. Sorry, I was adding... I added all of this. This is a great discussion, and hopefully we can get the recording up for additional context. I added all of this as well to the API review agenda for this week, so hopefully they can also take a look at it
A
there and see what folks have to say about that, with all of the concerns that we've discussed today as well. So yeah, it doesn't look like there are any more agenda items, so I'm happy to call the meeting for today, and I hope everybody has a wonderful rest of your week. Cheers everyone, thank you.