From YouTube: Kubernetes SIG Node 20220906
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20220906-170341_Recording_2560x1440
A: Meeting on September 6, 2022. First off, we are starting with the retro for 1.25. Are you able to drive it?
B: Sorry, yeah, I cannot present that doc. Can someone make me the host?
B: Here we go. Can you all see it? Yeah, okay, cool. So I created a doc following the previous retro sessions. First of all, this is a link to our KEP tracking doc, and here is a table of all the KEPs we worked on in the 1.25 release cycle and their status. In total we have 16 KEPs that we worked on in the past cycle, and among them we completed 10.
B: Although one of those got reverted, the storage quotas one. And six of the KEPs we had to remove from the milestone, which means we didn't complete them in the 1.25 cycle. For in-place pod vertical scaling, we merged the CRI changes, so we made some progress on that KEP. So this is what we completed in the 1.25 cycle, and here is some historical data from past cycles.
B: We can see here that in 1.25 we merged the majority of the KEPs that we tracked, so we were making good progress there. So let's talk about the things in the past release cycle that went well and the things that could have gone better. We can start the conversation here.
B: One thing I want to cover here is that for the changes that require a container runtime change, we merged the CRI changes first, and that unblocks the container runtime implementation. We did this for in-place pod vertical scaling, and we did this for evented PLEG. I think this is a very good practice that we should follow; we should do the same for future changes that require a container runtime change.
B: Yeah, so I guess we kind of agree that we should do this for all future similar changes, right? We want to merge the CRI change first.
B
Maybe
we
can
make
that
like
happen
before
the
alpha
change
before
the
main
Alpha
feature
is
merged,
which
should
aim
to
merge
with
your.
It
hinges.
First.
D: Do you think it's also good practice to have a separate KEP for that? That's what I had initially for this particular one: there were two KEPs, one tracking just the CRI change. That way, any issues that need to be discussed have a focal point for design discussions. Or is it overkill?
B: I don't know if it's just too much work for the KEP owner or the feature owner to have to create two different KEPs, one for the CRI only. Personally, I don't think we really need that, but code-wise it would be ideal if we can separate the CRI changes out from the main feature work.
E: I can add something. Overall, on the test stability side, there were a lot of improvements compared to the past. Tests were pretty stable up to the release, and there was no last-minute fire or regression that we needed to really dig into. So good job on all the testing efforts; things were pretty stable.
B: Anything else? Well, I can add another one: we completed a majority of our KEPs compared to the previous cycles.
B
Okay,
any
other
things
we
want
to
talk
about.
Maybe
things
that
could
have,
we
could
have
done
better,
something
that
can
be
improved.
We
have
talked
about
a
CRI
change.
I
think
that
well
falls
into
this
category
a
little
bit
I'm
sure
we
can.
A: At least the implementation was way bigger than what it looked like from the KEP.
D: Tests coming in late: I would have probably focused on that a bit earlier in the cycle, if possible, because within one week I went from being positive this should go in, to "no, let's hold", because I found a few code changes that needed to be made to the scheduler. Even though SIG Scheduling was willing to go along with that, and they felt the changes were low risk, for something this big of a feature we want to be cautious.
B: So you detected some changes close to the release.
B: Anything else? Any other KEPs? Or, in general, anything we want to improve?
B: Okay, if not, then we can wrap up our retro session. I will copy what we have talked about to the main doc.
A: Thanks, Reuben, for leading the discussion here. Maybe we can start 1.26 planning next week, and head back to the agenda. Yep, all right. Next up on the agenda, we have Daniel Ye with a CRI stats performance update.
C: Yes. So the idea of adding additional stats to the CRI is part of the effort to eventually deprecate cAdvisor, to have everything, including Prometheus exporting and such, supported. And I know we were wondering what the overhead was going to be with these new fields, and how ListContainerStats or ListPodStats would eventually change.
C: So I'm just here to give a quick overview of what we talked about a couple of weeks earlier, and essentially it doesn't look like the new CRI fields are going to be too much additional overhead. I have a document here that illustrates my findings.
C: My proposed change added fields that are present in cAdvisor but not currently in the CRI. That includes network stats, process stats, some CPU stats, and a wide variety of Linux-specific stats. I think the main gRPC calls we were concerned with were ListContainerStats and ListPodSandboxStats, because all these new stats most directly affect those two RPC calls.
C: As part of my change, I just added the new CRI fields and their stats to v1, and I have the containerd side here as well, where I'm just setting these to some dummy value, essentially. And I ran, I think, four test runs total. Oh, I'm sorry, I should also mention I used pprof; that's a Go profiler that gives a nice overview of how much CPU time you're spending in which function, so that's super helpful.
C: In total I ran, I think, four runs: one with just the main containerd branch, one with my custom version, and then both of those with the feature gates enabled, down below. On these I deployed 100 replicas of nginx, and I used kind to run my custom containerd versions. Here is the kind config file I used; it's pretty standard, just a lot of replica sets, because we want a lot of pods, to see how much these additional stats are going to cost us.
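
For context on the methodology: the sketch below shows roughly how a CPU profile like the ones discussed here can be collected from a Go service. It is illustrative only, not Daniel's actual harness; the listen address and profile duration are assumptions. Go services such as containerd and the kubelet can expose the standard net/http/pprof debug endpoints, and go tool pprof then fetches a profile over HTTP.

    // Minimal sketch: expose Go's pprof endpoints in a service so a CPU
    // profile can be fetched while the stats load test is running.
    // The localhost:6060 address is an assumption for illustration.
    package main

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers the /debug/pprof/* handlers
    )

    func main() {
        // Serve the profiling endpoints; the service under test (e.g. a
        // CRI stats server) would be running in the same process.
        log.Fatal(http.ListenAndServe("localhost:6060", nil))
    }

    // While the pods are running, collect and view a profile with:
    //   go tool pprof -http=: "http://localhost:6060/debug/pprof/profile?seconds=60"
    // The pprof UI shows flat vs. cumulative time and flame graphs,
    // which is what the numbers discussed below refer to.
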
C: I'm just going to highlight the pprof graphs for this. This is the main containerd branch, with none of my custom changes; I think this is from August 31st. I have the visuals in my document, but as you can see, just filtering by these functions here, it takes about 0.11 seconds of cumulative time, which is the total function time.
C: Flat time here is the amount of time it actually spends inside the function itself, not calling other functions, versus cumulative, which is the total time it spends, including calls to other functions. So I have some of the stats functions here, like Unmarshal, which is related to the CRI.
C: You can see the protobuf functions; here's the general ListContainerStats. Again, this is the main branch, and here we have our custom containerd version, where, as you can see, this flat time is a little higher for marshalling, and I think that makes sense just because there are more CRI fields and more stuff you have to deal with for the protobufs and all that.
C: But overall, aside from the unmarshalling, the stats are extremely similar for ListContainerStats, which actually calls the runtime service client.
C: Yeah, and here is the flame graph for these. Flame graphs are pretty cool; they essentially show you where each function spends its time. So here's the root. I'm just filtering by this expression here, ListPodStats, which is why it's saying root is only 13.31% of CPU time. That just means we spent 13.31% of the time in the function we filtered on, which in this case is ListPodStats.
C: ListContainerStats here, as you can see, is like 0.11% of the time, and it calls all of our gRPC methods relating to the protobuf, such as Unmarshal.
C: That's one of the things we were concerned about: the additional protobuf overhead and gRPC overhead there. And again, this is our main containerd version with none of my custom changes. So now let's look at the containerd version with my custom changes; this is all for ListContainerStats and ListPodSandboxStats.
C: It takes up some more CPU to have this, but as you can see, with my custom containerd changes everything is super similar: 0.11 versus 0.09 seconds of CPU time. So no major regressions there. There's a little more unmarshalling here, which makes sense just because you're adding around 20 or so new fields to the CRI.
C: The next thing I looked at was ListPodStats, at the container and pod level, which is what cAdvisor currently supports; we need that for the different container and pod levels for each metric we're looking at. So here is the main... you can look at this, oops, okay. Yes, this is going to be the main version of containerd; I'll just highlight it here, actually.
C: Yes, so ListPodStats takes up some more time than ListContainerStats here. But looking at it overall, this takes 13.15% of CPU time, and with the new CRI fields it takes up 13.31%. As you can see, that's 10.73 seconds versus [inaudible], so again, not really a major regression with the new CRI fields. But what's interesting to note about ListPodStats is what you see if we follow the flame graph here.
C: And if you look at the flame graph here too, as you can see, the cAdvisor part, like getting the cAdvisor container info, takes up quite a bit of time. If we look at ListPodStats here, that's 13.15% of CPU time, but just getting the stats from cAdvisor takes up like 11.60%, which is honestly most of the time spent here.
C: So eventually, if we can improve the cAdvisor implementation of getting stats or something, we could save a lot of time here, for instance.
C: So, in theory, if you could figure out some way to cache it, even just going with the basis that the shorter time would be the cached one... If we average this among our four runs... whoops, sorry, I seem to have lost my tab here.
C: Thank you, sorry. Okay, here we go. If we average this among our four runs, we could actually save around 3.27% of CPU time just by caching the container GetSpec here. But again, we are seeking to deprecate cAdvisor, so I'm not sure if that would be the best use of resources, but it's just something to note there.
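
As an illustration of the caching idea mentioned here: a short-TTL cache in front of an expensive per-container spec lookup avoids recomputing it on every stats scrape, since a container's spec rarely changes during its lifetime. This is a hypothetical sketch; the names (SpecCache, Spec, the fetch callback) are made up for illustration and are not actual cAdvisor or kubelet APIs.

    // Hypothetical sketch of TTL-caching an expensive per-container spec
    // lookup. All identifiers here are illustrative, not real
    // kubelet/cAdvisor APIs.
    package statscache

    import (
        "sync"
        "time"
    )

    type Spec struct{} // stands in for the cached spec data

    type entry struct {
        spec    Spec
        fetched time.Time
    }

    type SpecCache struct {
        mu    sync.Mutex
        ttl   time.Duration
        cache map[string]entry
        fetch func(containerID string) (Spec, error) // the expensive call
    }

    func NewSpecCache(ttl time.Duration, fetch func(string) (Spec, error)) *SpecCache {
        return &SpecCache{ttl: ttl, cache: map[string]entry{}, fetch: fetch}
    }

    // Get returns a cached spec when it is fresh enough, and only falls
    // back to the expensive fetch when the entry is missing or stale.
    func (c *SpecCache) Get(id string) (Spec, error) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if e, ok := c.cache[id]; ok && time.Since(e.fetched) < c.ttl {
            return e.spec, nil
        }
        s, err := c.fetch(id)
        if err != nil {
            return Spec{}, err
        }
        c.cache[id] = entry{spec: s, fetched: time.Now()}
        return s, nil
    }
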
C: We can probably improve it if we want to. Another thing I tested pprof with was, again, the main containerd release, not my custom containerd version, but with the feature gates enabled. There's this cool feature gate called PodAndContainerStatsFromCRI.
C: If you set it to true, it enables the kubelet to gather container and pod stats from the container runtime side, rather than straight from cAdvisor. The theory is that cAdvisor is pretty slow, right, as we saw here with the cAdvisor container info call fetching everything. If you get more stats from just the CRI itself instead of calling cAdvisor, the hypothesis is that you could save a decent amount of CPU time. And so here is my side-by-side.
C: Here is the main containerd release on its own, and beside it the main containerd release with the feature gate enabled, which, again, means you get pod stats strictly from the CRI. As you can see, with the feature gate, ListPodStats takes up around nine percent on the main containerd release, versus here, where it takes up like 13.15%.
C: Now for the custom containerd version with the new CRI changes: it takes up 13.31% without the feature gate, and then with the feature gate it takes up 12.19%, which is honestly a pretty solid improvement, but maybe not as much as we would like.
C: There are maybe some reasons why the results aren't as good as we might have hoped with the new feature gate. Essentially, what happens in ListPodStats: here's the feature gate check; it says if PodAndContainerStatsFromCRI is enabled, then the kubelet will call the CRI list directly.
C: But regardless, there is a fallback here where it fills in pod stats partially from cAdvisor. So if we could maybe eliminate this fallback, or add some more stats to the CRI, we wouldn't have to fetch them from cAdvisor.
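
A rough sketch of the control flow being described, simplified for illustration; the function and type names are placeholders, not the kubelet's exact code. The point is that even with the feature gate on, a cAdvisor fallback still runs for the fields the CRI does not yet report.

    // Simplified sketch of the stats-provider decision described above.
    // All identifiers are placeholders, not the kubelet's exact code.
    package statsdemo

    // PodStats stands in for the kubelet's pod-level stats type.
    type PodStats struct{}

    // Placeholder implementations; in the real kubelet these would hit
    // the CRI and cAdvisor respectively.
    func listPodStatsFromCRI() ([]PodStats, error)       { return nil, nil }
    func listPodStatsFromCadvisor() ([]PodStats, error)  { return nil, nil }
    func fillMissingFieldsFromCadvisor(stats []PodStats) {}

    func listPodStats(fromCRIEnabled bool) ([]PodStats, error) {
        if fromCRIEnabled { // the PodAndContainerStatsFromCRI feature gate
            stats, err := listPodStatsFromCRI()
            if err != nil {
                return nil, err
            }
            // Fallback: fields the CRI does not report yet are still
            // filled in from cAdvisor, so the expensive cAdvisor path
            // stays partially active even with the gate enabled.
            fillMissingFieldsFromCadvisor(stats)
            return stats, nil
        }
        // Gate off: everything comes from cAdvisor.
        return listPodStatsFromCadvisor()
    }
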
C: You know, cAdvisor does have more metrics than the CRI right now. If the CRI had them, we could potentially make things run a lot faster, because you wouldn't have to call that main cAdvisor method that takes up the majority of the function's time.
C: Memory, process, and so on, for whatever stats we are looking for. But what's interesting: here is that run again, and this is a run with the feature gates enabled.
C: The network stats... actually, excuse me, I misread that number... but the process stats, for example, aren't being added. And this is a thousand-second run, and then this is a 1500-second pprof. As you can see, in the thousand-second profile only the CPU and memory stats are added.
C: Versus here, where you have some network stats as well; process stats, for example, aren't being added. In which case, I think some investigation might need to occur as to why some of these pod-stats additions themselves are not being invoked.
C: In theory, if these could be called, maybe some additional CPU time could be saved, as cAdvisor would have to do less work in fetching these stats. So overall, to recap: there's not a major regression at all in adding these new CRI fields.
C: I tested with pprof, comparing that August 31st release of containerd versus my custom changes. But I think it is just interesting to note that fetching metrics from cAdvisor takes up a lot of CPU time currently, and potentially there are improvements that could be made just to speed things up.
C: Yeah, that should be it from my side, if anyone has any questions or anything.
E: Awesome, thank you, Daniel, for all the investigation and looking into that. I think what this shows, basically: we had the earlier conversation, with the CRI stats KEP, around what the performance impact of adding the rest of the fields to the CRI would be, and I think this work and Daniel's profiling and so forth shows what the overhead is.
E: It should not be too high; it's quite minimal overall. And in fact we have an issue with containerd, or with cAdvisor, today, of not caching the spec, right, that Daniel showed. So moving to getting the stats from the CRI will eliminate that code path, so you could see some benefits there. So those are kind of the findings from that.
C: Yeah, though I think it might be hard to say it was sped up. Obviously, fetching the actual stats does take more time too, like parsing the cgroups and everything. So I think I would have to do some more runs, maybe, and run some other pprofs to have some better results.
E: Yeah, one thing to point out also: today we already do that parsing in cAdvisor, right? And since the long-term goal of the KEP is to get rid of the parsing in cAdvisor, we'll just move that parsing to the container runtime. So that's like a fixed cost that we'll have to pay somewhere.
E: Cool, yeah. So for next steps, maybe we can discuss it in the follow-up, but I think next, for 1.25 planning, actually, sorry, for the next KEP planning, we want to decide how we want to proceed with the CRI stats KEP: whether we're okay with adding the rest of the stats to the CRI, doing that as the next step, and implementing them, so we can start migrating those fields to the CRI and continue the effort on the KEP.
A: Sounds good. Thanks, David and Daniel. We can move on to the next topic on the agenda. So next up we have Kutong with an issue about unexpected initial delay of probes.
G: Yes. Yeah, I want to briefly talk about the issue regarding initialDelaySeconds.
G: I recently investigated some issues with the probes' invocation time sequence, and I found something weird in 1.21 and later. Then I found a bigger issue, which was already tracked in the issue I pasted in the doc. I think the overall, bigger problem is that initialDelaySeconds doesn't work as the API says.
G: From the API, it looks like the first time a probe gets invoked should be the start time of the container plus initialDelaySeconds, but our implementation is not like that. There's also a concern about kubelet restarts: at that point it's not clear whether initialDelaySeconds should still make a difference for different containers.
G: If two probes have different initial delays, when the kubelet restarts, should we also respect the difference between them, or do we ignore it? That's not very clear. And a second point is that we already have some jitter added before the probe invocation.
G: From reading the code and people's discussion, I understand the purpose is to avoid the thundering herd problem where, during a kubelet restart, it would do all the probes for all the pods at once. I understand that's a problem, so we add the jitter. But there is also a problem with the jitter: we are giving different jitter times to different probes even when they are in the same container, and there's an ongoing PR by another contributor to solve this problem.
G: That means, say we have a container which has multiple probes, and they all have the same periodSeconds: eventually they will all run at exactly the same time, regardless of their initialDelaySeconds.
G: So these are some problems I found. For more details, people can look at the doc I pasted here. I want to bring this to people's attention. It may not have a huge impact on most customers, but some customers want initialDelaySeconds to be more predictable. I'm going to propose a fix for this, to make it as close as possible to what the spec says, which is the container start time plus the initial delay, probably by doing a simple sleep instead of the current implementation.
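
A rough sketch of the behavior being proposed, assuming the fix keeps jitter for load-spreading but computes the first invocation from the container start time, as described. This is illustrative only, not the kubelet's probe worker code.

    // Illustrative sketch of the proposed probe timing, not the actual
    // kubelet probe worker. The first run honors
    // containerStart + initialDelay, with a bounded random jitter kept
    // to avoid a thundering herd after a kubelet restart.
    package probedemo

    import (
        "math/rand"
        "time"
    )

    // runProbe assumes period > 0.
    func runProbe(containerStart time.Time, initialDelay, period time.Duration, probe func()) {
        // Jitter within one period spreads probes out across pods.
        jitter := time.Duration(rand.Int63n(int64(period)))
        firstAt := containerStart.Add(initialDelay).Add(jitter)
        if wait := time.Until(firstAt); wait > 0 {
            time.Sleep(wait) // sleep until the spec-implied first invocation
        }
        ticker := time.NewTicker(period)
        defer ticker.Stop()
        for {
            probe()
            <-ticker.C
        }
    }
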
A: Thanks for raising this, Kutong. I think folks can take a look at the issue, and maybe, Kutong, you can come back next week with your proposed changes.
G: Yes.
E: Yeah, I had a quick question, if I can; I was just kind of curious. With the implementation change, do we not add jitter anymore? I'm just trying to understand the case where there are a lot of pods, because of the overhead for the runtime and so forth of doing all those exec calls at the same time, for example, for an exec probe.
G: Right, I think jitter is still needed. But the current implementation of the initial delay is kind of a mask, during which the probe is automatically considered good. I don't like that approach. I'm thinking we still need jitter, but we should also sleep a reasonable amount of time based on initialDelaySeconds.
A: All right, we can move on to the next topic now: Adrian, checkpoint/restore next steps.
F: Yeah, hi. One of the discussion points while getting the initial checkpoint support into the kubelet was always that all the memory pages are now written to disk, and what do we do to make sure not just anybody on the system can access them. For the alpha release, we said we just store it in a way that only root can access it, so there's no immediate problem with the checkpoints we are generating.
F: But for the future, the question is how, or whether, we want to extend checkpointing to higher levels beyond the kubelet. There were a couple of things we discussed about how to do it, and I have been trying to think about it.
F: One idea was to add additional authorization to the kubelet API endpoint, but as far as I know, there are currently no kubelet endpoints which have additional authorization. Another idea was to have the service run on a different port and provide kubelet API endpoints which require higher authorization, like checkpointing. And the last idea, which I am currently thinking about, is: would it work if we encrypt the image? I see that CRI-O and containerd are both using ocicrypt to decrypt images.
F: So basically my question is: is there anything anybody can think of here? Is it a totally wrong idea to think about encrypting checkpoint images? This way we would not have the memory pages on disk accessible to everyone. And if we are talking about pushing a checkpoint to a registry, then encryption makes even more sense. So I'm just looking for some initial feedback on whether people think that encrypting checkpoint images would be the right thing to do, or whether it's not a really good idea.
A: Right, I think it makes sense, but I think we'll have to talk about where we'll store the keys used to encrypt, right? Because these images are only accessible to someone that has root on the host. So where are we going to store the keys used to encrypt this?
F: This is also not clear to me yet, because as far as I can see, CRI-O, and I think also containerd, only support decrypting images, not encrypting them. So it seems that at the container engine level there has been, at least visibly, no thought put into where to store keys to encrypt images, as opposed to decrypting them. So yeah, it's just an idea I had.
F: I wanted to see if there's some feedback. But yes, the keys need to be handled correctly, that's right.
A: Really, I think that's what I would worry about. It's a good idea, but we need to figure out how it would make sense.
F: Okay, so then at least that's some positive feedback, so that's good. Maybe I can try to see if I can figure out how to implement something, and then once we have an implementation, a proof of concept, maybe it's something we can look into and continue from there.
E: So I had a more general quick question, just around what you mentioned around the kubelet API endpoint and so forth. I'm just trying to understand, in terms of next steps for checkpoint/restore in general: is the next step how we integrate it with the Kubernetes API and make it more than just an API call on the kubelet? I'm just trying to understand how the work on encryption fits into the checkpoint/restore status in general.
A: Yeah, and I think also, David, if we expose it at the API level and someone is able to get privileged access to the API, they can basically checkpoint and read the memory of any container. So I think we're just taking baby steps here.
D: Yeah, hi, thanks everyone. So, with the in-place update, I got to try out the latest containerd, the head of the master branch. I built that and replaced the containerd binary; when you deploy on GKE the default is 1.6.8 or something, and I replaced that with the latest one.
D: Essentially, I tried out the resize flow with the updated e2e test that does the full checks, which validates that the resource status is updated correctly, and it does update. I found one issue where the time it takes to run the end-to-end test, all 34 test cases, jumped from 869 seconds to 2988 seconds, which is significantly higher. And it looks like the issue is not on the containerd side of things.
D: It's more that once the update is done and you query, getting back the updated resources reflected in the status takes significantly more time, and that needs investigation. I don't think it's an alpha blocker, but it could be; I don't have strong opinions about it. But the issue is there, I created it, and we should track it and fix it at some point, of course sooner rather than later. That's the only thing that I found. As for the cgroup support...
D: How are things going with it? I know you started looking into it; I don't know if you got a chance to take a full view of it, or have any more feedback on how things are going.
D: And the last thing I wanted to check: initially we discussed, I think, splitting the PR into multiple smaller PRs. To that end, I created another PR for the API changes. Do we want to bring that in? I don't know if that's something we can decide on here, but if that seems like a good idea, I'd like an okay to test on that one so that it's at least ready.
D: Yeah, one of those things. One of these days I'll send you a PR requesting to be added to the contributor list; I should do that, it's been a while. I'll do that this week, hopefully.
A: All right, folks, that brings us to the end of the agenda. Do folks have any other topics they wanted to bring up?
A: Right, thanks for joining then. See you all next week, bye now.