From YouTube: Kubernetes SIG Node 20210209
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: All right, well, welcome everyone to the February 9th SIG Node meeting. Meetings are recorded for those who aren't able to attend. So let's be, you know, friendly and kind, everybody, so that we're happy about that. What we have on the agenda today: we have a number of items. I think I want to give enough time to make sure we get all the enhancement stuff done. So maybe first, Sergey, do you want to say anything we want to talk about with respect to the active pull requests?
B: Yeah, as I said, all the focus is on the enhancements, and we need to start approving pull requests and reviewing more. So if you have time, maybe after this week, please start doing it.
A: Yep, all right. I guess, Elana, you gave a doc pointer to the latest state of enhancements. Do you want to walk through that and make sure everyone's up to date?
C: Yeah, do you wanna give me co-host, and I will share my screen.
C: Okay, so I just went through this quickly this morning to kind of update where everything's at. Everything that has not yet merged I have marked with this sort of yellow color, everything that has merged and is good to go I've marked with green, and anything that is N/A I just didn't give a color. We were previously using green, red, and yellow, and I just have yellow and green now, because everything has at least a PR up.
C: So this one, for the RunAsGroup promotion to GA, there's just some PR feedback required. This one, I think, has the PRR approved but just needs a node approver. This one has been merged.
C: So basically anything that is, I guess, in yellow here, and I don't necessarily need to go through all of them, is not yet ready to go. Everything that's in green is ready to go. I don't know if there's anything that we want to discuss, other than I think this one had a comment from Derek saying it probably won't make it into 1.21.
A: Yeah, so what I was trying to figure out: there's this doc and then there's the enhancement tracking spreadsheet, and it's just understanding whether there was a disconnect between the two.
C: I don't know if there are. It's really hard to filter by SIG in the enhancements tracking spreadsheet right now. Let's see what they have for node. So they have 15 as well, which is what I had, and they had four tracked and 11 at risk. I think we had another one merged since then, so they probably just haven't updated it, and that matches what I have here, which is five tracked and ten at risk. So basically I counted these: all of the yellow things are at risk.
A: I guess we'll all focus on getting the final PR reviews, or try to address any comments that were not yet addressed. Is there anybody on the call who feels like their particular KEP needed attention but was getting missed, or that was not on this tracker?
A: So, in the interest of that, I guess, if that's the case, either put your KEP on the agenda, or, if it is not here, then it was at risk; and then, if we're interested and have time, we can go through the remaining agenda items. So thanks, Elana. Sergey, do you want to talk through your deprecation timeline next?
B: Yeah, that will be great. So, do you want me to project, or should I just talk over it?
B: It looks great, okay. So I want to talk about Docker removal; it's KEP 2221. I should have waited one more and then it would be 2222. Anyway, dockershim removal was announced in 1.20.
B: So the graduation criteria for removal of dockershim are, at least: gather feedback from users, have all the documentation needed, plus we have, you know, to clean up tests and TestGrid for coverage of other runtimes.
B: So the timeline right now is 1.23, which is in December. I already assumed that we switch to three releases a year, not four releases a year; I don't know if it's locked in right now, but I think it's a safe assumption.
B: So if we remove it in 1.23, in December, then at the same time, since we're currently saying that the deprecation should be for one year, in the same release we can completely remove dockershim from the tree. The plan was to make it possible to compile without dockershim for one release and then remove it later when possible; with the switch to three releases per year, December becomes the time when we can actually remove the deprecated code from the tree completely.
B: I also want to switch to what's needed, just to give some overview. So, as we pointed out, one of the prerequisites for the dockershim deprecation, at least to indicate our intention to deprecate dockershim and make other runtimes the preferred way to go, is to stabilize and announce stability of the CRI. So the CRI API was a prerequisite, and we created v1 of the CRI API. I found this change merged in CRI-O, so it has already switched to that.
B: I don't think containerd has switched yet, so this work is still to be done. Also, we have some changes coming in the CRI; one of them is for Windows containers, but I think we have a couple more enhancements in the queue. So the CRI changes need to be done and locked down, and then we need to announce stability of the CRI. Then, documentation.
B: I started an FAQ; this blog post was published, and then I started a migration task for customers. It's kind of a step-by-step instruction: how to detect a Docker dependency, and how to eliminate this dependency.
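One quick signal for the detection step is checking which runtime each node's kubelet reports; the `containerRuntimeVersion` field in `kubectl get nodes -o json` output shows, for example, `docker://19.3.13` versus `containerd://1.4.3`. The sketch below (node names and versions are made up) illustrates the idea:

```python
# Sketch: flag nodes still running dockershim by looking at the container
# runtime each node reports in its status. The runtime string corresponds to
# the `containerRuntimeVersion` field from `kubectl get nodes -o json`.

def dockershim_nodes(nodes):
    """Return names of nodes whose kubelet talks to Docker via dockershim."""
    return [
        name
        for name, runtime in nodes.items()
        if runtime.startswith("docker://")
    ]

if __name__ == "__main__":
    # Stand-in for API server output; in practice you would collect
    # {node name: containerRuntimeVersion} from `kubectl get nodes`.
    cluster = {
        "node-a": "docker://19.3.13",
        "node-b": "containerd://1.4.3",
        "node-c": "cri-o://1.20.0",
    }
    for name in dockershim_nodes(cluster):
        print(f"{name} still uses dockershim")
```

Note this only finds the kubelet's own dependency; workload-level dependencies (mounting the Docker socket, building images with Docker) need the separate checks the migration doc describes.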
B: There are still missing pieces; for instance, specific instructions on how to migrate to containerd on self-hosted, bare-metal Kubernetes. That instruction is not written yet; we have a task for that. Also, we have other ideas on what needs to be documented, based on the customer feedback we are receiving so far.
B: Finally, we have a graduation criterion to receive user feedback, and last year there was a SIG Cluster Lifecycle survey that showed that dockershim is still by far the most popular runtime. I asked SIG Cluster Lifecycle; they don't plan to send any surveys this year. So maybe we need to send one ourselves and understand how many customers migrated after the deprecation announcement and whether they have any dependency on Docker.
B: I also started working with security and telemetry agents. I reached out to a few of them, and Splunk so far replied with instructions on how to switch from dockershim to containerd. I reached out to them because this is typically a dependency on Docker that many customers have; they may not have direct dependencies themselves, and they don't build or monitor their agents or their runtimes, but security agents and telemetry agents access the Docker socket and try to pull information from it to gather some metadata.
B: Anyway, I reached out to a few of them. Mike from Splunk wrote the documentation, Datadog promised to write the documentation, but nobody else has replied yet, so I need to start poking more people. And there are a few features, like the image filesystem metrics collected by cAdvisor, that have some issues. And testing: we still need to clean up testing, and we have this situation where dockershim is not built but is still in-tree.
B: So, as you see, there is lots of work to be done. And there's another complication that I didn't mention here, because I'm not sure: I talked to Mark, and he said that he will give more of a timeline soon.
B: We don't have a lot of confirmed production use of containerd with Windows. Windows is supported by containerd now, and we expected to see some customers using that, but there is still no major platform that has implemented Windows with containerd, as far as I know. So if we only start receiving feedback in March, it wouldn't give us much time to react to that feedback.
B: So I think my proposal is to keep working on these work streams and start the survey, maybe closer to summer, to understand how many customers have actually migrated from dockershim to containerd, because now many providers support containerd images, and then make a better decision.
B: But for now I think it's safe to assume that we wouldn't be ready by the end of the year, and I suggest we postpone it by one release, to 1.24. And 1.24 will also put us in a situation where we can safely finish the removal, complete the deprecation, and remove it from the tree, so we don't need to keep all the testing.
A: All right, so this looks great, Sergey. I guess I have no objection to shifting one release and, if anything, it gives us more time to ensure the CI work is good.
B: Yeah, I also have a PR today. Okay, yeah, it sounds great. And I will ask again later: I want to start a survey similar to what SIG Cluster Lifecycle did, specifically with questions about dockershim removal, and maybe I will ask people to review it.
B: There is one action item that I will bring back. And another one: these third-party dependencies on dockershim are very annoying, and it's not really easy to find people. So if you can help with finding people: I created a list of some agents that I know of in one of the working documents linked from the documentation.
A: Well, thanks for continuing to push this forward, Sergey. If there's no other discussion on this topic, we can move on to Adrian with checkpoint/restore.
F: Hey, hi. So last time I was here in the meeting we talked about checkpoint/restore. I think we agreed that I should work on getting all pull requests ready, and right now everything I did to implement an end-to-end implementation using checkpoint/restore for the drain use case is done, so the PRs are ready.
F: It shows what is possible using this, and now I am at a point where I would be ready to work on the CRI API changes to get them merged. I just wanted to give feedback here that it's now ready, and that I would be ready to get reviews on the CRI API changes so that they can get merged as a first step, like we discussed in the last meeting.
F: Okay, so the text of the KEP has not been updated yet, but in the last meeting, I think you weren't there. So initially I started with the KEP to get these API changes into the CRI API, so that I could continue work on the lower levels and the upper levels of the API.
F: And then there was the discussion in the KEP with you about what this means for the lifecycle and everything, and one approach I tried there was to give an example end-to-end implementation using checkpoint/restore. I did this for the drain use case, so I can do a kubectl drain of a node with a checkpoint option, and then it will checkpoint the pods running on the node.
F: I have a tool to extract the checkpoints from the kubelet, or I can just reboot the system with a new kernel or whatever, and then the node comes up and restores the containers that were checkpointed during drain. So this was one example of how to use checkpoint/restore in an end-to-end scenario, one of the many possibilities. And, if I understood the last meeting correctly, where I was there, there was the agreement:
F: if I can show that it works in a demo and in a PR, then the next step, which would help me, would be if we could get only the CRI API changes merged, because my whole pull request is really big, since it touches a lot of things in Kubernetes. But if we can get just the CRI API changes merged, then I could continue working on the container engine, implementing those changes, and I could continue working on the kubelet, implementing the CRI side.
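The drain-plus-checkpoint lifecycle being described (persist stateful pods' state, take the node down, restore on boot) can be sketched at toy scale. This is only an illustration of the flow, not the real kubelet or CRIU mechanics; CRIU snapshots full process memory at the kernel level, while this sketch just saves a state dict per pod, and all names here are made up:

```python
import json
import os
import tempfile

class ToyPod:
    """Stand-in for a stateful pod: its 'memory' is just a dict."""
    def __init__(self, name, state=None):
        self.name = name
        self.state = state or {}

def checkpoint(pods, path):
    """Drain step: persist each pod's in-memory state before the node goes down."""
    with open(path, "w") as f:
        json.dump({p.name: p.state for p in pods}, f)

def restore(path):
    """Boot step: recreate pods with the state they had at checkpoint time."""
    with open(path) as f:
        return [ToyPod(name, state) for name, state in json.load(f).items()]

if __name__ == "__main__":
    ckpt = os.path.join(tempfile.mkdtemp(), "node.ckpt")
    running = [ToyPod("redis", {"hits": 42}), ToyPod("java-app", {"session": "abc"})]
    checkpoint(running, ckpt)   # drain with checkpointing
    del running                 # node reboots; processes are gone
    for pod in restore(ckpt):   # node comes back up; checkpoints are restored
        print(pod.name, pod.state)
```

The point of the single-node drain use case is visible even here: no scheduling decisions and no file transfer between nodes, only a checkpoint that survives the reboot.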
A: Okay, just to make sure I understand that: your end-user scenario is that you want to be able to perform maintenance on a node where you don't necessarily throw away the machine; it's the same machine. You want to pause all running workloads on that node, checkpoint it, update your kernel in place, do whatever you have to do, and then put that node back into service. And if you do that in a sufficient period of time, you do not want to have new pods be scheduled.
F: Yes, so it's about a stateful pod, something where you have loaded something into memory. The examples in my videos are always: I have a Redis database, and I have a stateful Java application and a Python application.
F: So you can do it on one node, or you can do it on multiple nodes, but the migration, if you want to do this, is all manual: you have to extract the checkpoint from the node and copy it to another node. The reason why I was using drain is that it seemed like the simplest use case to implement, because it's free of any policy or scheduling decisions, and it doesn't involve automatic file transfer from one node to another. So it's all about one single node.
A: Yeah, so I think we're maybe getting some lines crossed. Can we make the next step be that the KEP describes the actual desired flow? I guess I'll happily look at the demo, and I'm sure others will, but...
G: Eric, this is what we discussed at the last meeting; I saw you were also there. The whole thing, I think, about the use cases described today is unclear, but for the main use cases, on the node, migrating the stateful workloads and the containers, there is a desire to solve that problem. That's why Adrian initially came to the SIG, to try to solve that problem with a top-down approach, and that turned out to be a really complicated set of use cases.
G: I didn't see that we clearly defined those use cases, but we clearly saw some of the main use cases being brought up by users. So at the last meeting Adrian came here and we said: can he break that complicated top-down approach into multiple stages? Then we can focus on the node side, or maybe the lower, bottom part, break that complicated problem into small pieces, and then have some of the container engine and CRI changes.
G: So basically, working through those kinds of things, and in the end, with this checkpoint/restore, no matter whether it targets a different node or the same one, through that we can figure out what the wiring is. When he gives this proposal, then we can figure out what the per-node dependency is, and also what the Kubernetes cross-node dependency is, because that's what checkpoint and restore basically want to make happen, and there are a lot of knowns and unknowns.
G: At least I want to see that. Today I don't know how to integrate this with Kubernetes from the high level, top down, but I think this can help move this kernel feature forward through the node level.
A: I'm just looking at this from the perspective that I wouldn't anticipate us merging changes to the CRI without merging the enhancement first, and that felt like the first thing that needed to be done.
A: This work wasn't tracked in what we were talking about earlier, so I think we just need to make sure we get that flow right, and, Adrian, that's basically the expectation we need. And so, who wants to take ownership of reviewing this KEP and ensuring that that process is followed?
F: I can definitely update the KEP, because it's probably four months old, so a lot has changed since then, and I can update it so that it reflects what I have actually done. I think there are still a few open questions there, and maybe if I update the document it will be easier to review; I can definitely do that. But right now I was focused more on the code, just getting it ready.
G: Was this expected to merge anything from the 1.21 perspective? I thought that this was just ongoing, and it is not, at least I don't think, targeted for 1.21. What is the target for 1.21? No? Yeah, so that's why. I think this is just missed communication.
G: We are not expecting anything to merge in 1.21. Adrian is just following up based on the last discussion, to show some demo and to propose some potential changes under the CRI. There's no agreement, but we just want to move forward, and moving forward doesn't mean merging something. It's just helpful to figure out how Kubernetes can utilize this kernel feature, starting from the node level, and the story around stateful workloads. That's all we are talking about so far.
D: So what's the process for this? This is a new area; we want to add something at the lower level so that we can figure out what can be done at the higher level. So do we want to merge the CRI changes, so that Adrian is kind of unblocked to experiment, and we can figure out what we can do at the higher levels?
G: Actually, this is a little bit similar to cgroup v2. We had to figure out how to use cgroup v2 even before we had seen all the discrepancies with cgroup v1; that was a good question too. We were already working with the OCI, we worked with the container engine, and then we started to look at how the Kubernetes level could utilize cgroup v2.
G: So this is kind of similar. Personally, I see this kind of kernel feature connecting in the same way, and we did give some space to cgroup v2: like three years ago, when cgroup v2 was really in a poor state, we basically said, let's have systemd support both cgroup v1 and v2, so we have more room for Kubernetes to move later; there's a migration path to move to cgroup v2. So this is actually...
G: So this is what happened last time, when Adrian, this was a year ago, I forget, was somewhere else, and we really didn't agree about the use cases, that Kubernetes would blindly take that one. So that's why they went back, came here, and tried to figure out the API, and they found that it is hard; like what we said, it is really hard to figure out the end use. So that's why he started from the node side, and I think that's the right approach.
A
But
I
think
we
have
to
work
through
a
couple
of
things
right,
so
minolta
your
question
on
like
should
we
merge
this?
I
think
the
answer
needs
to
be
no
right
like
what
we.
What
we
should
do
is
if
we
do
feel
that
changes
are
needed
to
the
cri
to
allow
experimentation.
A
Those
changes
need
to
go
under
some
well-defined
experimental
part
of
the
cri
like
we
don't
have
a
great
way
of
identifying
that,
and
so,
like
the
current
cap
says:
oh
we'll
just
add
some
new
operations
to
the
existing
runtime
service.
Maybe
we
should
have
a
experimental
service
right
where
stuff
that's
clearly
under
there
is,
is
under
construction,
but
not
fully,
not
necessarily
required
of
a
cri
implementer
to
meet
some
conformance
right
and
so.
G: Again, I don't think we have proposed anything to merge at this moment, but I just want to see the same rule applied as for cgroup v2, and I just want to say that we have to keep the same standard; otherwise it is harder for the open source community to use it. So this is a similar kind of thing, and I totally agree with you.
G: We need to have some experimental mechanism, and the same thing goes for cgroup v2, right; we don't have those experimental services to apply to this either. So that's kind of what I want to say: yeah, we should work on the CRI, on how to mix in those experimental features, including other new kernel features, in the long run. I just want to explain why we think these kernel features can start from the node side to move forward.
A: Yeah, all right. Well, it seems like on this item, just so we can get to other items: Adrian, you'll update the KEP to the latest state of your work.
A
As
a
part
of
the
cri
api
graduation,
we
talk
about
where
we
could
do
experimental
changes.
We
can
get
that
kind
of
ironed
out
and
then
maybe
we
can
check.
I
don't
use
checkpoint
check
back
once
those
updates
were
made,
but
for
121
we
won't
merge
any
particular
item
just
yet.
F: Sure, of course. But so the way to get the CRI changes into Kubernetes is by getting something ready to provide experimental features, correct?
F: Basically, yeah. I just want to get the CRI changes somehow merged so I can continue the work. And, just to make sure I understood it correctly: you think there should be some experimental type with which this can be used without breaking any existing things, and so getting the experimental framework, or whatever it's called, is a prerequisite. Okay, yeah, okay.
A: No problem. Some other context: with the CRI, the kubelet is just a client, right, and so we can maybe have a different perspective on it; but for the evolution of gRPC APIs that were public within kube, we've been asking other KEPs, like when they extended the pod resources API, that if you are a server of that API you need to be behind a feature gate, and it needed to error if the feature was disabled. There was a whole host of...
A: stuff that we need to be careful about. So I think pairing up with Mrunal on the right path for the community to experiment
A: is a good next step. Okay, sounds good, thanks. Okay, let's see: I want to be sensitive to items that were on the tracking list, so I guess, Vinay, if it's okay, can we allow Harshal to talk about his item next? Because I think that was on the not-tracked-yet list. Yeah, sure.
A: Vertical scaling has been getting a long time. Okay, perfect. So, Bobby, was there anything you wanted to talk through on the graceful shutdown KEP, other than...
I: Hey, yeah. So for the graceful shutdown KEP, I just kind of wanted to check in. We had some earlier discussion last SIG Node about it. We want to try to drive it toward beta in 1.21. We think we're in a good state; there might be some changes required, so we just want to have the ability to work on it and be able to graduate in 1.21, and if it turns out we do need more changes...
I: There is a PR to review, so yeah, if there are any concerns I'd want to bring them up; otherwise it would be great if you could take a look.
A: Sorry, I just assigned it; I'll get to it in the next hour. It looks like it should be all right.
H: Yeah, so, Derek, you had one comment there; I just messaged in chat about the metrics, right. Since this is similar to the credential provider plugin, we could probably have the plugin expose metrics like that plugin has, and I'm kind of not sure about that, so I can give an explanation of it here. The credential provider plugin
H: gets executed again and again by the kubelet, whenever there's a matching image name, and that kind of allows it to have time-series data which can be exported as a metric. But in our case, the node sizing provider plugin is going to get executed when the kubelet is starting; if it doesn't start, that's all there is to it. It's not going to make any repeated attempts or anything.
G: I just, like what I expressed last time, don't think metrics are that valuable for this feature. The system-reserved and kube-reserved are well defined, so the customer, or the provider, can specify what they want reserved, and that depends on not just machine size; it also relies on many other dimensions.
G: I think I mentioned that last time: it could be the kernel version, with different kernels using different amounts, and it could also be the system daemons; for example, different productions run the node problem detector differently.
G
Some
is
run
as
the
demon
site,
then,
which
means
it's
not.
You
don't
need
the
either
data
overhead
to
the
either
kubernetes
reserve
and
the
system
user,
but
some
some
production
read
that
as
the
native
system
demons.
So
that's
why?
Then,
you
have
to
take
that
one
into
consideration
for
your
system,
reserve
and
also
npd
is
plugged
in
from
day
one
based
on
the
production
and
what
you
have
so
that
also
have
the
different
ads.
So
that's
why
this
is
why
we
added
that
as
the
config.
This
is
also
initially.
G
We
want
to
have
the
dynamic
company
config
and
you
can
dynamically
config
this
one
another.
It
is.
I
think
I
mentioned
that
there's
a
formula
like
the
system
reserve-
that's
basically
it
is
the
it
is-
will
be
scaled
up
and
down
based
on
the
number
of
the
content
and
the
major
number
of
the
part
they
manage.
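Managed providers do publish machine-size-based reservation formulas of the kind mentioned here. As a sketch, the tier percentages below follow GKE's publicly documented memory reservations of the time (255 MiB plus 25% of the first 4 GiB, 20% of the next 4 GiB, 10% of the next 8 GiB, 6% of the next 112 GiB, and 2% beyond 128 GiB); they are an illustration of the idea, not anything specified by this KEP:

```python
# Sketch of a machine-size-based memory reservation formula. The tier
# boundaries approximate GKE's documented reservations and are an
# assumption for illustration only.

GIB = 1024  # MiB per GiB

TIERS = [  # (tier size in MiB, fraction reserved)
    (4 * GIB, 0.25),
    (4 * GIB, 0.20),
    (8 * GIB, 0.10),
    (112 * GIB, 0.06),
    (float("inf"), 0.02),
]

def reserved_memory_mib(machine_mib: float) -> float:
    """Memory a provider might set aside for system and kube daemons."""
    reserved, remaining = 255.0, machine_mib  # flat 255 MiB base
    for size, fraction in TIERS:
        chunk = min(remaining, size)
        reserved += chunk * fraction
        remaining -= chunk
        if remaining <= 0:
            break
    return reserved

if __name__ == "__main__":
    for gib in (4, 16, 64):
        print(f"{gib:>3} GiB machine -> {reserved_memory_mib(gib * GIB):.0f} MiB reserved")
```

The point being debated in the meeting is precisely that machine size is only one input: runtime, kernel, and which daemons run as DaemonSets versus native system services all shift the right value, which is why no single open source formula fits everyone.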
G
So
this
is
why
we
have
like
the
introduce
magazine,
part
per
node
and
those
kind
of
things,
so
so
this
is
kind
of
why
we
define
those
kind
of
things
is
actually
give
each
kubernetes
provider
a
space
they
based
on
their
production
needs
and
to
configure
that
node
and
to
then
to
provide
the
best
of
the
autofood
resource
handling.
This
is
kind
of
management
services,
part
of
manual
services.
G: There's no good way for the open source community to provide a good default. This is why, in the long run, I initiated dynamic kubelet config: then you could, based on max pods, based on different dimensions, even let the customer customize those things, not just the cluster admin. I think this is why I have doubts, because right now, while I understand the intention and where it came from, I also see this is connected to: oh, we provide another provider.
G: The problem is: what's the value of our provider? It's just using more resources, carving more resources out of the node, but not providing real functionality or customer value to the users. I just want to repeat that I don't think that's very useful, especially provided by the open source community.
H: Yeah, I understand what you're trying to say here. The main motivation of this KEP was to address a problem that we were facing and saw in production. To give a concrete example: we had an incident where the kubelet was causing node lockups on the Azure platform, and not so much on GCP, so these different platforms were behaving differently.
H
You
know
so
essentially,
if
you
have
a
part
which
was
a
very
memory
intensive
workload,
you
would
have
on
our
azure
workloads
with
this
or
azure
networks
will
just
not
work.
All
the
nodes
will
start
to
go
in
the
not
ready
state
while
the
same
values
used
to
work,
fine
on
gcp
or,
let's
say,
aws
right,
and
then
we
started
doing
something
which
just
kept
doing
automatically.
So
essentially,
if
you
use
azure,
we
will
try
to
bump
up
the
system
reserve
to
higher
values
and
it
will.
H: the node will not go into the NotReady state; on the GCP side, you didn't have to do much.
H
What
this
cap
was
trying
to
do
is
allow
those
cloud
providers
to
provide
that
functionality
where
the
cubelet
is
coming
up
and
they
can
probably
have
a
fine-tuned
value
of
a
system
result
that
might
work
with
them.
Essentially
what
we
are
doing
manually
it
just
tries
to
automate
that
that's
all
to
it.
G: So, sure, I understand you. What you described is exactly what I said earlier, right: the open source community cannot give you that value, because, as you found, even at the same machine size, when it's provided by different cloud providers, the usage may be different. So that's exactly what I'm saying.
G
So
that's
why
this
is
why
we
are
perfect
not
give
out
that
space,
so
then
each
provider
when
they
provide
the
kubernetes
services
they
have
to
do
that
based
on
their
production.
So
that's
kind
of
a
thing.
So,
but
that's
exactly
we
have
that
flag,
not
a
flag
right
now.
It's
config
to
give
the
space
for
all
the
provider
or
vendors
kubernetes
vendor
to
configure
their
offer.
That's
exactly
what
we
are
doing
here.
H
I
would
slightly
defer
on
that
yeah
we
are
providing
that
config
option,
but
it's
still
a
very
manual
process
like
the
cubelet
is
on
that
node
and
it's
not
able
to
take
advantage
that
it's
on
that
note,
and
it's
not
getting
assistant
from
any
other
component
to
arrive
at
that
value
it.
Unless
and
until
someone
manually
puts
it
there
and
it's
not
just
support
cloud
provider,
it
could
be
applicable
to
any
any
private
deployment
as
well-
and
that's
that's
where
we
saw
the
value
in
that.
G: So, Harshal, what you describe is actually exactly my original thinking for dynamic kubelet config, but it never really moved forward, so we decided to completely deprecate that feature; no other provider was interested in it that way. But that was the initial thinking when I pushed that feature, and that's almost five years ago, and we never, not even GKE, were able to successfully use that feature, so we worked around it. So that's GKE.
G: Today, people see that GKE does set those kinds of things through the cloud provider and propagates that information to the node; when the node comes up, it picks that up.
A
Maybe
don
a
macro
question
I
would
have
is
originally
we
configured
cube
with
nothing
but
flags,
and
you
know
that
that
became
a
pain
point.
And
then
we
moved
to
configuration
files.
G: No, but do you want to give the cluster admin that ability? Because basically it is: I have the control-plane-level config, and it decides, for some node or some node pool, a set of nodes, a group of kubelet configs and a group of Docker daemon configs, and the engine could be configured on that node based on a certain policy. That's originally how, in my mind, it would be pushed.
G
I
know,
that's
the
misunderstanding,
and
this
is
this-
is
this
is
also
concerned
from
the
from
the
people.
Thinking
about
it
is
like
in
the
user,
could
dynamic
configuration
per
node,
and
so
this
is
why
I
have
that
concept,
but.
A
The
the
point
I
was
trying
to
raise
was
not
like
a
particular
challenge
on
the
nature
of
the
implementation
or
thing
more
like
we
had
flags,
then
we've
moved
to
files
files
on
disk,
and
then
we
allowed
files
to
be
retrieved
from
a
a
kubernetes
api
endpoint
conceptually.
We
could
support
the
cubelet
reading
config
from
other
locations
than
just
those
two,
and
so
what
I
view
this
cap
is
trying
to
argue
is
in
many
environments.
Filebase
config
is
also
onerous
or
painful
right,
and
so
then
you
could
explore.
A
Can
I
source
my
config
from
another
remote
location,
or
can
I
source
my
config
from
another
remote
shellable
binary
and
that's
what
I
kind
of
feel
like
this
is
calling
out
as
a
potential
alternative,
in
the
same
way
that
the
keyboard
today
can
source
config
from
static
files
on
disk,
as
well
as
from
api
servers
right,
there's,
multiple
config
sources,
and
I
guess
what
I'm
curious
is
if
anybody
else
has
found
pain
in
certain
fields
being
sourced
exclusively
from
config
files,
where,
even
if
we
don't
move
forward
on
this
cup,
are
there
other
ways
that
we
could
source
config?
A
That
don't
come
from
the
kubernetes
api
server
endpoint
itself
and
what
would
be
other
approaches
for
that,
because
everyone's
management
model
might
vary
or
that
type
of
thing.
But
I
I
have
to
think
that
we're
not
the
only
ones
who
sometimes
have
pain
points
with
only
file
based
configuration
models
so.
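The "multiple config sources" idea being discussed can be sketched as layered resolution: a kubelet-like component merges its configuration from sources in order of precedence (built-in defaults, then a file on disk, then some remote or provider-supplied source). The field names and sources below are illustrative, not the kubelet's actual resolution logic:

```python
# Sketch: resolve configuration by layering sources, later sources winning.
# Keys and sources are made up for illustration.

def resolve_config(*sources):
    """Merge config dicts; later sources override earlier ones, key by key."""
    merged = {}
    for source in sources:
        merged.update({k: v for k, v in source.items() if v is not None})
    return merged

if __name__ == "__main__":
    defaults = {"maxPods": 110, "systemReserved": {"memory": "100Mi"}}
    config_file = {"systemReserved": {"memory": "1Gi"}}    # file on disk
    provider = {"systemReserved": {"memory": "2917Mi"}}    # e.g. a node sizing provider
    print(resolve_config(defaults, config_file, provider))
```

The design question in the meeting is exactly which layers are worth supporting: flags and files exist today, and the KEP argues for a provider-supplied layer alongside them.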
G: So the file-based approach is kind of a middle-ground approach, right. Like GKE: they don't need it, since the file could actually come from instance metadata. But we didn't go that way, because we considered many other productions, like OpenShift; we had to consider OpenShift, and bare metal, and so...
G: This is why we made it file-based. There's another reason you don't want the only source to be some provider, some control plane, some controller: that would mean that when the nodes come up they don't have the proper configuration. So the original thought is that you get what you got last time: you persist the config from the previous run. So when the node comes up, there could also be an initial config.
G
If you look at the initial communication between me and Michael Taufen, you can see that I wanted the node, out of the box, to have a golden config baseline, which could also be based on the machine type. But then you could have control through the dynamic kubelet config controller: the provided config then gets serialized on the node.
G
So when the kubelet crashes, the next time it comes up it can pick up the previous config, whether it came from the admin or from the cluster. The file-based configuration at least solves that problem: you can persist the previous best config and come back up with it. But you're not limited to only the file-based source; there are many ways today to deliver the config and get those things into your config. That's what I just wanted to share here.
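The persistence behavior described above (serialize whatever config was last applied, and fall back to it when the remote source is unreachable at startup) can be sketched roughly like this. This is an illustrative Python sketch, not kubelet code: `fetch_remote_config` and the checkpoint path are hypothetical names invented for the example:

```python
import json
import os
import tempfile

# Hypothetical on-disk checkpoint of the last config successfully applied.
CHECKPOINT = os.path.join(tempfile.gettempdir(), "kubelet-last-known-good.json")

def load_config(fetch_remote_config):
    """Try the remote source first; on failure, fall back to the
    last-known-good checkpoint persisted by a previous run."""
    try:
        config = fetch_remote_config()
    except Exception:
        # Control plane / provider unreachable at startup:
        # come up with what we got last time rather than with no config.
        with open(CHECKPOINT) as f:
            return json.load(f)
    # Success: checkpoint this config for the next restart.
    with open(CHECKPOINT, "w") as f:
        json.dump(config, f)
    return config

def unreachable():
    raise ConnectionError("control plane not reachable yet")

# First boot: the remote source works, and the config gets checkpointed.
print(load_config(lambda: {"max_pods": 110}))
# Crash and restart while the remote is down: the previous config is reused.
print(load_config(unreachable))
```

This is the trade-off being discussed: a purely remote source leaves a freshly restarted node without proper configuration, while a file on disk doubles as a durable last-known-good copy.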
G
So that's why I have the question: we keep adding things on the node, running something that just uses the user's resources without providing real value. I do have the question: what's the real value for the customer here?
A
Yeah, sorry.
C
Just to quickly jump in with a time check: I think we've got a few more things on the agenda and not a lot of time. Do we want to move on?
A
Yeah, basically I was going to say we can move on from this particular topic, and that's fine. So the next topic, Andre I think it was: are you just looking for a PR review, or...?
G
I just want to mention that for this one, the promotion to beta, the PR is merged. But along with that one there's actually another KEP, also merged, which is: we are going to allow the pod security context to be applied to ephemeral containers, and allow that to be customized. There was a concern raised by Tim Allclair.
We
are
going
to
allow
the
power,
the
secretive
contacts
being
applied
to
the
a
formula
container
and
allow
that
is
being
customized
and
there's
the
constant
used
by
the
team
eclair.
G
But the concern is reasonable, and he is also okay with this kind of approach, which is the whole approach we applied here. It actually got feedback from users who tried the alpha feature, from cluster admins and also from new users. I just want to share that here, and if you have concerns, please raise them even though it's merged.
G
In our 1.21 planning I did mention merging this one, and we also merged another KEP; I have the link to that KEP. I just want to share it here, because it wasn't until I reviewed this KEP that I realized there's another KEP I have to engage with together with it. That's why we're spending some time to figure it out, but I wanted to raise it here.
C
I had, I guess, announcements, if we don't have any other business: the triage things.
C
Great. So we have worked for, I think, a couple of weeks now on a triage process to help new reviewers get involved in SIG Node triage. The PR is there; I think it just needs a final approval. I don't think there's any more feedback that needs to go in there. But it's for the folks who are interested in getting involved in more SIG Node triage and SIG Node PR review.
C
That doc is now there, and you can refer to it. It will walk you through everything from "I'm a new contributor to Kubernetes, what things can I do?" all the way to how you become a reviewer and an approver, and what those roles are in SIG Node.
B
Yeah, so we asked people what time works for everybody, and we didn't get a lot of responses. But based on the few who come to the meeting most often, we decided that we want to move to Wednesday 10 a.m. PST. So I will reschedule the meeting, starting not this Wednesday but next Wednesday; this week we already met.
A
Yeah, that sounds great. I know for myself that's also a better time as well, so I appreciate that. All right, unless there are some other items, we can adjourn for the day. We'll talk to y'all later. Bye, everyone.