From YouTube: Kubernetes SIG Node 20210119
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Okay, today is January 19th, so it's our regular SIG Node meeting. Before we go through our agenda, I just want to say that Sergey and Derek and I discussed briefly last week, separately, about building a new bug triage process. It's not finalized yet, but we already see that Elana and Sergey started doing this a while back, and we just want to have an official process so the whole community can contribute and help the effort. I just wanted to touch base on this a little bit; we are going to bring this topic back for further discussion and to finalize the process. So now let's go to our regular agenda. Sergey, do you want to start with your topic?
B: Yeah, I just wanted to say that we keep doing good work keeping our PRs down; we're not growing in PRs. Last week we merged one of the PRs in a test group that broke other tests: it was fixing one test and broke another one. So thank you to everybody involved for the quick turnaround in fixing it.

B: This week, out of the closed PRs, there are only two rotten ones. One of them requires a KEP, so you may want to take a look at it, but it's not that critical; the other one is fixing a bug, so you may want to pick it up if you're interested. Other than that, good work, just keep making progress. It's great.

B: And I could go with the next item as well. For the KubeCon maintainers track, Elana and I suggested that we could do an overview of SIG Node. This is a virtual event, so it will be recording only; I mean virtual only, no actual presence. There will be discussions in Slack as usual.

B: If you want to participate in setting the agenda for the maintainers track session (we think a good idea would be to overview some KEPs that are planned to be implemented), we can definitely include you in the recording and work together. So please reach out to me or Elana if you want to participate and help bring more community into SIG Node.
C: Oh, I guess it's me here. Let me share my screen. I had two items, sort of in a row; they're kind of related. I have been doing a bunch of triage work to try to get an idea of where we are at with SIG Node things. Can everyone see my screen? Great, okay. So I have two things on here.

C: One is the non-CI PR board, because we already have a project board for the CI subgroup, and then a KEP triage spreadsheet that I had been tasked to do, not last meeting but the meeting before. We didn't get a chance to review this last week because the agenda ran long, so I just wanted to show people. This is what I've put together for the SIG Node PR triage board.

C: I have, I think, five columns here (I can't count), and each column hopefully has a description of what needs to happen on that PR, to give people a better idea when they sit down. They're like: okay, I want to review some PRs for SIG Node; where is that PR at, what action is needed? Anything that has a do-not-merge, a needs-ok-to-test, or some sort of label on the PR that says there's something wrong with it, I initially throw into this triage bucket, just to be able to give them a first pass: close them if they're not applicable, fix the problems with them, and get them ready for review.

C: If there's anything that's waiting on the author, such as they've been given feedback and need to make changes, or it's a work in progress, or it's on hold for some reason, I put that into this waiting-on-author column.

C: If it needs a reviewer, meaning somebody actually needs to go sit down, look at the PR, and put an LGTM on it, I've put it in this needs-reviewer column.

C: You'll note that I haven't been doing anything with the needs-triage labels, because I wanted a chance to discuss that with a wider group before I started unilaterally applying labels. But I think maybe, once things move out of this triage column, we can put the triage-accepted label on them; that way everything that's not in the needs-triage column will have that label and we won't worry about it.

C: Anything that's been merged goes into done, and sometimes I'll throw things into done if they have an LGTM and an approve from someone in node but need another approval, so it's no longer our problem. This done column is nice because I can actually automate things going into it; for the rest of them, GitHub's automation is not very sophisticated, so I've been doing them kind of manually. You may ask how things get onto this board, given that the automation is not great, and the answer is:

C: I have this query, which basically looks at all the PRs, checks if they're open, checks if they have a sig/node label, and then makes sure they're not part of the testing subgroup. Then I can go and add them to the various columns. Like that one needs a release note; I'll throw that into needs-triage.
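The query itself is not shown in the recording. As a minimal sketch, a GitHub search along these lines could produce that list (the exact label used to exclude the testing subgroup is an assumption):

```
repo:kubernetes/kubernetes is:pr is:open label:sig/node -label:area/test
```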
C: This one has an LGTM and it's just a typo fix, so maybe I'll throw that into the approver column; this one looks like it needs a reviewer; and this one, I think, is a dependency thing. It's got like 20 million SIGs on it; I keep taking sig/node off of it and it keeps getting reapplied, so I will just ignore that one.

C: So that's kind of an overview of the board. Since I put this board together, it's been helping me keep track of which PRs I need to review as a SIG Node reviewer, what things need approvers, that kind of thing. Since I started this board, we now have like 35 things in the done column, and more things in the needs-approver column than in needs-reviewer, which is kind of exciting: basically, reviews are no longer the bottleneck, which is cool. I guess I've been talking about this board for a while, but hopefully that's a good overview. Does anyone have questions about it?
B: One small comment: it would be great to drive the number of PRs down, and in that case it wouldn't be such a tedious job to keep the board up to date. So hopefully we can get it to manageable numbers.
C: Yeah. Honestly, I haven't found it too tedious to keep the board up to date. We do have a pretty large number of PRs right now, but when I started working on this board, I think there were like 40 things in the needs-reviewer column, and now it's down to 20. That's a lot more manageable, and as we keep driving things down, I think it will become more and more manageable.

C: We may even be able to ask the k8s infra folks for help; they may be able to add some automation on top of this for us, like an automatic label check, because I know that doesn't come in the native thing. But I think you can add that on top of GitHub Projects as part of the API.
D: I think Lauri Apple (she's all over the project) has a lot of guidelines and tips for triage workflows that SIGs can use. So you can reach out to her on Slack.
C: I would love to propose that right now. We currently have the CI subgroup meeting on Mondays, and it's currently scheduled for an hour, but in my experience over the past month we usually only use the first half hour of that meeting for the CI stuff. So I'm wondering if maybe we could set half of that meeting to be general node triage. Now, you may note I have not talked about any of the bugs.

C: I have some ideas there, but it's less critical in the short term, I think. In the short term it would be great to basically add this as a section to the CI meeting, but I know that meeting is maybe at a time that not everyone can attend right now, so I don't know if that's something we need to consider moving.
A: This is exactly what I brought up at the beginning of the meeting. We are still discussing the formal process for how to do this. Derek and I do feel we should have a formal process for bug triage, and the CI subproject has actually made huge progress in the past couple of months and is in a healthy state right now.

A: So we thought about either another meeting (maybe that is too much for a community like ours), and there are also concerns that people raised. That's why, when Elana, Derek, and I discussed separately, we thought about maybe combining this with the CI subproject. But there are other concerns people worry about there too, because usually people want to talk about features, right: the features in PRs, or their production issues. Nobody really wants to talk about testing.

A: So what's the best way to balance all this effort? I'd like to hear people's thoughts on this one. Another concern people raised is that they worry about making this too generic a SIG Node meeting. My initial idea, when I talked to Elana, was to keep this meeting more focused on enhancement and feature discussion, design review, API review at the node level, and also the critical bugs where bug triage

A: cannot make a decision, so they get brought up here. Instead, right now it's a more loosely managed format: any issue, no matter how critical it is, gets brought here with an "oh, please review this." If we have this bug triage strategy, and we also have the PR triage board, that can hugely help reduce those kinds of things. Then in that meeting we can focus on the bug triage, the PR triage, and also testing status checking and triage.

A: So we are still working to finalize this. And also, like Elana said, I believe both Derek and I have a little trouble attending the Monday meeting (time conflicts), so we don't want to set up another meeting and then say: oh, still bring it to SIG Node.
E: Right, the thing is, we don't need everybody to be attending that meeting. This is the big meeting, so this is kind of like SIG Architecture, where we divide things into subprojects and say: okay, each subproject, go do something and then come back to us to report. So I would like to think of it like that; you delegate. In this case, all I was telling Elana was: you're doing it by yourself.

E: Can we do it together? It doesn't have to be everybody, but a set of people who are interested in doing this triage can hop in. It doesn't take anything away from the technical leads or the chairs; it's more about how we keep track of this board.
C: I'd love to be able to say: we've got this designated time, it's the second half hour of the weekly CI meeting agenda, here's our policy for how each column on the board works, and anybody can come in and review a PR, or approve a PR if they're an approver, that kind of thing. In sharing the board, I have been encouraging some folks who've been interested in getting more involved in SIG Node: hey, why don't you go take a look at the triage column?

C: Make sure the PR is ok-to-test, that all of the do-not-merges are addressed, and then, when it's ready, either give it a quick look over or, if you don't feel comfortable with it, move it to the needs-reviewer column. So it would be great to have all of that written down, super public, and give people an opportunity to contribute.

C: Although I do worry somewhat that what we're going to find is that we will just be super blocked on reviewer and approver time.
E
Which
is
a
good
thing
elena,
because
we
are
like
we
knowing
the
wheat
from
the
chaff
right.
So
then
the
approvers
can
only
focus
on
the
things
that
they
need
to
look
at
at
a
minimum
right.
A
So,
let's
continue
on
this
one.
This
is
really
great
and
the
needs
to
finalize
that
process,
and
I
think
that
at
least
the
front
of
the
dark
and
the
eye
derek
cannot
attend
to.
Even
so
we,
we
briefly
exchange
our
thoughts
before
this
one,
so
we
we
want
to
have
the
formal
process
and-
and
we
want
to
more
people
in
the
community
and
have
the
way
to
to
communicate,
and
so
we
want
to
velocity.
A
So
basically,
we
hope
this
year
we
can
achieve
some
of
the
review
velocity
and
and
give
the
community
developer
more
better
experience,
and
so
so
that's
the
general
goal,
at
least
the
general
goal
from
me
and
the
director,
and
but
we
need
the
earlier.
I
share
some
concern,
that's
with
from
other
people,
so
then
I
want
to
also
make
sure
we
are
like,
like
the
one
make
sure
we
are
kind
of
the
understand,
their
concern
more
and
then
finance
these
kind
of
things
and
find
a
way
to
satisfy
the
all
great
wall.
Here.
Yes,.
C
That
sounds
good
to
me.
Are
we
good
with
basically
like
tabling
this
for
the
next
ci
subgroup
meeting
and
saying,
like
that's,
where
we'll
continue
this
discussion
in
terms
of
formalizing?
This.
B: I'm not sure if the audience will be the right one, but we can say that the second half will be about triage, just to make sure that everybody who comes knows what to expect and won't be surprised.
C: Yes. Well, then I guess I have this KEP spreadsheet, which hopefully will be pretty quick. Since our last meeting (two meetings ago), I went through the KEPs repo and tried to get an idea of where all of the KEPs in SIG Node are and what state they're in. We have a lot of them that are kind of stalled, or there's a proposal but it hasn't been approved or merged, or some of them are just an issue.

C: Basically, if you have a stalled KEP or a proposed KEP or something like that with your name attached to it, I'm just hoping that you can take a look at it and say, hey, for example: for dynamic kubelet configuration there was sort of a call that we should just deprecate it. But for a bunch of these other stalled ones we haven't made a decision on them. Do we want to deprecate them? Do we think they're still a good idea?

C: Do we want to move forward with them? I don't know. The new enhancements process is going to require the SIGs to take all of their KEPs to the release team and say what they're working on for the next release, rather than the release team tracking everything that has historically existed, which is what they've been doing previously. So I guess we have to have our ducks in a row to do that, and this is just the pre-work that I've done to try to keep track of it.
A: Thanks for figuring out the status. For a lot of the ones marked stalled, maybe it's just missing a reviewer, or the people moved projects. But some of them, at least as far as I can see, may not represent the actual status that we talked about.

A: For example, in one part I saw one marked as stalled, but actually I had misread it and the real state is better. And for the kubelet resource metrics endpoint, at least, I talked to David Ashpole, and he would like to move forward with that one; he will own it and move it forward.
C
Also
spoken
with
him
on
that
one,
so
not
to
get
too
bogged
down
in
the
weeds
on
specific
ones,
but
this
is
like
the
status
is
set
according
to
what
is
said
in
the
issue,
so
it's
possible
that
that
got
out
of
date,
but
those
issues
like
oh,
I
don't
have
a
release
set
for
this.
I
don't
have
a
release
set
for
this.
I
don't
have
a
release
set
for
this,
and
so
when
I
see
that
I
have
just
marked
them
as
stalled,
it
doesn't
necessarily
mean
the
person
doesn't
want
to
work
on
those.
A: Okay, so maybe the next step is to figure out the status, not just from the KEP: we want to figure out the feature's current real status and also our desired state for that feature.

A: Basically, that means we call out, for example: we want to deprecate that feature because nobody is using it; or part of the feature has already been utilized by the community by many people, and Kubernetes component config has been there, so it kind of half serves that purpose already, but it's marked stalled under the name of the person working on the feature. That's the kind of real process, real status, we need, so we can act.

A: Maybe not today, since we have a lot on the agenda, but maybe we can work on this every week: we discuss some stalled KEPs and projects, and for the stalled ones where we don't know who is going to own them, we can discuss and figure out who is going to own them and who the reviewer is, and so move forward.
C: Yeah, that would be great. I think Renault did a really good job at putting together the "what do we want to do for 1.21" document, which is sort of a smaller slice of this one. This is more for the long-term view.

C: We can sort of slowly work through this over time, but it's just intended to be a starting point, especially given how we're moving in terms of SIGs having to manage their own KEPs, including even picking the ones that they want for a given release, and making sure that, because we have so many, we're actually tracking all of them and everything is represented here.

C: So that was the goal of this, and hopefully, if your name's on here and you own a KEP, it's in a good state and we can figure out how to move forward with it, not necessarily today, but at some point. And I think enhancements freeze for this release is upcoming on, I think, the 9th of February, sometime that week. So it's soon; I just wanted to give everybody the heads-up in advance.
H: Yeah, so Elana, Harsha, and I have been looking at this issue at Red Hat, where the problem statement is: when a pod is being created, the sandbox is in the creation state, and then the pod gets deleted, and errors happen when that scenario occurs. We're seeing this JSON "EOF" message on the issue, and in some cases I believe the pod doesn't restart as well. Basically this is with CRI-O and systemd and how the kubelet is working.
A: I didn't see this problem a lot; earlier, before we had the CRI, we did, but after we had it I hardly heard of this problem, maybe because people didn't report it or worked around it. But we did see this a lot in early Kubernetes.
C: I have a suspicion as to why we're not seeing this a lot. Basically, what it looks like to me is: we have a goroutine, a sync loop, in the kubelet, and we have a separate one in the kube runtime manager, and they are not synced; the critical sections are not synced.

C: So we're basically running into race conditions, where there are assumptions made about the state of things, but the state changes out from underneath them. We've got these time-of-check-to-time-of-use bugs: you checked it earlier in this sync loop, but this other sync loop is still going, and now it's changed from under you and the pod is gone, but you're still trying to create the sandbox, which is resulting in this kind of spread.
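A minimal illustrative sketch of the time-of-check-to-time-of-use pattern being described, in Go. This is a toy example with invented names, not the actual kubelet code:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// podStore stands in for shared pod state that two independent
// loops read and mutate. Each method is individually locked, but
// nothing locks the whole check-then-act sequence.
type podStore struct {
	mu   sync.Mutex
	pods map[string]bool
}

func (s *podStore) exists(name string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.pods[name]
}

func (s *podStore) delete(name string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.pods, name)
}

// kubeletLoop models one sync loop: check that the pod exists,
// then act on that answer a little later.
func kubeletLoop(s *podStore, name string) {
	if s.exists(name) { // time of check
		time.Sleep(10 * time.Millisecond) // the other loop runs here
		// Time of use: the pod may be gone by now, so "creating
		// the sandbox" fails with confusing errors.
		if !s.exists(name) {
			fmt.Println("pod deleted underneath us; sandbox creation fails")
		}
	}
}

// runtimeLoop models the other sync loop deleting the pod in
// between the first loop's check and its use.
func runtimeLoop(s *podStore, name string) {
	s.delete(name)
}

func main() {
	s := &podStore{pods: map[string]bool{"mypod": true}}
	go kubeletLoop(s, "mypod")
	go runtimeLoop(s, "mypod")
	time.Sleep(50 * time.Millisecond)
}
```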
C: The reason I don't think we're necessarily seeing this on a wider basis is that this is all in the kube runtime manager, and I think the majority of Kubernetes users are still using dockershim, which is a totally different code path; they're not going through any of this code. So we're only really seeing it on CRI-O and the other runtimes going through there.
E: Elana, I was looking through it because you pinged me before the meeting, so, two pieces of information. One: the JSON, you know, the file-closing thing, seems to come from CRI-O. And the second one: I was looking for the prefix "CreatePodSandbox for pod" in our aggregated logs, and I don't see it either. Our upstream CI does have a whole bunch of containerd stuff, so I would have expected to see at least a few hits in the last two weeks. So this might be some additional information which will help you nail this down a little bit.

E: I have a feeling that containerd has already taken care of it, which is why we are not seeing it, and CRI-O hasn't yet, which is probably something to look at.
A
So
at
least
after
cri
and
we
switched
to
using
docker
share
and
also
we
we
and
also
we
have
like
the
container
d
product
usage.
I
didn't
see
this
problem.
That
means
the
deal
reporting
me
and
in
the
past
I
see
this
a
lot.
That's
that's
for
sure.
So
we
spend
a
lot
of
time.
If
you
dig
into
the
old
bug,
you
can
see
a
lot
tons
of
those
people
report
this
one,
and
so
we
have
to
we
even
have
to
in
the
product
and
suppose
we
have
to
work
around
that
issue.
C
Yeah
there's
a
few
things
that
I've
seen
like,
for
example,
there's
another
bug
that
was
reported
with
like
showing
similar
issues
with
like
a
race
on
volume,
tear
down
when
you
have
like
a
pod
like
created
and
then
quickly
deleted,
and
I
don't.
A: So people are trying to work on how these components expect each other, and how to detect and work around it, and many other similar generic issues. There are a couple of proposals open that try to address that problem; and also, I believe, David Porter's node shutdown work touches on some of that, but not all of it. But to me that is slightly different from this problem.

A: This problem is more like: how are we going to handle the file system operations? Because they're not atomic; it is by design that this problem exists. Then the question is how the application uses the layer above to manage it and to mitigate that race condition on the return of the system calls.
B: I haven't seen reports of this problem, but yeah, I will take a closer look and we'll keep an eye out, for sure.
A
Thanks
thanks
for
reporting
this
one,
but
we
need
to
stay
closer
on
this
one,
the
most
concern
it
is:
it's
not
a
regression.
Hopefully
it's
not
a
regression
in
the
1
out
of
20.
So
that's
why
I
want
a
thankful
report
and
we
need
to
pay
extra
attention
if
there's
no
regression-
and
I
think
we
need
to
look
at
the
crowd
hopefully
and
to
drill
down
into
the
detail
like
that.
I: Yes, I can do that. So, hi: I've been working on checkpoint/restore in Kubernetes since about last summer, and I opened the KEP some time ago. I did a minimal implementation, and then the first feedback was that it would have been nice to see more.

I: So that's what I'm currently working on. For my first implementation, the first thing I tried to touch with checkpoint/restore, because there's this pod migration issue open since 2015, so it's something people have talked about for some years, my idea was to let reviewers get an overview of an end-to-end implementation. I'm currently trying to implement it as part of drain, so you can say "kubectl drain, checkpoint" and all the running containers will be checkpointed, and then

I: you reboot your node or whatever, and upon reboot and kubelet restart, all the containers will be restored again.
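As a rough sketch of the flow being described (the checkpoint option on drain is the speaker's proposal, not a merged kubectl feature, so the flag spelling here is hypothetical):

```
# Proposed, not merged: checkpoint all running containers while draining.
kubectl drain my-node --checkpoint

# Reboot or upgrade the node.
systemctl reboot

# On kubelet restart, the checkpointed containers are restored.
```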
I: I currently have this almost working, so the containers are back up after a reboot.

I: I just still have some metadata issues: the containers are running, but then the kubelet kills them because it doesn't know them yet. So I think the hard technical problems are solved there. One of the pieces of feedback I got on my different tickets and pull requests was to bring the checkpoint/restore discussion here to this meeting, and that's basically what I wanted to do: to give people a chance to

I: maybe ask me something directly, if there are obvious questions that are easier to answer here than in one of the pull requests or tickets. So I just wanted to be here for people to ask questions, if there's something you want to know about my work.
A: Adrian, I've been looking forward to this; I've been waiting for people to update this feature for a long time. But I have some basic questions, so please forgive me the big question. I did have a look at your KEP, but how is this going to work with the controller or scheduler?

A: That's kind of the important thing: have we discussed this with the control plane side? And also, to make the use cases of this feature more clear:

A: this feature could be incorporated with deployments, and this feature could also be incorporated with upgrades. What's the upgrade story? Because today's upgrade needs what we call a full upgrade; we are not doing in-place upgrades. At least, I think SIG Node didn't implement in-place upgrade, so you send a signal, you shut down the node, and then you do the surge upgrade, all those kinds of things. So this can help.

A: Like I said, when you shut down, you basically don't need a cold shutdown of the workload; you can move it to another new node which has the new version, and reduce the migration or upgrade cost. So this is a really cool feature. This is exactly it: when we first started, we said not to focus on in-place upgrade, and it's nice to see that what we can do is checkpoint. But we didn't work out those stories yet. Have we picked

A: one use case, thinking about checkpoint and restore in the future, with an end-to-end description in the top-level KEP, for how you would use this feature?
I: Yes. So, one of my problems is that the topic is huge. When a pod is running and you start talking about automatic migration from one node to another, it gets really complicated, and that's one of the reasons I first looked at the drain scenario, because this is basically what happens when the user says: I want to shut down the node and I want to restore it.

I: It has very little, at least as I understand it, very little policy or scheduling implication, because it happens when the node is shut down, and it comes back when the node is restored. My idea is: if we can find an initial implementation which is acceptable to get merged into Kubernetes, then it is easier to start discussing migration policies.

I: If I understood you correctly, the question is how a container or pod can migrate from one system to another, and that's the reason why I selected the drain use case: it's user-triggered, or the node is shut down. So there's no scheduling involved or anything more complicated. I'm afraid there's so much code I'd have to dump into my PR, and I think it's already...

I: It already feels really big and hard to review. So I'm trying to find a minimal implementation which gives an overview of what it does and what could happen if this is integrated into Kubernetes, but still gives the reviewers the possibility to review it and understand what I'm changing.
A: That totally makes sense. So, in doing that earlier prototyping, did you find that Kubernetes, or at least the CRI, preserves some local node dependencies that would break this checkpointing? To really make checkpoint and restore work, you cannot have local dependencies; right now we need to know the dependencies and, at a minimum, make them abstract.
I: I'm not sure. So, you're right: if there is a dependency on the node, then it's not possible. The checkpointing used below is based on CRIU.

I: Whenever the dependency on the node is a dependency on a certain hardware device, then you basically cannot checkpoint, restore, or migrate it, because you depend on the hardware state. So if it's, I don't know, some kind of accelerator or something, then it's not possible.

I: What I know from working on checkpoint/restore on, for example, Podman: what we do there is, if you have a volume mounted into the container, then we either remount it upon restore or, if it's what I think is called a named volume, the named volume is included in the checkpoint and restored later with the container when you restore it. So that's how we deal with dependencies. If it's a hardware dependency, it doesn't work at all.

I: If it's a storage dependency, then if the storage directory or device is available on the destination node, it's doable; and if the storage volume is included in the checkpoint, then it's also doable. That's what I can say about dependencies right now.
I: I can update my pull request, and I already split out the CRI API changes into a separate pull request. If there is consensus that this is a feature which will be, or can be, merged into Kubernetes, then I think the thing that would help me move it forward would be somehow getting the CRI API changes reviewed and merged, so I can get my patches for CRI-O and crictl merged, which depend on these API changes. Once the API changes are all there and implemented, the pull request to actually implement the feature in Kubernetes would be the next step. Initially I thought that's how I would like to do it.
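For a sense of what such a CRI change involves, here is a hypothetical sketch of a checkpoint RPC added to the CRI runtime service. The transcript does not show the actual proposed API, so the message and field names below are illustrative assumptions only:

```protobuf
// Hypothetical sketch, not the merged CRI API.
service RuntimeService {
    // CheckpointContainer writes a checkpoint of a running
    // container to the given location on disk.
    rpc CheckpointContainer(CheckpointContainerRequest)
        returns (CheckpointContainerResponse) {}
}

message CheckpointContainerRequest {
    // ID of the container to checkpoint.
    string container_id = 1;
    // Path where the checkpoint archive should be written.
    string location = 2;
}

message CheckpointContainerResponse {}
```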
I: But then there was the comment that it would be nice to see the end-to-end implementation before merging the CRI API changes. I think the implementation will take much longer to review than the CRI API changes, though.

I: So I would just come back here to the meeting when I think I'm ready to have the final API changes reviewed, bring it up once more, and if the group is okay with moving this forward and the API changes make sense, then I think this would be the way for me to move forward: if the API changes can be merged, all the other work on top of them can continue from there.
J: Oh hi, Adrian, this is Ritesh. Thank you for bringing this up. One quick question: currently it looks like, with the drain approach, the checkpoint is tied to a node, and there is this extra step of moving that checkpoint and then restoring it.
A: The audio quality is pretty bad, yeah.
J: I switched off the video; maybe that was the issue. So yeah, my question was: currently it looks like a checkpoint is tied to a node. Once you've checkpointed, that particular checkpoint is available on that particular node, and the extra step involved is to move the checkpoint to a different node, even in the case of drain, and only then can you restore it.

J: I was just juxtaposing this with how containers work, where the image is available in the registry, in some central location, and you could always, when you are draining, have the pod come up running on a different node without noticing. That's the question to Adrian; I don't know if you have given it a thought.
A: Let me try to rephrase it. Basically, what he asked is similar to what I initially asked: have you talked about the control plane, how to make this an end-to-end user experience? I saw a couple of use cases earlier and asked: can we do that? You kind of answered my question, actually, but his follow-up is more like saying:

A: right now we know how to checkpoint this container during the drain case, but there isn't a restore process yet for how we are going to restore it. So he is asking whether you have thought about

A: whether this checkpoint can be restored on the target node. Sorry, Ritesh, this is what I understood when you spoke. Your question is similar to mine, but it follows up even further: beyond what I asked about how Kubernetes does end-to-end management of the container orchestration, whether we talk about the scheduling and the control plane, he's basically saying most users would go: okay, you already checkpointed, that is ready during the drain...
I: So it's a question of how you can move a checkpoint from one system to another one? Yes. This is something I actually don't know, because I'm missing the background on how it could look. Right now, what I get is basically a tar archive on disk, and from my previous knowledge with container engines (not orchestration), you just copy the tar archive from one system to another and then you restore it.
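With a standalone container engine, the flow he describes looks roughly like this; the Podman commands below are a sketch, and exact flags can vary by version:

```
# On the source machine: checkpoint and export to a tar archive.
podman container checkpoint --export=/tmp/ctr.tar.gz mycontainer

# Move the archive to the destination machine.
scp /tmp/ctr.tar.gz other-host:/tmp/

# On the destination machine: restore from the archive.
podman container restore --import=/tmp/ctr.tar.gz
```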
I: Unfortunately, I don't know how to move something in a Kubernetes cluster from one node to another node; this is not something I currently have any idea how to do, so this would be something where I would need help from someone. I think I heard something mentioning a registry, so that the checkpoint could be pushed to a registry. This is probably doable. I haven't worked a lot with containerd, but I think containerd has something where they combine a container image with a checkpoint and are able to push it to a registry; at least I think I read about it. So a registry would be one way this could work, but also a shared file system.

I: But I'm not aware how to correctly solve it right now. My assumption is that this shouldn't be such a hard problem to solve in the end, and once we are at a point where we actually can do migration, I think (or at least that's my assumption) it should be doable somehow to move data from one node to another with some mechanism, which I'm just not aware of right now.
K: I can say something here. Basically, there was a presentation two years ago at the Linux Plumbers conference, and I think Pablo and Roswell were describing Google's task migration work, which uses the page server. So we can basically use the same mechanism we currently have with the archive, but only save the state of the processes without migrating the memory pages, and then use the page server, for example, to migrate the pages. What do you think about this?
I: They first used something which is not a POSIX file system, and then they had some additional layers; and the year after that, they reported that they were using some streaming mechanism to stream the checkpoint somewhere and restore it later, yeah.

I: But, as I said, it doesn't sound like an unsolvable problem to exchange data between two nodes, and maybe it will get clearer over time. In the end, though, I currently don't know the storage backend well enough to give a real answer here right now.
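For background, CRIU's page-server mode that this discussion refers to works roughly as follows; this is a sketch of existing CRIU options, and exact usage depends on the setup:

```
# On the destination machine: receive memory pages over the network.
criu page-server --images-dir /tmp/imgs --port 9876

# On the source machine: dump the process tree, sending pages to the server.
criu dump --tree <pid> --images-dir /tmp/imgs \
    --page-server --address <dest-ip> --port 9876
```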
A: Yeah, so, Adrian, I totally agree with you: maybe that scope is too big until we prove we can do it. So, say I give you two nodes and there's a container running, and we just say: oh, we can checkpoint, and then I can deliberately restore that running container onto another node. All the other discussion (control plane and end-to-end) can follow.

A: It's just too big at this moment and there are too many policies to solve. But can we still keep this scope? When we first talked, we did mention that kubectl could have some small flag that says: oh, this can be checkpointed. Then can we build some tooling like this that says: this has been checkpointed, and can we restore it on some node? It's not that Kubernetes does the orchestration automatically, right?

A: So let's not let this grow. At least we'd have an e2e test or some demo case: I checkpoint all the containers running on this node, and I want to successfully restore them onto another node which has the same version of Kubernetes, the same OS, and the same size and capacity. If we can do this, I think that is a good thing; then we can talk about more.

A: Yeah, so let's make the scope smaller. So next: right now, just given two identical nodes (roughly identical), can we prove, with a demo case or some e2e test, that we can checkpoint and restore on another roughly identical node? Then I personally am really looking forward to this moving forward; I just want to say that. The reason I asked earlier about having no local dependencies is because I'm trying to avoid the big pain we saw from day one when we designed the Kubernetes API and the node.

A: We tried not to have too much local dependency, because, thinking about checkpoint and restore from day one, Borg had to spend a really long time removing those local dependencies. That's why I know that without them we can make the restore work; that's why I had the concern earlier. So I ask: can we help you to remove those dependencies here?

A: So let's make the scope smaller, and then we can demo to the community, this team and also the bigger community, and I think many people will come up with use cases. That's the value here.
C: So, anything else? We have, I think, three minutes left.
A: Definitely. So we need to find reviewers for your CRI-related and CLI-tooling-related API changes; we need to find reviewers, and maybe we can follow up offline on this one. Looking forward to you coming back to give the demo and make progress, and let's talk about what the target release is. And separately, we only have two minutes and two topics left, sorry about that, thanks, so let's watch the time. So, Windows?
D: I can go real quick; I'll keep these general. The first is kind of a class of issues; the issue I linked to is just a specific example of it. As we're maturing the Windows platform more and more, we're seeing that there are some fundamental differences, even in the OCI spec, in how Windows and Linux containers are managed. In this case, the way that you attach a device to a container is a lot different, and I was just looking for some general feedback about the best way to bring these discussions up to the community and who to involve in them: should we create issues, or open some work-in-progress PRs to start discussions? And who in SIG Node is the right audience for some of these CRI-specific changes?
L: Overall, regarding device support, you might want to join the container orchestrated devices working group, under SIG Runtime and the CNCF, and bring in those use cases. We desperately need that discussion as well.
D: Okay, I will make a note to do that. Is there anybody from SIG Node who would be able to help rationalize some of these discussions here? We've noticed that some aspects of the CRI, particularly the security contexts, are pretty well segregated: they all have Linux-specific names, with, I think, the intention of eventually having the Windows ones come later. But we're seeing that a lot of the other parts of the CRI just have generic defaults that are kind of Linux-influenced.
A: Mark, I'm going to ask a couple of folks to assist, if they can help on this one, because I understand your concern. It kind of requires people who understand the CRI, also have some understanding of Windows containers, and also have some understanding of the other areas involved, to help here. So I heard your concern; I will ask those two, if they have the bandwidth, to help you move forward here.
D: Okay. And I did try to link to the OCI specs for the differences there. The next quick topic is that we have two KEPs that we're hoping to start some implementation on. One is the Windows privileged containers KEP that was initially proposed in 1.20; we didn't make progress on that due to a lot of networking concerns, but I think all of the overlap with SIG Node has largely remained the same.
L: And if possible, I'll also go really quick; it's practically just an announcement. A few months ago we were talking about devices which can be used in non-privileged containers, or non-root, where we uncovered a situation in which container runtimes copy permissions from the host. So, based on previous discussions, we have a proof-of-concept patch available, the document updated, and the issue updated, and all the people who previously participated in the document have been pinged. But if somebody else wants to review it, comments are more than welcome.
A: Yeah, thanks. And we have one more related topic on the agenda, and unfortunately, yeah, can we switch to continuing that conversation next week?
L: Well, if there will be something to discuss, yes; but my intention was just to show people a link to a document and ask for review. But okay.
A: Please do. And I'm going to follow up on another topic I added in Slack. Thanks, bye everyone; have a good day.