From YouTube: Kubernetes SIG Node 20201208
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
All right, well, welcome everyone to the December 8th SIG Node meeting. I have two primary items on today's agenda, as well as just our typical update on our PR health. And so for that first one, I don't know, Sergey, do you want to talk through them, if you were able to collect any data?
B
I just wanted to talk about the PRs that have been closed and merged. Out of the closed PRs, there is one that was closed about ephemeral containers' transition state, and you suggested that we track it as part of ephemeral containers GA. So if you want to GA it soon... I don't know who wants to, but if somebody plans to work on this GA, please pay attention and take a look at this PR. Other than that, there are no surprises.
B
Everything else is as usual. I mean, slow creation of PRs, but we're also in code freeze, so the only merged PRs are cherry-picks. Yeah.
A
Okay, so thanks for that update. And I think the discussion around ephemeral containers, or any time we discuss new container types, is a nice lead-in to, Rodrigo, the topic you put on, which I admittedly had not seen, from Tim Hockin.
D
Yes, I can sum it up. But maybe first, I wanted to ask if others wanted to create other pre-proposals, or if, I think, maybe you, Derek, or Seth wanted to explore some other approach without that directly, but I'm not sure. Yeah, I want to know if someone can review Tim's proposal.
D
I will of course have a look at it, and, yeah, I just want to know what the next steps are, because I thought different people wanted to create different proposals, and we wanted to see the pros and cons of the different proposals and see which path we want to continue on. But, yeah, I'm not sure what the next steps are, or if I misunderstood something.
B
So we had a meeting before with a lot of people, so there were a lot of man-hours spent in that meeting discussing various options for how to get from the initial proposal that we wanted to merge in 1.22 to where we want to get in the end. And we threw in a few ideas, and the ideas ranged gradually from a little bit of improvement over what we had with Rodrigo's, all the way to a graph defined in a pod specification.
B
So Tim Hockin's proposal is somewhere in the middle. It's just, like, we split initialization into named stages, and these names are predefined.
B
The proposal alludes to the possibility of extending it in the future with user-defined stages, but for now it's predefined stages that have a defined order, and the stages are, like, before the network is created, after the network is created, and initialization of user code, something like that. And then inside every stage there is the possibility to either run a container to completion or run a container continuously. So, like, imagine you need containers that can stay alive after they have started. So, yeah.
B
So that's kind of the meat of the proposal. And the reason for this proposal, as opposed to a graph, is we believe it will be a little bit more user-friendly, so it's easier to operate with. Users would make fewer mistakes defining it than defining the graph and all the dependencies, and it would be easier for injectors to inject containers into specific places. So if you know that you need to execute before the network, then you inject into the before-network stage.
B
What this proposal doesn't solve is relationships inside a stage. So if you have multiple containers, like, setting up a network, one doing some configuration and another doing some iptables rules, then you cannot define the ordering of those containers. This is not solved in this proposal, and intentionally so, because it complicates the design but may not have a practical use.
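The staged model described above could be sketched roughly as follows. This is a hedged illustration for discussion only, not the actual proposal's API: the stage names, the run-to-completion versus continuous modes, and the grouping logic are all assumptions.

```python
# Hypothetical sketch of staged pod initialization: predefined, ordered
# stages, with each container either run to completion or run continuously.
# Names and fields are illustrative, not real Kubernetes API.

STAGE_ORDER = ["before-network", "after-network", "user-init"]

class StagedContainer:
    def __init__(self, name, stage, mode):
        assert stage in STAGE_ORDER
        assert mode in ("run-to-completion", "continuous")
        self.name = name
        self.stage = stage
        self.mode = mode

def launch_plan(containers):
    """Group containers by stage, preserving the predefined stage order.

    Within a stage no ordering is defined (intentionally, per the
    discussion); a stage is done when every run-to-completion container
    in it has exited and every continuous one has started.
    """
    plan = []
    for stage in STAGE_ORDER:
        batch = [c.name for c in containers if c.stage == stage]
        if batch:
            plan.append((stage, batch))
    return plan

containers = [
    StagedContainer("net-setup", "before-network", "run-to-completion"),
    StagedContainer("service-mesh", "after-network", "continuous"),
    StagedContainer("db-migrate", "user-init", "run-to-completion"),
]
print(launch_plan(containers))
```

Injection then becomes a matter of naming a stage: an injector that must run before the network simply adds its container to "before-network", without needing to know about any other container.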
A
Yeah. So, I think, I don't know, Seth, if you want to give an update on where you were with your thoughts, but I think it's important that we all realize that we are still chasing down bugs with just the init container and primary container separation today.
A
So even in my first review of Tim's PR, I think we need to appreciate as a community that we still have bugs on just these two types. And I know Seth himself, rather than writing a proposal, was probably chasing down bugs on why init containers were being rerun when not expected. So maybe to that point, Seth, do you want to talk through the types of things we see in the code today that might make this hard, or how we could expand the phases to actually work in the code?
E
All right, the mic's warming up, all right. Right, so, I mean, we still have bugs, and a lot of them center around the fact that the kubelet is getting state about the containers from the container runtime, right, and it's doing that in a loop, asynchronously.
E
It's reporting status to the API server asynchronously from that, and then it's making decisions about what state the pod is in asynchronously from that. So we have these situations where, like, I've got a PR open right now where, in an attempt to reclaim space on my nodes, I deleted all of the completed containers.
E
Well, when I do that, it deletes completed init containers, and it turns out the kubelet reruns all the init containers, even though the main containers are running. Basically, once the containers are removed from the runtime, it updates the container status and goes, oh, it doesn't exist in the runtime anymore, so it erases the container status, and then all the code that determines what state we're in gets it wrong.
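The failure mode Seth describes can be boiled down to a small sketch: the kubelet derives "which init containers still need to run" from runtime-reported status, so deleting a completed container's record makes it look never-run. The function and status values below are illustrative, not actual kubelet code.

```python
# Minimal sketch of the rerun bug: state is reconstructed purely from
# the container runtime, so garbage-collecting a completed init
# container erases the evidence that it ever ran.

def next_init_container(init_containers, runtime_statuses):
    """Return the first init container not recorded as succeeded,
    or None if all init containers are done."""
    for name in init_containers:
        if runtime_statuses.get(name) != "exited-success":
            return name  # will be (re)run
    return None

statuses = {"init-1": "exited-success", "init-2": "exited-success"}
assert next_init_container(["init-1", "init-2"], statuses) is None

# Garbage-collect the completed containers to reclaim disk space...
statuses.clear()
# ...and the decision logic now wants to rerun init-1, even though the
# main containers may already be running.
assert next_init_container(["init-1", "init-2"], statuses) == "init-1"
```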
E
That's the way I can trigger it manually, but this can happen at any point in time. Now, we're somewhat covered in the docs, in that init containers are supposed to be idempotent and you should be able to rerun them without side effects. But, you know, do people really do that? Probably not.
E
We had a situation in our product where that wasn't the case, and it caused problems. And so the code that determines what state the pod is in and where we are in the pod lifecycle, right, it's like: we've run init containers one and two, we haven't run init container three, none of the main containers are running, so we need to run init container three. That's all very complicated. It's spread out; it's not really centralized.
E
So, I mean, those are things where we haven't even gotten our current state machine worked out, and so I'm really hesitant to expand it when no one really groks what it currently does. And I'm not sure what the path forward for that is. I mean, I'm not sure if we need to clean up the code so that our current state machine is understandable and documented.
A
So on that point, though, Seth: I think we never really got checkpointing in the kubelet. The fact is, we try to reconstruct the state of the pod lifecycle machine by asking the container runtime, you know, what its understanding of the state was, which in some cases could have been historical, and then we had other subsystems going to delete it, as you talked through. So I'm wondering, to make it more advanced:
A
Do we need checkpointing, where the kubelet can write a log that's not dependent on the container runtime, instead of the kind of detective analysis, for lack of a term, that we seem to do right now? That feels like, in Tim's proposal or in any of those proposals, the type of thing that we might be missing to make this thing really work. Do you feel differently?
E
Yeah, I agree. And in this particular case, the kubelet historically has seen the container runtime as the authoritative source for container status.
E
So now I'm going to cache this state, the state of this container, and regardless of what happens in the container runtime, you know, if it gets garbage-collected or whatever else, I'm not going to mirror that up to the API server, because in my design this is the terminal state and the condition should never change after this.
F
There are many people who want to simply keep this simple, because they worry about adoption, right? Because back then there were many of the Docker use cases, and also many people using systemd, like injecting systemd into the container and describing the container dependencies that way. Even after that, I believe OpenShift had some similar use cases. So nobody really understood the dependencies between the pod's containers and the reasoning behind them.
F
So if we had, like, the checkpoint, then basically it is so hard for us to move to the next stage. No matter what, any change to the pod spec would be much harder, or anything in the node spec, especially the resource requirements; the quantity of resources is always a moving target for us. So that's why we made the decision not to checkpoint the pod: each component in the node maintains its own version of the checkpoint.
F
So we do see that the CNI and kubenet may need their checkpoints; that caused some trouble in the chain for our release, but then they quite quickly took action and moved forward. So that's why we want to understand; we never wanted to checkpoint, I just want to share that here. So this is exactly like, for example, Tim's proposal: if we checkpoint Tim's proposal, then basically it is almost impossible that we could move to the next stage. But right now, actually, we don't have the checkpoint.
F
So now we start thinking about more serious dependencies, or introducing some future revision of the pod, and we can talk about whether maybe we should be checkpointing. But once we have the checkpoint, for example, some people still think, let's not capture all the use cases; then we need to think harder. So is that really our north star, like, the first evolution we seriously want to checkpoint? Because deprecating it will be much harder.
A
So, Dawn, I'm not trying to say that we want a checkpoint. I'm saying I just want to make sure that people appreciate that there are people spending time and energy trying to get the existing system working, when we can point to bugs that have existed in init containers that are just related to the fact that we garbage-collect these things, right? People run these machines and they see things rerunning, and it's surprising. And we have a two-phase system, and to go to an N-phase system like Tim's proposing:
A
Does the kubelet try to reconstruct state by polling the runtime, as it does now, or does it capture what it did? That feels like a requisite need for any of the solutions.
F
But, Derek, I haven't looked at that bug in detail, but I just want to say that our garbage collection also used to just live with the existing Docker behavior. So it is not well defined, because we do know, when we worked on the Container Runtime Interface, we did talk to the containerd community. We wanted to define that API after finishing the runtime stage, and we wanted to do more on the image stage, and also continue with things like the checkpoint.
F
To continue on garbage collection: actually, in the past, Nanta and David and I discussed that we have to redesign the garbage collection. I just want to say, even later, when we had the Container Runtime Interface, we didn't evolve it. Of course, things change and people move on, but back then we talked about it, and we even communicated with the containerd community.
F
We tried working on new garbage collection. I'm not saying Kubernetes would stop doing garbage collection; it's still Kubernetes, but we wanted to move it onto the Container Runtime Interface, define the interface, and also, like, utilize more of the container runtime's own checkpoint, because they did checkpoint and we shouldn't remove those kinds of things. Basically, that's something we've been talking about, but people moved on, so a lot of things moved out, and that's why we haven't done those things.
F
My point is, it is not that Kubernetes has to do the checkpoint to solve that problem. There are other things that have already been discussed, and maybe even half-cooked, but we haven't finished them; just the last couple of minutes here.
A
Given the issues we do know we see: so, I don't know, I'm curious if the types of things that we're raising now, Rodrigo, that we hear in these new instances, even with the two styles of containers we have today, are things that you had seen as well, or are we over...
A
I'm curious, and it's possible I might be overthinking this, but the fact is that I still see us struggle with init and primary containers, and we have problems with that alone. Is this something you had maybe thought through, on how we could maybe better capture the phases? I don't know what you've done relative to Tim's proposal, or maybe Tim himself, who may or may not be here, has thought about how we capture state across these phases.
D
Well, I haven't thought in advance about how we capture the phases. I wanted to know if the overall direction looks fine before putting more work into this, like, down this road. Because maybe we don't want phases; maybe we just want explicit dependencies between containers. So I wanted to know what direction we want to take before making a design.
D
But I think what I'm not following is: if we use the pod status to know when a container was run or not, were we still missing information?
D
In the implementation that I did using annotations, I relied on the pod status, and that worked quite well for our use cases.
A
So, with the pursuit of this: I have to finish reading Tim's proposal. I think my comments got through the first half, and I'm not sure if everyone else has had a chance to review, so it's good that we continue to look at this and maybe give our feedback on there. And, just to be transparent, I have not had time since our last meeting to put together more thought. This is a hard area, so, yeah.
B
Look at it from the perspective that we have enough use cases that we believe, like, maybe phases will be a solution, or we can reject that and say a DAG will be the solution. But even once there is a solution, we can only implement the first few steps that we believe are crucial. So in the case of the phases proposal, we can implement init containers that stay forever, and then split everything into phases with checkpoints.
B
So there may be stages in any proposal. But I think what's important is to understand where we're going, as a north star, and this is, I believe, what Tim's was concentrated on: are we confident that this number of end-user scenarios will be enough to solve all the problems? That's why he put, like, logging and network as first class. I mean, he started with the user scenarios rather than with implementation-specific things.
F
How can we agree that this is the north star? I read the proposal; I like it. But the problem is, for example, Rodrigo, I also read the comments, many comments, and I found people commenting that it's not enough. But I also see some other arguments, of course, including the argument from Tim. I personally also believe that argument, because in the past, every time people talked to me about more complicated use cases, I would use something similar to the phases proposal there.
F
So then, who has the more complicated use cases? Can we just think about whether this proposal is not enough, and try to poke at it: if a scenario is not covered, then we really need a more complicated, systemd-like dependency description, because we also talked about that in the past.
F
So can we just collect the cases: someone digs into the cases and says concretely, these cases are not covered, that's not enough. And also, on the other hand, how to migrate: is it possible with this proposal? I like what you said, Sergey: what's the milestone, what's the stage, how we might get to that stage. We need to think about a step-by-step guide from today's existing production environments to the new stage, the new stuff. But first, let's connect with Rodrigo and see.
D
I'm not sure there is a problem there, because, basically, if you have stages, when you want to move to a DAG, you can just replace the stages: all containers in one stage depend on all containers in the previous stage, and you can just look at it as a graph.
D
So I don't see any big issues. You would have a problem if you wanted to do it the other way around: if you have explicit dependencies and you want to move to phases. But if you have, say, a couple of phases, sure, you can just represent them in the dependencies: all containers in phase B depend on all containers in phase A, and you get the same behavior, just with dependencies.
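Rodrigo's equivalence argument can be sketched in a few lines: a phase ordering is mechanically convertible to a dependency graph by making every container in a phase depend on every container in the preceding phase. The reverse direction, collapsing an arbitrary graph back into phases, has no such mechanical translation, which is his point. Names here are illustrative.

```python
# Sketch: encode an ordered list of phases as an explicit dependency
# graph. Every container depends on all containers of the previous
# phase; the first phase's containers have no dependencies.

def phases_to_graph(phases):
    """phases: ordered list of (phase_name, [container names]).
    Returns {container: set of containers it depends on}."""
    graph = {}
    prev = []
    for _, containers in phases:
        for c in containers:
            graph[c] = set(prev)
        prev = containers
    return graph

phases = [("network", ["cni-setup"]),
          ("init", ["db-migrate", "warm-cache"]),
          ("main", ["app"])]
print(phases_to_graph(phases))
```

So if the API started with phases and later grew explicit dependencies, existing phased pods could be migrated by exactly this expansion.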
A
So the one thing, and this might be in the proposal, but it wasn't where I got to, or at least as far as I had read: I still feel like, in the phase discussion, we say a certain container can keep running during the life of a pod. Any one of these phase sidecar containers can stop, right, and the primary container could still be running, and I wasn't hearing a use case that captured that the pod sandbox should be torn down if one of these sidecars ceases to run. And if I think about these sidecars providing network egress or other capabilities, potentially, to these primary containers, some type of shared fate among them seems like something I could see being potentially desirable. But maybe that's in the rest of the doc that I haven't seen. I feel like our definitions are rather loose on what to do when the container...
D
Yeah, I think the proposal mentions that fatal or critical containers, or that kind of thing, are on purpose not solved in this proposal.
D
I think it mentions something like that, without much reasoning. But in any case, I agree: critical containers or not, the failure modes are something that needs more exploration or determination, because, yeah, in some cases, if the service mesh that gives you network fails, maybe it's a bigger issue than if a logging container fails and is restarted and you just queued up a little bit. I'm not sure.
A
Okay, well, I think we all have more to read, and hopefully we can sync up with Sam as well in the future, if he wants to come discuss it as well.
F
Next time, can we... I will invite Tim here. Next time we don't need to rush; next week, I think, give us some time and signal the discussion, because I think this topic is complex. This topic has been discussed many, many, many times, even from the beginning, when Kubernetes was founded, right?
F
So we don't need to rush, but when we write it up, I think we should invite Tim and several other folks here, and then we can more seriously talk about accepting this proposal and moving forward, or rejecting the proposal and going in some other direction. We could even just say: okay, hold. Like Derek said earlier, both you and Seth mentioned that even with implicit phases or stages, today we already have implicit ones, right? So we cannot handle them very well, and we don't have confidence at the moment.
B
I thought that this PR that Seth is working on was just an edge case, and I was treating it like that, but after this discussion it feels to me that you have more issues internally, and Dawn also mentioned that we need to formalize and maybe even redesign garbage collection. So can we surface this issue somehow? Can we make it more visible to the community as well?
A
I mean, we can, Sergey. I think the way I look at this is: in the same way, everyone who works within a vendor who offers Kubernetes to their users, we all get bugs, right? And sometimes we see these bugs and we don't give them the attention they deserve, and other times we get bugs that take time.
A
We're making a big push at Red Hat to take a look at every bug, right, to try to improve the quality of the product that we put out, as well as the quality in the community upstream. And so...
A
I don't know if Seth has the history of when we also discovered it had another unintended bad side effect, but I would be remiss if I didn't say that I feel like at Red Hat we've been trying to keep init containers running since their inception, and it has been a constant source of maintainership pain to just keep going. And it is not one of those areas that makes everyone feel great, so it's hard to get these things to surface up to be visible to the community.
A
Unless you have user reports coming in saying "I see this weird thing," and we all spend the time to look at it. So I'm curious: if we went through the kube issue repo, would we have seen similar strange reports? When you start to see enough of them, you're like, oh, this is weird; why are we seeing init containers rerun?
F
I just want to follow up on what Derek said. Thank you. This is why, in the past, I preferred that SIG Node contributors also have their vendor work, because they support their vendor; it's not just adding a new feature. Then they will see more of the complexity in Kubernetes, and also the node complexity. With a lot of new features added, it could cause a lot of side effects on the node and in production environments.
F
So this is why. It's not that people don't want to share knowledge; we share knowledge and you may grow, but unless you really do the production support, where you have ownership end to end, you won't share the pain like us. I've been here since day one, even before Kubernetes was formed, and I also used to be on Borg, working as the leader for the Borg environment, on things like what the node is today.
F
So many people talk about a new feature, and I have to hold the bar, because it can have side effects for many, many users. So I just want to share here: that's why a lot of the time we are maybe a little bit harsh, maybe set the bar too high, because it affects many workloads in production. And this is why we kind of require that contributors also have production experience supporting their own production, to some extent.
B
So the answer is, it will come to me naturally. I got it, but maybe we can, yeah.
A
So it's just a matter of... I'm sure everyone has different queues they can go through, and personally I think we at Red Hat have seen a lot of pain with both init container and primary container usage, dealing with side effects that we didn't account for in the initial design, and we've just been trying to poke at them for the last couple of years.
F
Yeah, and also even for the init container, Derek and I actually both gave you a lot of hard time with it at first. We basically spotted many issues, and we made sure not to make an unsound trade-off, because there are many use cases; we need the init container, for sure. Even for Borg use cases, I know we definitely need something similar. But the problem is, there were many incompatibilities with the initial pod spec. So that's why we pushed a lot; the bar is very high; it took a really long time.
F
There are use cases like the sidecar container for the service mesh, along with the network namespace involvement, so Kubernetes has to also evolve to catch up. So that's the problem. But on the other hand, there's production, and this is why, a lot of the time, we basically have to raise the bar and have to define the criteria here too.
A
By the way, I'd like to be positive: we're not trying to be inert forever. I just want to make sure that people understand that we are still fixing problems in what we have today, and I'm amazed that every two weeks we find a new unintended side effect. So I think we've said enough on this stuff. If we can go into the next topic, Sergey, I think you had them.
D
Yeah, maybe one last thing. I wanted to stress that, of course, I'll be here to fix bugs and improve the code; you can count on me for that. And also, maybe just to close:
D
Do we want to discuss this again sometime next year, just to see if Tim's proposal is the north star where we want to go? And of course, afterwards, when we know where we want to go, we can create a plan with confidence and see what is realistic to achieve in different releases.
A
Yeah, so I'd say everyone should read Tim's latest proposal. I have to finish reading the second half, and we should queue up a time dedicated to this topic in early January to hash out our feelings on it and see if we can define the north star for how we want to end 2021. And, Rodrigo, don't take any of my comments as personal; it's more just...
A
I 100% agree you're here to support and make the project good, and I'm completely aware that a few of us are jaded by having to continue to find edges that were surprising to us. So, independent of that, I think we all want to just make the code more reliable, and maybe that does mean we need to reassess how we poll state from the runtime, or checkpoint, that type of thing.
F
What we basically want is to find a hole. I mean, what I wanted to say is: are there use cases Tim's proposal doesn't support? I don't want to have to think of them all myself, because Derek and I, and Sergey, know the community the most. Can someone find one and say, oh, here is something that maybe doesn't work with this in Kubernetes, or how complicated it is for Kubernetes or SIG Node? I hope someone has use cases in mind and can find a hole, something this doesn't support.
F
Then we can see that. That's what was said: the worry is that we start from this one mark, this is the north star, and then later we want to go to graph dependencies again, right? So it's more important for us to figure out the coverage: which use cases are covered, and which use cases are not. Also, I want us to explicitly document it, if we agree, as well.
D
Yeah, yeah, I can have another pass on Tim's proposal. I think I already did that, but I can double-check. Yes, thanks.
B
Last week, I think all the bad misunderstandings were averted by blog posts and such, and we see more people being aware of the situation and starting to migrate. A few things may complicate it right now: we're discovering more and more vendors who are using Docker from, like, their monitoring or security agents, so they have been taking a direct dependency on it. I'm trying to reach out to speak to these vendors right now and understand how and when they will support containerd or any other runtime. So maybe, like Jim's proposing, we have a table with the different runtimes, and they will fill out which runtimes they support and how to migrate from one to another.
B
We also discussed with Mark, who is on the call here as well, what the plan was with Windows. So with Windows, the situation right now is: we just supported containerd in 1.20, and we will start migrating customers. I mean, there is the Google situation and the Microsoft situation. In Google it will be a little bit later, because we need to implement the support internally and start using it. In Microsoft...
B
It will be a little bit faster, so we hope to get some real production users to start using it and start giving us feedback. What we discussed is: we have all tests green, and we believe that we tested everything, but we are not sure, because production is always surprising us. We're also not sure because, while on Linux containerd was the runtime behind Docker, for Windows that's not the case; for Windows it will be a totally different runtime.
B
That's why there is a little bit of worry that, with the current plan of removing dockershim in 1.22, it may be a little bit too fast, and we can get into a situation where customers on Windows wouldn't be able to upgrade: they will be stuck on a version and may not get important updates or an important feature for them.
B
So what we discussed with Mark, and I want to discuss with the telemetry vendors as well, is how much time they need, and making sure that they are aware that they will be cut off. Once we remove dockershim and they want to upgrade customers to a newer version of Kubernetes, then they'll be cut off; they wouldn't be able to work. And with that said, I think we can...
B
We want to propose to review this plan again, the plan of deprecation, maybe early January, like late January, so we will have some production customers and we'll have more information.
J
One more thing that I'd like to add, at least for Windows: when I looked back at when the initial pull requests and discussions around deprecating dockershim arose, the CRI dockerd shim didn't fully work with Windows, and the reason for that was there were a handful of places in kubelet code, outside of the dockershim-specific code paths, that have conditional logic.
J
If the GOOS is Windows and the container runtime is set to not remote; and most of those were around fixing up paths for Windows. But if dockershim is going to be removed from the tree, we'd need to make sure that all of those checks made it into, say, a Windows-specific CRI dockerd shim, and we'd do proper validation on that as well. And, like Sergey mentioned, I think from the SIG Windows standpoint...
J
We feel like we've done a pretty thorough job of testing all the different scenarios we could think of: many CNI scenarios, many CSI scenarios, lots of different container runtime scenarios in SIG Windows with containerd running as the CRI, both in Azure and GKE. But we don't really have the big volume of customers.
So we don't really... like what Sergey mentioned, I mean, things always come up in production that you may not be able to anticipate.
A
Yeah, so I don't think anybody wants to break anybody, so I think that should be our guiding principle. The part that I'm confused on, Sergey, was your comment on:
A
We basically said dockershim would remain until the end of 2021, if I'm not mistaken. Are you basically advocating that it might need to extend to 2022, or...?
B
No, the current plan is to have it in-tree but not compiled in 1.22, so we don't produce any artifacts, and I assume we don't test it as well. After that, it will still be in-tree, so you can compile it yourself, for a year. And this plan may be a little bit too aggressive; we need to evaluate whether everything goes smoothly, like, Windows works and the telemetry vendors switch from Docker to other runtimes.
B
Then it's great; I mean, we're all done. Nobody wants to keep dockershim any longer.
A
Yeah, I mean, I don't think anybody wants to break anybody, right? So I think, if there's a concern that the community would be too aggressive, I'll be the first to say: raise your hand if you're going to be broken, and we will work out a plan to adjust it. And I say that with both my community and my vendor hat on, right? Red Hat offers Windows container support too, and all these types of things are interesting things to navigate through.
A
So I don't think anyone should feel like they're being aggressively pushed down a path that is not tolerable; we should find something that works for everybody. So, yeah.
F
Sorry, so I totally agree with you. Internally, when I met with Sergey, I also said similar things. But on the other hand, we don't want to forever support the two, and we also don't want dockershim forever. This is also real, right?
A
I guess I'm not saying 10 years. In my mind I thought we had said 1.23, but like calendar 2021; I'm not even sure if we're doing three or four releases next year, right? As a community, unknown unknowns happen. So basically, if people are getting uncomfortable and they want to gather more facts and change the course of action, I don't think that's outside the realm of us listening.
F
I think we need to have the plan for this, because otherwise there is the cost of carrying two container runtimes, and I mean both. I mean we already carry three right now, right? So what I'm talking about with the two container runtimes is this.
F
For any change, no matter what, there is a cost to both the community and also to the Docker side; that's effectively why it is really expensive, even for the vendors. So this is also where we can now remove the Docker testing from our open source signal.
J
Yeah, I agree, and like I've mentioned, for Windows too we want to move to containerd. I would just feel a whole lot better knowing that there were some kind of major users using this in production before we completely remove all the code. So I think I agree with everything that's been discussed here.
F
I totally agree with you, Mark. Even in the internal discussion, I try to totally decouple Windows containers from the topic of whatever deprecation plan, even for our internal plans; I want to decouple them. So that's why Windows is a little bit different from this topic, but I do want to have the milestone where, over time, we can remove all of our dependency on the Docker engine. I mean the Docker engine itself as the container runtime, not Docker itself.
I think it's also important to recognize that there have been some people who are willing to support the external dockershim that works over CRI, and I think that might be a more viable alternative to just, you know, not having any support. It'd probably be better, when we say CRI implementations, to start working more towards, you know, the cri-dockershim, this external one that uses the CRI interface.
F
Yes, I believe that's their business strategy. Before, they came to the SIG and approached me and said they want to do the maintenance, so I didn't introduce them; and then I saw their business plan changed all of a sudden. I saw that over the weekend. So another thing is: we didn't really talk about deprecating Docker. We only talked about removing dockershim from the tree, right?
I
We can probably make that okay when it is equivalent, and certainly Mr. Rossetti's Windows efforts are going to be involved there, where he's got to decide what he wants it to be on Windows: containerd, or Docker, or the cri-dockershim.
K
And probably those things that were identified as workarounds over this past month should be moved out of the kubelet to the corresponding place, like either the CRI shim or some other place where they really belong.
L
So I pulled this in; I put this on the future agenda, and then I was hoping we might have a few minutes to discuss it today. For those that don't know me: hi, I'm Alana. I am a chair of SIG Instrumentation, and I am getting more involved in SIG Node, which I'm super excited about. One thing that I have seen other SIGs doing, such as Instrumentation and API Machinery, is running regular triage meetings where the SIG goes through the backlog.
L
Basically,
the
big
backlog
of
here
all
the
open
issues
hear
all
the
open,
pr's,
let's
make
sure
someone's
assigned.
Let's
make
sure
that
we're
giving
attention
to
the
high
priority
things
that
kind
of
thing
and
as
far
as
I
can
tell
sig
node,
is
not
currently
doing
this
and
I
was
chatting
with
dimms
a
few
weeks
back
and
he
said
you
know
it'd
be
really
great
if
sig
node
had
a
triage
meeting.
L
So I wanted to raise that in the regular meeting for the SIG Node community to see if there was any interest in it. I'd be happy to help out, or maybe try to schedule it and find a time, that kind of thing, and see how that goes, if people are interested.
A
Yeah, so one: welcome, Alana, and please don't be quiet in our future meetings. We would welcome any and all help to coordinate.
A
We had discussed running triage meetings in the past, and we had instead tried to focus on two things. One was keeping a frequent update on how we were doing with respect to PR health and that type of thing, which is some of the stuff that Sergey has been talking through; the other was trying to improve the state of our testing infrastructure.
A
I continue to be open on the right way of handling this, and having volunteers help shepherd things is extremely helpful. So maybe in our next meeting we can get an update on how the testing health activities are going, and then see if we want to find the right forum to maybe transition that into more of a continual triage or operational meeting. So can we maybe queue that up for the next meeting?
A
Okay, and then the other thing to think through is that I'm not sure of everyone's calendar availability for the remainder of the year. I know this is the time of year that everybody starts heading out. I wanted to suggest that we cancel at least the 22nd and the 29th, and I needed to make sure that someone was actually going to be around next week, or if folks were going to start taking off earlier.