From YouTube: Kubernetes SIG Node 20210608
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
B
Oh yeah, okay, the recording is on. It's June 8th, 2021, the SIG Node weekly meeting. Hello everybody, and as usual we want to start with the "what you missed" section. In terms of PRs there is not much happening; many of the merged PRs are not related to SIG Node.
B
You
can
look
at
statistics,
we're
growing
on
prs
slowly,
but
hopefully
we
can
reverse
the
trend.
I
hope
we
will
reverse
the
trend
soon
and
also
elan
is
scheduling
the
issues
triage
session
for
ish
and
tuna.
I
believe
so
maybe
it
will
also
help
to
like
sparkle,
joy
and
contributions
that
we
can
accept
faster.
C
Yeah, so as a reminder, we've scheduled a bug scrub for June 24th and 25th. I'm hoping I should have more details on that by the end of this week, as well as on recruiting people to lead in each region, since we want to ensure that it's open for all time zones. As far as PRs have been going, I've been going through the review backlog.
C
There's a lot of stuff that's just kind of stuck: stuff that has received review feedback but has not moved, stuff that still needs review, stuff that has received review but hasn't had a chance to get an approver. Triage is going along pretty quickly, so I think things are moving. I have not seen a lot of feature work yet for the release, so I guess the same reminder as last week applies.
A
Thanks, Elana, for initiating and also leading this effort. I also just want to call out that on June 22nd we want to review the PR status for the KEPs for this release, as we discussed last week.
B
Yeah, my first item is just a discussion of how we can normalize the behavior — like, document the behavior that people have started taking a dependency on. Right now we don't define any order for starting the containers in a pod, even though we do it sequentially. Furthermore, we're not only doing it sequentially: the first container will start, then that container will execute its postStart hook, and only after that does the second container start. And authors of tools like sidecar containers are already taking a dependency on that.
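The behavior described above — containers started one at a time, with each container's postStart hook completing before the next container starts — can be sketched as a toy model. This is an illustration only, not the actual kubelet code; the container names and the hook are made up.

```python
# Toy sketch of the undocumented legacy startup behavior discussed above:
# containers in a pod start sequentially, and a container's postStart hook
# runs to completion before the next container is started.

def start_pod_containers(containers, events):
    """containers: list of (name, post_start_hook or None)."""
    for name, post_start in containers:
        events.append(f"start {name}")
        if post_start is not None:
            # The kubelet blocks here, so a slow hook delays every later container.
            post_start(events)
        events.append(f"{name} running")

def sidecar_hook(events):
    # A sidecar (e.g. a service-mesh proxy) can exploit this ordering: its
    # postStart hook waits until the proxy is ready, guaranteeing the app
    # container is only started afterwards.
    events.append("sidecar postStart: wait until proxy ready")

events = []
start_pod_containers([("sidecar", sidecar_hook), ("app", None)], events)
print(events)
```

This is exactly the ordering that sidecar tooling relies on: if startup were randomized or parallel, the "proxy ready before app" guarantee would disappear.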
B
I found at least a couple of examples. And my question is — I know that there were a couple of attempts before to start containers in parallel; I found at least two attempts, in terms of PRs, trying to change this logic. But at this stage, since people are taking a dependency on that... yeah, I see comments from Dawn on the document. Since people are taking a dependency on that, I'm thinking we need to formalize it somehow.
B
I mean, I don't think we can break it at this stage, because people already depend on it. So I propose to make it official: just say that this is our startup order.
B
Order — and this is how it works; just create a conformance test for that. So I wonder what the other opinions here are: if we don't document it, but we still behave this way, it will hopefully still hold and people won't break it.
D
I think, yeah, my only worry is: we had that proposal from Tim, and we had discussions around defining a dependency graph. So if we somehow formalize this, we need to worry about whether we are kind of painting ourselves into a corner that we can't get out of to make those changes.
B
Yeah, but okay — we would likely need to change it under a feature flag and then somehow deprecate this behavior over time. Otherwise we will break some production workloads for customers, right? So the reason I'm bringing it to this forum is that I want to understand what our pain limit is for this kind of change.
B
We changed the exec probe timeout before — the exec timeout started being respected — and now I'm finding so many users affected by that. People are doing gRPC checks using exec probes with timeouts, and the gRPC check may run slower than one second, so suddenly pods become unready, or get killed because of the liveness probe. We changed this behavior and now we need to suffer the consequences, because people took a dependency on it back then. It was undocumented, but people were taking a dependency on it all the same.
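The effect of that change — exec probe `timeoutSeconds` going from ignored to enforced — can be illustrated with a minimal sketch. The one-second default timeout is real, but the probe duration below is invented for illustration.

```python
# Sketch of the exec-probe timeout change discussed above. Historically the
# kubelet ignored timeoutSeconds for exec probes; once enforcement was turned
# on (default timeout: 1 second), any probe command slower than the timeout
# started to fail. Durations here are illustrative only.

def probe_result(duration_s, timeout_s, timeout_enforced):
    if timeout_enforced and duration_s > timeout_s:
        # A failed readiness probe marks the pod unready; a failed liveness
        # probe gets the container killed.
        return "Failure"
    return "Success"

slow_check = 1.5  # e.g. a gRPC health check wrapped in an exec probe

before = probe_result(slow_check, timeout_s=1, timeout_enforced=False)
after = probe_result(slow_check, timeout_s=1, timeout_enforced=True)
print(before, after)
```

The same workload, unchanged, flips from passing to failing purely because the previously ignored timeout is now respected — which is exactly why users who never set `timeoutSeconds` were surprised.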
E
This is Lantao. I think even if we introduce the order, it only applies during startup, right? Because when you restart a container, of course you won't have a restart order — or you would never restart the container until its parent, its dependency, does, right? So it will only be useful during startup, and if it's only useful during startup, I guess it's usually only for initialization. You just have an initialization dependency.
B
Yeah, init containers are a different case. But I agree with you: restart is a different question, and we need to address it somehow differently. This behavior here, though — people are already taking a dependency on it.
A
I want to share the decision we made — maybe we should actually document that decision somewhere. The reason is that I found a lot of older content has been deleted without consulting us; maybe I'm just not included, or maybe it was removed accidentally.
A
So initially, before we introduced init containers, we decided there's no startup order and no dependency order to be understood among the containers in a pod — no rule for which container starts first and which follows. Besides the pod infra container — that's the one holding the namespaces and a lot of other things — all the other containers actually start in arbitrary order. So it could be an implementation detail, and we could change that order, or do concurrent start, all those kinds of things, initially.
A
Actually, we even tried concurrent start, and later we found it didn't actually give much of a performance gain, so then we went back to starting them one by one. We were doing those kinds of things, but it was all treated as an implementation detail. Then later we introduced init containers — that's where you do need an order between them — and they have an order: we just say, absolutely follow that order. Only once all the init containers have started successfully can we start the rest of the application containers, and among those there's also no order.
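The startup rules described above — init containers run strictly in declaration order, each must succeed before the next begins, and app containers only start once all init containers have finished — can be sketched like this (a toy model with invented container names, not kubelet code):

```python
# Toy model of the pod startup rules described above: init containers run
# strictly one after another, each must succeed before the next begins, and
# app containers are only started after every init container has finished.
# Among the app containers themselves, no order is guaranteed.

def run_pod(init_containers, app_containers, log):
    """init_containers: list of (name, succeeds); app_containers: list of names."""
    for name, succeeds in init_containers:
        log.append(f"init {name}")
        if not succeeds:
            log.append(f"init {name} failed; pod stays in Init state")
            return False              # app containers never start
    for name in app_containers:       # this order is an implementation detail
        log.append(f"app {name}")
    return True

log = []
ok = run_pod([("migrate-db", True), ("warm-cache", True)], ["web", "metrics"], log)

log_failed = []
ok_failed = run_pod([("migrate-db", False)], ["web"], log_failed)
print(ok, ok_failed)
```

The failing case shows the key guarantee: a pod with a failed init container never reaches its app containers at all.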
A
I know we introduced a lot of hooks, and people play tricks with the hooks to try to force-inject some order. But there are other things too: we tried to define an order, but that's a separate project, like the sidecar container, and there's also the order between how we start and how we restart, which we never really agreed upon. It's just complicated, because there are so many different use cases, and a lot of the use cases are even controversial.
C
Yeah, given that it's an implementation detail, I would be worried about artificially constraining ourselves — if we wanted to, for example, do a big kubelet refactor, adding this as a conformance detail just doesn't... we might end up painting ourselves into a corner. So I would say the more flexible we can be on that, the better. I think it's fine to say: this is currently how it works, and it could change — this is not a guarantee.
B
So, I mean, we already painted ourselves into the corner. If from the very beginning we had randomized the order, then it would clearly be an implementation detail. But since we don't randomize the order — and in fact we wait for the postStart hook — we basically made people depend on that. And now we have production workloads from people who don't even realize that their applications' production workloads depend on it; the application may not even belong to the customer, it may have been developed by some third-party vendor, and it breaks with an upgrade of Kubernetes.
C
Well, I think — I mean, just because people are relying on a behavior doesn't mean it's a documented feature, right? And that's the sort of thing that I would expect would not break in a patch release, but could easily break in a major release. So I would be very hesitant to add conformance testing. I mean, effectively I think that's already reflected in our unit tests — that we have a startup order like that — so I'm not sure what we would be adding.
B
Yeah, so I guess the general question is: what is our philosophy of breaking things versus keeping them stable? Like, can a major release change things without notice? I mean, we only note it in the release notes.
B
This
is
a
big
question
so
right
now
there
is
another
pr
that
introduces
like
hooks
being
like
adjusted
to
like
hooks,
didn't
support
https
before
and
now
they
like,
there
is
apr
to
fix
it,
and
the
question
is
a
legitimate
question:
is
the
what,
if
customer
by
mistake,
used
https
there
and
now
we're
breaking
them,
because
they
didn't
expect
that
this
https
work?
It'll
just
put
us
by
mistake,
maybe
copy
paste,
some
some
where,
if
we
try
to.
C
If
we
try
to
have
compatibility
for
every
possible
mistake
that
a
customer
could
make
we're
never
going
to
be
able
to
make
forward
progress,
that
seems
very
clearly
like
a
thing
that,
like
is
reasonable
to
break
in
a
major
release,
and
when
I
refer
to
major,
I
mean
like
one
eight
to
one
nine
or
121
to
122,
because
we
use
them
for
kind
of
weird
in
kubernetes
like
if
we
can't
do
that
in
a
major.
When
can
we
do
that?
C
C
F
So it would be nice to be able to tag that behavior in those tests, to say: please don't change this behavior without a KEP. For example, we had a recent change where heartbeats were thrown into one of the streams, and that kept the streams up, so another feature that we had — providing timeouts if nobody is using the streams — got broken. And because we didn't have a conformance test, or any test, we lost that order of operations for that capability.
C
Rather than conformance tests — I've recently been spending a lot of time on unit test coverage in the kubelet, and we don't have full coverage; I think the entire kubelet is at 56%. So let's not worry about conformance tests, which are really not supposed to be about implementation details — they're supposed to be focused on end-user expectations of behavior. Let's get the unit tests done before we start working on that.
B
Okay, yeah, I think we covered that. Thank you, everybody, for the discussion. I will see what we can do in terms of testing and making sure we're not breaking this in future releases unintentionally.
A
And Sergey, it's excellent you brought this up, because we have so many new people. So, if you think the documentation is missing something: I found a lot of docs written in the past where people just saw they were not up to date and deleted them, but a lot of the policies went with them. Deleting those kinds of things actually needs to be confirmed with us in SIG Node. So if you find the documentation unclear, please let us make it more clear.
A
Okay, looks like no one is here to talk about this one. Before the meeting I quickly read through the KEP and asked some questions — high-level questions — and maybe we can carry on from the KEP.
G
The timing of this meeting is a bit uncomfortable, so the question is whether we can have an extension meeting — a once-a-month meeting more friendly to people in the APAC area: China, Japan, Australia, things like this. And the other comment I want to make is: I've been following this work, and I'm basically waiting for more answers to the comments I made in general.
A
Okay, thanks, Francesco, for the review. I also asked some questions; maybe we can discuss them. So, is this an enhancement for the existing CPU manager, or is it separate from that KEP? I didn't see whether it proposes an additional policy or is an enhancement to an existing policy. That's why it reads more like: okay, here is the new thing, we have this cache we need to support, and how are we going to support it? So it's kind of like an addition.
A
Okay, so you also answered my third question in the KEP; I had already asked those at the high level. We also used to have an APAC-friendly SIG Node meeting — I think it was biweekly; correct me, Lantao, keep me honest here. Initially a lot of people attended, but quickly it was only Lantao and me at those meetings, so that's why we cancelled it. If people can come, we could do it at another time — it is on Tuesday right now.
C
So we just added an APAC-friendlier time for the node triage session, and that's going to be 8 a.m. Pacific on Thursday. You may say: well, 8 a.m., that doesn't sound very APAC-friendly. But I think a lot of people in APAC are used to working rather late, and right now the 10 a.m. Pacific time is just simply too late — I think it's like two in the morning there, or something like that.
C
So maybe let's see how the APAC time works for the triage session, and then maybe we can consider adding an APAC time for SIG Node.
A
Because we used to be on at 11 p.m., 10 p.m. — Lantao and I hosted — and quickly it was only the two of us, and we didn't need to stay up so late since we can talk any time. So in the end — we had been running that for more than half a year, or maybe even longer — we decided to quit. And we basically said: if people need those meetings, they should initiate them and propose an agenda.
A
Thanks, Francesco. And I also just noticed that Swati also reviewed it — okay, thanks. We should treat this as part of our CPU management effort, and those are the real use cases that we should support.
H
Okay, yeah, sure. This is mostly in the context of adding a new CI image for memory swap. During the initial research I pretty much reviewed all the current image-specific tests, and it seems there are multiple different approaches that folks are using to do the end-to-end testing. I guess I'll start from the high level.
H
I wonder if there's any existing policy regarding, say, onboarding a new image, and, after the image is onboarded, how the life cycle for these feature-specific images is handled — who's in charge of, say, doing upgrades for each of the various components, for example the kernel version and the containerd version. And finally, there was also a question specifically about what kind of test coverage we are expecting for, say, more image-oriented features. I found quite a few examples, say for huge pages, the CPU manager, and the memory manager.
H
Mostly, say, they focus on covering the feature-specific tests, whereas there are also some features that run more like a huge smoke test — for example, cgroup v2 pretty much ran all the serial tests, ignoring the flaky ones and whatnot. So yeah, I'm just trying to get more context, understanding, and alignment on this.
D
We ended up using Fedora CoreOS for that, because it was available on Google Cloud, so yeah, that worked for us. We did not have to manually bring in a Fedora image; it's already maintained by Fedora CoreOS, they publish images to Google Cloud, and we ended up using those.
A
Okay, so — Lantao was here, so please keep me honest, since Lantao actually initiated this part: we do have the SIG Node e2e test image project, and we used to be open to basically all the OS distros, but you have to identify an owner for each OS distro.
A
In the end, a lot of owners just didn't provide support. Using CoreOS as an example: they didn't keep CoreOS updated, and people didn't make sure CoreOS reliably passed our minimum conformance test at the node level — not the cluster level, only the node level — so in the end we had to remove it. We discussed this last time, when we talked about cgroup v2, so I hope we can follow that.
A
So it looks like we lost the node image policy. Remember, we had a node image policy: what kind of image can get into that test project, and people have to propose it and maintain and update it. We had those policies defined, but I couldn't find them — this is where I said some of the docs were just accidentally deleted.
A
We could redefine a lot of those policies — make sure there's an owner, make sure it's clear how frequently we want to upgrade — and then we can push the image: give the owner permission to push the image out to that project.
E
Yeah, I think — if I remember correctly, we don't have a policy covering everything you mentioned. I think at that time, because the node e2e tests were presubmit, and, as you mentioned, we had the CoreOS image there, whenever the CoreOS image had an issue it would block everyone's PRs. That's why everyone would notice, and eventually we decided to remove it. The only thing I can remember that's related to this...
E
...was the tests defined by the node e2e conformance and the node e2e feature tests. I think that's the latest thing we had before I stopped working on SIG Node for a while. I don't remember whether there were other things defined after that.
A
So for the node e2e test scenario, just to add a comment there: I think it was last year that we discussed the requirements — those are still up to date — for what it takes to include an image. I remember we did define the image policy when we decided to remove CoreOS, because we needed to have the policy: the community decided to remove the policy, remove CoreOS, and also defined the requirements for how it could be re-added. So, but anyway, fine — we could redefine this.
H
Right, I think for the image I'll try with the current existing images first; there might be some kernel-level changes required that are foreseen, because this would be a requirement for the swap accounting. But right now I think I've got enough to move forward. So I guess, for the more future-looking questions like the image life cycle...
A
So far we already have Ubuntu and also Fedora CoreOS, right, in our tests. So for the swap feature, can we make sure of that? As I shared about the testing policy: this is a new feature, right, so it could be tagged as alpha or as a feature test. So make sure that it passes on the existing images. Then, if there's any other concern, can we add more images? Can we identify what kind of image, and also an owner who is responsible for that image? Is that okay?
H
And in terms of the test coverage itself, I guess right now it's mostly left to the discretion of each, say, feature owner — is that a correct understanding?
A
So far, at least, this is alpha. Eventually we may graduate it, and maybe it even becomes conformance, but at least for now it can only be alpha, or just a feature tag — which is appropriate, especially if you read our policy — or it is a feature. So we have to tag it that way, and if you need some other image, please make sure that image has an owner who can take care of updates and maintenance and is in charge of it.
C
So this might have been a little bit my fault. One of the things that the KEP specifically said was ensuring that we're testing on at least two different families of image. So initially a PR came up with Ubuntu, and I said I think we need something Red Hat-flavored as well, like Fedora CoreOS, or whatever images we happen to have in the test pipeline. We just need to make sure that we're testing on more than one — does that make sense?
A
Really
don't
need
I
I
don't
agree.
At
least
the
more
than
one
image
should
be
should
be
required,
but
is
not
not
osd,
so
specific
things,
it's
more
like
kernel
right.
So,
like
a
word,
sql
version,
two
swag
may
have
the
enable
some
different
implementation
different
features.
So
we
do
need
the
testing
on
something
like
fedora
costs
that
have
the
seekable
version
too
and
another
one
is
sick
version.
A
to make sure that swap works for both cgroup versions — but maybe not on OS-distro-specific things. There was a certain feature a long time back that did have an OS-distro dependency: SELinux. We discussed that when the node images were initially set up, many years ago, because we wanted to test SELinux but we didn't have any image that supported SELinux. So that one had an OS-distro dependency, but most features so far don't.
I
Yeah, this is a quick question, but it's been hard to get a definitive answer on it: we've had a major regression in the last patch releases, a crash involving kubelet streams.
E
Yeah, I think we — oh, I need to double-check, because I remember that initially we started with the streaming server, but later I remember they replaced it with something else they built themselves.
I
Sounds like it definitely affects dockershim, might affect — probably affects — CRI-O, and might affect containerd, but we don't know yet.
C
Okay, cool — then should I go? Yes, okay. I just put this on the agenda again; this is sort of a follow-up from last week's discussion with Clayton about the pod lifecycle rework. I just linked the PR there. It's quite a large PR, so I mostly wanted to put this on the agenda to make sure that it has visibility, because he's been working on basically refactoring the whole pod lifecycle to try to fix some of these race conditions. So it's pretty invasive.
C
So
I
wanted
to
make
sure
there
were
lots
of
eyes
on
this
and
as
well.
There
has
been
some
confusion
about
the
google
doc
with
the
discussion,
so
I
have
the
google
doc
there
clayton
accidentally
linked
an
internal
doc,
so
I
can't
make
that
one
shareable,
but
the
public
doc
is
the
one
in
the
minutes,
and
I
I
also
put
it
in
a
comment,
but
the
comment
is
now
being
hidden
because
there
are
so
many
comments
on
the
pr
so
yeah.
C
If,
if
anybody
has
any
questions
about
this,
I
don't
think
we
have
clayton,
but
I'd
be
happy
to
try
to
help
out
with
answers,
but
there
have
just
been
like
the
cubelet
by
design
is
kind
of
a
giant
race
condition,
and
it's
particularly
bad.
If
you
create
a
pod
and
then
rapidly
try
to
delete
it,
we
see
lots
and
lots
of
pods
get
stuck
in
pending
sort
of
no
matter
what
fixes
we
do.
C
Great, Lantao — I'll make sure that you're assigned on the PR, just so you'll be able to find it.
C
Oh, I guess I should mention, while I'm on the topic of backports and patch releases: the cherry-pick deadline is this Friday. I think we have a few very small fixes out, so if you want to backport something for the June patch release, make sure that you have your backports up by this Friday.
C
Make sure that it fixes something that is critical, is a relatively low-risk change, and has been landed for at least a week already. So if you're merging something this week, it will not be eligible for the June cherry-pick deadline. I will try to get a thread up in SIG Node for that.