From YouTube: Kubernetes SIG Node 20210824
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Okay, so I think if you want to talk to the team here, that's good too.
B
So I already discussed this once, maybe six months or a year ago, but now Clayton has a different opinion, I think mostly because of the new feature which was merged for optimizing deletion of pods on the API server.
B
Basically, logs should be available as long as the pod is present in the Kubernetes API server. So there are some changes he made where, for the logs to be present, he wants the pod sandbox to be present, which means the sandbox always has to be deleted in an asynchronous fashion. It can't be deterministic, in a synchronous fashion: the pod sandbox will always be stopped, but not deleted, until the pod is deleted from the API server.
B
I think this was to support dockershim, where the sandbox needs to be present for the logs to be accessible to the user. If not, his point is: when a user is trying to delete the pod, there is a short period of time where the pod is in the Terminating state but the user can't see the logs, because the sandbox is getting deleted in the background. But the issue which I have is this.
B
The kubelet is talking to the CRI, and the CRI runtime is talking to the CNI, so it's two levels: the kubelet is talking indirectly to the CNI.
B
What's happening is, as part of the CRI's StopPodSandbox call, if the container runtime is not able to stop the network — meaning it's unable to deallocate the IP, or clean up that IP properly — then the pod is actually removed from the Kubernetes API server, but the network plugin on the host still has that IP lingering, and it has to clean up asynchronously by some external means. The plugin has to go and see what is present in Kubernetes on this node and what is not, and clean up based on that, instead of relying on the CNI calls.
B
So two things are happening. One, the plugin now has to do this additional work. But even there, one issue I was facing — and there were other issues I created, I think a year or two back — is that currently only the pod name and namespace are being passed as labels to the container network plugin.
B
When I raised that, the decision was that the right way is: you get the basic details — the sandbox ID, the namespace, and the pod name — and if you need anything extra, meaning additional annotations or labels, you need to query the kube-apiserver.
B
Now, in this scenario, I can't do that, because the pod is already deleted from the API server but the network is still lingering. Another thing is the administrator's point of view.
B
What happens is the administrator doesn't see any pod running on the host, but unless there is some visible failure caused by the CNI on that pod, he wouldn't know that a leak is happening, unless there is external monitoring in place for the network. From the administrator's view there are no pods running on this host, yet the IPs are all unusable, because they were not cleaned up properly or something of that kind.
B
So that was the issue we were facing internally, and at that time the solution was: similar to how we wait for cgroups, storage, and the other active resources to be cleaned up before we remove the pod from the API server, we thought we could do the same for the network — meaning we can wait for the sandbox, since removal of the sandbox will guarantee that the pod's networks are removed.
B
The networks are guaranteed removed because right now there's no other way for the kubelet, through its interface to the CRI, to know that a network is actually cleaned up, except that the sandbox is removed. But now the two things are against each other: in one feature we want the sandbox to be removed before the pod is cleaned up, and in the other we want the sandbox to not be removed before the pod is cleaned up from the API server. So he was suggesting:
B
Should we discuss a different approach? Either we know that StopPodSandbox is actually taking care of the network cleanup, or there is some status through which the kubelet can know whether all the active resources are cleaned up, so that we can at least wait for the active resources to be cleaned up before we go ahead and delete the pod from the API server.
B
Networks are the only active resources which are handled indirectly and which the kubelet is not tracking.
B
If you look at the others — cgroups, storage, or even the containers — we make sure the pod is stopped, which means none of the containers are running, so there's nothing actively consumed on that host once the pod sandbox is stopped, except the network.
A
Actually, it's not limited to networking. This kind of thing has come up a couple of times. For example, in the past we had this issue with attached volumes, and then we addressed it: the node was holding a volume attached to that host, we didn't detach it, and so another remote node could not reclaim it. Even today we still have some problems there.
A
I don't think this problem contradicts the conscious decision we made in SIG Node. You may actively hold a node resource or a cluster resource: an actively running process holds node resources, which applies to CPU and memory, and there are also cluster resources, like networking — you hold one and you have to release it. It looks like here it is a network resource. We still have the same with, for example, local storage, even after a pod delete, right?
A
So the disk usage is not all being cleaned up immediately — I just want to say it's a kind of lazy cleanup. You cannot wait for all the disk space to be reclaimed before saying, okay, you can schedule new stuff; you just can't, even today, and we're basically already doing something like that. So I can see what the debate is about; my understanding is this.
A
Sorry, I still have to read the latest update from the comments Clayton made, but I think Clayton didn't disagree about releasing the network-related resources. He just wants to figure out how best we release them, while the previous proposal was waiting for the sandbox.
B
There's no disagreement on the issue. It's more that, since this new feature has come in, the two solutions can't go together, so I was just looking for a direction. Do we want to go the way you're saying, where network deletion can happen asynchronously and we take an approach similar to the storage example you're giving?
B
That is, you don't wait for the pod. Or should we go in a different direction, where the CRI gives some kind of status of the network, and as part of stop we just wait for the network to be stopped? I'm just trying to see which direction we want to go.
B
If we go in the storage direction, then I can close the PR, note that this is the direction we chose, and then look for a different way to solve the actual issue.
B
So I was just looking for a direction; if we can agree on one, I can close the issue and the PR and then follow that approach.
C
My understanding is that there seems to be a problem in the CRI definition today. For a container we have all kinds of states — running, stopped, deleting — so there are different stages, but for the sandbox there's only ready and not ready. And the containerd implementation actually reflects that.
C
Once you tear down the pause container, no matter whether you have cleaned up the network or not, as long as the container is torn down we will treat the sandbox as not ready. And the kubelet doesn't know whether it can delete the sandbox or not, whether somebody has fully torn it down, because it only sees "not ready": it could be partially cleaned up, it could be fully cleaned up, but the kubelet doesn't know that.
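The state asymmetry C is describing can be sketched like this; the enum values mirror the CRI's ContainerState and PodSandboxState, while the helper function is only an illustration of why "not ready" alone cannot tell partial from full teardown:

```go
package main

import "fmt"

// ContainerState mirrors the CRI container lifecycle, which has
// several distinct stages.
type ContainerState int

const (
	ContainerCreated ContainerState = iota
	ContainerRunning
	ContainerExited
	ContainerUnknown
)

// PodSandboxState mirrors the CRI sandbox lifecycle, which today is
// only a binary ready / not-ready flag.
type PodSandboxState int

const (
	SandboxReady PodSandboxState = iota
	SandboxNotReady
)

// NetworkCleanedUp illustrates the gap: from NOTREADY alone the kubelet
// cannot tell whether network teardown finished, so the answer is
// unknowable. A CRI change would need to add an explicit signal.
func NetworkCleanedUp(s PodSandboxState) (done bool, known bool) {
	if s == SandboxReady {
		return false, true // still running, certainly not cleaned up
	}
	return false, false // NOTREADY: could be partially or fully torn down
}

func main() {
	_, known := NetworkCleanedUp(SandboxNotReady)
	fmt.Println("teardown state knowable from NOTREADY:", known) // false
}
```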
C
So
that's
one
issue
so
to
address
that
we
do
need
to
change
this
there
a
little
bit
to
make
sure
that
candidate
or
other
canary
can
calculate
that
or
everything
is
cleaned
up
now
you
can
save
continue
deleting
the
part.
So
that's
one
thing,
and-
and
another
thing
I
want
to
point
out
is-
I
think
creator
mentioned
in
his
commenting
that
even
though
even
we
have
that
cubic
now
guarantee
that
when
you
stop
the
past
amounts,
the
part
is
always
there
in
the
case
server.
C
So
there
are
all
kinds
of
corner
cases
that
the
part
is
already
gone
from.
The
test
server,
but
kubernetes
still
need
to
clean
up
the
part,
some
sandbox
and
network,
and
I
think
just
because
of
that,
when
we
add
the
networking
logic
for
docker
ship,
we
introduced
the
checkpoint
because
we
need
to
clean
up
the
network
without
the
without
api
server
being
a
part
in
api
server.
C
It
feels
to
me
like
you're,
treating
it
as
some
actual
state
and
checkpoint
for
you,
but
maybe
you
shouldn't
do
that.
Maybe
you
should
just
consider
as
a
desired
state
and
if
it's
gone,
it
means
that
you
need
to
clean
it
up.
But
what
to
clean
up?
You
need
to
look
at
your
actual
state
instead
of
looking
at
api
server
to
get
that
information.
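The split C describes — the API server as desired state, with a node-local record of actual state — is what a checkpoint provides. A minimal sketch of the idea; the struct fields and JSON encoding here are illustrative assumptions, not the actual dockershim checkpoint format:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// SandboxCheckpoint is a hypothetical node-local record of the data a
// network plugin needs at teardown time, persisted when the sandbox is
// created so cleanup never has to consult the API server.
type SandboxCheckpoint struct {
	PodName      string `json:"podName"`
	PodNamespace string `json:"podNamespace"`
	SandboxID    string `json:"sandboxID"`
	// Annotations the plugin consumed at setup time, kept so teardown
	// is still possible after the pod object is gone.
	Annotations map[string]string `json:"annotations,omitempty"`
}

// Encode serializes the checkpoint; a real implementation would also
// write it atomically to a well-known directory on disk.
func (c SandboxCheckpoint) Encode() (string, error) {
	b, err := json.Marshal(c)
	return string(b), err
}

// Decode restores a checkpoint from its serialized form.
func Decode(data string) (SandboxCheckpoint, error) {
	var c SandboxCheckpoint
	err := json.Unmarshal([]byte(data), &c)
	return c, err
}

func main() {
	cp := SandboxCheckpoint{PodName: "web-0", PodNamespace: "default", SandboxID: "abc123"}
	data, _ := cp.Encode()
	restored, _ := Decode(data)
	fmt.Println(restored.PodNamespace, restored.PodName, restored.SandboxID)
}
```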
A
That's exactly it — when you brought up the checkpoint, I remembered it actually solved some of those similar race condition issues, because not all of the state goes to the API server; we preserve it at the node level. That's why it kind of confused me — I read the original thread before today's meeting — why we still need all that metadata and state in the API server, given we were initially doing this.
A
We
we
don't
want
to
do
the
in
general,
know
the
level
of
checkpoint
back
then,
because
the
power
of
the
aspect
keep
changing.
Containers
back
to
the
cia
expect
also
change,
so
we
don't
want
to
do
that.
But
do
we
allow
the
cli
and
a
couple
other
components
at
the
checkpoint
just
to
serve
that
purpose,
so.
B
I'm
not
sure
a
checkpoint.
I
can
look
into
that
like
maybe
I
can.
I
can
look
it.
I
I
look
into
that
like
what
the
checkpoint.
B
The
part
today,
mostly
some
labels,
so
we
currently
have
a
couple
of
annotations.
So
in
last
I
think
an
era
year
or
two
back.
B
I
think
not
just
a
couple
of
like
requests
from
different,
I
think,
from
multiple
persons
and
teams,
so
they
are
looking
for
if
we
can
extend
cni
spec
to
or
like
make
more
labels
or
like
or
more
annotations
are
like,
or
maybe
the
whole
parts
pick
itself
be
sent
from
cubelet
to
the
cni,
because
at
that
time
it
was
mentioned
that,
like
you,
can
put
that
metadata
in
the
analysis
and
then
you
can
ask
from
the
ap
server.
B
You
know
the
pod
name
and
namespace,
so
you
call
make
a
call
to
the
api
server
and
get
the
details
of
the
pod
additional
details
like
okay,
if
you
have
to
like,
do
some
subnet
like
some
features,
additional
features
right.
Okay,
if
this
particular
additional
feature
are
not
like,
if
you
have
to
put
those
in
a
kind
of
an
example,
but
there
are
other
other
things
which
we
use
for,
which
you
put
in
that
annotations
depending
upon
few
annotations
in
the
pod.
The
way
the
network
is
getting
credit
differs
on
the
host.
B
I
can
point
to
that
like.
Let
me
try
to
get
that
issue,
but
it
was
mostly.
There
was
a
need
for
different
cna
plugins,
where
they
need
bit
more
metadata
from
api
server
to
create
the
network
into
the
network,
and
there
was
a
proposal
at
the
time
to
see
if
you
can
expand
a
spec
to
pass
that
more
from
upstream,
instead
of
just
the
namespace
and
board
name
and
id.
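For reference, the "basic details" a plugin receives today arrive as semicolon-separated key/value pairs in the CNI_ARGS environment variable, with keys like K8S_POD_NAME and K8S_POD_NAMESPACE; a minimal parser shows how little pod metadata that carries:

```go
package main

import (
	"fmt"
	"strings"
)

// ParseCNIArgs splits the CNI_ARGS environment variable
// ("KEY=VAL;KEY=VAL;...") into a map. Kubernetes runtimes pass pod
// identity through keys such as K8S_POD_NAME, K8S_POD_NAMESPACE and
// K8S_POD_INFRA_CONTAINER_ID; richer metadata like labels or
// annotations is not included, which is the limitation discussed above.
func ParseCNIArgs(args string) map[string]string {
	out := map[string]string{}
	for _, pair := range strings.Split(args, ";") {
		if k, v, ok := strings.Cut(pair, "="); ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	env := "IgnoreUnknown=1;K8S_POD_NAMESPACE=default;K8S_POD_NAME=web-0;K8S_POD_INFRA_CONTAINER_ID=abc123"
	args := ParseCNIArgs(env)
	fmt.Println(args["K8S_POD_NAMESPACE"], args["K8S_POD_NAME"])
}
```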
A
I think the pod name and the container ID — that information is exactly what the checkpoint had, as I remember it. But the additional annotations you're asking for, maybe those are not there. And obviously the container state and the runtime state we deliberately did not put there, because those might cause more problems, like having multiple writers.
A
So
so
so
that's,
but
I
think
they
are
have
like
a
lot
of
read-only
state
just
serve
the
purpose
of
what
you
are
asking
for
here,
but
missing.
Maybe
a
question
you
want
so
so
just
want
to
let
you
know
and
but
yeah
I
agree
with
you
and
how
we
also
don't
have
the
state
to
indicate
like
the
current
part,
is
terminated
or
stopped.
A
For the checkpoint, you can ask freehan — is that the handle? I mean his GitHub account — because he introduced the CNI checkpoint.
A
Defined,
what
kind
of
what
kind
of
things
have
been
to
find
and
who
is
the
should
be
writer
and
what
kind
of
things
can
be
reader
and
consume
that
that's
kind
of
a
principle
we
defined?
Maybe
this
time
we
can
document
those
principles
and
to
make
sure,
because
if
we
have
dependency
on
this
one,
yeah
yeah.
A
So
do
you
have
the
consent,
mono
and
also
mac
those
things.
G
Yeah, I can talk about it a bit. This is one of my first issues that I'm looking at, so I'll need some help. Essentially, I have been able to send PRs and people have been able to review them. However, when I was going through the code, a lot of the code belongs to different SIGs, and I was wondering how we should go about that. That's one question, but other than that, yeah.
G
That was one question I had specifically, and then I also wanted to talk about some of the comments that I made — I'll send the link.
G
This one: is it okay to ignore test files?
G
The other question I had was: since this issue has been open for about two years, does it make sense for us to put it on the next release? I can make the code changes and address reviews as soon as possible and make sure it gets to a close, since the complexity is not a lot.
A
I think you have two questions. The first is that the change touches different files belonging to different SIGs, right? Normally I think there are two ways to handle this problem — other people, please chime in. One way is you create separate, small PRs, target each SIG group, and find the owners for them.
A
The other is you keep it as one big PR, but deliberately assign reviewers and approvers from the different SIGs. That may take longer, if anyone isn't quick enough to respond, so I would suggest the smaller PRs, each going to its own SIG. On the second issue you just mentioned: this is a two-year-old bug, but it is high priority and needs help from the community.
A
So I don't think you need a KEP for this one, but we can mark it as part of the 1.23 release enhancements.
I
Not specific to this, but hello, my name is Ray, and I'm the 1.23 release lead. I just want to share in the chat the enhancements tracking sheet for 1.23, for this SIG to opt in any enhancements for 1.23.
A
Yes, we have one document that is kept up to date, and over the last two weeks we reviewed all the KEPs for 1.23, trying to identify the risk, the size, and the current status, and also to find reviewers. Thanks for the link — once that settles down, we are going to update your spreadsheet.
J
As everybody in SIG Node knows, we have been struggling with the sidecar KEP for a long time, and there have been lots of proposals on sidecar containers. This KEP is kind of similar to those, but we do not want to touch the startup and shutdown ordering the other sidecar KEPs kept proposing. In this KEP we are proposing a simple idea: just initiate pod termination based on the status of your core application containers.
J
What happens if we try to use Kubernetes Jobs with sidecar containers — for example logging containers or service mesh containers — is that the sidecars will always be running, the main job container will complete after some time, and the job status will never be Completed.
J
So to solve this particular Job issue, we are proposing to first add an annotation on the pod. That annotation will specify the names of the main containers, the core application containers, and we will use the restart policy and the exit codes of those containers to determine whether we should terminate the pod. We'll do this in a reconcile loop. The KEP has the exact exit-code details.
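As a rough sketch of the reconcile-loop check being proposed — the decision rule below is a simplified illustration, and the exact annotation key and exit-code semantics are defined in the KEP, not here:

```go
package main

import "fmt"

// ContainerStatus is the subset of pod status this sketch needs.
type ContainerStatus struct {
	Name     string
	Exited   bool
	ExitCode int
}

// ShouldTerminatePod returns true when every container named in the
// (hypothetical) "main containers" annotation has exited, honouring a
// simple restart-policy rule: with RestartPolicy=OnFailure a non-zero
// exit means the container will be restarted, so the pod stays up.
func ShouldTerminatePod(mainContainers []string, statuses []ContainerStatus, restartPolicy string) bool {
	byName := map[string]ContainerStatus{}
	for _, s := range statuses {
		byName[s.Name] = s
	}
	for _, name := range mainContainers {
		s, ok := byName[name]
		if !ok || !s.Exited {
			return false // a main container is missing or still running
		}
		if restartPolicy == "OnFailure" && s.ExitCode != 0 {
			return false // container will be restarted, not done yet
		}
	}
	return true
}

func main() {
	statuses := []ContainerStatus{
		{Name: "job", Exited: true, ExitCode: 0},
		{Name: "log-shipper", Exited: false}, // sidecar, still running
	}
	fmt.Println(ShouldTerminatePod([]string{"job"}, statuses, "Never")) // true
}
```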
A
Sorry — last week when we discussed this, Derek committed to take a look, but unfortunately Derek is not here today. So let me ask: has anyone reviewed that proposal?
A
So
last
time
we
discussed
because
none
of
us
reviewed
and
take
a
look,
so
we
haven't
committed
this
one
for
the
1.23.
If
you
look
at
our
the
link
below
in
the
agenda-
and
we
have
long
proposal
long
list
of
the
proposals,
so
so
so,
and
so
this
one
is
the
pending
state.
So
I
think
that
if
I
recall
correctly
directly
said
he
is
going
to
commit,
he
is
going
to
take
a
look
and
come
back
together.
A
So
unfortunately
he
is
busy
and
he's
not
here
so
so
maybe
we
can
talk
offline
and
we
need
to
identify
at
this
moment.
I
have
to
look
at
that
here,
so
we
need
a.
If
we
want
to
do.
There's
nothing
is
blocked.
This
continue
discussing
and
the
review.
But
if
we
want
this,
let's
get
into
the
1.23,
we
need
to
identify
reviewer
and
approver
for
this
one.
So
far,
I
have
to
say
that
the
volunteer
committee.
J
Yeah, so getting a reviewer and approver is one thing. Another thing I am looking for is: as a community, do we want to implement this, or do we want to go with the complicated proposals that have been made in the past? I am looking for a direction — is this the right direction to go?
A
Can I ask, what's the complicated proposal? If you're talking about the complete one, you are referring to the full sidecar container KEP, right? You've narrowed down the scope here, but the complete proposal goes through the whole KEP review process and all those things. Sorry, for today I can only say we would have to go through that process, so it depends.
A
Yes, we do think the sidecar container KEP is overcomplicated. The problems listed there are real problems, but a lot of those problems also have alternative solutions, and for the solutions being proposed there we see other problems — even bigger risks.
A
So
this
is
why
communities
signal
the
community,
especially
especially
as
the
maintainer
and
for
the
for
the
overall
healthiness
and
all
those
kind
of
things
we
are
kind
of
hesitated,
take
touch
a
big
proposal,
statical
container
at
that
time.
Back
then,
so
we
always
try
to
attack
the
problem
piece
by
piece,
so
we
can
continue
if
we
can
narrow
down
the
scope
on
this
one.
But
again
I
have
to
say
that
I
have
to
look
at
the
kind
of
proposal.
A
Yes,
so
I
basically
say
I
like
to
to
see
the
problem
more
concrete
problem.
We
try
to
attack
it
and
the
more
narrow
down
api
surface,
all
those
kind
of
things
I
just
want
to
share
here
and
that's
the
high
level
menu.
Please
go
ahead.
A
We usually also have some other things, like the load, that we need to look at in the proposal. Yeah, sure, no problem — thank you for that. Manu, have any of you looked at the new proposal yet? Do you want to give some early feedback or high-level comments? Yeah, please.
K
Yeah, hello. Long story short: we did a pass over the issues we were aware of, and we believe — I believe, actually — that this KEP, which is about the pod resources API, could be promoted to beta, provided those fixes are addressed and merged.
K
So,
yes,
it's
for
your
information
about
those
fixes
which
could
use
some
reviewer
in
some
cases.
Approval
because
a
couple
of
them
got
looks
good
to
me.
So
some
approval
and
some
review
could
be
helpful
and
if
we
are,
if
we
are
still
the
capacity,
if
you
are
still
on
time,
I
will
just
ask
you
to
add
this
promotion
to
better.
We
already
have
end
to
end
test,
so
I
think
it
should
not
be
too
much
work,
so
I'm
I'm
asking
if
we
are.
A
Thanks, yeah. Okay, so if not, is there any objection on this one?
L
Yeah, this is Markus Lehtonen from Intel. I was just looking at these fixes, and I was also wondering about the general process for doing these kinds of API changes, because I saw that one of these fixes changes the API, and the KEP is not in sync, I think, with the API that is really implemented in the source code. So I was wondering.
A
You just need to update your KEP, request an API reviewer to review it, update and add the tests, and it goes through the normal process plus the API review, because that is required for some of the API changes. So it may take longer, but I think we are okay, because this release has a longer release cycle. If we plan better and collaborate better, I think it should be okay.
A
Another thing I want to call out: the original author is not here, but he did reach out to me about promoting ephemeral containers, the debugging feature, to beta. This one doesn't have an API change, so it's mostly about adding more tests — they already have some tests there, but they want to add more tests and then promote it to beta.
A
No, not today. It looks like we have finished all the topics, and we have some action items. One came from sig release: they asked us to synchronize all the 1.23 opt-in features based on our document, and we are going to work on that one. Another thing is we need to sync up with Clayton about the removal issue — the pod being removed from the API server before the cleanup is done.
A
That's the long, long overdue issue about releasing the CNI resources held on the node. We have the new proposal, so we need to sync up with Clayton, and also sync up with the original author about the checkpoint — the CNI checkpoint on the node — and see whether it is feasible to address that problem. Another action item, I think, is basically to come back and talk about the sidecar-style container proposal and its review, along with the other items that were mentioned.
N
So one of the KEPs that we've listed before for Windows was being able to identify Windows pods at API admission time. That KEP has been discussed a lot in SIG Windows and with some of the SIG Auth folks, and that's mainly so that Pod Security admission, the replacement for Pod Security Policy, can act on pods and enforce OS-specific security.
N
Like
constraints,
the
interesting
of
the
bits
that
might
be
relevant
to
this
audience
is
we
were
working
with
derrick
to
figure
out
the
best,
like
kind
of
a
canonical
way
of
identifying
the
windows,
os
windows,
containers
or
pods,
and
I
think,
we've
kind
of
reached
a
conclusion
or
a
consensus
across
members
of
sig
windows,
at
least
with
derrick,
from
cygnode,
jordan,
leggett
and
david
eads
and
a
number
of
other
folks
to
finally
go
ahead
and
add
an
os
field
to
the
pod
spec.
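A toy illustration of what an explicit OS field on the pod spec enables at admission time — the policy check names below are placeholders, not the real Pod Security admission profiles:

```go
package main

import "fmt"

// PolicyChecksFor illustrates why admission plugins want an explicit OS
// field on the pod spec: Linux-only security fields (seccomp, SELinux,
// runAsNonRoot, ...) should simply not be enforced against Windows
// pods. The check names are illustrative placeholders.
func PolicyChecksFor(os string) []string {
	switch os {
	case "windows":
		return []string{"hostProcess", "windowsOptions.runAsUserName"}
	case "linux":
		return []string{"seccompProfile", "seLinuxOptions", "runAsNonRoot"}
	default:
		// Unknown or unset OS: be conservative and apply everything.
		return append(PolicyChecksFor("linux"), PolicyChecksFor("windows")...)
	}
}

func main() {
	fmt.Println(PolicyChecksFor("windows"))
}
```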
N
I can link to the KEP where that's being discussed. We had another meeting that Ravi from Red Hat set up yesterday; unfortunately that wasn't recorded, but most of the discussions from there are being captured in the KEP. So Derek seems on board — Derek was actually pushing for this — but it definitely needs a lot of thought from our side.
A
I think if you can convince those folks and Derek to add the OS information there — because that's where the strongest pushback came from initially. I cannot represent the whole community, but in the past we did try to add OS.
A
OS and distro information, that is — and they gave the strongest pushback, with concerns.
N
They're all on board now — at least Jordan and David Eads are, and they were some of the main authors around Pod Security admission and figuring out how to identify Windows and Linux.
A
Okay,
so
then
should
it
be?
Okay,
I
do
say
the
people
abuse
such
information
in
the
past
and
because,
if
I
remember
when
we
first
have
the
cube,
actually
I
did.
I
did
os
this
choice
information
there,
but
I
do
see
the
people
abuse
that
information
and
but
because
today
we
do
support
the
windows
and
the
linux
posts.
So
I
think
that's
not
abusive
use
use
right
so
because
you
used
to
be
like
the
people
even
ask
for
more,
like
always
a
kernel
version
there
and
they
want.
A
I,
I
don't
think
about
that's
kinda
like
the
abusive
use
and
to
me,
so
I
personally
don't
have
consent
on
that
one.
So
I
think
the
if,
if
they
agree,
we
are
basically
at
least
the
in
the
past.
They
are
the
strongest
yeah
against
that
one.
N
Okay, yeah, I'll drop the KEP where all the discussions are being tracked into the meeting notes here. There was a lot of discussion around whether this should be defaulted, and the answer is no, and also some other conversations.
N
For example: do we add any extra information, like the specific kernel versions you mentioned? The consensus was no — those would still need to be specified by labels, either built-in labels or labels people add themselves. The main reason for adding this field at this point in time is so that Pod Security admission and other admission controllers can decide whether they want to enforce Linux-specific or Windows-specific policies.
A
Sure. Can you do me a favor and add a summary of whatever was discussed into our meeting notes here? Because not all the people can attend this meeting, and that way people can review it. Yeah, thanks.
F
Okay, so sorry about that — I was trying to speak during the ephemeral container discussion, but for some reason the mic wasn't working. Anyway, just a quick heads-up: there might be a small update to the KEP to add another piece of information.
F
Currently the KEP states that the way to identify that a pod has created an ephemeral container is through the addition of an annotation, but that hasn't been implemented yet, and we were hoping to implement it as part of beta. So, just as a small heads-up, there might be a small addition detailing the final form of that in the KEP during the 1.23 release. Thank you.
A
Thanks for the heads-up, and thank you for playing with ephemeral containers and testing the new feature. It looks like people are starting to play with this one — for a lot of features, people basically wait for beta and then start to use and enhance them. Thanks for that. Earlier I saw you wanted to say something — sorry, no?
J
I've been in the meeting; I just accidentally turned on my video for a second, but I don't have anything. Thanks.
H
A quick call-out: a few weeks back I presented a few slides around this topic called runtime-assisted mounting, around Kata, and there were some open questions. I have submitted a work-in-progress KEP PR, and from SIG Storage, Michelle and Saad are taking a look at it; they had some initial comments, and it's in GitHub PR form. So just a heads-up that it's there — it's number 2857 — and I can also write it in the notes.