From YouTube: Kubernetes SIG Node 20200204
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
B
Sure, so we've had a chat about this issue at a couple of SIG Node meetings and haven't been able to get to a resolution. To get everybody up to speed from the beginning: we're talking about the container lifecycle hooks. These are hooks that execute when containers start and stop, and currently these hooks are being executed synchronously, or at least the postStart hooks are, in the context of a pod in kubelet.
You'll execute one container's hook and wait until it finishes synchronously before you move on to the next container's hook, and so on, and also before sending any pod status updates for that pod. Now, crucially, that pod status update contains the pod IP. Calico, when it's not the CNI plugin, when it's running in policy-only mode, looks for that pod IP so that it understands how to set up network policy for the container. So the problem is that the pod IP is not being updated; it's blocked behind these synchronous postStart hooks.
That means that the postStart hook itself is unable to use the network; it's going to be blocked from using the network, because Calico can't find out about the IP and can't set up network policy. So it's just blocking that container from doing any networking. That's the issue, and my ask is for that pod status with the pod IP to not be blocked behind these hooks, either by making the hooks asynchronous or by sending the pod status update before starting into this synchronous sequence.
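(Editor's note: a minimal Go sketch of the orderings under discussion, purely illustrative. The function names are hypothetical stand-ins, not kubelet's actual internals; it contrasts the current synchronous flow with the two proposed variants.)

```go
package main

import "fmt"

// Hypothetical stand-ins for kubelet steps; names are illustrative only.
func startContainer(name string)    { fmt.Println("started container", name) }
func runPostStartHook(name string)  { fmt.Println("postStart hook done for", name) }
func sendPodStatusUpdate(ip string) { fmt.Println("pod status update sent, podIP =", ip) }

func main() {
	containers := []string{"a", "b"}
	podIP := "10.0.0.5"

	// Today (as described): each postStart hook runs synchronously, and the
	// pod status update carrying the pod IP waits for all of them.
	for _, c := range containers {
		startContainer(c)
		runPostStartHook(c) // blocks, even though the hook itself may need the network
	}
	sendPodStatusUpdate(podIP) // a policy engine like Calico only learns the IP here

	// Option 1: publish the status (and pod IP) before running the hooks.
	sendPodStatusUpdate(podIP)
	for _, c := range containers {
		startContainer(c)
		runPostStartHook(c)
	}

	// Option 2: run the hooks asynchronously so they don't block the update.
	for _, c := range containers {
		startContainer(c)
		go runPostStartHook(c) // error handling and synchronization elided
	}
	sendPodStatusUpdate(podIP)
}
```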
A
B
I think we just ran out of time on the discussion. There was some fretting about whether adding another pod status update to the startup sequence of a pod is going to be a problem, and also about whether moving these things to be asynchronous is going to be a problem, so neither solution was really accepted as being the right one.
A
B
Communicating across the CNI adds a lot of complexity for us: we'd have to be a CNI plugin, or talk to kubelet locally to find out this information, and then broadcast it into the rest of the cluster, because it's not just the node that is hosting the pod that needs the pod IP. The rest of the cluster needs to know it as well. I remember...
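(Editor's note: for context, a minimal client-go sketch of the consumption pattern being described, a policy-only component learning pod IPs by watching pod status in the API server. This is an illustration, not Calico's actual code.)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Standard kubeconfig-based client setup.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Watch all pods: the pod IP only appears in status once kubelet reports
	// it, which is the update the discussion says is blocked behind hooks.
	w, err := clientset.CoreV1().Pods(metav1.NamespaceAll).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for event := range w.ResultChan() {
		pod, ok := event.Object.(*corev1.Pod)
		if !ok || pod.Status.PodIP == "" {
			continue // no IP reported yet
		}
		// A policy engine would program dataplane rules for this IP here.
		fmt.Printf("pod %s/%s has IP %s\n", pod.Namespace, pod.Name, pod.Status.PodIP)
	}
}
```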
C
B
But I would really prefer not to go down those routes, because that adds a new layer of complexity to Calico. We're not learning this information via any of those routes today, we're not a CNI plugin there, and we wouldn't need to if it wasn't for this lifecycle hook thing.
A
I wonder if anybody else is having a similar problem. I guess we've never quite made any guarantees on the state of the pod as manifested in the API server when hooks were executed. I'm wondering if you're aware of any other users that have had this problem, because basically, if we do proceed on this, we'd have to basically...
A
D
B
Yeah, the postStart hook itself is, in the general case, not talking to the API server; the postStart hook itself doesn't necessarily care what state the API server is in. It just wants to talk on the network at all, and right now it can't, because Calico doesn't know what the pod's IP address is and so can't safely let it out into the network.
B
C
So the postStart hook is waiting for the pod status write. In our current implementation, we don't update the pod status after the postStart hook; we have the state transitions, it is created and it is running, but we won't update the pod status in between. So this would just be an extra pod status update after we've created the sandbox, since we already have the pod IP.
That's the only concern about it: it's the most simplified approach and also the most convenient way, but there's a scalability concern there. That's one concern. Then, for changing the postStart hook so that the pod status is asynchronous, there's the other, bigger concern, because of the obvious use cases where the hook is blocking; we worry about what the container status should be in the meantime and whether we could do that.
So one of the things we talked about, I think you already mentioned it, is that Calico cannot get the information. And another thing is whether we can query the running container state, because, if I remember correctly, since Calico is not the CNI plugin, you may try to query the pod IP, which may be on a remote node, not on the same node but on another node, and then there's the resource consumption. You may end up...
B
If you imagine a pod coming up and it's in its postStart hook, but it needs to talk to another pod, and there's a policy that says, you know, allow this connection from this label: that remote node needs to understand that there's this IP that's associated with this label, so it can allow the connection. So if we had to query locally, we would still then need to get that information across the cluster by writing it into the data store.
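(Editor's note: to make the scenario concrete, a sketch of the kind of label-based policy meant here, expressed with the Kubernetes networking/v1 Go types; the label values are made up for illustration.)

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// "Allow this connection from this label": any node enforcing this policy
	// must know which pod IPs currently carry role=client, which is exactly
	// the IP-to-label mapping that has to propagate cluster-wide.
	policy := networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-from-client"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"role": "server"},
			},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"role": "client"},
					},
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", policy)
}
```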
E
C
Restarts are part of the pod lifecycle too. There are a couple of error cases where we may restart the pod sandbox, so then basically you may end up with a new pod IP. Kubelet starts all the containers, including the user application containers, and if one of them fails, kubelet has the responsibility, based on the restart policy, to restart it.
Sorry, it's basically the same point: when you first create the pod, there's no guarantee that the pod IP will stay that one. You may end up trying to start the containers, failing a certain operation in the middle, and restarting. So this is the second concern, but the bigger concern is the extra pod status write. So, what I have, this could be an idea, I mean...
B
C
B
C
E
Yeah, I think it would be interesting to see what updates we already do and when, and it could be possible that we can just move an update that we're already doing to after this point. I don't know, I'd have to look, but that could be an idea where we're not adding a net-new one, we're just delaying one for a little while.
C
That's right: now we're basically doing that after we try one more round; no matter whether it is a fail or a succeed, we update it. So I'm not sure how we are going to delay one, because right now what the proposal proposed is just an extra one, when we create the sandbox. I understand the reasons, and yeah.
C
B
C
I think for the scalability, they capture a number and then they have like an ideal world; I don't know, they have some magic. So we can run some tests, but for that one we'd need to talk to the scalability team. I haven't followed that for a while, but they have some special tests where you don't need to really spin up that many actual nodes, there's a simulation, and then they have the estimated thresholds.
C
A
I'm just trying to think through it. I'd be worried about the latency of pod creation time, concurrent pod creation, basically any use cases like folks building functions platforms on top of Kube, to see if this makes it any better or worse. And then, I don't know how many nodes it would take; I think that's where the scalability tests could be useful, like if you're spinning up...
We do. There are people, like Seth, who was looking at this; he's an example of somebody who went through and did a lot of work to try to bring down pod startup latency, because people are building platforms on top of Kube, and so any of these additional calls just makes it a little slower for that function to get run.
A
B
A
B
It's not like I'm saying you create the sandbox, do a pod status update, and only once that pod status update is completed do you start creating the containers. I can see how that could add significant latency, if you had to go and do an update to the API server first, and that is not a requirement.
B
C
We try to update the pod status based on significant changes to the pod status, so we avoid the small updates in between, for the API server or for the controllers, for the pod management. Basically, we wait for the postStart hook to finish because that's the clear indicator of the container's real state. I understand what you proposed earlier: running it asynchronously, before this is finished.
C
B
But I mean the postStart hook is run asynchronously relative to the container. What kubelet does now is start the container and then start the postStart hook, and it waits for the postStart hook to complete. But during that entire time, say the postStart hook hangs and takes five minutes to run, the container is already running, and it's been running for five minutes.
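(Editor's note: a small sketch of the timing just described, with hypothetical helper names: the container workload is already running while kubelet blocks on the hook, and only after the hook returns can the status update go out.)

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical stand-ins; kubelet's real flow lives in its runtime manager.
func runContainerWorkload() {
	for {
		fmt.Println("container: serving")
		time.Sleep(time.Second)
	}
}

func runPostStartHook() {
	time.Sleep(5 * time.Second) // imagine a hook that hangs for a while
}

func main() {
	go runContainerWorkload() // the container process is already running...
	runPostStartHook()        // ...while kubelet waits here, holding back the status update
	fmt.Println("kubelet: hook finished; pod status (with IP) can now be reported")
}
```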
A
Otherwise you have a number of heuristics here, trying to figure out when to send it. It's possible; another way we could look at that, to your point, is that it just goes into a queue in kubelet, though we do take a lock every time we call this thing, and so we'd need to step through it to see the real cost. Given the complexity of the status manager, it's probably not our best use of time to do that while everyone's on this call, versus just doing a quick measurement or a walkthrough of the present status manager.
A
B
C
I think this is just to ask about the huge page PR. I think I reviewed one simple one, and Derek also looked through that one, around the status update. I think it looks good, but I want you to take a last look, because you implemented the huge page support, just to make sure, yeah.
A
I had some comments on that one also, in the SIG Node Slack, around getting clarification from SIG Release on whether this, like the container bounding of huge pages, needed to go through an enhancement issue, or KEP, or not. I had not thought it did, but I guess we'll try to get clarity on that. But enforcement of node allocatable I don't think is an issue, I think.
C
F
The only part that I could not move out is the update part, and that is because, even though we are going to make resources mutable in this case, that information is not available at the time the drop code is called. It's called both in create and in update; in one of the cases it is available, and in the other case it's not applicable, so some of it I have to leave over there. The other thing that I was working on was the update: Jordan had some comments about whether defaults, setting defaults...
We want to be able to support an older version of kubectl, for example, and those versions of kubectl would drop these fields because they don't know about them. So I could not use the set-defaults path in this case for the policy or the resources allocated; this has to be done from the plugin, and that seems to be working.
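(Editor's note: a hedged sketch of the version-skew issue being described: an older client that doesn't know a field will round-trip the object without it, so defaulting done late can restore the value from the stored object. The type and field names here are invented for illustration, not the actual PR's code.)

```go
package main

import "fmt"

// PodSpec stands in for the real object; ResizePolicy is a hypothetical new field.
type PodSpec struct {
	ResizePolicy *string `json:"resizePolicy,omitempty"`
}

// An old kubectl unmarshals into a struct that lacks the field and then writes
// the object back, so the field comes back empty on update.
func roundTripThroughOldClient(in PodSpec) PodSpec {
	return PodSpec{} // unknown field silently dropped
}

// Defaulting done late (e.g. in an admission plugin) can restore the value
// from the existing stored object instead of blindly re-defaulting every write.
func defaultOnUpdate(newSpec, oldSpec PodSpec) PodSpec {
	if newSpec.ResizePolicy == nil && oldSpec.ResizePolicy != nil {
		newSpec.ResizePolicy = oldSpec.ResizePolicy // preserve what the old client dropped
	}
	return newSpec
}

func main() {
	policy := "RestartNotRequired"
	stored := PodSpec{ResizePolicy: &policy}

	updated := roundTripThroughOldClient(stored)
	updated = defaultOnUpdate(updated, stored)
	fmt.Println(*updated.ResizePolicy) // the value survives the old-client round trip
}
```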
So it's in progress, and I think I'll have an updated review out by tomorrow, maybe. Hopefully I'm going to complete the API portion of it this week. We did lose one resource because of the travel ban; I think two of my colleagues have gone to China for the New Year, the Lunar New Year, and are stuck there, so I'm running a little short. I'm still trying to do this for 1.18, but I just wanted you to know that I'm running a little short on resources here.
F
H
So I would like to get that reviewed, the cgroup v2 KEP that's open, so that we can keep opening some public PRs for the cgroup v2 support. I added a link in the agenda to a demo where I showed that, with everything that's already open as pull requests, it's possible to spin up on a cgroup v2 system. I think that I took care of all the comments.
H
G
H
G
I
A
I guess the KEP is generally okay for proceeding; the only question I had was the cgroup namespace thing. Folks will still want to be able to run cAdvisor inside a container and monitor the host, and so, with what you were saying, the suggestion was a privileged container with the host cgroup namespace, something like that. If that works, that sounds good to me. I don't know how others feel, but I wanted to make sure that people can still do that.
C
I don't know about other people. I didn't have much time, but when it was first sent out I did a quick read through, and we definitely require the privileged support; almost every single node has some certain level of privileged usage. But I think about how we should pursue it, because we cannot just... I think that, at least for the initial version, we always have to have the existing one coexist; cgroup v1 and v2 would have to coexist.
C
A
But otherwise, before that, just thinking about timing for this release: obviously we're past the feature freeze date for this release of Kube, but I think if we can get consensus on the KEP so that we can proceed with merging PRs in 1.19, I think that's the desired outcome. And so I just want to make sure that you understood that the PRs probably won't be able to land until 1.19 opens.
H
C
That's right, I agree with you, so each one of them needs a separate discussion, and each one we have to kind of open, hopefully tied to the resource management. I understand the difference between cgroup v1 and cgroup v2; that's why I want to move forward, and during that time we add more tests. We can do the A/B comparison based on today's policies, today's rules, and identify the differences in enforcement on the node. Okay, we need some resource manager validation based on the cgroup version, yeah.
A
I guess, to unblock this discussion, maybe what I would propose is I'd like to go merge the KEP as-is now, but update the status to implementable, with the proposal being that it goes in alpha in 1.19. Yes, so that we make sure that you're unblocked in the next release, and then we can get the paperwork in place with SIG Release so that there are no challenges there. But I think you had everything.
A
H
G
H
C
One other thing: that has to start from the node e2e testing. We actually have more resource management tests in node e2e now; I know recently, after we added the resource managers, a lot of resource management tests went in. We need to note that at least some tests should be brought up to date, because sometimes they're not part of the presubmit tests, but I think a lot of tests can be reused to do this, the A/B comparison. That way it's easy for us to do that.