From YouTube: Kubernetes SIG Node 20200421
A: Okay, well, welcome everyone to the April 21st SIG Node meeting. Just a reminder that this is recorded and posted on YouTube for posterity, and we try to treat everyone fairly and with kindness. For today's agenda, I wanted to kick things off with a general health check. I know there's a lot going on in the world, both personally in all of our lives as we deal with chaotic home situations in light of COVID, and also in all of our organizations as folks respond.
A: We had an issue last week that raised some concern, particularly with SIG Release, where some of our test jobs that were release-blocking were broken for approximately ten days. I think it highlighted that we were maybe too dependent on a few select heroes in the SIG and did not do enough to spread the load across the group, so I put some leading questions in the SIG Node agenda about what we could have done differently, or should do differently moving forward, just to address the issue. Aaron Crickenberger, Dims, myself, and Victor were notified that the patch releases of Kubernetes were being blocked by a job that had been failing for about ten days. It appeared that the job was failing due to network access being blocked in the GCP project that was actually running that job. It took a bit of time for Victor, myself, Dims, and Aaron to work out exactly which alternate job might have been giving sufficient coverage while running in a different GCP project; in the end we moved the release-blocking job to be the same as the presubmit job, and Aaron recorded the list of actions that were taken.

A: I think one of the things we need to do is capture owners for completing the next steps that were identified, but at a macro level I want to use this moment to pause and ask: are we actually able to sustain the number of test suites we are running, and do we want to think about pruning them? And then, as a group, how do folks feel about the kinds of carrots and sticks we can use to improve the sustainability of our actual infrastructure? Not just the normal cadence of getting code merged and reviewed, which I think is secondary to ensuring that the infrastructure keeps functioning. So I want to pause here and open up the floor for other folks' perspectives, and then maybe we can talk through what we can do to scale mentorship, maybe a unique role in the SIG, or a way to help grow stature, just to help maintain healthy operations. So, I don't know, Dawn?
B: This is Dawn; I want to share something. Can people hear me, or am I just hearing my own voice? Okay, nice. So for this particular incident, I think there was at least a communication issue when the problem first occurred. I think I had shared this with Derek, but maybe I had never publicly shared it with the SIG.
B: We had a reorg at Google, and we lost some maintainer time on the Google side, but we are in the process of building up new maintainers and new contributors for the community. So when this problem first occurred, we found it was actually caused by two fundamental problems. One was a stale, older image, which had a security issue that caused the failure. The other one was due to something being deprecated; we needed a change to the configuration to fix that problem. Our engineer, who is the maintainer, quickly identified the problems, and we thought this was a good opportunity for the new contributors, the new people, to grow their experience.
B: So we assigned those issues to the new people, and there was miscommunication. They did quickly pick up those problems, and then Aaron actually talked to me, and I talked to others; many people talked to each other. We could say: we did triage, we did detect the problem, and we filed those issues. But that communication only happened when they approached us and asked, right before the date of the release.
B
So
we
didn't
really
know
no,
it
is
become
to
block
her
for
the
for
the
release,
so
I
just
want
to
share
with
this
one
there's
the
beyond.
So
beyond
what
to
direct
the
problem.
There's
the
other
problem.
So
we
are
still
working
on
those
kind
of
things
to
try
to
figure
out
how
to
grow
more
contributor
and
I
heart
and
also
T
how
to
remove
of
the
order
vector,
unnecessary
dependency
and
the
GCP
projects,
and
the
also
Erica
also
reads
earlier
is
something
that
is
it
is
that
how
is
that
sustainable?
B: We maintain that many test suites, and that is kind of weak. We can openly discuss those kinds of things, but I believe we could cut the usual test cases to a third, or maybe even a quarter. The problem today, though, is really those release-blocker issues: those jobs were stale, and nobody got to fix those problems in time.
C: That was good feedback. I'd like to point out that over on the containerd side we were definitely impacted, because the GCP issue, believe it or not, hit our presubmits. In order to fix that, we actually switched away from using GCP for simple validation-type stuff, you know, lint checkers and things like that, and moved those to more like GitHub Actions as a local containerd testing resource. But we still want to keep our cluster testing and end-to-end node-type testing in GCP; for now we think that's the right place. I think we can definitely reduce the number of buckets we were running off the one set of permissions in the one GCP server. I think we just put too much reliance on the one path across the various projects that feed into Kubernetes.
B: What I'd say, one could argue, is that when we first built those node e2e tests, we were also the first SIG to define the conformance tests; at that time GKE didn't have that many conformance tests. So we as this SIG built those things, and then the notion of conformance expanded: most of the node e2e tests, especially conformance, expanded to the cluster level. But there is something particular here, because the node level is more deterministic, so we still maintain some of the node e2e tests as release blockers. Even though they are not conformance tests, they are release blockers, because it's harder to reproduce those at the cluster level and to maintain a deterministic test environment that makes those tests predictable. This is the reason we still have the node e2e suites as a major thing.
B: Beyond that, we have the node-level component tests, like the CRI tests, like CRI-O, like containerd; that is the next level of tests, and it's harder to build those, and we don't have the resources to validate each of them. Testing that at the cluster level is really hard, because it's very easy to trigger flakes there; like I say, it's not deterministic, and that makes those tests difficult to maintain.
B: I just want to share the history of why we have the node-level tests, and why, even as we put in resources to expand the node-level conformance tests to cluster-level conformance tests, there was still a reason to maintain some of the node e2e suites as release blockers. I noticed a lot of release-blocker discussion for this release, and I still think it is reasonable to mark these as release-blocking tests.
D: Because I just looked at the test grid, and we had added some tests for topology manager and CPU manager, and I saw they have been failing for two weeks without me getting an email. So I think we should go through some of these test jobs, especially the ones that are release-blocking, and make sure there's a party that can be emailed or notified, so we can look at it quicker instead of having someone on the release team say, hey, you guys have been failing tests for 10 days.
B: I can also share something from the past; this is kind of funny. It used to be that I put up a group of engineers from Google and we had a node on-call, so this is why we don't have email alerts. I like your idea, Victor; we should have the email notification. So we used to have an engineer on call looking at those node e2e tests when things failed.

B: It was kind of hidden, so many people didn't notice. Our engineer on rotation would check those release-blocking node e2e tests, notify others when something was broken, and at the same time there was corresponding remediation; that was part of the GKE on-call in his team. So we also made watching the release-blocker tests part of that GKE node team's duty. This was kind of working, but while doing the reorg those duties were being transferred, and not all of them were successfully transferred. Some duties fell through the cracks, which honestly, frankly, shouldn't happen. So we have to fix those problems, and at this moment we're looking at the problem and how we are going to take this forward; but I think that was one of the problems in the past.
B: So this time I think we should build that next on-call duty, or whatever notification mechanism, like this email idea, and involve the community here. I also want to share that Derek and I talked about this even before: we want it to be not just Google holding those duties. Can we share them among our community?
B: The problem is, Derek and I looked at how many people participate in the SIG, and it keeps changing. As Derek also said earlier, there is a lot of pressure about features: a lot of requests about a particular API, or something blocking their production. But the fundamental point is that we want to build a sustainable contributor group and share those duties, and that's kind of challenging; Derek and I have talked about it many times.
A: Dawn, I want to make sure it's understood, at least in my view of the SIG, that Google had taken on as large a responsibility as it had, and we can all appreciate that people can get temporarily repurposed or pulled onto other downstream responsibilities. I think what I'm wondering is if we can use this as a nice inflection point. The same experience you talked about, Dawn, with GKE release-blocking tests: if I look at Ryan, Seth, or Victor, or any of the other Red Hatters on this call, they're aware that we have basically the same internal rotation on the Red Hat side around our own builds; we call them build cops, and it was basically inspired by what needed to be built up in Kube. So I think many of our downstream activities from the core open source project are doing these similar activities; we just haven't coalesced right now into a group to do it here, and we should do that. What I'm wondering is if we could get volunteers to…
G: …else in SIG Node and transmit that knowledge back into AWS. We're also eager to help out on the test infrastructure side. I have had a one-on-one with Aaron Crickenberger just to learn a little bit about Prow and some of the test infrastructure systems, but I certainly would appreciate some deeper dives there.
D: To think one thing out loud: when we were looking at this last issue, there were some connectivity issues, and one of the challenges was being able to get access to that system to debug, which was pretty much shut down right away. So I think as we're going through this, that would be something to keep in mind: can we get some kind of access to these systems, to get hands-on to debug failures and do the investigation when there is one?
H: I just want to chime in here; this is Daniel, the CI signal lead for 1.19. In relation to some of those issues with getting access to the project: I know Aaron is working on transitioning some of that over to community-owned infrastructure, which should help with that. Also, in response to the point about contributors getting onboarded: we have CI signal shadows, folks who are eager to get involved, and part of their process is learning about the test infrastructure and that sort of thing. While they may not be on SIG Node right now, they are people interested in getting more involved with the community; usually they're very eager and probably open to onboarding, so that may be an avenue. From a release-blocking perspective, we obviously want to help all SIGs make sure they have a good grasp on their tests, and also that we're reaching out in an effective way and doing good triage with them. So we are willing to go beyond just contacting you to say your tests are failing, and also help with the effort of managing those tests and doing any work required to transition them to a more sustainable state.
A
That's
awesome,
pics,
Daniel,
I
think
the
echo
was
coming
from
dawn,
so
don't
just
a
heads
up
you'll
have
to
unmute.
If
you
wanted
to
speak
again,
trying
to
think
is
a
concrete
next
step,
I
guess:
first,
big!
Thank
you,
Jay
and
Victor
for
volunteering.
Your
time
I'm
wondering
if,
as
an
action
item
out
of
this,
the
two
of
you
could
sync
up
and
then
I'll
try
to
join
or
we
can
get
another
participant
and
we'll
write
down
some
goals.
A: I'm kind of thinking of this as a 1.19 activity: to basically audit the state of our tests and then provide concrete recommendations back to the SIG in some period; let's say we'll determine the period afterwards, where we can go in and start to act on this. One of the things I'd like to do once we have that clear list identified is this: we kind of need a carrot and a stick for the other types of contributions we get in the SIG, and one thing I'm curious about is, if we take the output of that small working group and say, here are the needs we have to meet in the SIG before we can take on major new engagements, is that too draconian a stick? Or is it the type of thing we think we all need to do just to maintain what we're already supporting for the broad open source user base today?
B: There are more people from the chat jumping in who want to contribute too, so yeah, I appreciate that. David, from the Google GKE team, the new team the reorg actually formed, they would love to help. And Suren, sorry, I don't know where you came from, but you also wanted to help here.
B
Many
people
want
to
help
so
I
think
that
we
started
from
the
signal,
meaning
Easter,
that's
good
enough.
I
will
share
also
some
older
document
and
the
written
by
VG
and
about
how
we
categories
of
the
know,
de
GE
and
I
think
that
we
we
still
try
to
maintain
some
of
those
kind
of
things,
because
it
is
irrelevant
to
our
release,
really
Bennigan's
and
quality,
so
you're
sure
share
those.
We
share
those
documents
through
the
open
idea
signal
of
this
meeting
and
we
came
at
nintendo's
thing.
We
can
you
all
from
there.
F: So one of the conclusions was to come up with plugin support for the pod admission handler. The reason being: we as a cloud provider support nodeless, or serverless, nodes, where we spin up a Fargate instance when customers ask for a pod to be spun up in a cluster for which they don't want to manage the nodes. But the Fargate nodes don't support a few things: you cannot have host networking, and you cannot have any EBS volumes or other persistent storage volumes attached to your pod. So one of the issues we faced in the past was: how do we restrict such pods? Because with EKS, or with any other cloud provider, the customers, the company's users, have administrator access to the cluster. So even if we create a PSP, or even if we have a validating webhook that rejects the pod, they can still go ahead and delete it.
F
So
that's
the
discussion
we
had
in
the
past
and
then
this
PR
look
whatever
I
posted
now
as
an
updated
dock
and
sub
dated
notes
from
our
previous
discussions.
So
if
this
one
talks
in
detail
about
how
we
are
going
to
implement
this
particular
plugin
in
the
cubelet
code,
so
the
approach
that
we
went
with
this
was
similar
to
the
CNI.
How
CNI
works
so
there
will
be
like
a
configuration
file
in
a
specific
folder,
and
this
configuration
file
has
the
list
of
plugins.
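For illustration, here is a minimal Go sketch of what such CNI-style plugin discovery could look like. The /etc/podadmit directory, the ".conf" extension, and the field names are assumptions made up for this example; they do not reflect any merged kubelet API.

```go
// Hypothetical sketch of CNI-style discovery for pod admission plugins.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// pluginConf mirrors the idea described above: a config file in a
// well-known folder listing the admission plugins the kubelet shim loads.
type pluginConf struct {
	Name     string `json:"name"`     // e.g. "fargate-admit" (hypothetical)
	Endpoint string `json:"endpoint"` // how the shim reaches the plugin
}

func loadPluginConfs(dir string) ([]pluginConf, error) {
	paths, err := filepath.Glob(filepath.Join(dir, "*.conf"))
	if err != nil {
		return nil, err
	}
	var confs []pluginConf
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			return nil, err
		}
		var c pluginConf
		if err := json.Unmarshal(data, &c); err != nil {
			return nil, fmt.Errorf("parsing %s: %w", p, err)
		}
		confs = append(confs, c)
	}
	return confs, nil
}

func main() {
	confs, err := loadPluginConfs("/etc/podadmit") // hypothetical path
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("loaded %d admission plugin(s)\n", len(confs))
}
```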
G: How do we protect the host control plane, for lack of a better word, that is completely managed, from the Kubernetes users that have administrative access to the cluster? If you remember from the last time we talked about this, Derek, I remember you'd said: why not just use pod security policies, or OCI hooks, etc.? We had discussed a lot of those solutions, and we sort of outlined in the motivation here those different alternatives and why we ended up where we did. We tried to be as open and transparent about the decision-making process as we could be, so folks, don't think that we're coming at this trying to sneak anything in or anything like that.
G: No, I think primarily this is just code that we will keep having to patch into our Kubernetes versions, and that's perfectly fine. It's more of a long-term technical-debt reduction that we're aiming for here by upstreaming this functionality, but there's no specific timeline. And to your point earlier about contributing bug fixes and the fetching-the-wood, carrying-the-water type activities: that's certainly more important. This is just something we needed to do to resolve our sort of split-control-plane issues with Fargate, and we think we've come up with a solution that's extensible and upstreamable. So we're really just looking for some feedback, and in particular, whether anyone can think of any alternative to what we've landed on.
F
So
I
do
have
like
an
open
question
that
I
wanted
to
ask
like.
Let's
say
if
we
wanted
to
implement
this
plugin
support.
So
if
you
see
the
doc
like
and
if
you
search
for
the
word
like
socket
or
something
like
I,
provided
like
three
options
to
say
how
a
pod
or
the
pod
admin
handlers
shim
can
talk
to
the
external
plug-in
it
can
be
either
to
a
shell
or
through
a
G,
RPC
stream
or
through
a
RPC
server
or
through
an
UNIX
socket
like
like
which
one
does
everybody
feels
like
comfortable
doing
different
case.
F
If
we
go
with
this
approach,
like
do
you
feel
like
just
the
shell
should
be
enough
for,
should
we
have
a
JRPG
or
and
UNIX
okay?
The
only
reason
why
I
thought
a
RPC
server
or
on
UNIX
socket
beautiful
useful
is
we
don't
have
like
we
established
the
connection
ones,
and
then
we
can
just
use
that
connection
and
keep
sending
the
request
and
get
the
response,
and
we
don't
have
to
like
encode
decode
the
response.
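To make the Unix-socket option concrete, here is a minimal sketch of dialing a plugin once over a Unix domain socket and reusing the gRPC connection, the same transport pattern the kubelet already uses for device plugins. The socket path is a made-up placeholder.

```go
// Minimal sketch: dial a hypothetical admission plugin once over a Unix
// domain socket and keep the gRPC connection for the lifetime of the shim.
package main

import (
	"context"
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
)

func dialAdmissionPlugin(socketPath string) (*grpc.ClientConn, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// The custom dialer routes the gRPC connection over the Unix socket;
	// the connection is established once and reused for every admit call.
	return grpc.DialContext(ctx, socketPath,
		grpc.WithInsecure(),
		grpc.WithBlock(),
		grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", addr)
		}),
	)
}

func main() {
	conn, err := dialAdmissionPlugin("/var/run/podadmit/plugin.sock") // hypothetical path
	if err != nil {
		log.Fatalf("dialing admission plugin: %v", err)
	}
	defer conn.Close()
	// Stubs generated from the plugin's .proto would use conn here.
}
```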
A: Just trying to think through all the other things we hit with things like device plugins: I'm just trying to understand where and how you intended this plugin to be lifecycled, and whether it lifecycles separately from a traditional node lifecycle operation, where you drain and do some maintenance, right?
G: Although, you know, one of the issues with that particular sort of strategy is when you're upgrading the CNI plugin. If the kubelet thinks a CNI plugin is active because it sees the file in, whatever it is, the /etc/cni/net.d file or directory, then it just considers it to be active. But if you replace that plugin file, it doesn't really know; you need to restart the daemon or whatever. Anyway.
C
Good
point
you
can,
you
can
query
for
the
status
of
you
know
CNI
plugins
that
were
used
at
the
image
of
a
pod,
so
we
should
be
able
to
get
that
formation,
but
it
may
not
be
the
same
for
all
content
types
because
that's
like
augmented
information
in
the
stats
yeah.
We
can
talk
after.
If
you
can
ping
me
Mike
Brown,
the
four
brands
are
brown
WM.
Oh.
F
I
guess
that's
a
good
question
like
whether
what
happens
if
the
flood
plugin
is
not
available
or
what
happens
to
the
cubelet
status
when
it
is
coming
up.
Yeah
I
didn't
think
through
that
one.
Yet
because
what
in
my
mind,
I
thought
like
the
plug-in
should
already
be
in
existing
on
the
node
itself,
but
if
it's
like
similar
to
CNI
plug-in
where,
if
the
customer
does
it
through
a
demon
set
fault,
then
what
happens
right.
F: So let's say the plugin is going to respond saying this pod cannot be admitted because it requires an EBS volume or some other persistent volume. Then the kubelet, with this pod admit handler, will treat the response similarly to how other pod admit handlers return an admit result: this pod admit handler shim will also return a result with a reason code of "UnsupportedPodSpec", a message, and the boolean flag admit set to false.
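For reference, here is a sketch of how such a shim might slot into the kubelet's internal PodAdmitHandler interface (k8s.io/kubernetes/pkg/kubelet/lifecycle). The callPlugin helper and its host-networking rule are hypothetical placeholders; the "UnsupportedPodSpec" reason follows the discussion above.

```go
// Sketch of a shim implementing the kubelet's internal PodAdmitHandler
// interface. callPlugin and the rejection rule are hypothetical.
package podadmit

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/kubelet/lifecycle"
)

type pluginShim struct{}

// callPlugin stands in for the shell/gRPC/Unix-socket call to the external
// admission plugin described in the meeting.
func callPlugin(pod *v1.Pod) (allowed bool, message string) {
	// Hypothetical rule: composed (Fargate-style) nodes cannot run pods
	// that request host networking.
	if pod.Spec.HostNetwork {
		return false, "host networking is not supported on this node"
	}
	return true, ""
}

// Admit converts the plugin's answer into the kubelet's admit result shape.
func (p *pluginShim) Admit(attrs *lifecycle.PodAdmitAttributes) lifecycle.PodAdmitResult {
	if allowed, msg := callPlugin(attrs.Pod); !allowed {
		return lifecycle.PodAdmitResult{
			Admit:   false,
			Reason:  "UnsupportedPodSpec",
			Message: msg,
		}
	}
	return lifecycle.PodAdmitResult{Admit: true}
}
```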
G: Perhaps Alexander is just asking how the kubelet will reject it, with some specificity there: that it will add a condition object into the pod status conditions collection that explains why it was rejected and puts the pod into a particular state. Is that what you were getting at, Alexander?
I: Yes. So my point is what the kubelet needs to put into the pod status: a state that says, this needs to be rescheduled away to somewhere else. But that's one side of the story. While we were discussing, I actually got another idea, maybe a stupid question: have you considered using a scheduler extension? Practically, you would plug into the scheduler, which would then not place such bad pods onto your node.
B: What Alexander is suggesting is like what we talked about in the last couple of meetings with topology-aware scheduling. This one is more like the other attributes we create here: the node's security context support level, whatever things like that, or maybe the feature level of the node. You surface those to the scheduler in the first place, and then you basically reduce the amount of work at the node level and avoid the mismatch between the scheduler and the node level. Then you don't actually need the node-level admission-control handler to do those kinds of things. Node-level admission control complicates the whole thing, and if we can build that into the scheduler using the scheduler extender framework, we basically don't need this.
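As a point of comparison, here is a minimal sketch of the scheduler-extender alternative being discussed: an HTTP "filter" endpoint that removes nodes which cannot run the pod, so unsupported pods are never bound to such a node at all. The request/response types come from k8s.io/kube-scheduler/extender/v1; the node label and the filtering rule are hypothetical.

```go
// Sketch of a scheduler extender "filter" endpoint that screens out nodes
// which cannot run the pod. The "example.com/serverless" label and the
// host-networking rule are illustrative assumptions.
package main

import (
	"encoding/json"
	"log"
	"net/http"

	v1 "k8s.io/api/core/v1"
	extenderv1 "k8s.io/kube-scheduler/extender/v1"
)

func filter(w http.ResponseWriter, r *http.Request) {
	var args extenderv1.ExtenderArgs
	if err := json.NewDecoder(r.Body).Decode(&args); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if args.Nodes == nil {
		// This sketch assumes the extender is configured with
		// nodeCacheCapable=false, so full Node objects are sent.
		http.Error(w, "expected full node list", http.StatusBadRequest)
		return
	}
	result := extenderv1.ExtenderFilterResult{
		Nodes:       &v1.NodeList{},
		FailedNodes: extenderv1.FailedNodesMap{},
	}
	for _, node := range args.Nodes.Items {
		// Hypothetical rule: pods needing host networking cannot land on
		// serverless (composed) nodes, so filter those nodes out.
		if args.Pod.Spec.HostNetwork && node.Labels["example.com/serverless"] == "true" {
			result.FailedNodes[node.Name] = "host networking unsupported on serverless nodes"
			continue
		}
		result.Nodes.Items = append(result.Nodes.Items, node)
	}
	json.NewEncoder(w).Encode(result)
}

func main() {
	http.HandleFunc("/filter", filter)
	log.Fatal(http.ListenAndServe(":8888", nil))
}
```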
G: I think at the end of the day, it was just the security stance of having these composed nodes have a control plane that is under no circumstance able to be examined by the tenant. That overrode any decision to put control of these security-based policies at the API layer, and I guess we kind of consider the scheduler to be the API layer, for lack of a better place to put it.
B: Okay, the reason I share this one: a long time ago, when Google supported TPUs, we were doing something similar behind the scenes, where we composed the node and made it accessible to those TPU devices. But we didn't change the kubelet in open source upstream, because, at least from my side, that would expose extra complexity and also more surface area to the community. Not everyone will benefit from this feature, so you have to weigh the trade-off for the overall community, for everybody. That's why we spent the time to compose those nodes behind the scenes and deliver that to the customer. I just want to share that; it was a couple of years ago. I was the one who could have proposed it to the community, and I pushed back myself; I said, oh no, let's not do something more complicated upstream. Just sharing here.
E: One other thing we might want to consider is how this interacts with something like ConfigMap-based kubelet config. We've been under the assumption for a long time now that the cluster admin sort of owns the nodes as well; they own the node objects. There's a lot of echo going on, sorry. So I think this is going in a different direction than we've gone before. I have one general question, just because I'm not sure how Fargate works: is this a CRI-like…?
A: …address termination of a pod, and the sequence in which to terminate containers in that pod, when a user just powers off a computer. I personally view that as a prereq to moving sidecar containers forward, because all the production readiness reviews in the world are making it clear to me that people use Kubernetes in ways that I didn't anticipate. So if we're going to start adding shutdown sequencing, I want us to do shutdown sequencing in a way that will work whether you're using spot instances or any of the other things on clouds.
K: One thing I just wanted to bring up: I think it would be really useful if we had a concrete list of prerequisites for doing this work. I totally understand, and respect, not wanting to do this before we have the groundwork cleaned up, but this is a super important feature for a lot of people, so I think we're very willing to get people to do things that will unblock this.
A: So that's concerning to me, then, to the earlier point. I think this is a good counterpoint to say there are features that are important to a lot of communities; I know this one is particularly important to certain networking communities. But we spent the first half hour of this call talking about how we're having trouble just keeping the lights on in the SIG, so maybe, Howard or Mike and others, this is a good time to jump in and help Jay and Victor.
A: So let's start with the exercise that Jay and Victor will arrange on just the test infrastructure, and then maybe, if there are interns available to the community, we can balance the engagement. But I'm really reticent to take on a ton of new things, given that the impact we had on the community was potentially blocking an upstream release. To me that should be a wake-up moment for us, and I'm happy to push back, because I think it's in the best interest of the community as a whole that we do.
B: Totally support what Derek said here. I also want to mention something I didn't mention earlier. One constant was that we used to have a stable, sustainable team to maintain those kinds of things, but that worked because it was one company maintaining the test infrastructure. Due to the reorg we lost that, and we have to rebuild. This time we really want to build a sustainable team from the SIG, which is connected hand in hand.