From YouTube: Kubernetes SIG Node 20190723
C
This would be the same thing, so we are... The status in this case would be: what is the current allocation? There are two parts to it: one is the requests, and the other is limits. Limits are currently stored in the cgroup; although the API does not allow you to query that, it can be modified. However, there is nothing we keep for the requests — we only have the sum, and we base it on what the spec tells us.
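For context on "limits are currently stored in the cgroup", here is a minimal sketch, assuming the cgroup v1 filesystem layout; the path below is purely illustrative and not taken from this meeting:

```go
// Minimal sketch: read a container's effective memory limit straight from the
// cgroup v1 filesystem. The path is illustrative; the kubelet derives the real
// path from its cgroup driver configuration.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func memoryLimitBytes(cgroupPath string) (int64, error) {
	raw, err := os.ReadFile(cgroupPath + "/memory.limit_in_bytes")
	if err != nil {
		return 0, err
	}
	return strconv.ParseInt(strings.TrimSpace(string(raw)), 10, 64)
}

func main() {
	limit, err := memoryLimitBytes("/sys/fs/cgroup/memory/kubepods/pod-example")
	if err != nil {
		fmt.Println("could not read limit:", err)
		return
	}
	fmt.Println("memory limit (bytes):", limit)
}
```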
C
That's fine today, because we don't expect that to be mutable, but in this KEP the proposal is to make it mutable. So, thinking more about it, extending it to keep the requests and limits, along with the other information, can help it be the source of truth to regenerate the resources allocated. That's in the status proposal — the status today, right?
A
What I'm trying to figure out is: today, the current state is that when the kubelet restarts, it will wait until it reconnects to the API server, it will get the pods that are running on it, and then it will perform admission on those. The key piece that we're missing is that we want to not reject pods that were running before, simply because a different pod, say, had requested a resize.
C
And if the current state does not match up with the desired state, then we know that this was a pod that was in the process of being resized — a resize was desired on it — and we can very quickly tell whether the resize is admissible or not, and whether the pod itself is admissible.
A
Admission on pods, right — so this isn't just about setting the status field. This is about making sure that if, for example, one pod has requested an increase to the entire size of the node and gets admitted first, we don't actually perform that resize until all the other pods that were running have also been admitted, right? Yeah.
C
If the resize results in exceeding the capacity that's available, then we would still admit that particular pod, but we would say that the pod has failed resize, so the desired state would still show the change. The change would be in how we call the pod lifecycle — I believe in the predicates — where we pass the pods and sum the desired values: if a different value of current resources allocated is found, we use that instead of the desired. So that would be a change to the admission control logic.
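A minimal sketch of the admission preference being described here — count a pod at its previously allocated resources when those exist, otherwise at its desired spec. The field names (Desired, ResourcesAllocated) are illustrative assumptions, not the KEP's exact API:

```go
// Sketch (not kubelet code) of the admission idea above: when re-admitting pods
// after a kubelet restart, prefer the previously allocated resources over the
// possibly mutated desired requests in the spec.
package main

import "fmt"

type Resources struct {
	CPUMilli int64
	MemoryMi int64
}

type Pod struct {
	Name               string
	Desired            Resources  // summed spec.containers[].resources.requests
	ResourcesAllocated *Resources // allocation the kubelet previously accepted, if any
}

// effectiveRequest returns the value admission should count for a pod: the allocated
// resources if the kubelet has recorded them, otherwise the desired spec.
func effectiveRequest(p Pod) Resources {
	if p.ResourcesAllocated != nil {
		return *p.ResourcesAllocated
	}
	return p.Desired
}

// admit reports whether the set of pods fits on the node's capacity.
func admit(pods []Pod, capacity Resources) bool {
	var cpu, mem int64
	for _, p := range pods {
		r := effectiveRequest(p)
		cpu += r.CPUMilli
		mem += r.MemoryMi
	}
	return cpu <= capacity.CPUMilli && mem <= capacity.MemoryMi
}

func main() {
	capacity := Resources{CPUMilli: 4000, MemoryMi: 8192}
	pods := []Pod{
		{Name: "a", Desired: Resources{3000, 6000}, ResourcesAllocated: &Resources{2000, 4096}},
		{Name: "b", Desired: Resources{2000, 4096}},
	}
	// Uses the allocated values for "a" and the desired spec for "b".
	fmt.Println("admissible:", admit(pods, capacity))
}
```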
A
Okay, and how do we ensure that all the pods that were running actually come back before we try to perform any resizes? What's to prevent, for example, a pod getting admitted with its previous requests, and then we handle a resize upward of that pod's request before other pods that were running on the node get admitted?
C
So there are two cases to consider here. One is that the kubelet restarted and nothing has changed on the API server side, so no new resizes have come in; in that case, if we were exceeding capacity, we would see it. In both cases the same thing applies: when a pod has been admitted, we atomically write the current allocations. When a new pod comes in, its requests and the resources allocated would be the same.
C
So we write that, and if a pod is in the process of being resized and then the kubelet restarts, we see the difference between what the current allocation is and what the desired is, and then we can queue that: we can admit it with the current allocated values and work towards the resize. The third case is when a pod resize was requested while the kubelet was offline, and the same logic applies there as well.
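A rough sketch of the restart handling just described, assuming the allocated values are checkpointed somewhere durable: on restart, each pod is re-admitted at its allocated values, and any pod whose desired spec differs from its allocation is queued for a resize. Names are illustrative, not kubelet code:

```go
// Sketch of kubelet-restart reconciliation: re-admit each pod at its last allocated
// resources, and queue a resize for any pod whose desired spec differs from that
// allocation. Types and names are illustrative.
package main

import "fmt"

type Resources struct {
	CPUMilli int64
	MemoryMi int64
}

type Pod struct {
	Name      string
	Desired   Resources // from the pod spec on the API server
	Allocated Resources // value written atomically at admission time
}

// reconcile returns the values each pod is admitted with, plus the pods that still
// need a resize attempted (desired != allocated).
func reconcile(pods []Pod) (admitted map[string]Resources, pendingResize []string) {
	admitted = make(map[string]Resources)
	for _, p := range pods {
		admitted[p.Name] = p.Allocated
		if p.Desired != p.Allocated {
			pendingResize = append(pendingResize, p.Name)
		}
	}
	return admitted, pendingResize
}

func main() {
	pods := []Pod{
		{Name: "steady", Desired: Resources{500, 512}, Allocated: Resources{500, 512}},
		{Name: "resizing", Desired: Resources{2000, 4096}, Allocated: Resources{1000, 2048}},
	}
	admitted, pending := reconcile(pods)
	fmt.Println(admitted, "pending resizes:", pending)
}
```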
A
I guess the scenario I'm thinking of is, let's say there are two pods running on the node. Each one requests half of the node's allocatable resources, or one pod is incredibly high priority and has all the special markers to make it so, and the other pod has requested an increase in its resources but it hasn't been granted. Then the kubelet restarts, and the first pod we get is the lower-priority pod. What's to prevent that pod from being admitted and then having its resize applied before the super important one gets there?
A
There's that sources-ready thing that denotes that we've gotten all pods from the API server — we actually track, in the kubelet, whether we've received all the pods from the master. So as long as we have that... Were you saying that we would handle the resizes during admission, or would this be part of the update process?
C
Yeah, we do have the timestamp in the status. So if we store at what time it happened, then we can determine — we don't have to wait until all the pods, especially if there is a way to tell we've received all pods. If we can determine that all the pods that were there before have been received, then we can definitely do this.
A
The problem is that when someone has requested a resource change, that's now in the pod spec, but we don't want the kubelet to act on it yet. So the kubelet has some internal state of what it has actually admitted, right? And now we're trying to figure out how to restore the kubelet's desires — what resources it is working towards — so that if it restarts, we don't get this sort of behavior where all of a sudden all resource updates are implicitly applied just because the kubelet restarted and picked them up from the API server.
C
It just makes it a little bit more, I would say, heavy-handed — I don't know if that's a good term to use here — for one resource update. Now we have resources, which is the desired; we have resources allocated in the spec, which is what the kubelet is working towards, which the kubelet has agreed to; and we have the status resources allocated, which is the actual current state, which is generated. So doesn't it seem like a bit much?
F
I don't know enough about this, but checkpointing — I mean, checkpoint-and-recovery across restarts oftentimes is not that easy, so that's also not a very easy solution. Sometimes it might be easier to just update resources on the API server and rely on that. In a way, that's actually how Kubernetes works: everything that is persistent is in the API server, and it's probably better to follow that model for everything else as well.
C
I mean, that's fine, I think. Let's just review it over the coming week, think about the two approaches, and see which one makes sense. I'm okay with both approaches; I was just leaning towards the local checkpointing. I also need to think through a few sub-cases here and make sure that the state before a restart is exactly where it's going to be after we restart. I'm going to work through a few examples over the week, and hopefully we can converge on this. So thanks for the input.
C
We have to do the checkpointing to get the source of truth on where the kubelet was with regard to resizing before the restart occurred. As for the pod condition — I guess most people, David and others, have not been in favor of it. I feel it's useful, not necessary, but useful to have so that cluster resources are better utilized.
C
So, for example, take the VPA: without the pod condition, the VPA would have to wait, like, 3 seconds, 5 seconds, 30 seconds, and then say, oh, it's been too long, I'm going to evict this pod. Whereas if it sees the pod condition set, it can very quickly say, okay, my policy allows this.
C
This pod can tolerate a reschedule, so I'm going to evict it, and the controller will generate a new one and it will be placed on a new node which has capacity. So in my mind it's useful to have, but people have differing opinions about it, and it's something we can always add on — it doesn't have to be there in the first implementation, similar to preemption. That, too, we decided we'll add on if needed later.
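A small sketch of the consumer side being argued for here, assuming a hypothetical condition type — the condition name and timeout below are made up for illustration, not from the KEP — showing how a controller like the VPA could decide immediately instead of relying on a timeout:

```go
// Illustrative only: how a controller such as the VPA could use a hypothetical
// "ResizeFailed"-style pod condition to decide quickly, instead of waiting out a
// timeout before evicting and rescheduling the pod.
package main

import (
	"fmt"
	"time"
)

type PodCondition struct {
	Type   string
	Status bool
}

func shouldEvictForResize(conditions []PodCondition, requestedAt time.Time, timeout time.Duration) bool {
	for _, c := range conditions {
		if c.Type == "ResizeFailed" && c.Status {
			return true // the node said it cannot satisfy the resize: evict and reschedule now
		}
	}
	// No signal from the node: fall back to an "it's been too long" heuristic.
	return time.Since(requestedAt) > timeout
}

func main() {
	withCondition := []PodCondition{{Type: "ResizeFailed", Status: true}}
	fmt.Println(shouldEvictForResize(withCondition, time.Now(), 30*time.Second))         // true immediately
	fmt.Println(shouldEvictForResize(nil, time.Now().Add(-time.Second), 30*time.Second)) // false, keep waiting
}
```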
J
Sure. I have been working on this image decryption KEP, but before I get to that, let me just give some background. We have been working on something called container image encryption for the past year, and the idea is that you will be able to encrypt the data of the container image, so when you upload an image to a registry it stays encrypted, and when you download and run it, the runtime performs the decryption.
J
So the idea here is that we have encrypted container images end to end, with decryption at the runtime, which is the relevance to this KEP. This is what we've done in containerd — it's merged and right now it's targeted for an upcoming containerd version. We also have ongoing work in the other runtimes as well, but the KEP that we have open is really talking about something else.
J
How do we provide a facility for Kubernetes to use this feature? The main idea for this KEP is really similar to image pull secrets: the way we pass credentials to pull from the registry, we use the same kind of facility to pass keys to perform the decryption of the images. So, do you want to talk a bit more about the KEP and share your screen? Yes.
G
Yeah, are you able to see my screen? Yes? Okay. So, as was mentioned, we just recently merged container image encryption and decryption in containerd. The way this plays a role is that, at runtime, we need to decrypt these images on the worker node before they get used, and in order to decrypt them, we need to provide the keys required for the decryption to succeed. While we were trying to see how this can fit into Kubernetes —
G
Kubernetes already has an infrastructure for storing secrets, so we came up with a secret which can be used to store the keys necessary to decrypt the images. This secret is modeled after the image pull secret, and we call it the image decrypt secret. The reason it's modeled after the image pull secret is that, unlike other secrets —
G
— this secret, which holds the private key, needs to be consumed by containerd or any runtime while pulling the image, and should not be mounted on a pod. So this is the enhancement proposal that we submitted to Kubernetes, and it talks about the motivation for this KEP.
KP.
G
The
main
goals
that
we
are
trying
to
achieve
here
is
like
have
a
secret
which
can
have
a
key,
and
these
secret
can
be
used.
You
know
like
in
regular
party
ml
or
Omega
T
Prime
in
general,
and
we
also
have
integrated
them
in
service
accounts.
What
it
doesn't
cover
this
secret
is
that
it
does
not.
It
has
no
role
to
play
in
the
encrypting,
my
images,
so
we
see
communities
playing
role
in
decrypting
images,
but
these
secret
are
missing.
G
So we propose an API definition for the image decrypt secret. Going into detail, this has a strong resemblance to the way the image pull secrets have been designed, so it mimics that behavior. We couldn't reuse the image pull secret itself because it is tightly coupled with its usage of providing credentials for pulling container images that sit behind a username and password, so we needed a dedicated place where these keys can be stored.
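A hedged sketch of the shape being proposed, as described here: a list of secret references that mirrors imagePullSecrets. The field and type names below are assumptions for illustration, not the merged API:

```go
// Illustrative shapes only: an imageDecryptSecrets list modeled after the existing
// imagePullSecrets field, as discussed in this KEP. Not actual Kubernetes API types.
package main

import "fmt"

// LocalObjectReference mirrors the core/v1 type of the same name.
type LocalObjectReference struct {
	Name string
}

// PodSpecFragment shows only the fields relevant to this discussion.
type PodSpecFragment struct {
	ImagePullSecrets    []LocalObjectReference // existing field: registry credentials
	ImageDecryptSecrets []LocalObjectReference // proposed field: private keys for image decryption
}

func main() {
	spec := PodSpecFragment{
		ImagePullSecrets:    []LocalObjectReference{{Name: "registry-creds"}},
		ImageDecryptSecrets: []LocalObjectReference{{Name: "image-decrypt-key"}},
	}
	fmt.Printf("%+v\n", spec)
}
```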
G
Then this is where we describe how we can use a service account: with the service account, you should be able to use this secret just like any other. Then we discuss the details of the API. In the image handler we added the decrypt params; the decrypt params essentially represent the keys required to decrypt an image, and you can see here, in the second line, the auth config, which is the one the existing image pull secret uses. So right now there is essentially a strong relationship with the image pull policies.
G
But the user can set the imagePullPolicy to IfNotPresent or Never. In the case of IfNotPresent, if a new image is being pulled, the keys are still required, because containerd has to decrypt it; but if the image is already found in containerd's cache, then the keys are not required. If the pull policy is Never, then the new-image case is not applicable, and in the cached-image case the keys are not required. As I mentioned, this is far from ideal, but right now it is consistent with the way the image pull secret is designed and implemented. We were discussing this on the KEP, where a comment was made that there has to be a generic solution to address this issue for both of these secrets — the image pull secret and the image decrypt secret — where the pull policy the user sets determines the behavior.
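A small sketch of the pull-policy interaction just described, under a simplified model of the runtime's image cache. This reflects the behavior as first stated here; the authorization check discussed later in the meeting would still unwrap a key even in the cached case, and the Always case is an assumption:

```go
// Sketch of when decryption keys are needed, per the discussion above: with
// IfNotPresent, keys are needed only when the encrypted image is not already in the
// runtime's cache; with Never, only the cached case applies and no keys are needed;
// with Always (assumed) the keys are always passed. Simplified illustration only.
package main

import "fmt"

func keysRequired(pullPolicy string, imageInCache bool) bool {
	switch pullPolicy {
	case "Never":
		return false // only a cached image can be used; no new pull, so no decryption needed
	case "IfNotPresent":
		return !imageInCache // pull (and decrypt) only when the image is missing
	default: // "Always"
		return true
	}
}

func main() {
	for _, policy := range []string{"Always", "IfNotPresent", "Never"} {
		for _, cached := range []bool{true, false} {
			fmt.Printf("policy=%-12s cached=%-5v keysRequired=%v\n",
				policy, cached, keysRequired(policy, cached))
		}
	}
}
```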
G
All right, so this gives a general idea of how the secrets are consumed. When you get a request to create a pod, it carries a reference to the decrypt keys, and then when the kubelet sends the request to pull the image, it sends the decrypt keys as well; the same keys are passed down to containerd, and, you could say, it checks authorization there. Nothing changes there, although there is a recommendation on the KEP to also have the authorization check here as well.
G
We considered an alternative: is there any way to use encrypted images without Kubernetes involvement? Yes, there is — we wrote a plug-in for containerd where containerd was smart enough to see that the image is encrypted, go and talk to the key server of your choice, like Vault or something, fetch the keys, and decrypt it. But in that kind of scenario we don't really leverage Kubernetes' existing infrastructure for handling secrets, and then customers or users will have to handle secret management on their own.
J
Actually, to follow up on that: even today we have a similar problem — that's why I mentioned the image pull policy earlier. For example, one node may already have the container image cached, while another one has to pull it and always needs the decryption secret.
E
I guess my feedback here is: if we're doing a check on the node against the cached image, it seems like, if there were an admission controller, you could basically fail faster when the user isn't authorized, rather than waiting for it to get scheduled to a node and then failing the container start there. I think it'd be more deterministic if you failed up front rather than after it was scheduled to a node.
J
There are kind of two things. One of them, I think, is that we didn't want to do an image pull during the admission phase — we'd have to kind of pull out the image pull sequence as well. But I think the other point that we were discussing is that in the future we foresee use cases for being able to use hardware security modules, or keys on the host unwrapped by a TPM or something like that. So in that case the decryption and the authorization would have to be performed on the node itself.
K
Is there a quick way to verify the decryption key, given a cached image? Because you said that if it is a cached image, we want to verify whether we can use it. So do I need to fully pull the image before we know whether the decryption works, or is there a way to know that faster, instead of pulling the whole image?
G
I can take this one. The way it works is: containerd pulls the images, and before pulling the images you get the descriptors, and in the descriptors there are annotations. The first time the image is getting pulled, it just pulls the blobs and tries to decrypt them. The next time you try to use the same image — say you're trying to launch it in another pod — the same pull request goes to containerd, but containerd finds the image in its cache, and it still has access to the descriptor.
G
So we just try to unwrap the key, and if we are able to unwrap it, we don't really touch the blob again — we just send success back. To your question about whether there's a way to detect it earlier: the very act of downloading the descriptor and trying to unwrap the key will basically fail, so you find out without pulling the whole image. I'm not sure if that answers your question.
J
I think we don't have time to go through a lot of details, but the way the encryption implementation works is that the blob is encrypted by a session symmetric key, and that symmetric key is itself wrapped. So what we do for the authorization check is unwrap that symmetric key, but not actually perform the decryption of the blob itself. Yeah.
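A toy illustration of the check just described — the blob is encrypted with a session key, the session key is stored wrapped alongside the image descriptor, and "checking authorization" means trying to unwrap that key without touching the blob. The AES-GCM key wrapping below is an assumed simplification, not the actual ocicrypt scheme:

```go
// Toy sketch: authorize access to an encrypted layer by unwrapping its session key,
// never reading or decrypting the (potentially large) layer blob itself.
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// wrapKey encrypts the session key under a key-encryption key (KEK) with AES-GCM.
func wrapKey(kek, sessionKey []byte) (wrapped, nonce []byte, err error) {
	block, err := aes.NewCipher(kek)
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	return gcm.Seal(nil, nonce, sessionKey, nil), nonce, nil
}

// canUnwrap reports whether the holder of kek can recover the session key from the
// wrapped copy carried with the descriptor; the encrypted layer blob is never read.
func canUnwrap(kek, wrapped, nonce []byte) bool {
	block, err := aes.NewCipher(kek)
	if err != nil {
		return false
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return false
	}
	_, err = gcm.Open(nil, nonce, wrapped, nil)
	return err == nil
}

func main() {
	kek := make([]byte, 32)        // key that would come from the proposed image decrypt secret
	sessionKey := make([]byte, 32) // symmetric key that actually encrypts the layer blob
	rand.Read(kek)
	rand.Read(sessionKey)

	wrapped, nonce, _ := wrapKey(kek, sessionKey)
	fmt.Println("right key can unwrap:", canUnwrap(kek, wrapped, nonce))

	wrongKek := make([]byte, 32)
	rand.Read(wrongKek)
	fmt.Println("wrong key can unwrap:", canUnwrap(wrongKek, wrapped, nonce))
}
```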
I
We don't have much time, so before we move to the next topic I want to ask what your target is. I think everyone may have a lot of ideas about the implementation detail, but the feature itself is valid — I don't think anybody is pushing back on that. It requires the runtime feature, and we do want to integrate it with Kubernetes' secret facility, so there's no question there. So maybe we just need to talk about your target.
G
We are hoping to target 1.16, because the changes in containerd are already there and the changes for supporting it in the CRI are almost there — we just have to raise a PR for it. Even the CRI-O support is ready on our side; we just have to raise a PR for it. So we think 1.16 should be fine, but the enhancement freeze — is that the one concern for it, right?
I
I don't really think you can get into 1.16, since yesterday was the first time we talked, and getting a PR in is really time consuming. And that's before talking about implementation detail — after we settle all the API questions, the API still goes to the SIG Architecture API reviewer committee. So even though your API looks —
I
— rather straightforward for this feature, I think there's a lot of process that is time consuming, so I really think 1.16 is impractical. I think we can try to target 1.17, and we can assign a reviewer, even an API reviewer from the SIG, to help you work together with SIG Architecture, so maybe we can speed up the process. But I do think that 1.17 is the more reasonable timeline.
G
That sounds good. I just have one question, since I'm sharing the screen: this issue that I'm showing, about adding this to 1.16 — does this get merged first and then the PR comes up for discussion? Or, when you said that we need to talk to the API reviewers, are we also talking about this document as well? I mean, there's a PR in the enhancements repo. Yes.
I
You're talking about a KEP? Yes — so we could talk about that, because the KEP actually includes both the API and also the implementation. We could try to get the KEP in, but we need to have, like, broad consensus on the API and also on the implementation.
F
We didn't need this annotation anymore, so the annotation got deprecated, and recently we tried to completely remove it from our code base, but we ran into an issue, and it's actually about static pods. The way static pods work is that we create these pods on a node, and then a pod object for these static pods is created; this object is called the mirror pod. It is created afterwards — basically, after the actual pod is created on the node, the mirror pod is created on the API server.
F
Static pods can have priority as well. The problem is that priority is like a DNS kind of thing, like a name server: we have a notion of the name of the priority, which is called the priority class, and then the priority is resolved to its integer value at admission time, and admission happens when a pod object is created.
F
So when the mirror pod is created, the priority of these static pods is resolved to the integer value, and then we know which of them is critical and which is not. Criticality works only with the value: any value higher than a certain number — which is now, I guess, 2 billion — is considered critical.
F
The problem is that at the time the actual pod — not the mirror object, the actual pod — is created on the node, the kubelet does not see the value, because the value is not resolved. The value only gets resolved later, at admission time of the mirror object, so the value is nil, or basically empty; it's treated as zero, and the pod is not considered critical.
F
So there is a chance — and we have actually seen it in production — that the pod gets rejected, given that on that node there are not enough resources. We need to change the mechanism by which static pod priorities are resolved. One possible solution is that the kubelet resolves the priority to the integer value for these static pods at the time they are created, and then, of course, the same resolution happens later when the mirror pod is created on the API server. But we need to —
F
We need to know the priority at the time the static pod is created on the node, so that the node does not reject the critical ones if it is out of resources. Resolving the priority is relatively simple: it's just a query to get the priority class objects from the API server, find the object identified by the priority class name, and get the value from it.
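A rough sketch of the resolution just proposed, not kubelet code: resolve a static pod's priorityClassName to its integer value at creation time on the node, so criticality can be determined without waiting for the mirror pod. The 2e9 threshold matches the cutoff mentioned above; the lookup interface is an assumption standing in for an API-server client:

```go
// Sketch: resolve a static pod's priority class to its integer value before node-level
// admission, instead of leaving it nil/zero until the mirror pod is admitted.
package main

import (
	"errors"
	"fmt"
)

const systemCriticalPriority = 2_000_000_000 // "any value higher than ~2 billion is critical"

// priorityClassLookup abstracts the query the kubelet would make to the API server.
type priorityClassLookup interface {
	Value(name string) (int32, error)
}

type fakeLookup map[string]int32

func (f fakeLookup) Value(name string) (int32, error) {
	v, ok := f[name]
	if !ok {
		return 0, errors.New("priority class not found: " + name)
	}
	return v, nil
}

// resolveStaticPodPriority fills in the integer priority for a static pod at creation
// time on the node and reports whether the pod counts as critical.
func resolveStaticPodPriority(className string, lookup priorityClassLookup) (value int32, critical bool, err error) {
	if className == "" {
		return 0, false, nil // no priority class: default 0, not critical
	}
	v, err := lookup.Value(className)
	if err != nil {
		return 0, false, err
	}
	return v, v >= systemCriticalPriority, nil
}

func main() {
	lookup := fakeLookup{"system-node-critical": 2_000_001_000, "best-effort-addon": 1000}
	for _, name := range []string{"system-node-critical", "best-effort-addon", ""} {
		v, critical, err := resolveStaticPodPriority(name, lookup)
		fmt.Println(name, v, critical, err)
	}
}
```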
I
First, from the SIG Node standpoint, we didn't really want to support static pods from day one. We just couldn't totally get rid of them, because a lot of production products depend on them — even GKE — and I know many people depend on them.
I
There's also the self-hosted control plane effort I've heard being carried out by many SIGs, but it doesn't seem to really make progress, so the SIG still has to carry on with static pods. What most people actually use static pods for is the control-plane pods. The worst part about static pods is that there is no controller or scheduler helping them — that is static pods by design.
I
If you think about it, the pod is required to run on the node that has it, and because there's no controller to manage it, it cannot be rescheduled to any different node. It's not dynamic like any other controller-created pod. So what I would rather do, if we have the common priority class and we cannot totally get rid of static pod support, is at least have a clear policy and default.
M
If we could completely get rid of them, that would solve the problems, but I don't know if that's viable. I have been wondering whether they should be a new resource type — I kind of wonder what the implications of that may be — or if there is a more generic way that we can solve all of these exceptions without having to hard-code the exception into every single controller and admission controller and everything. But this is maybe a bigger conversation, correct.
A
The only counterpoint I'll give to making all static pods critical is that today, at least in GKE, we differentiate between some pods on the masters and others: we'll mark the API server as critical, but maybe not some GKE add-on that provides a small piece of functionality. We have had cases in the past where we accidentally ran too many static pods on the master, and that's —
I
That's exactly what I want to call out — the purpose is to try to prevent people from abusively using static pods. If people are simply using static pods and putting everything on a single node, they are not really depending on Kubernetes or its scheduling; that's the wrong way to operate, so we shouldn't encourage it. We have learned that from production.
A
My suggestion would be that we consider — although maybe we should not — that even if we get a static pod and the node is already full of static pods, we admit it anyway, simply because that usually means someone isn't using kubelet resource management. Yes, we can —
A
We have had minor outages because we put too many static pods on a node, and thankfully they've been minor only because we admitted the really critical stuff. So having just some way — even though we're saying this isn't a great way to run your stuff — of having the performance of the system degrade a little more gracefully when, say, you put too many pods on it.
I
Basically, I think that's the abusive case — they don't really need Kubernetes' orchestration there, right? But I understand we cannot forbid it, even though we warn about it, because static pods are really error-prone just by design, from day one. We just cannot get rid of them beyond the limited set of real critical use cases, which is bootstrapping the cluster and also the minimal set of critical pods you run on every node. That's, I think, the —
O
So we need someone that's in the OWNERS file for the kubelet to approve some of these, so we can push them along. I think what we'd like to do is probably, at every meeting until the 1.16 milestone, just list the outstanding PRs that need approval from someone outside of our little scope of work, just to see what can be done.