From YouTube: Kubernetes SIG Node 20190226
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
There's a lot to discuss here, and a lot of people want to participate in this topic: the CRI migration and the dockershim, which is the only built-in container runtime today. So we want to kick off this discussion. A lot of production deployments depend on dockershim today, and a lot of people are doing the transition from Docker, since Kubernetes ships dockershim as the only built-in container runtime, to the other container runtimes.
A
There was actually a strong desire expressed to maintain dockershim — to maintain the dockershim-related content, the runtime interface conformance tests, all those kinds of things — and I'm not sure whether we did follow up on that one.
So has anything changed on that, or are there any action items still live? We want to talk more about those things.
C
So I have an action item to discuss this internally, to figure out what we want to do there in terms of investment. I think, from the perspective of where we are today, we just haven't had any kind of involvement in that, so I think we need to figure out some owners internally who could own that. Me personally, I'm not really the right person to own it.
A
And I think — thank you — maybe you are the best people to talk to, to brief us about the Docker 18.06 or 18.09 stack. The whole stack is built on top of containerd. Maybe you want to say something about that stack and how we are planning to help existing Docker users if they want to transition to containerd.
C
So for us there's kind of an intermediate step that we're working on right now, which is getting Docker to utilize all of containerd. Historically, Docker has leveraged the runtime components of containerd, and we've added this kind of image management and snapshotting that's used by CRI containerd today. So it's not just a matter of Docker passing everything through to containerd today; really, CRI containerd is doing some image management using containerd code, and then Docker has its own image management code.
C
That's stored in a completely separate place. So it's not just a matter of upgrading Docker to use containerd and then we can just leverage CRI containerd as-is; we really have to migrate the way Docker is doing storage to the way containerd is doing it, and then we can start talking about having the two share a common implementation.
C
So in terms of timeline: you mentioned 18.06; I think it went back to 17 — maybe 17.12 — when we first introduced containerd 1.0. The goal now is to get into a release this year where we're fully leveraging the containerd back end, so that when you're pulling images and doing all the operations in Docker, all that stuff would be visible within containerd.
A
I want to mention that the reason I mentioned 18.06 is just because that was the first one shipping on top of the CRI plugin. So basically, even if we migrate users to containerd — using containerd's CRI plugin to communicate — you could still install and have the Docker stack: you could still be using the image features, the build feature, the compose feature, if you need to do that work. The reason we made the decision to merge those things together is, for one, the performance concern, and there's also a simplification-of-installation concern.
A
Another one is because we think about all the use cases: you don't want to have just containerd and then also have to install another separate Docker just for image building. So we have to think about those cases. That's the reason I mentioned 18.06: at least to me, that's the version that starts to have everything together.
C
Yeah, so everything should be there: if you have 18.06, you have containerd there and available to use if you enable the CRI plugin. But, as was pointed out, it's not turned on by default today. We would like to have that turned on, so that if you have Docker installed, you could try the CRI containerd plugin.
D
And so, if you get to a state where you say we're going to remove the dockershim, then ultimately you have SIG Cluster Lifecycle asking: well, which one should we run, which ones do we test? And obviously there's a variety of implementations available for people to look at. So is it the right use of our time right now to...
D
...explore this much deeper, if we don't yet know all the gaps one would need to close to actually eliminate an in-tree CRI implementation — because that's basically the effective goal that's being described — versus just documenting what we think the prereqs are that need to be done? So getting the CRI to a state that we are all happy with seems like a prereq, one piece being Windows.
A
I think you are totally right; I don't think we can take any action yet. I just wanted to kick this off, since the community is wide and many people are watching this closely — many people actually have a lot of confusion. And you are also right: even if we want to deprecate it — if eventually we don't want this built-in shim — then we need to talk about, okay...
A
What
is
a
replacement
for
the
are
testing,
what
it
is,
the
for
our
III
test
or
what
it
is,
but
I
think
the
I
also
want
to
take
this
chance
to
repeat
signal
the
decision.
When
we
first
started
there,
I
we've
been
repeated
at
decision
for
many
times
we
think
about
which
continent.
One
time
we
define
the
common
API
and
the
which
she
real
continent,
rental
implementation,
is
running.
We
actually
give
to
the
image
provider
and
and
also
vendor
the
kubernetes
vendor
or
image
vendor
to
up
to
them.
To
this
idea,
so
is
so.
A
So the signal we provide is the CRI conformance tests, to ensure all of those runtimes pass the Kubernetes requirements — the node requirements and the Kubernetes-level requirements — and then each vendor, or each container runtime community, runs those tests, including performance, functionality, and correctness, and for each release they update those results. So it is their responsibility.
E
I'd like to speak to that particular requirement. I think you're right that it is a valid requirement that the container runtimes will validate — run the end-to-end tests — but at the same time, once we move dockershim into its own tree, or remove it completely, at that point Kubernetes will have no choice but to pick one or two or three of the runtimes, and then on each PR...
E
...on each push, run the end-to-end tests with one of these CRI shims enabled — if not all three of them, which would make running each of your tests longer. And then the requirement to make these things work is a shared requirement between the groups — containerd, CRI-O, and maybe a dockershim group, which would be new — and the Kubernetes team. I don't think the container runtimes can be the only owners of the end-to-end testing for Kubernetes.
E
I don't think we have the manpower in the container runtimes to do that. We've had a number of situations where Kubernetes has — because they had to — put patches in the kubelet that are directed at Docker, and we're still finding those. I know there's a lot of work going on to try to pull those out, but we need to do that so that we can finalize this CRI, so that we could even consider ripping it out of the tree. Yes.
A
That's for sure. The reason we wanted to kick this off so soon is, for one, so the community knows there are a lot of hidden requirements — it's not as simple as it looks. We already have some people running containerd in production, and some running CRI-O in production. But as a community we still carry the burden in the meantime, because there are a lot of product dependencies and also the CRI dependency, and we can also get new features, like Windows containers.
E
I'm very concerned about each of the runtimes making independent decisions with regard to things like idle timeouts — that came up the other day. We need to have a common decision at the CRI entry point. We can't just make some of these decisions independently; otherwise you'll end up with pods that work differently and have to be configured completely differently, which is what happened with networking, right?
G
Yeah, I had two things I want to mention. One of them is, when it comes to multiple-OS, multiple-platform support, I think moving to the CRI API and getting things iterating faster there is very, very important. In the case of Windows in particular, we're already being limited by the Docker API that's there, because the containerd code that's there today already supports some of the things that users are asking us to implement as RuntimeClass, but we can't do it through the Docker API. And so from a SIG Windows...
G
...standpoint, you're going to see us accelerating an effort to move to CRI containerd as fast as we can, so that we can unlock some of those features. And note that later in the agenda today there are some questions around things like how to schedule cores and do gang scheduling with hyper-threading; I believe those are all things that need to be represented cleanly in the CRI, and as long as we have dockershim in the codebase, I believe that would actually prevent us from being able to implement that correctly in the CRI.
D
I think in the past you and I have had this conversation around what to do with respect to container runtime choice and keeping code in the tree, and I think you and I had always said this is a big impact for the community, beyond what our own particular service providers do. I still feel like this is a big impact, and so I still think we're looking at a minimum of a year until anything actually gets done here.
D
So even, Patrick, to your point about Windows containers going GA: if you go GA and you still depend on dockershim, I feel like we need to set a clock, and that clock is still probably a year. And so I guess what I'm wondering is: what is the right time to start that clock? Do we start it once all the dependencies have been met, or do we start a clock and let that force the community to actually make the changes that it needs to make?
D
Because I can see us going one of two ways. Either we say we're going to make a ton of changes, but then all of us are distracted by our own needs or requirements and this never bubbles to the top; or we say we're going to establish a goal and then make it a priority to get something done. But I have a feeling we'd still be discussing it four releases from now. You know, I agree with Mike that we can't just leave users in jeopardy.
A
I totally agree with you. I think we could discuss this, and we could actually postpone the decision, because many of the people involved are not here. Really, I want more people to understand the complexity behind this one, and then in the future we could make the decision — we could document our decisions on what we are going to do. It is just one wish of mine that we kick-start this clock.
A
We put it there because every once in a while this kind of problem comes up in the community: it sounds really wonderful, the best idea, it would simplify our code a lot, but the question is always whether we have people to put into the execution plan. So I want us to have a communicated way to document it and then to put it back there in some way.
D
The only awkward part to me is if, a year from now, the list of certified Kubernetes offerings in the world are all doing something different than what we are defaulting to in the community — that would be the only awkward state I'd see, and I don't know if we've actually reached that state; I don't have a breakdown of what every vendor is doing. But either way, I still think we're looking at a year timeline, and so it's good that we had this conversation.
H
One thing is, I think whether we actually kick off this timer is going to be a problem even if we do nothing, given that everyone is pretty busy with other projects. Even if we do nothing, we have to make a decision about whether we can start adding features that Docker does not support — that dockershim does not support — and that would be important for actually evolving the CRI and also supporting more things like RuntimeClass. So if everyone's okay with going forward such that
H
...dockershim will not support some of the new features, then yeah, that would be one decision to make. But beyond that, can we actually deprecate all the existing features in dockershim? That's another problem, and that's something that is going to break users. So I think for that to actually happen, we need to deprecate it and wait for at least multiple releases.
G
So I just want to clarify one thing. If we're setting a clock of around a year for deprecating dockershim, should we move it out to a CRI implementation, or, if we're communicating deprecation early enough, should we just delete the code? I mean, I'm just wondering: if this is a three- or four-release journey, are we going to spend a year getting to an interim point where...
G
...dockershim has lesser functionality but still lives on, or do we just say let's cut it if we're not going to support new functionality on it? I think that's something that would be important for SIG Node to weigh in on and say: is dockershim going to be supported for multiple releases in a deprecated state, so that people can migrate off of it with reduced functionality, or do we just draw a hard line and cut it? That's what I want to know.
A
Maybe we need to decide — clearly decide, yeah — on this one. There is also the question of whether the CRI API gets promoted from alpha to the next stage or not. And also, today, to support older customers, we have a lot of legacy: they're looking for the logging, for the exec, a lot of things.
A
Actually, we do have the legacy support there, and when we designed those kinds of things we had to talk about it, because we want a smooth transition. So we support those legacy implementations, but we do plan to deprecate them; the question is always which of those should be deprecated and removed, and whether we have a common way to do the logging handling. Those kinds of things are open discussions, and we need to review all of the CRI today and come out with: okay, what is the way to promote it, and what are the problems.
A
There are also legacy pieces that we want to deprecate, and in today's CRI interface there is also new support, like Windows, and, in general, support for security features and all those kinds of things, I think — and it is still evolving and moving. So our approach is to treat those as additional features. Overall, we want to address many of those problems and promote the CRI to the next level. That's all we can discuss right now.
G
We still need to graduate RuntimeClass to GA and then basically set what that timeline is, and so I think just getting all of that clearly communicated is what I see as the next step here. Whether or not we have a dockershim CRI would sort of fall out of that; I think that would decide itself based on the timeline, once we can close on what's reasonable. Okay.
D
Who will act as the advocate for when we know that this is the right thing to do? I still feel like many of the individual members within the community who have made a transition are not going to be incentivized to spend time trying to make a change — other than if they go to make a PR update and get frustrated that they have to fix something in-tree. But overall, I still think, like I said earlier...
D
...I think that'd be perfect, right? Actually, to me it's not a good look if someone from Red Hat is doing this, and I don't know if it's a good look for someone from Google to be doing this, versus someone from the community acting as that voice — because I know in each of our cases our companies may default container runtime choices — so we need a voice from the community that represents the existing in-tree support.
A
So next time — it is time — I will send the email, and also we took the meeting notes and recorded today's meeting about this topic. I will send those things to Harry, and hopefully he can form the key notes and also work hard to gather other opinions, and then we can share our opinions on the key issues, and then we come back and communicate a month later.
L
Okay, great. So Tim started the discussion late last year about basically pod overheads, and since then there's been a lot more progress on defining RuntimeClass, and I wanted to spend a couple of minutes quickly giving an overview of what we're proposing here and get feedback. So basically, today, you go ahead and define the constraints — the limits and requests for CPU and memory — for each container, for all the containers in your pod.
L
You effectively have a pod cgroup, which is just the sum of the containers, and when a runtime comes in, it should be using the parent cgroup that is created in this case. Just quickly looking at the runtime spec, it's pretty clear that if a cgroup path — in this case it would be the kubepods cgroup, which is created by the kubelet — is provided, then as a runtime I should be working inside of that pod cgroup.
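As a rough illustration of the accounting just described — this is only the arithmetic, not kubelet code, and the function names are made up:

```python
# Illustrative only: the pod-level cgroup limit is effectively the sum
# of the per-container limits, and any sandboxed runtime must fit its
# own overhead (e.g. a hypervisor) inside that same budget.

def pod_cgroup_limit_mb(container_limits_mb):
    """Pod cgroup memory limit as the sum of the container limits."""
    return sum(container_limits_mb)

def sandbox_fits(container_limits_mb, runtime_overhead_mb):
    """Whether the runtime's own overhead alone even fits inside
    the pod cgroup limit."""
    return runtime_overhead_mb <= pod_cgroup_limit_mb(container_limits_mb)

# A single 50 MB container leaves no room for a ~160 MB hypervisor:
print(sandbox_fits([50], 160))  # False
```

This is exactly the failure mode described next: the runtime is obliged by the spec to run inside the provided cgroup, so an undersized pod cgroup means an immediate out-of-memory kill.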
L
So if someone were to create one and use a sandboxed runtime — let's say Kata Containers, for example — and just have maybe one container in there, constrained down to, let's say, 50 megabytes: if I were to use that cgroup — and I'm supposed to use it; the spec says so, you provided it — we would get an out-of-memory. We wouldn't be able to effectively launch a hypervisor in such a small, constrained pod cgroup. So this is just one example of the problem that exists today.
L
On top of that, we're also not really able to get effective scheduling and effective resource quota management by the kubelet, or for the node itself, because we're not taking into account the overhead associated with a runtime. So currently, with no sandbox, the overhead is very, very minimal, and the allocatable pool has the system-available memory on the system.
L
Then you take away just this carved-out overhead for the system and the kubelet, and that's usually enough to take these into account, and the actual pause container, in a runc type of scenario, is negligible enough — this is kind of how it's managed today. As soon as you start utilizing RuntimeClass to use alternative runtimes, though — whether it be gVisor or a Firecracker integration or Kata Containers —
L
...the overhead is not negligible, and you would need to be able to take this into account. If you look at resource quotas per user, or just at accurate scheduling itself, this should really be taken into account. Otherwise, the current workaround is to just carve out a lot more reserved system memory, and that's not even close to a good heuristic — you'd have to be very conservative to make it work. So this is far from ideal, I would say.
L
So really, the summary of what Tim started, and what we're refining here in the proposal, is to introduce an overhead field in the pod spec, which would have the same shape as the resource-requests field that you would have associated with a container, but would be dedicated just to the overhead associated with running a pod. This would be RuntimeClass-specific. So, for the different handlers: today the RuntimeClass is defined just as a key-value pair with a string handler, and that handler may have an implementation behind it.
L
I would then suggest a runtime controller — a mutating admission controller — that would see whether a RuntimeClass is defined for this pod and, if so, go ahead and inject the pod overhead value itself. That's the high level. Looking at it closer: in the pod spec, add an overhead field. I wouldn't expect anybody to have to manually go in and set this; again, this should be done through something like an admission controller utilizing RuntimeClass, whether or not users can write it themselves.
L
Add the four fields, basically: for CPU and for memory, both the requests and the limits. Again, this is what would be used by the controller to actually update the pod spec. And then add a mutating admission controller — depending on whether RuntimeClass is still a CRD or becomes a core type, some of this may change as far as whether it should be a webhook or built in or anything else.
L
But essentially this is a very, very simple controller that would just go in and, where applicable, inject the actual pod overhead, and that's about it. I do have the original RFC document in the Google Doc, which has a little bit more detail, and you can see some of the conversation based on that.
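A minimal sketch of what such a mutating admission controller could do, assuming a per-handler overhead table configured by the administrator — the field names, keys, and numbers here are placeholders, not the proposed Kubernetes API:

```python
# Hypothetical sketch of the proposed mutation: if a pod names a
# RuntimeClass, inject that class's declared overhead into the pod,
# so scheduling and quota can account for it. All names are assumed.

RUNTIME_CLASS_OVERHEAD = {
    # per-handler overhead declared by the administrator (assumed shape)
    "kata": {"cpu_millis": 250, "memory_mb": 160},
    "runc": {"cpu_millis": 0, "memory_mb": 0},
}

def inject_overhead(pod):
    """Mutating-admission step: copy the class's overhead onto the pod."""
    rc = pod.get("runtimeClassName")
    if rc in RUNTIME_CLASS_OVERHEAD and "overhead" not in pod:
        pod["overhead"] = dict(RUNTIME_CLASS_OVERHEAD[rc])
    return pod

def effective_memory_request_mb(pod):
    """What the scheduler would account: containers plus overhead."""
    total = sum(c["memory_mb"] for c in pod["containers"])
    return total + pod.get("overhead", {}).get("memory_mb", 0)

pod = {"runtimeClassName": "kata", "containers": [{"memory_mb": 50}]}
inject_overhead(pod)
print(effective_memory_request_mb(pod))  # 210
```

The point of the sketch is only the flow: users never write the overhead themselves; the controller injects it from the RuntimeClass, and downstream accounting sums it with the container requests.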
D
I remember — I think it was Tim who presented this last year — and I still feel like this is really complicated. At the same time, I have less experience with using RuntimeClass, so I don't want to discount the reasons you laid out for why it's needed. I guess I'm still wondering why we don't just do pod-level resource requirements and ignore this resource-overhead thing altogether.
D
Does that resource overhead actually tie into quality of service or not? And if I'm not running with a RuntimeClass, the stuff you enumerated here — like memory-backed volumes — still gets charged back to the pod cgroup. So you have overhead leakage just in the normal course of running the application.
D
More
experience
with
Karen
Hayes
these
days
is
like
people
don't
know
how
to
size
anything
right
like
they
just
don't,
and
we
we
do
a
lot
today
to
try
to
size
individual
containers
correctly,
but,
like
that's
largely
a
consequence
of
us,
never
having
a
pod
sandbox
like
secret
bounding
box
in
the
beginning
and
like
if
we
had
resource
requirements
at
the
pod
level,
like
I,
just
think
the
whole
world
would
be
simpler
for
people
boom
they're.
Only
like
how
much
do
you
want
to
get
to
this
pod.
A
If we had had pod-level requirements from the beginning — I used to think that way; I was trying to go that way. But recently there are a lot of use cases, like sidecar containers, where people reason about the limits separately for each container, and implementations may change version by version. So I was kind of thinking: thank goodness we don't have a static pod-level setting.
L
Of course — I see that both are very valid and would be useful. I could see that having unconstrained containers and just constraining the pod — where you essentially say, "you kids can do what you want as long as you stay in the sandbox" — makes a lot of sense. I've heard different people ask about this and assume that it was in the pod spec already, when it's obviously not. So when I was initially looking at this proposal, that was something that I hadn't thought of.
G
So, okay, can I walk through the user experience on that real quick? Let's say you wanted to run a container with a limit of a hundred megs, but you wanted to leave room for a sidecar. In that case, you would say the pod has a request of 200 megs and a limit of 200 megs, and so now there's this delta of a hundred megs that I could use for scheduling that sidecar. Is that kind of what you're saying? Yes...
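The numbers in that example work out as a simple delta — a hypothetical sketch, assuming pod-level requests and limits were supported:

```python
# Hypothetical: with a pod-level limit and one container limited below
# it, the difference is slack the scheduler could reserve for a
# sidecar within the pod boundary.

def pod_slack_mb(pod_limit_mb, container_limits_mb):
    """Slack left inside the pod boundary after the known containers."""
    return pod_limit_mb - sum(container_limits_mb)

# 200 MB pod limit, one 100 MB container: 100 MB left for a sidecar.
print(pod_slack_mb(200, [100]))  # 100
```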
G
...the idea of getting that slush within the pod boundary, okay. And so now the second use case is: if we're looking at the overhead due to a hypervisor, for the case of, say, Kata or Hyper-V, then are we suggesting that, in order to make room for this overhead, if we were to do it using the pod spec, we would still need a runtime admission controller that says "hey, your RuntimeClass is using a hypervisor, so add the hypervisor overhead to the pod request and limit"? Is that how you would see that implemented?
D
And I think the runtime controller is still needed. I've just lived now through, like, four years of explaining resource-requirements setting, and I don't think we've done a perfect job of making it easy for users to think about. I'm more and more convinced that people just want to think about their pod; it's the power user that thinks at the container level, and even then they think wrong. Having something at the pod boundary first, I just think, is so much easier.
D
Say I have my workload and I want to take this into account — it all starts with a bad example, right? Someone on the internet cats out a pod that was running, they see a resource-requirement overhead value of X, and they copy and paste that into their new cluster, and suddenly bad practices have permeated all throughout the internet. I just don't think users should have to think about this, and so I don't feel like it should appear on a pod spec.
L
That's why I was saying that they shouldn't ever have to write it; that's why the admission controller would be there. That way, if you have a RuntimeClass defined — whether that was from an admission controller or whether someone decided to do what they saw on the internet — they would just write that, and the runtime admission controller would go through and augment the pod spec.
L
And what I was suggesting is that you can write your YAML and you can apply it, but the API server would override it — or you could pick the bigger of the two. You know, if you say you love overhead and want to eat all your CPU, go for it; but this is what the administrator said this runtime class costs as overhead.
A
Please think about this side as well: different users may have different behavior. Some power users may be upset, because they believe their containers can share the resources, since they burst at different times; and some users may think really carefully about the aggregate of the container resource requirements plus the overhead. So this may be mixed behavior; however you do it, you need to reach both.
D
I really think resource requirements generally, when they appear on pod specs, introduce confusion everywhere, and I don't think the overhead concept means anything to anyone other than a container runtime implementer; it's not something an end user even knows or thinks about. Yes.
L
I agree it's awkward if we have something in there and say, "oh, just don't touch this." I appreciate that even if the runtime controller will just blast it away, it still doesn't make sense from an end-user perspective. But from an implementation standpoint it's easy: it's clear exactly what needs to be done, and it is at the pod level.
A
So, just to carry on the discussion from the other item: I think this is definitely an enhancement for our overall resource management. On the node-resource-management side, we do have a lot of missing resource accounting, which will eventually cause resource starvation on the node and cause instability. So no matter whether it is pod-level resource requirements or it is the overhead, it is an enhancement.
D
I think the only option — and I don't think we discussed this last year, so maybe I'll ask it again and people can tell me why it wouldn't work — is a fixed overhead per pod. So if you had configured a node that says "I'm running sandboxes", and you treated that as a kind of resource, the node could advertise the total number of sandboxes that it could support, right? And then we can...
A
It's not just that — you also have another, even more complex case here. We have the reservations for the system daemons and also the Kubernetes daemons. So we basically have this concept: you take the machine capacity, subtract all the reservations, and then you have the kube allocatable concept — that's the top level of the cgroup hierarchy. Based on that, when a new pod comes in, you increase the pod-level cgroups under that top-level cgroup.
A
If you then remove the pod there, you may have to reduce those cgroups again, or keep them big. So that's one kind of complexity — another layer of complexity we need to address. I feel like we would basically be pushing the problem from one piece to another piece.
A
Last time we talked about this in the community, people asked for a node that could have both the regular runtime and, via RuntimeClass, also support secure containers. For example, a node could support both regular native Linux containers and also gVisor. In those cases the runtime actually is different, and the overhead is different for different pods. How are we going to address that problem?
A
I'm more concerned that we solve the resource-accounting problem on the node, because even today, with the pause container, we still don't charge everything properly, and so there is hidden resource usage that sometimes causes stability issues.
F
Yeah, I will put the link on the agenda, but basically, after the runc CVE fix, we observed a memory spike during container start. So if you have a pod with a very small memory limit — say five megabytes — it's possible that you cannot run it anymore, because the memory spike will exceed the memory limit and it will get killed. So that's the issue, and we're working with the OCI community to fix this; currently there are several proposals and they are working on that.
A
Until we can work that out, I think there may be a change needed in Kubernetes: a minimum memory request for each pod, no matter what you specify, and that minimum might have to change to 10 megs or 15 megs. We're still working on that; we've identified those things, and internally we did some measurements and we found those problems. Yeah.
F
And if this is not fixed — hopefully it will be fixed — but if it is not, that means we will have a special model where you have a fixed memory spike at the beginning, but your workload may actually use much less memory afterwards. And this is not only for this runc issue; for the sandbox scenarios and other use cases there may also be some overhead where you have a spec and you want to tighten it, and we can't express that with those magic numbers today; we don't have a way.
A
So could you please send mail about that to the Kubernetes SIG Node community? We do have several working items proposed there; I would advise that, and I also want to take an action item on this personally.