From YouTube: Kubernetes SIG Node 20200317
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Welcome to the March 17th SIG Node meeting. We've got a couple of items on the agenda. I don't know if everyone who's on the agenda is here; given the world situation, we'll do our best as usual. If something you want to raise isn't on the agenda, feel free to add it and we'll get through it as time permits.
C: The way that the kubelet currently organizes the cgroups is that there is a top-level cgroup under which all of the pods in the Guaranteed quality-of-service class appear. Each pod has its own cgroup, and under that there is a container cgroup for each container in the pod. For Burstable and Best-Effort it's similar, except that Burstable and Best-Effort pods live in their own top-level cgroups.
C: The current situation in the kubelet is that the pod-level cgroup always has its limit set for Guaranteed pods, and it is set to the sum of the limits of all of the pod's containers. For Burstable pods it will be set if and only if all of the containers in the pod provide a limit. The limit that is set is the maximum between the sum of all the container limits and the largest single container limit that exists in the pod.
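A minimal sketch of the limit computation just described, following the rule exactly as stated above (this is an illustration, not the kubelet's actual code; the helper name is made up):

```go
package main

import "fmt"

// podMemoryLimit mirrors the rule described above: for a Guaranteed pod the
// pod-level cgroup limit is the sum of all container limits; for a Burstable
// pod it is set only if every container declares a limit, and the value is
// max(sum of limits, largest single limit). -1 means "leave it unlimited".
func podMemoryLimit(containerLimits []int64, guaranteed bool) int64 {
	var sum, maxSingle int64
	for _, l := range containerLimits {
		if l <= 0 { // a container with no limit set
			if !guaranteed {
				return -1 // Burstable: every container must have a limit
			}
			continue
		}
		sum += l
		if l > maxSingle {
			maxSingle = l
		}
	}
	if guaranteed || sum > maxSingle {
		return sum
	}
	return maxSingle
}

func main() {
	// Burstable pod with 100Mi and 200Mi container limits.
	fmt.Println(podMemoryLimit([]int64{100 << 20, 200 << 20}, false))
}
```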
C: Now, going on to the set of questions that were raised last time: the first question was what would happen with init containers if we set pod limits. I think that if we set pod limits on the cgroup at the pod level, the limit should only be set after the init containers have finished executing, and the pod limit should not take into consideration the limits that were set for the init containers. The reason is that this feature exists to allow us to share resources between containers.
C: The init containers are going to be run sequentially, and they're going to run alone. They won't be able to share resources with any other container in the pod, simply because no other container in the pod is going to be running concurrently with each init container. Okay, please stop me if you want me to answer any specific questions.
C: The second question that came up was: can resources be reused between cgroups? The definition in cgroups is that resources can indeed be reused across sibling cgroups, assuming they have been released by the cgroup that was actually using them. So if a CPU-intensive process is running in one cgroup and then it finishes with the CPU, or releases it, the cgroup next to it would definitely be able to use those shares. There is no problem.
C: The memory issue is a bit more complicated, because it is very much dependent on the runtime being used. If the runtime is using glibc (you can go into the link, which is also in the KEP itself), it will sometimes opportunistically clear memory, or release memory back to the operating system; the complete algorithm is at the link that you can also find in the KEP. musl, which is used by Alpine, is a lot nicer.
C: It will automatically mark memory that has been released via free(), using the madvise system call with the MADV_DONTNEED flag, and that means that if there is memory pressure, the memory can actually be reused by sibling cgroups. I also checked what happens with Go, with Java and with Node.js. As it is written, Golang will also use the madvise system call, so during garbage-collector runs it is possible for memory to be released all the way back to the operating system and then reused in a different cgroup.
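A minimal sketch of the mechanism just described, assuming Linux and the golang.org/x/sys/unix package: map an anonymous region, touch it so it is charged to this cgroup, then tell the kernel the pages can be dropped, which is what lets a sibling cgroup reuse them under pressure:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Map 1 MiB of anonymous memory (roughly what an allocator does
	// under the hood for large allocations).
	mem, err := unix.Mmap(-1, 0, 1<<20,
		unix.PROT_READ|unix.PROT_WRITE,
		unix.MAP_ANON|unix.MAP_PRIVATE)
	if err != nil {
		panic(err)
	}
	mem[0] = 1 // touch a page so it is actually charged to our cgroup

	// MADV_DONTNEED tells the kernel these pages may be reclaimed
	// immediately; the charge leaves our memory cgroup and the pages
	// become available to sibling cgroups.
	if err := unix.Madvise(mem, unix.MADV_DONTNEED); err != nil {
		panic(err)
	}
	fmt.Println("pages released back to the kernel")

	_ = unix.Munmap(mem)
}
```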
C: Obviously, this is really something that every application would have to figure out for itself, where it falls on memory reuse. Also, if the container is forking and exec'ing an additional process inside the container, next to the main process in the container, then when that process finishes, all of the memory it allocated will also be released back to the operating system and will therefore be available for reuse in a sibling cgroup. So it is useful, in other words, but it depends on the specific use case.
A: An additional overhead to be considered free for use by the containers that are actually running some workload is the pause container itself. The pause process itself is not going to be using that memory; it doesn't do anything. So this is a convenient place to just mark that the memory is being used, and the scheduler should probably do the right thing with it.
C: So, in that case, what we need to come up with is where we can assign those resources, considering that if they are assigned to the cgroup on the pod level, they would really not remain unused: they would be allocatable by the children container cgroups. So that's an open item, and we can continue to discuss this.
C: The intent behind the resource overhead is to make sure that the memory is allocated to something that the scheduler is aware of. If something else is then going to be using that memory in addition to the overhead, you would basically be increasing the amount of memory provided to the workload itself, and not assigning it to the overhead part of the resource allocation.
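For reference, the overhead being discussed here is the PodOverhead mechanism, where a fixed per-pod amount is declared on a RuntimeClass and added by the scheduler on top of the container requests. A minimal sketch using the current k8s.io/api types (the handler name and quantities are made-up examples):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// kataRuntimeClass declares a fixed per-pod overhead for a hypothetical
// Kata-style sandboxed runtime: every pod scheduled with this class is
// accounted an extra 120Mi of memory and 250m of CPU for the shim/VM.
var kataRuntimeClass = nodev1.RuntimeClass{
	ObjectMeta: metav1.ObjectMeta{Name: "kata"},
	Handler:    "kata", // example handler name
	Overhead: &nodev1.Overhead{
		PodFixed: corev1.ResourceList{
			corev1.ResourceMemory: resource.MustParse("120Mi"),
			corev1.ResourceCPU:    resource.MustParse("250m"),
		},
	},
}

func main() {
	fmt.Println("declared pod overhead:", kataRuntimeClass.Overhead.PodFixed)
}
```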
E: Honestly, whether you assign that to the pause container's cgroup or it is just accounted at the pod level, there is actually no difference here. When you have a memory starvation issue at the pod level, you certainly have that problem anyway; which cgroup we kick in on is based on our design, and then we decide how to reclaim, or kill something.
E: The kernel could kick in to reclaim first, and it could do things like pushing memory down to swap, or it may just kill something. Then it will pick which process to kill, and our pause container is already covered by design: for OOM handling, Kubernetes' management means the pause container is protected by design, and it is the last one to be killed, so I don't worry about it.
E: The problem, I personally think, is that this is a little bit of over-design. At the last meeting I did mention how the overhead actually relates to the feature you propose here; it is one step forward that can make it possible. So it's not that complex here, and it actually helps you solve some issues, and I don't think there is any concern here: if there's memory starvation, it hits the top-level cgroup and the kernel kills some of the things there. I don't think the kernel will kill the pause container.
C: I think it is useful to add that overhead to one of the cgroups. I'm just a bit leery of adding it to the pod-level cgroup, because if we add it there, then I'm afraid it is going to be doubly utilized: once by the runtime itself, and then once again by something running in the pod. So I want to assign it to a cgroup where it's not going to be doubly used by the workload in the pod itself.
C: The thing is that if you define a limit in a cgroup, that doesn't automatically allocate the memory to that cgroup. The memory will only be allocated once something actually tries to allocate it in that cgroup; the allocation attempt itself is what consumes the memory. So for the request part in the resources section, the scheduler obviously will take into consideration the overhead that is defined in the resource overhead. But even if you added that same amount of memory to the limit in the cgroup, it doesn't mean anything.
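A small sketch of the point being made: a cgroup's limit and its actual usage are independent numbers. Assuming a cgroup v1 memory controller mounted at the usual path (the pod path below is an illustrative example), you can observe both directly:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// readCgroupValue reads one of the cgroup v1 memory controller files for
// a pod-level cgroup. The path is an example; real pod cgroup paths
// depend on the cgroup driver and QoS class.
func readCgroupValue(name string) string {
	data, err := os.ReadFile(
		"/sys/fs/cgroup/memory/kubepods/pod-example/" + name)
	if err != nil {
		return "unavailable: " + err.Error()
	}
	return strings.TrimSpace(string(data))
}

func main() {
	// The limit is just a ceiling; raising it does not allocate anything.
	fmt.Println("limit:", readCgroupValue("memory.limit_in_bytes"))
	// Usage only grows when a process in the cgroup actually touches pages.
	fmt.Println("usage:", readCgroupValue("memory.usage_in_bytes"))
}
```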
C: It would only mean something if that memory were actually going to be used. So the container runtime, the shim itself or, I don't know, the VM if you're talking about Kata Containers, should also be in a cgroup, and should also be considered as part of the cgroup where the memory is being accounted for.
E: Point out that if the pause container, or whatever runtime shim container, dies, the whole pod should be terminated. Yes, so no matter where you charge that limit entry, that holds. What you're missing here is that you want to guarantee the resources: the pod level of the resources can be shared among the containers, but that should exclude the pause container or other runtime shim containers. To me the logic here is actually not correct, because if any of those runtime shim containers gets killed...
E: ...the entire pod will be terminated, yeah. You need to guard against that. So basically, what I want to see is that when we set that limit, even if we don't guarantee that the shim container can use all of the resources, we at least make sure it is not wrongly or abusively using the other containers' resources. So I don't think this feature actually has that cost problem for any of those. What do you propose here?
C: Okay, if I understand what you mean: this feature, or the feature that I'm trying to propose here, only affects the limits. It doesn't talk about the requests, and the requests are handled on the scheduler level. The resource overhead has no negative interaction with the pod-level resource limit or the resource-sharing functionality, so it doesn't matter. And maybe...
C: The right thing to do would be to not even increase the limit for the pod or the pause container at all, since the memory isn't going to be allocated by something running inside of these containers, these cgroups, and the memory shouldn't actually be considered something that they might add to their limit. So...
C: I'm not sure, because consider that even if you're using, say, something like Kata Containers, where a block of memory is assigned to the VM itself: the VM in Kata Containers would actually launch its own internal agent (I think it's containerd or something like that) using some sort of CRI, and it will also create additional cgroups inside of that VM, and it would still make sense for it to be possible to share resources between those internal containers.
E: That's why, earlier, I believe both David and I suggested that you take the container runtime shim, or the pause container, and their requests (the pod overhead, whatever we describe there) and add that to the pod-level limit at the top. Then you cover those corner cases, and we don't need the over-design of trying hard to account for the pause container, or maybe the container shim, using more than this specific amount of resource, and adding that to their limit.
C: If you grant that overhead to the cgroup on the pod level, the fact that you've increased the limit on that cgroup doesn't mean that the memory is actually going to be allocated. It just means that this entire pod can now, for example, instead of being able to use the 1 megabyte that the pod itself requires, use that 1 megabyte plus another megabyte that is specified in the pod overhead; it doesn't mean that it is actually using that additional 1 megabyte.
C: So if one of the real processes in the workload that the pod is managing tries to allocate 2 megabytes of memory, that will work; the limit will make it possible. But in the case where we're using the resource-overhead scenario, that basically means we've used that additional 1 megabyte twice. We assigned it to the pod in the hope that it would be allocated to the overhead, and the overhead might also be using it, because the overhead, the shim, is not running in the same cgroup.
C: What I thought is that it still has a potential problem, and that is: if the shim is not going to allocate all of its memory upfront, before it starts any of the containers that it is going to manage, then there is a race condition here, and it is possible for the memory to be allocated by one of the containers running under this shim. Yep.
F: So for those sorts of problems, that's what Dawn was talking about earlier with the OOM killer. If it does eventually try to allocate that memory, it'll trigger an OOM, and because of the settings we've configured, the user process will get kicked out, hopefully, instead of the overhead process, if they're actually two different processes. I don't know how that works with sandboxes.
E: It's no different, but just think about it: if it is a Guaranteed pod, which is the situation today, even today we already have the pod-level cgroup, even where we don't put a limit on it, so charging goes to the pod level. A bit earlier in Kubernetes, the memory design actually didn't have the pod level and charging went to the root, which was a big problem. So basically it's that kind of semantics.
E: If we only get memory pressure at the root level, we may not reclaim those kinds of things, so it can effectively be hard on the whole workload's performance. So I basically think this is not a problem: with the pod-level cgroup it's not a problem, actually, and we keep that memory pressure local to the region instead of affecting the per-node performance.
E: To clarify the value there, I see just one difference. Usually we don't have the pod-level limit, so basically the kernel won't effectively reclaim; when the root level hits memory pressure, it will reclaim, and once the kernel starts to reclaim, it definitely affects everybody's performance. But this one is actually regional, because we have the pod-level limit.
C: It is an accounting difference. It might actually be a little better, because it would mean that if the pod is limited, you won't be able to swamp the entire node; and once the container that wrote the file goes away, if there are still enough available resources on the pod level, a new container that is being created with specific limits would still be able to start. So it might...
C: As far as I can tell, there is no harmful interaction with non-uniform memory architectures, as far as the cgroup is concerned. Right now we don't support, I think, the cpuset cgroup, so there's no pinning of specific processes to specific CPUs or NUMA nodes, but...
C: It's accounted in the same cgroup. As for any additional cgroups that we might be able to use, I don't want to tackle them in this specific KEP right now; I only want to touch memory and CPU. But if we were to add support for them in the future, I don't see any inherent problem here, unless somebody can enlighten me.
B: Yeah, Derek, I think you had asked whether there would be any impact here on Topology Manager, and I brought it up in our weekly meeting with the rest of the group. We looked at it a little bit, but unfortunately none of us have really dug deeply into this KEP. I think it would be good if we on the Topology Manager side took a closer look at this, and we just haven't done that yet. Okay.
C: So if you want, I'm available both on email and on Slack if you need my input for anything. Thank you. Okay, so I changed my mind a bit relative to the last time we discussed this, also based on comments that were put on the issue in GitHub. My thinking now is really to put a resource at the spec level; that way, it would at least be easier to explain exactly what the expected behavior is. That is the biggest advantage here.
C: The second advantage would be that it is then not tied anymore to the quality-of-service mechanism. It behaves the same regardless of which quality of service your pod is in, with a small caveat: for Guaranteed pods the pod-level cgroup doesn't actually mean anything, since the request and the limit are equal, and you're never going to be able to use more than the limit in any specific container's cgroup.
C: So if you define the resource on the pod level to be higher than the sum of the limits of the containers, it wouldn't matter; and if you define something that is lower, then basically you're just making sure that you don't get all the memory that the containers themselves specify. So for Guaranteed pods it really doesn't help; this is more a feature that is relevant or useful for Burstable or Best-Effort pods.
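A sketch of the semantics just described, with a hypothetical pod-level memory limit (no such field existed at the time; this only illustrates why the value is moot for Guaranteed pods):

```go
package main

import "fmt"

// effectivePodLimit illustrates the caveat above: a hypothetical pod-level
// memory limit only changes behavior when it is lower than the sum of the
// container limits. For Burstable/Best-Effort pods, where some containers
// have no limit, sumOfContainerLimits is effectively unbounded, so the
// pod-level value is the one that matters.
func effectivePodLimit(podLevelLimit, sumOfContainerLimits int64) int64 {
	if podLevelLimit >= sumOfContainerLimits {
		// A Guaranteed pod's containers can never collectively exceed
		// the sum of their own limits, so a higher value is meaningless.
		return sumOfContainerLimits
	}
	// A lower value caps the pod below what its containers could
	// otherwise use in aggregate.
	return podLevelLimit
}

func main() {
	// 512Mi pod-level value against containers limited to 256Mi + 128Mi.
	fmt.Println(effectivePodLimit(512<<20, 384<<20)) // prints the container sum
}
```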
A: But I want to ask a practical question, if that's okay. Say you do pod-level resource limits, and a user puts a couple of containers in there that use a Go runtime or a JVM as their underlying runtime, and I specify no container requests on those individual containers. When the container starts and asks "tell me how many CFS shares I have", the default behavior of the JVM is going to say "oh, I have two shares, so I'm going to tune my garbage-collection threads down to one", and the user gets horrible performance. The same thing would happen on golang. So one of the things I struggle with a little bit here is: if you add pod-level resource requirements, and assume you don't set anything on the container level, how do we make it so that application runtimes that use this actually do the right thing? Or will users just be chasing down how to tune their JVM or Go GC behavior?
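A minimal sketch of what such a runtime does at startup, assuming cgroup v1 mounted at the conventional path: read the CFS quota and period, derive a CPU count, and size worker and GC threads from it. With no container limit the quota is -1, which is exactly the mis-sizing problem being described:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"strconv"
	"strings"
)

// readInt reads a single integer from a cgroup v1 cpu controller file.
func readInt(path string) (int64, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.ParseInt(strings.TrimSpace(string(data)), 10, 64)
}

func main() {
	const base = "/sys/fs/cgroup/cpu" // conventional v1 mount point
	quota, err1 := readInt(base + "/cpu.cfs_quota_us")
	period, err2 := readInt(base + "/cpu.cfs_period_us")
	if err1 != nil || err2 != nil || quota <= 0 || period <= 0 {
		// No quota set (quota is -1): the runtime falls back to the
		// host CPU count, which is the mis-sizing problem discussed.
		fmt.Println("no CFS quota; using", runtime.NumCPU(), "CPUs")
		return
	}
	cpus := int(quota / period)
	if cpus < 1 {
		cpus = 1
	}
	runtime.GOMAXPROCS(cpus) // size worker threads to the quota
	fmt.Println("sized to", cpus, "CPUs from CFS quota")
}
```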
C: So that's a very valid question. Obviously, I don't know; it really will depend on the application itself. As long as the option to do something like this is not available, nothing will change for Java or for Go. And maybe it is possible to somehow be aware of the real amount of resources that you are going to be using. I mean, this is...
A: So today the JVM will start and see the shares available to its cgroup, or the CPU sets assigned to it, and then try to tune its GC threads to that. But then, you know, there are other ways we could get this down into the container environment; you could do the downward API. And so I was just struggling a little bit, where it's like...
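A small sketch of that downward API route, using the k8s.io/api types: expose a resource field to the container as an environment variable that the runtime can read instead of inspecting cgroup files. The variable and container names are made-up examples:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// cpuLimitEnv exposes the container's own CPU limit through the downward
// API, so a JVM or Go process can size itself without inspecting cgroups.
var cpuLimitEnv = corev1.EnvVar{
	Name: "CPU_LIMIT", // example variable name
	ValueFrom: &corev1.EnvVarSource{
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "app", // example container name
			Resource:      "limits.cpu",
		},
	},
}

func main() {
	fmt.Println("env var:", cpuLimitEnv.Name,
		"->", cpuLimitEnv.ValueFrom.ResourceFieldRef.Resource)
}
```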
A: The idea is that I don't want to think about resource tuning on a per-container basis because of the pod level, but the outcome is that you're going to have to be tuning the garbage-collection threads on your runtimes very granularly. It's almost like the pod author doesn't want to care about it, and the actual user writing the app definitely won't care. Yes.
E: Exactly. I think it is the same question as last week. For all those KEP interactions with the existing features I don't have a big concern; you saw me defend a lot of things, and I don't think that's a problem. My problem is how people will use this feature. In the past, people proposed pod-level requests and limits a couple of times, even ourselves.
E: The problem, especially today with the sidecar container proposal and those kinds of things, is basically what I described last week: you have the application you are running, which is the most important thing, and then you are using some sidecar container or helper container to help you, and those are delivered by other people.
E: Those sidecars come in as helper containers, a logging component, say, and those people basically just say: okay, I don't care how much resource it uses; you go figure it out at the pod level. And when you add the pod level, you basically fold each of those helper containers' resource usage into your application's budget. Then you could end up with an artifact where your real application is well-behaved, and you have it really neat because you measured it, but for all the rest of the stuff you have no idea, so they may overuse.
E: They may starve your application completely. So that's, for me, kind of an unequal resource priority. We basically made the application the top job; Kubernetes does a lot of things to help those application developers, but actually this proposal could be impractical in reality: we could be making their job much harder.
E: That's my concern, because it kind of removes those harder requirements from the people providing services through sidecar containers, so they don't take care of resource management, and the people building the real application actually get all the trouble, and the people operating it also get the other trouble. That's my concern; I think this is the biggest concern, at least from last week. Okay.
A: Like, is there Java in there, and how do you tune it, right? What type of guidance would you give to people? These are the real-world practical things that we were struggling with a little bit on the Red Hat side when we were thinking through what the right JVM behavior would be, what the right Go runtime behavior would be. I'm sure Google and others are having the same discussions. Okay.
C: So I can tell you, from the IDE that we are developing, that we do have Java-based tools in that IDE. For example, one of the things that you would probably want to run in an IDE is the Java compiler, or Maven, or other related tools. It's a bit different for those use cases, since these are short-lived processes that you're going to run inside of the container; they're going to consume some amount of memory and then they're going to go away.
C: Everything that they used will be freed, and I don't actually care about the ergonomics of it. So if it thinks it has one CPU when actually it has half a CPU, or the other way around, it doesn't really matter: the process is going to run for a really short while and then it's going to go away, and it's not going to be a problem.
C: Memory is, first of all, the main concern, because memory is not a compressible resource. For short-lived Java programs there is going to be some inherent instability, maybe, in the way it decides how many threads to run or when to do the various housekeeping tasks, but it's really not that noticeable, at least for my use case, as far as I can tell. But if...
E: May I ask you a question? It actually sounds like you have a real use case; your business is a concrete one, not a made-up figure. How hard would it be for you, without this feature, to do something? (Sorry, I couldn't understand.) Can you honestly manage without enabling this feature in standard Kubernetes? Actually, you could enable what you propose here through some other way: you could have a daemonset, because today we already have the QoS cgroups, right? We already have those.
E: Under the current kubelet layout, the Burstable and Best-Effort pods live under root QoS cgroups. You could have a daemon, and you could watch the API server; you could decide to update the cgroup for the pod as they go, and once you see the pod has landed, change it right after Kubernetes does. Have you ever thought about it that way? Actually...
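A minimal sketch of the out-of-tree daemon being suggested, using client-go: watch pods scheduled to the local node and adjust their pod-level cgroup as they appear. updatePodCgroup is a hypothetical placeholder for the cgroup-writing logic; the client-go calls themselves are standard:

```go
package main

import (
	"context"
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// updatePodCgroup is a hypothetical helper that would rewrite the pod-level
// cgroup limits (e.g. memory.limit_in_bytes) for the given pod.
func updatePodCgroup(pod *corev1.Pod) {
	log.Printf("would adjust pod cgroup for %s/%s", pod.Namespace, pod.Name)
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch only pods scheduled onto this node, like a per-node daemon.
	w, err := client.CoreV1().Pods(metav1.NamespaceAll).Watch(
		context.Background(), metav1.ListOptions{
			FieldSelector: "spec.nodeName=" + os.Getenv("NODE_NAME"),
		})
	if err != nil {
		log.Fatal(err)
	}
	for ev := range w.ResultChan() {
		if pod, ok := ev.Object.(*corev1.Pod); ok {
			updatePodCgroup(pod)
		}
	}
}
```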
C: The question is how it would interact with any future development of the kubelet. I would rather solve the problem as best I can, correctly and in the community, and not have to maintain my own daemonset that is doing something that might not be entirely correct and might not continue working in the...
E: The standard for upstream is much higher. But it sounds like you may have more flexibility, particularly when it's all internal usage, because especially for internal infrastructure you can say that in your company all the application developers follow certain rules, right? Like how a Java application should do it, how a Go application should do it, and whatever things you could define and optimize based on that. But for open source, to prove the idea, we really want you to validate the result; a lot of the time, we couldn't.
E: Obviously we've made a lot of mistakes, but, as Derek, David and I mentioned earlier, we have the concern about how people will behave when using this feature. So this is why we're giving you a lot of questions. It's not that we don't like it; a lot of what you propose makes sense. It's just our concern that with different use cases, different things, people could quite possibly misuse it.
A: One of the things that wasn't clear to me in your use case, though, was: is the deployment of Kubernetes that's running your use case only running your pods, or are there other pods that wouldn't want this capability? Like, we could explore a node-level option that says, you know, enforce pod-level boundaries, without putting it in the user API.
E: Actually, this is exposed complexity. For example, at least the complexity for, sorry, VPA, Vertical Pod Autoscaling: we just settled down on those kinds of things, and this immediately exposes new complexity to that feature. This is why, even as an alpha... it is a good idea, and the only reason I hesitate here is that in the past we did see...
E: ...separate daemonsets that companies built in-house. And if the approach is really useful, they also have an incentive to come back to upstream, because they don't want to maintain those things. So we can have more, like, production use cases to back it up that way. Right now we have a lot of questions and actually no answers from practice.
E: Even I don't have production experience with this; it's hard. But the situation is not unique, because Kubernetes' design is actually different from Borg's, so certain complexity from Borg is not necessary here. That's why I was suggesting a separate daemonset, before we have Kubernetes moved to cgroup v2; and even with v2, I think...
A: I'm like: we're not writing a memory limit if it's not set, so we're not changing the behavior right now. So the daemonset probably works, but you'd just want that daemonset to track an annotation on your pod instead of looking at the per-container things. But then you...
C: The other question is: if I do manipulate the cgroup levels, how will that affect the scheduler? If I manipulate the cgroup levels from outside of the kubelet, will the information that the kubelet provides to the scheduler, with regard to the current requests and limits, be affected in any way?
E: The scheduler doesn't use limits; it only uses requests, so you'd only be setting limits there, yeah. So I understand your concern: you're concerned that the kubelet may override what you put out there. You can watch the API server, and you could apply your change after the kubelet sets up the cgroup hierarchy. Now I'm thinking about whether the actual pod-level state would be put back by the kubelet; actually, the tricky part of where you set the limit is that the kubelet owns the pause container, and it is the creator of the pod-level cgroup.
E: ...while it is running, yeah. The only thing, in the long run, is that we have VPA, so it may make updates; you'd need to watch which of those can change underneath you. Another concern is that you may still have the problem during container startup time. I saw some types of containers that have a lot of bursts at startup, and then this feature may not help you at all; but I think you are okay, because earlier you talked about each container as well.
C: That isn't a concern, I think; I agree. My problem is mainly what happens after everything is up: it's a very burstable workload, and the bursts are what is causing my problem, not the startup. The startup is actually very deterministic; I know exactly what the memory usage is, up to the point where the user starts to actually work.
H: I wanted to speak, but I didn't want to interrupt. I was just thinking that introducing these fields in pods would also help other features. For example, some time ago we discussed sharing devices between containers, a GPU, say: when we allocate one device and share it with multiple containers, this syntax would help with that.
E: May I suggest you summarize what we discussed and what the next step is in your KEP, so we can easily refresh our memory next time? And can you also add your slides to today's meeting notes? That would be good, so we can come back next time, you can come talk to us again, and we have a means to track those things. Thank you. Thanks.