From YouTube: Kubernetes Resource Management WG 20170523
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
B: I guess the first thing to mention is that the proposal doc has been turned into a PR: it's available for review on the kubernetes/community repository, in a PR called "Add proposal for CPU manager". And yes, thank you for the link, Shimon. Any and all reviews and comments would be very welcome there.
B: One requirement that surfaced while staging the proposal doc is the need to potentially expand the CRI to allow CPU sets to be updated for containers. For that topic I can turn it over to Seth and Shimon, who both prepared PRs to scope out what that would look like, and they're up for review; the links are there in the document. So, if you want to expand on that.
C: The difference between them is basically their approach to how much they want to extend the CRI versus leverage existing code. So they're more or less the same and differ only in implementation detail, but I think it's a good start, just to have a conversation about how much we want to do and what the timeline to integrate it would be. Because eventually, updating the CPU sets for containers at runtime will probably be a very important piece of functionality needed by the CPU manager.
A: One question I had, and maybe others are newer to this, but something I caught myself on: originally there was a period of time where I thought that Docker updates actually restarted the container process. It sounds like that might not be the case, but I'm curious whether anyone has actually researched how far back that holds across Docker versions. Does anyone know the state of the world there?
C: I checked one of the recent versions, and they do not replace any containers. What I found out is that it's basically a libcontainer write to the cgroup filesystem that manipulates the container's CPU sets, and I didn't notice any restarts. But I used one of the latest Docker versions, so I have no idea how it would behave on some old ones.
E: I think there were some edge cases for the kernel memory limit, I'd have to check that, but otherwise I don't think there's anything that would require a restart. So if containers were being restarted, Docker must have been doing it; there's nothing in runc that restarts the process. It's just updating the cgroups.
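For concreteness, here is a minimal sketch of what such an update amounts to under cgroup v1, assuming the cpuset hierarchy is mounted at the usual location; the per-container cgroup path is illustrative, not the exact layout runc uses:

```go
// Updating a running container's CPU set is just a write to the cpuset
// controller files; the container process is never restarted.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// updateCpuset constrains every task in the given cgroup to the listed
// cores (e.g. "1-2"); the kernel applies it on the next scheduling decision.
func updateCpuset(cgroupPath, cpus string) error {
	f := filepath.Join("/sys/fs/cgroup/cpuset", cgroupPath, "cpuset.cpus")
	return os.WriteFile(f, []byte(cpus), 0644)
}

func main() {
	// "docker/<container-id>" stands in for the real per-container path.
	if err := updateCpuset("docker/<container-id>", "1-2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```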
B: ...what the CPU set for each container in that pod will be, and that decision holds for the lifetime of the container. So "static" refers to the fact that the allocation won't change for the lifetime of the container, in contrast to a different policy that's planned, called the dynamic policy.
B: Okay, so initially, when a pod lands, say it has the Burstable QoS class or the BestEffort QoS class, we want it to be able to run on all of the available cores so that we get maximum CPU throughput for the jobs on that node. But meanwhile, if a guaranteed pod lands that has containers requesting integer numbers of CPUs...
B: ...we want to improve their performance, specifically with respect to CPU CFS scheduler latency and cache affinity, by constraining them to a smaller number of cores and then ensuring that they fulfill their CPU quota using only those cores. And so you have to move the non-exclusively-allocated containers off of those cores when that guaranteed pod lands on the node. And since, in general, we don't know a priori which pods will get bound to a node, we have to do it in a more reactive way.
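As an illustration of the eligibility test being described (Guaranteed QoS class plus an integer CPU request), here is a rough sketch against the Kubernetes API types; the function names are hypothetical and this is not the kubelet's actual code:

```go
// Rough sketch of the static policy's eligibility test: a container is a
// candidate for exclusive cores only if its pod is Guaranteed QoS and it
// requests an integer number of CPUs.
package policy

import v1 "k8s.io/api/core/v1"

// isGuaranteed approximates the Guaranteed QoS test: every container's
// CPU and memory requests are set and equal to its limits.
func isGuaranteed(pod *v1.Pod) bool {
	for _, c := range pod.Spec.Containers {
		for _, name := range []v1.ResourceName{v1.ResourceCPU, v1.ResourceMemory} {
			req, lim := c.Resources.Requests[name], c.Resources.Limits[name]
			if req.IsZero() || req.Cmp(lim) != 0 {
				return false
			}
		}
	}
	return true
}

// wantsExclusiveCPUs reports whether the container should be pinned:
// Guaranteed pod plus a non-zero CPU request with no fractional part.
func wantsExclusiveCPUs(pod *v1.Pod, c *v1.Container) bool {
	cpu := c.Resources.Requests[v1.ResourceCPU]
	return isGuaranteed(pod) && !cpu.IsZero() && cpu.MilliValue()%1000 == 0
}
```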
A: Something on timing: maybe this isn't going to get into 1.7, but hopefully we can get it into 1.8. Ideally I'd like it if Seth and Shimon could converge on a single preferred proposal, and at that point I think we could probably just transfer this discussion to SIG Node for the CRI mechanics.
G: Really the only difference between our proposals is that mine tries to use an existing CRI method called UpdateRuntimeConfig. Basically, it's meant to update the runtime itself, but in this case we could pass it a container ID and an updated container config and use it that way. In my opinion, though, that's using the method for a purpose its name doesn't imply. So my proposal just adds a separate method specifically for the purpose of updating container resources.
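A rough sketch of the two shapes being discussed, written as a plain Go interface rather than the actual protobuf-generated CRI; the names mirror the discussion but are illustrative, not the final API:

```go
// Sketch of the two CRI options: reusing UpdateRuntimeConfig vs. adding a
// dedicated method for per-container resource updates.
package cri

// LinuxContainerResources carries the mutable resource fields; the cpuset
// strings are what the CPU manager would need to rewrite at runtime.
type LinuxContainerResources struct {
	CpusetCpus string // e.g. "1-3"
	CpusetMems string // e.g. "0"
	CpuShares  int64
	CpuQuota   int64
}

// RuntimeService shows only the calls relevant to this discussion.
type RuntimeService interface {
	// Existing method: intended for runtime-wide settings; reusing it for
	// per-container updates is the overloading objected to above.
	UpdateRuntimeConfig(config interface{}) error

	// Proposed dedicated method: updates one running container in place.
	UpdateContainerResources(containerID string, r *LinuxContainerResources) error
}
```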
A: ...and it's usually n minus three. Okay, yep; we don't give a prescribed Docker version anymore in the community. I think Google might still be running Docker 1.11, and I think they want to move to a later version, but I'm not sure; Red Hat runs 1.12. So we're not prescriptive about the best version to run, it's just n minus 3, I think. Right, yeah; if it's 1.10 onward, I think we're okay. All right.
F: So, what is this about? We talked about it very briefly during the face-to-face meeting. It's all about what we can do to minimize application latency; it's all around advanced QoS. So the goal is essentially to enhance the performance of latency-sensitive applications on Kubernetes.
F: One part is minimizing tail latency by removing any impact from resources shared with other containers. Number two, along with that, is improving overall application throughput. So essentially there are two parts to it. The key use case is that we want to isolate the background tasks, such as garbage collectors or large file transfers, anything which is really a background task, to a region of the cache, especially the last-level cache.
F: Arguably you can say, hey, as long as we can isolate the low-latency pods to specific cores, as in the proposals we discussed, then all is good. But the problem is that the last-level cache is a shared resource across all cores, and that's the point of contention. So here we are essentially utilizing a hardware capability based on Intel RDT, where the goal is to start simple: associate all the background containers, or specific threads within them, with a cache partition.
F: For example, if you take a Java application, it's one container, but there are several threads, and only certain of them are really background-type tasks, such as the garbage collection threads. The rest, I mean the application threads, you don't want to isolate. So essentially keep it simple: associate just the background tasks with a partition for a start, and leave the rest to the system default.
F: The application threads just use the entire LLC, because if you try to nail them to specific regions of the LLC, there are effects that are hard to predict from an application perspective. So jail the background tasks for a start, keep it simple, and iterate from there.
F: And this is essentially in a JVM context, with the value proposition. There is a detailed proposal, including proof-of-concept results; I think many of you might have seen it before as a detailed document, so kindly go through it. It even asks: if I partition the garbage collection tasks, how does the GC time change, basically assigning the GC threads to partitions of varying sizes.
F: And a little bit more on customer use cases: at Dell we did a nice proof of concept with an enterprise customer. There we were not using it in a container context but in a VM context. What we did was, essentially, on a single socket there were both kinds of applications: in one VM the background big-data type of transactions happened, basically just background inventory management.
F: What we really did was just jail the big-data tasks to a region of the cache, and that way you're able to maximize overall utilization but still not impact the latency-sensitive applications; we were able to guarantee the latency. That proof of concept with the enterprise customer was very successful, so this is about replicating that model in Kubernetes, and in the larger ecosystem where we see application-specific opportunities.
F: And it's not just about a simple assignment of the entire container to the partition, but about being even more selective: assigning only certain background threads within the container to the partition. And again, you have to remember that the number of partitions is limited; for example, if we take the latest Intel numbers, they max out at around 16. So that's a small number, it's a shared resource, and it's a constrained resource; the size is on the order of megabytes.
F: We're talking like 20 megabytes, 40 megabytes, shared across several tasks. Also, a little further context from internal discussions: one of the biggest things we realized along the journey, when this work was started by Intel with Google and Dell, was that you cannot use cgroups, for the simple reason that cgroups have a built-in inheritance property.
F: Basically, when you fork a process, it inherits the parent's relationships in terms of how all its resources are managed, and in this case that simply doesn't work, because you're not following any hierarchical grouping when deciding which set of tasks to associate with these partitions; it's arbitrary. That was one of the key reasons we went ahead with the resource-control filesystem model, resctrl, basically a new pseudo-filesystem type, which is the key mechanism driving this feature.
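A minimal sketch of that resctrl model, assuming the kernel's resctrl filesystem is mounted at /sys/fs/resctrl on RDT/CAT-capable hardware. Unlike cgroups, membership is flat and per-task: you write individual task IDs into the partition, and forked children are not dragged in by any hierarchy:

```go
// Creating a cache partition (allocation group) and moving one thread
// into it via the kernel's resctrl interface.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

// createPartition makes a new allocation group and sets its capacity
// bitmask, e.g. "L3:0=0x0f" grants 4 ways of the L3 cache on socket 0.
func createPartition(name, schemata string) (string, error) {
	dir := filepath.Join("/sys/fs/resctrl", name)
	if err := os.Mkdir(dir, 0755); err != nil {
		return "", err
	}
	return dir, os.WriteFile(filepath.Join(dir, "schemata"), []byte(schemata+"\n"), 0644)
}

// addTask moves a single thread (task ID) into the partition; unlike
// cgroups, children forked later are not pulled in automatically.
func addTask(dir string, tid int) error {
	return os.WriteFile(filepath.Join(dir, "tasks"), []byte(strconv.Itoa(tid)), 0644)
}

func main() {
	dir, err := createPartition("background", "L3:0=0x0f")
	if err == nil {
		err = addTask(dir, 12345) // hypothetical GC thread ID
	}
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```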
F: So now, diving down to at least a high-level summary of the implementation steps: the first is that the cache partitions get created by a cloud administrator. Of course, this feature depends on Intel RDT CAT, so it's possible only on nodes that support it, and the cache partitions are made available to the VMs, or directly if you are on bare metal.
F: Exposing the cache partition to an app is done, for example, by standardizing on an environment variable in the container spec. I pulled up an example: in the microservices deployment examples, environment variables are passed specifically to a Java application. As you can see if we scroll down, the Java app reads the value, basically the recommended value for different uses, and this probably relates to the heap size.
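A sketch of what the consumption side might look like under that env-var convention; the variable name CACHE_PARTITION, and the idea that the app registers its own background thread, are assumptions for illustration, not a settled interface:

```go
// An app joining its background thread to the resctrl partition named in
// a CACHE_PARTITION environment variable (both assumed for illustration).
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"runtime"
	"strconv"
	"syscall"
)

func backgroundWorker() {
	// Pin this goroutine to its OS thread so the TID remains valid.
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()

	if part := os.Getenv("CACHE_PARTITION"); part != "" {
		tasks := filepath.Join("/sys/fs/resctrl", part, "tasks")
		// Requires privilege over the resctrl group; this is exactly the
		// authorization gap debated below.
		err := os.WriteFile(tasks, []byte(strconv.Itoa(syscall.Gettid())), 0644)
		if err != nil {
			fmt.Fprintln(os.Stderr, "cache partition join failed:", err)
		}
	}
	// ... run GC-like background work on this thread ...
}

func main() {
	done := make(chan struct{})
	go func() { backgroundWorker(); close(done) }()
	<-done
}
```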
A: A clarifying question here, I guess: when you expose this in the container spec, is this an actual resource that you expect to appear in a container's resource requirements, with a literal number next to it, like "I want this much of the cache"? And do you imagine that the node would advertise some size for this? What is the way that the node would publish it?
F: Good question. The interesting part is that this is really a constrained resource, so you can't really just say hey, I want this much. For example, the best fit would be a percentage, but to me, at least for a start, it should be simple: pre-created partitions are even better.
F: The administrator decides what the size should be, at least as a starting point, and then we just pass the partition ID or some notation of that sort. Because a percentage means, I mean, that's the next level, where you add more dynamism and then you also have some tunability around it. The simple thing is just saying: hey, you belong to this partition, you can use this partition, add your tasks to the partition; as simple as that.
F: It's a bit the other way around. For example, let's say there is a guaranteed pod and a best-effort pod running. The goal is to make sure the guaranteed pod is absolutely guaranteed everything with respect to the processor, including the last-level cache, so the application can realize its SLO around latency in this case; I mean, Kubernetes shall not violate this latency.
A: What I'm trying to tease out is the unspoken requirements between steps three and four here. Step four gives one potential way that the cache partition is consumed in a sample deployment YAML, by setting some env vars. But, similar to huge pages as another example of this interplay with JVM settings, or even when you size your heap, I would expect those env vars to be pulled from the Downward API on the actual container's resource requests. So I'm just trying to understand: when you make the cache partition available, how does the kubelet actually fairly allocate those to the individual pods, independent of what the container spec said? An env var is largely opaque to the kubelet. I think that's a detail we need to tease out, and I have to read through your proposal; maybe the details just don't jump out at me, but I think the container spec as currently shown is probably not sufficient.
E: One thing my proposal doesn't cover as well: the way this works is that whenever the container process forks any other process, it doesn't get added to this partition automatically, and we wouldn't want all of them to be added automatically either. So we need a way for the process to communicate up the stack when it needs to add a new task to the cache partition, and as far as I understand, there is nothing like that right now.
E: The way I imagined it working, at minimum, is that we'll need to expose some kind of FD or pipe into the container process; it would need to become standardized in OCI. The container process then knows it can send TIDs over it and ask for those to be added to the cache partition. Then, optionally, if we need further validation, it can go back to the kubelet, and the kubelet can decide whether to accept or reject that request; if the request is accepted, it comes back down through the CRI.
E: One thing that could be done, if you don't want to go all the way back up to the kubelet, is to just say, hey, this process can add whatever tasks it wants, and it can simply request that from the shim, the monitor process, and the monitor can add it automatically. Then it doesn't have to do the whole round trip up to the kubelet. That would make it simpler, but then there is no authorization as such.
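A rough sketch of that shim-side idea: the monitor process listens on a pipe exposed into the container, reads task IDs, and adds them to the pod's resctrl partition without round-tripping to the kubelet. The FIFO path and the one-TID-per-line wire format are assumptions; nothing like this is standardized in OCI:

```go
// Shim/monitor side: accept TIDs from a FIFO shared with the container
// and add each to the pod's cache partition.
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

func serveTaskRequests(fifoPath, partitionDir string) error {
	f, err := os.Open(fifoPath) // e.g. a FIFO bind-mounted into the container
	if err != nil {
		return err
	}
	defer f.Close()

	tasks := filepath.Join(partitionDir, "tasks")
	sc := bufio.NewScanner(f)
	for sc.Scan() { // one decimal TID per line
		if tid, err := strconv.Atoi(sc.Text()); err == nil {
			// Note: no authorization; any TID the container sends is accepted.
			os.WriteFile(tasks, []byte(strconv.Itoa(tid)), 0644)
		}
	}
	return sc.Err()
}

func main() {
	err := serveTaskRequests("/run/rdt-shim.fifo", "/sys/fs/resctrl/pod-background")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```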
B: Would it be okay to just do it progressively? You could have some sort of reconciliation loop that just looks at all the PIDs in said cgroup and adds them to the partition.
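A minimal sketch of that reconciliation-loop alternative: periodically read the container's cgroup task list and mirror every thread into the resctrl partition. The paths and the interval are illustrative; under cgroup v1 the tasks file lists one TID per line:

```go
// Reconciliation loop: converge the resctrl partition onto whatever
// threads currently live in the container's cgroup.
package main

import (
	"os"
	"path/filepath"
	"strings"
	"time"
)

func reconcile(cgroupTasks, partitionDir string) {
	data, err := os.ReadFile(cgroupTasks) // cgroup v1: one TID per line
	if err != nil {
		return
	}
	tasks := filepath.Join(partitionDir, "tasks")
	for _, tid := range strings.Fields(string(data)) {
		// Re-adding an existing member is harmless, so no bookkeeping is
		// needed; the loop converges on the current set of threads.
		os.WriteFile(tasks, []byte(tid), 0644)
	}
}

func main() {
	for range time.Tick(10 * time.Second) {
		reconcile("/sys/fs/cgroup/cpu/kubepods/<pod>/<ctr>/tasks",
			"/sys/fs/resctrl/pod-background")
	}
}
```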
E: So yeah, I don't know the details of how what Connor said would work, so we'll have to take a look. But I was just asking: if we go with that approach, can we get rid of the requirement that the process asks to add tasks dynamically? Then there is no upward communication required.
B: I don't think it solves the problem that you guys raised. To me, just looking at this, most of the complexity really comes from the requirement for sub-container-level scheduling, so you're looking at some sort of thread level. I mean, if you do the cache partition based on the cores that the tasks are running on, then you still have the same problem: you have to communicate,
B
Okay
I
want
a
garbage
collector
thread
to
run
over
on
these
cores,
but
I
guess
you
know
if
we
just
did
container
level
partitions,
then
that
would
that
would
avoid
that
complexity.
As
a
first
step,
you
know
you
could
start
by
hurting
best-effort
containers
and
then
hurting
first
of
all
containers,
but
you
know
going
down
to
the
thread
level.
I
think
you
went
up
a
big
can
of
worms,
no
matter
which
you
like.
E: Yes, so I meant this shim needs to exist for any runtime that runs containers, so containerd, Docker, CRI-O; all of these have monitoring processes. We can add that functionality to the monitoring process and define some kind of standardization in the OCI, like: okay, this is the FD that needs to be passed, over which you communicate your tasks.
F: Yeah, I think the key is that the application is running non-privileged, and now how do we make sure it can add selected threads to something that requires privilege to attach to? How do you make that happen in this ecosystem? That is the precise problem we are trying to solve here. And of course, like we said, this is completely independent of any one system; the problem would exist even in a simpler setup.
A: If we're starting to call out needs that keep requiring thread-level scheduling like this, could we look at the possibility of solving it another way, which would be having a DaemonSet pod take on that responsibility, versus the kubelet having to have it? So anywhere you want this to happen with your JVM, cool, make sure it's co-located with a DaemonSet pod that can run with higher privileges; maybe that's what does the monitoring.
F: What is interesting is that I went through this specific discussion with the kernel team in the past, and also again recently. Basically, the interface surface is what it is; of course, like I said, if that turns out to be bad, fine, let's use the thread-affinity case. But as far as the surface goes, I think this is how it's going to be from a content standpoint.
F: I don't have much to add to it, but getting back to sysfs and the partitions: yes, something in the direction of what was proposed; if it solves the authorization, then we don't need to go back to the kubelet. It's a possibility.
F: For apps, you need to figure out what the heck is happening, and basically, if you are doing more fine-grained monitoring, you need to fit that into the kubelet monitoring budget. The next, very related feature from the Intel RDT family is cache monitoring: basically, now that you've created a cache partition, you need to monitor it periodically, and again this needs to sit within the kubelet budget.
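For reference, a small sketch of what monitoring a partition periodically looks like against the kernel's resctrl monitoring files (CMT for cache occupancy, MBM for memory bandwidth, both discussed just below); the group and domain names are illustrative and hardware support is required:

```go
// Reading resctrl monitoring counters for one allocation group.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

func readCounter(group, domain, file string) (uint64, error) {
	p := filepath.Join("/sys/fs/resctrl", group, "mon_data", domain, file)
	b, err := os.ReadFile(p)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(b)), 10, 64)
}

func main() {
	// CMT: bytes of L3 currently occupied by tasks in this group.
	occ, _ := readCounter("pod-background", "mon_L3_00", "llc_occupancy")
	// MBM: cumulative memory-bandwidth byte counter for the group.
	bw, _ := readCounter("pod-background", "mon_L3_00", "mbm_total_bytes")
	fmt.Printf("llc_occupancy=%d bytes, mbm_total_bytes=%d\n", occ, bw)
}
```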
F: What is very interesting is that, because you are partitioning the cache, it's very likely that memory bandwidth usage may surge: what was happening before was that the cache sat in between, feeding that application, and that has now been cut down. So as we progress it's also important to monitor memory bandwidth, especially when we're doing cache partitioning, and that needs to fit into the kubelet budget too. And there is also another feature from Intel, called memory bandwidth allocation, for limiting memory bandwidth usage.
F: That's even more advanced, but as we add these, CMT and MBM are something to think about, again to avoid any kind of bandwidth surge, as we carry this along. And last but not least, this is super advanced again, something to think about as we progress: this could evolve with smarter-scheduling types of ideas. Right now, the way the apps work, the garbage collection tasks can happen at any time.
F: Basically, it is not something that is manually scheduled. So what if this were somehow conveyed as notices to the kubelet? Right now, if you look at a JVM with some kind of predictive analytics, saying, hey, approximately how long is this garbage collection going to take for this task, what if that notified the kubelet, and the kubelet notified the scheduler, to do overall smarter scheduling? It knows this is going to take, say, approximately the next five seconds.
A: Okay, well, I'll need to take some time to go through all these links and documents myself and digest some of this. I know you were also going to present to SIG Node, which is probably a good thing.
F: So basically, the JVM case is very concrete in terms of specific threads within the containers, but in general, anything like, for example, a simple file transfer, which is obviously not deadline-bound, is a good use case. And beyond that, even in the NFV case, as they're going towards 5G, I've been in several customer discussions where this is very critical, especially when you're doing network slicing: the goal is to make sure these different slices are not interfering with each other.
F: That was a very good use case on the NFV front. But the immediate part is the Dell one, and the good thing is we also figured out some ways to remove, at least for now, any JVM dependencies. That means it should work with JDK 8, which is the one basically made available to customers now; JDK 9 is still not available.
A: From a prioritization standpoint, out of curiosity: for anybody who's looking at getting cache partition support here, there must be a whole host of other table-stakes things required to make Kubernetes a viable platform for workloads like this; this can't be requirement number one. So I'm assuming the CPU pinning work is probably a higher priority for this target demographic than cache partitioning. I'm just wondering, as a project, as we prioritize this, where would it fall?
F: To me, I think this can happen in parallel, because it's agreed that the pinning and all of that is super important, but as we do it, if a simple thing like the LLC is not looked at, then there is no holistic solution. You solve that problem, but because the LLC work is not complete, it doesn't come together from a holistic angle.
F: What I've seen is that this contention really kills all the QoS work you did on pinning to cores and everything. That's what I found working with the enterprise customer back then: you try all this, try all that, and then you find, oh my god, this is the best way to manage such situations for latency-sensitive apps. So this really is a key part of the holistic solution, and I would vote for parallel progress.
F: The good thing is, I also recently talked with a colleague who did confirm that Google is using this internally inside Borg in a very simple form, something like this: just create a few static partitions and manage them in a simple way. They have found it very effective for latency-sensitive applications.
E: I think maybe we can talk about this one more time, and if there are no other alternatives, I can start an upstream discussion on OCI to get this added. Okay.
A: That wraps up the known topics. Are there any other topics that people want to bring into today's meeting? Otherwise we're adjourned until next week, when it sounds like we will do a deeper dive on the CPU pinning proposal. And just for awareness, I think we're also trying to explore doing a prototype around resource classes in the next three weeks or so, so hopefully that'll be on the agenda soon.
A: Basically, the feedback is: we think resource classes seem like a good concept, but we're not really sure it'll work. So two folks here at Red Hat are trying to see if it will actually work from a scheduling standpoint: when you have a resource-class resource in your pod spec, does it actually get bound properly to the nodes?
A: Do its predicate checks on that resource, when it's expanded out to an individual device on a node, actually match the fields, or does the scheduler get confused? So we're just going to figure out, given what's been discussed thus far, where this will break down, so that we can inform a better proposal and know whether it's going to work at all. We're just trying to get a prototype in place.
A: ...get that merged, because it describes how we've been working for the last few months, and then we can come back to getting the mailing-list stuff figured out. I'm fine with getting a new list if we want to go that route; that's fine by me. I just need Tyra to get me the details; I don't know if she's responding on that PR with any details.