Description
Meeting notes and agenda: https://docs.google.com/document/d/1ALxPqeHbEc0QOIzJ3rWWPpwRMRlYDzCv0mu2mR4odR8/edit#
A
Hi. Thomas has to reboot, his machine is crashing, but part of what's going on in this slide is that we have a mix of user requirements. In some cases workloads need both shared cores and pinned cores, so there is some mixing going on, and this is just one of the base cases that we looked at. That was also in the doc from last year.
C
Okay, I guess the main thing Katanas was looking to point out here is the potential for having sub-optimal deployments, right, between the blue, green and orange sets (hopefully the color coding comes through clearly), and the idea is actually twofold.
C
One, we need a more optimal deployment for certain use cases. The second is that sometimes the way you deploy it may not need to be just at, let's say, a pod QoS level; there could be a container kind of QoS level to consider here. Those are the kinds of things that get opened up by looking at a plug-in model as an alternative way of implementing the policies that we've currently got.
C
Fundamentally, here we are discussing a pinned type of model. So if you roll with that as an example, let's say a CPU is pinned and we know the first two cores are allocated to the first container; that container needs a certain set of properties.
C
It may need exclusive allocation. It may need a certain type of performance profile or power management profile.
C
The point is, there are attributes of that set that may be different from the cores allocated to, let's say, the second or the third container. Part of what we're looking at is: if you can bring an attribute set into the interface like this, maybe through a config map, then there's a lot more flexibility within the plug-in (the driver plug-in implementation) to process that. One moment, I have participants waiting.
C
With that we'll move on. I'm going to assume this set of attributes is, for now, an okay thing for us to look at. I think it's back to him, if he wants to take over from what was presented last time, to show the progression and, in turn, how the feedback that was given has been used to update the proposal.
B
Right, so, yeah. Thank you, and sorry for the inconvenience with my internet connection; hopefully it holds up. This was our short overview last time: we were discussing having a central resource manager which can handle plugins. It supports registration through a socket, and then we had a set of lifetime events forwarded to the plugins. What was the feedback from this discussion? First, we had a very interesting conversation about, more or less, chicken-and-egg problems, like what happens.
B
If you start the system and your plugin is not coming up, how can you continue working? Basically, there was a desire expressed for fallback mechanisms: when the plugin is not working, something can take over and still serve the majority of the pods, which might be best-effort pods or standard static requests, as we have them in standard Kubernetes today. Then there was the other kind of feedback, or general feeling.
B
A nice step would be if we can minimize the scope, make the changes to the kubelet a little bit smaller, so that it can coexist with other components and work in a better manner, without replacing everything. So, yeah, we tried to integrate this feedback and simplify our first iteration in terms of bootstrapping. This is also a little bit related to the fallback and chicken-and-egg problem.
B
How do you start if you don't have a plug-in? How do you handle that case? We gave that some thought as well and, with that, made some adjustments to the proposal.
B
The adjustments are as follows. We are suggesting to keep the set of existing managers in our first iteration: the CPU manager, topology manager, memory manager and device manager. They basically remain there, and what we would like to think about is whether we can integrate this new component as a parallel manager to them. What we need is for the registration mechanism to be in place, plus the lifecycle events.
B
We
could
reduce
a
lot
further
down
if
the
minimal
or
this
kind
of
existing
managers
are
kept
inside
the
container
manager.
So
what
we
could
cover
in
the
first
iteration
is
in
terms
of
minimal
support
would
be
heading
for
for
a
location.
We
would
need
two
callbacks,
basically
an
ad
container
and
the
remove
container
callbacks.
B
They would receive as input arguments a cgroup and a container ID. This helps us uniquely identify where the container is running, in which cgroup. We also receive a list of available CPUs. Why the list of available CPUs? The reason is that we may later want to coexist with a backup or fallback mechanism, or some additional handling through the standard CPU manager.
B
We could basically share the CPU sets allocated by the CPU manager back to the plugins through the set of available CPUs. Remove-container is very simple: it takes just the container ID and returns a Boolean, basically whether it was successful or not. More interesting are the return arguments of add-container.
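The callback shapes being described can be sketched roughly as follows. This is a toy Python illustration only; the real interface would presumably be a Go gRPC service, and every name here (the class, the response fields, the pinning behavior) is hypothetical, not the proposed API.

```python
from dataclasses import dataclass

@dataclass
class AddContainerResponse:
    # CPUs assigned to the container, plus whether the set is exclusive
    # (i.e. must be carved out of the shared pool).
    cpuset: set
    exclusive: bool

class ToyPinningPlugin:
    """Hypothetical plugin: pins each container to the two lowest-numbered
    CPUs still available, exclusively."""
    def __init__(self):
        self.allocations = {}

    def add_container(self, cgroup, container_id, available_cpus):
        # Inputs mirror the discussion: a cgroup, a container ID (to uniquely
        # identify where the container runs), and the list of available CPUs.
        chosen = set(sorted(available_cpus)[:2])
        self.allocations[container_id] = chosen
        return AddContainerResponse(cpuset=chosen, exclusive=True)

    def remove_container(self, container_id):
        # Just the container ID in, a success Boolean out.
        return self.allocations.pop(container_id, None) is not None
```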
B
We were thinking about what is sufficient to cover a lot of the use cases. What we illustrate is: we return a CPU set, plus whether this CPU set is exclusive or not. In a lot of the customer use cases we would like to run on an exclusive set, and of course other pods should not overlap.
B
So if the plugin was asked to allocate an exclusive core, it basically has to be removed from the list of available cores managed by the CPU manager, and through that we can do it. Basically, we will know whether the CPU set is exclusive or not, and with that we can align or synchronize the two states: the one the plugin will have and the one the CPU manager will have.
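The bookkeeping just described is small: an exclusively allocated set is carved out of the CPU manager's shared pool, while a non-exclusive one leaves it untouched. A minimal sketch, with hypothetical names:

```python
def reconcile_shared_pool(available_cpus, allocated, exclusive):
    """Return the CPU manager's shared pool after a plugin allocation:
    exclusive sets are subtracted, non-exclusive ones change nothing."""
    if exclusive:
        return set(available_cpus) - set(allocated)
    return set(available_cpus)
```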
B
So those are the lifetime events. We also add an additional component; we call it a store. The store basically keeps a list: it keeps track of what CPU sets were allocated, and if a container gets removed, its entry is removed from the store. The reason for the store is that, if you want a consistent state across the CPU manager and a plugin, we can use a new policy. We call it the resource management policy, parallel to the none and static policies you know from the classical CPU manager.
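A toy version of such a store might look like this, in Python for brevity. The names are illustrative, not the proposed API; the point is simply that both the plugin path and the CPU manager policy consult one source of truth.

```python
class AllocationStore:
    """Tracks which CPU sets plugins allocated per container, so the CPU
    manager's new policy can consult a single shared record."""
    def __init__(self):
        self._allocations = {}

    def add(self, container_id, cpuset, exclusive):
        self._allocations[container_id] = (set(cpuset), exclusive)

    def get(self, container_id):
        return self._allocations.get(container_id)

    def remove(self, container_id):
        # Called when a container goes away, mirroring remove-container.
        self._allocations.pop(container_id, None)

    def exclusive_cpus(self):
        # Union of all exclusively held CPUs: exactly what the CPU manager
        # must keep out of its shared pool.
        held = set()
        for cpus, exclusive in self._allocations.values():
            if exclusive:
                held |= cpus
        return held
```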
B
We are planning to treat it as a none policy: basically, all pods which are not handled by a plugin will be mapped to best-effort. But in the longer term this additional policy can forward to the static policy in the CPU manager, if we want to reuse standard pods with static configuration; we can do that as another addition. If you want to use the plugin, it will be a requirement that this policy is selected in the CPU manager.
B
So if you start the kubelet and this resource manager is to be used, you have to configure this policy, so that you have a correct state in terms of CPU set allocations.
B
Maybe a small remark: in the first iteration we are planning to have the CPU manager still existing, along with the topology manager, memory manager and device manager. They will be disabled if we are using the resource manager as a component; just the CPU manager can support the minimal set of best-effort containers.
B
The goal later is: if we have pods which are using static, let's say guaranteed quality of service, and do not have a resource class attached to the pod, they can go directly to the topology manager and memory manager. But we will need to integrate, inside the resource manager policy, some sort of mechanism which forwards these requests to the static policy.
D
I have just a clarifying question, I guess: when you're talking about resource class and resource driver, are these the...
B
We are quite open there. We can introduce a new class which we attach. What we need, and what we found to be a very nice idea, is to attach a driver to a pod. So we need a mechanism to attach a driver to a pod; it's a unique one-to-one relationship. Basically, if we have a driver name attached to the pod spec, that's completely sufficient; how it's realized, we are completely open, and we can move forward. If you have hints on how we can do it easier or nicer, right.
B
What we are searching for is a mechanism to identify what is needed to handle the pod resource management. In DRA you handle that through a resource class kind of CRD.
B
So one possibility is that we create a similar CRD which can associate the pod with some driver or some plug-in. It's a nice concept, and I personally like it. But the alternative is basically if there is a...
B
...or something in the pod spec which is optional: if a driver, or a plug-in kind of identifier, is set, then the resource processing is automatically forwarded to the plugin. That's more or less what we are after. We need to find the proper mechanism for how to build it, but this kind of association, as you did in DRA, is one possible solution.
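The optional-identifier idea amounts to a simple dispatch: if the pod names a driver, its resource handling goes to that plugin; otherwise it falls through to the in-tree managers. A hypothetical illustration; the field name `resourceDriver` is invented here and is not part of any API:

```python
def route_resource_handling(pod_spec, plugins, default_handler):
    """Pick the handler for a pod: the named plugin if the optional driver
    identifier is set and registered, the standard managers otherwise.
    Falling back when the plugin is missing matches the fallback discussed."""
    driver = pod_spec.get("resourceDriver")
    if driver and driver in plugins:
        return plugins[driver]
    return default_handler
```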
B
It can most probably be simplified a little bit. In terms of mechanisms there are some similarities, but this idea of having an optional processor argument, which can redirect the processing of resource management to a separate component, is nice, and you did it in DRA; it's something which can help us here as well.
D
...modify OCI spec fields related to CPUs, so you couldn't directly use DRA to do what you're describing, right?
B
The goal is not to use DRA for that. The similarities are more or less in how we can identify the plugin for a pod: if a pod doesn't have an identifier saying it has to be processed by a plugin, then it's passed to the standard components we have in Kubernetes, the CPU manager, topology manager and memory manager. That's the idea, basically, yes.
D
The idea is: you would deploy a plug-in on the cluster, right, that knows how to handle a specific type of resource (CPUs, exactly, or memory in this case), and once that's there it has some kind of name associated with it. Then you want a way to have a pod point at that, to say: hey, for any resource requests I'm putting in, make sure this plug-in sees when I'm started, or when any containers that I have are started, so that it can try to process those resources on my behalf.
B
If somebody wants to use the cluster for something else, they can still do that; it's forwarded to the CPU manager, topology manager, whatever is out there. If the plugin comes back online and survives, then we can continue, right.
D
Yeah, so I'm just thinking about the CPUs specifically. Say, in my pod spec, in one of my containers I ask for two CPUs, and at the pod level I've somehow associated one of the plug-ins you've described here, to be triggered when any of the containers of this pod start.
D
Then what it's going to see is, in your description here, an add-container call, right, with a bunch of fields that get passed in, and...
D
Yeah, so at this point, inside this plugin, I would do something similar to what the existing CPU manager does, in terms of saying: okay, I see that...
B
Coordination would happen through the store, basically. We return the CPU set, and what we need to know from the CPU manager's perspective is whether the allocated set was exclusive or not. Exclusive means no other pod may overlap with or use these CPU sets, so what we need to do is compute the difference with the available CPUs and update the available CPUs.
B
If it's not exclusive, we don't care, more or less; the available CPUs don't change, so our best-effort containers can still get all the cores available. It's really not very complex logic to...
D
...implement. So, that's interesting conceptually. Just before we go on, can you explain why you want to associate one of these plugins at the pod level and not per container?
B
Theoretically it can also be part of the container spec. The point is, we need to make sure that if, let's say, the plugin is not available...
A
...which is before looking at the topology policies, so that we can correctly place the containers underneath.
D
A plug-in defines a policy for how it will, you know, handle these resources or respond to resource requests of this type, right. You say: okay, for a pod, I want to associate it with this plugin, and then all containers I start will be handled by that plugin. Yes.
C
Just to be clear, what you're proposing, is this for phase one and two? I assume this transitions subsequently. Are you implying, then, that the resource manager, this kind of new plugin, could become the default? You would not need to associate it with pods, if the administrator chose that.
B
We previously had the discussion that a lot of users still have a minimal base of functionality which they want without going through a plug-in, like best-effort containers, just as an example. You don't need a lot of code to handle best-effort; the CPU manager's none policy more or less does it already. So the goal is, yes, to refactor some of those managers long term, to pull as much as possible inside this.
B
But it's really community driven: where the community sees the need, or agrees that this can run as a remote plugin or as a plugin. There will most probably be some subset which people would like to have as quickly accessible. If we can reduce the set of default capabilities a little bit and try to offload more to plugins, that would be nice, but yeah.
B
There are commonalities in the plugin APIs; maybe we can think long term there. There is the plug-in framework inside Kubernetes. If this can somehow be extended in the future, so that we can easily have plugins in the components inside the kubelet, that would be beneficial. There are a lot of commonalities, just in the plug-in framework.
B
If you look at the original plugin framework of the Kubernetes kubelet, it just does a registration with some name; that's all. There are the device plugins, there are the DRA plugins; if we can somehow abstract all of this away, find a mechanism where we can abstract that and lead to less code, it would be nice.
D
Well, I don't think we would ever want to try and combine those two, but I'm just thinking: you request devices through the existing device plugin API, and you request memory and CPUs all through the same requests and limits fields of your container spec, right. Whereas DRA is, you know, a separate beast; you request those through resource claims, which is a completely different mechanism. But if...
D
If that... because I'm just trying to think from an end user's perspective. You know, it's weird that I would associate this plug-in and it's only going to process the CPU stuff, or it's only going to process the memory. I'm just trying to think through any confusion we would have if we've got this fleet of plugins that are potentially available and I have to know how to associate one plug-in with one specific resource.
D
I don't have a concrete suggestion; I'm just trying to think through the complexity from an end user's perspective, if you've got all of these options, and from a developer's perspective: do I implement a DRA plugin? Do I implement one of these types of plugins? Do I implement a standard device plugin? Where do I put my effort when building these things? And then, from a user's perspective, how do I know which one to choose when I'm trying to implement my application, right?
C
Say this was pre-allocated through the scheduling process and we end up on this particular node with the resource. Then, let's say, the CPU resource driver plug-in could leverage the default system, detect which device has been allocated and what NUMA zones it has been allocated from, and use that as an input to CPU and memory selection.
D
To do that, then, I would just recommend using an NRI plug-in rather than a resource driver plug-in as you're describing it, because that's exactly what it's designed to be able to do. But I know we've gone through that discussion many times, and it's always come up short of your requirements, as far as I understood.
C
In addition to that, I really want to bring us back to this idea of the attribute set, because it may not be just a number of CPUs; it could also tie to a particular configuration of those CPUs, so there's a lot of flexibility in this model. That is some of the architecture from DRA that I think you brought to bear here, in terms of getting attributes into the system.
C
The coexistence of this set of CPU managers, the existing set and the resource manager: I think that's primarily what's implied, certainly preserving the existing CPU, topology and memory managers in particular, alongside the resource manager, which could also have CPU plugins. Their coexistence addresses this sort of fallback case, and that is something that could adjust over time.
H
Yeah, I had a question on how we are handling the underlying CPUs. Based on what Tana said earlier, it appears that the resource manager and the CPU manager are both responsible for updating the underlying cgroups, with respect to, you know, a pod that could be requesting CPUs exclusively or one that belongs to the shared pool.
H
I just want to make sure that understanding is correct. The next point I have is that this could potentially have issues, because it could lead to races: two components are trying to step on each other's toes, essentially, and there's no way for us to control that, right?
H
...part of it is the allocated CPUs.
B
It stays what you have in the CPU manager, and the only thing we have to make sure of, as you said, is that with those two components the operations, the places where add-container and remove-container are being called, are called sequentially. In the lifetime loop of the container manager, we will be calling them in a specific order: first the resource manager and then the CPU manager. The reason for that is, let's say the resource manager does an allocation.
B
It does an exclusive allocation; it's put in the store. This is a sequential operation, actually happening sequentially in the lifetime. After that, the CPU manager kicks in and does its add-container. Add-container goes through the new policy, which will go and ask the store: what did you allocate?
B
So
it's
really.
It's
made
sure
that
the
state
is
correct,
and
this
is
true,
sequential
ordering
basically
of
this
ad
container
commands.
This
is
a
little
bit
to
to
answer
the
the
race
problems
in
terms
of
kind
of
can
think
of
the
CPU
sets.
We
simplify
a
little
bit
the
situation
going
to
best
effort
on
the
CPU
manager,
but
for
phase
one.
This
is
most
probably
sufficient.
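The sequential ordering described above can be condensed into a few lines; a toy sketch of one pass through the lifetime loop, with hypothetical names (the plugin allocator is just a callable here):

```python
def handle_add_container(plugin_allocate, store, shared_pool, cgroup, cid):
    """One add-container pass: the resource manager (plugin) allocates first
    and records the result in the store; the CPU manager's new policy then
    reads the store and shrinks its shared pool if the allocation was
    exclusive. Running the steps strictly in this order avoids the races."""
    cpuset, exclusive = plugin_allocate(cgroup, cid, set(shared_pool))
    store[cid] = (cpuset, exclusive)   # resource manager step
    if exclusive:                      # CPU manager policy step
        shared_pool -= cpuset
    return cpuset
```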
H
So am I understanding this correctly: you're saying that for the first phase we would not have the ability to allocate exclusive CPUs through the existing CPU manager? It would be offloaded to the new resource manager, and I would have to write a plugin or a driver to handle that case.
H
I think one approach that would make everyone's life easier in this case would be to leave all the existing resource managers as is. You introduce the resource manager and, as you mentioned, in the pod spec we specify a driver name; that's what is used to tap into the resource manager's capability. The rest of the pods, you know, which are requesting the native resources or resources through the device plugin, default to the current behavior that already exists.
B
Yeah, I think this was also the original intent of the policy, and I agree. The only thing is to see what is feasible in phase one. We definitely start with implementing best-effort, and then we try to pull in static. I think it's a little bit a question of feasibility and testing, right.
B
...on the resource manager implementation, correct; the CPU manager is untouched. It's just a new policy which references the store, and I think the changes are not big. It's really validation that everything works correctly, and being sure that there are no race conditions, stuff like that. So I think it's a feasible approach, and it's feasible to keep a correct state between the two components.
H
It does, yeah. I think one thing I'd probably like to see, and you guys are probably already working on that, is that we capture some of these details in the KEP. Because when you explained this initially, my understanding was that we are keeping all these existing components, but then you said that we are going to introduce a policy in the CPU manager, so I'm struggling to understand, you know, how we are going...
B
...that's required to run the resource manager, so that you know how to handle allocations done by plugins. They are proxied, basically, to the store, and it will be a requirement if you want to run this kind of plugin. But the policy can then be used to forward the allocations for standard pod definitions to the classical components that you have.
B
So it could be part of the static policy, with a configuration option, something like that; it's possible. We have to think about that. It's a good idea.
H
What we could potentially do is, say we introduce the resource manager as an alpha feature: you'd have a feature gate gating this feature, so it will be disabled by default. If people want to experiment with it, they enable it, and that means we are completely keeping all the existing components.
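The gating idea might look roughly like this; a toy sketch, with the gate name `ResourceManagerPlugin` invented purely for illustration (any real gate name and wiring would come from the KEP):

```python
def active_managers(feature_gates):
    """Existing managers stay exactly as today; the new resource manager is
    only added (first in the lifecycle order) when its gate is enabled."""
    managers = ["cpu", "memory", "topology", "device"]
    if feature_gates.get("ResourceManagerPlugin", False):
        managers.insert(0, "resource")
    return managers
```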
H
We keep the CPU manager, topology manager and all of them separate, the way they are today, and then in the long term identify a transition plan for how we can maybe move some of that in; some of the functionality can be taken over by the resource manager or some plugins that are created. But in the short term, or at least when the feature is introduced, I think we should just isolate all these areas.
B
We are currently assuming that we are already on the node; the decision about on which node this has to be executed was already taken, so anything scheduling-related is covered by other components.
D
What it sounds like is that, if we had this framework in place, we strictly wouldn't be any worse off than we are today in terms of being able to schedule these things, but we might be a little better at being able to handle more policies, and at least give developers the ability to write these policies, instead of constantly having to update the kubelet to give you those options.
G
Thank you, everybody, for attending today and for your feedback.