Description
Meeting notes and Agenda:
https://docs.google.com/document/d/1ALxPqeHbEc0QOIzJ3rWWPpwRMRlYDzCv0mu2mR4odR8/edit#
A: Right, yeah, thank you for making time today for this workgroup meeting. I wrote down four or five points that we identified as needing discussion regarding a plugin framework inside Kubernetes for resource management. One of the points was plugin fail safety: what happens if the plugin fails, and this plugin, let's say, is responsible for management of the resources?
A: So one possible thought I had: we could have some small piece of core code which does more or less the basic stuff needed to host the pods, basically similar to the best-effort quality-of-service class, what you can get without any CPU manager and so on enabled. If a plugin fails, we can fall back to this best-effort core component. It could be included in some sort of central manager.
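A minimal Python sketch of the fallback idea described above: a central manager tries a registered plugin first and, when the plugin fails, falls back to a built-in best-effort allocator. All class and method names here are invented for illustration; this is not actual kubelet code.

```python
class PluginError(Exception):
    """Raised when the external resource plugin cannot serve a request."""

class BestEffortCore:
    """Minimal built-in fallback: no pinning, no NUMA awareness.

    It only admits the pod with shared (best-effort) CPU access, mirroring
    what you get without any CPU manager enabled."""
    def allocate(self, pod):
        return {"pod": pod, "cpus": "shared", "policy": "best-effort"}

class FlakyPinningPlugin:
    """Stand-in for an external resource plugin that may be down."""
    def __init__(self, healthy=True):
        self.healthy = healthy
    def allocate(self, pod):
        if not self.healthy:
            raise PluginError("plugin unavailable")
        return {"pod": pod, "cpus": "pinned", "policy": "static"}

class CentralManager:
    """Tries the registered plugin first, falls back to the best-effort core."""
    def __init__(self, plugin):
        self.plugin = plugin
        self.fallback = BestEffortCore()
    def allocate(self, pod):
        try:
            return self.plugin.allocate(pod)
        except PluginError:
            return self.fallback.allocate(pod)

manager = CentralManager(FlakyPinningPlugin(healthy=False))
print(manager.allocate("pod-a"))  # plugin is down, so the best-effort core answers
```

The key design point mirrored here is that the fallback path never depends on the plugin being alive, so pods can still be hosted with degraded (best-effort) placement.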
D: To be honest, I have one question. When we say a resource plugin registers itself, how are those resources observed on the node itself? Is it an extended resource that we see added to a node, or is it some other object-based mechanism?
D: Yeah, so what I'm trying to understand is: if I were to write a plugin, with what API semantics am I going to expose the resource? The device plugin exposes an extended resource, and that's what we request in the Pod spec. Do we intend to move in the same direction, or do we plan to do something else?
A: For exposing the resources, one option we were thinking of is whether it's really possible to use dynamic resource allocation, perhaps extended or generalized to be able to handle more cases, not only devices. So dynamic resource allocation could be one option to be investigated. In terms of detection, you can detect the resources if you have privileged access to the underlying system; detecting the resources is not impossible.
A: I think you will have several cases. You have DRA, basically; from what I heard, DRA has some issues, at least in terms of scalability, most probably. I don't know exactly how well it scales in theory and so on, but we are thinking about it.
A: There are three things which have to be covered. It would be great if we can use the dynamic resource allocation API also for our purposes; then, of course, you have users who want to use standard CPU requests and limits, so you want to enable those too; and the third case, of course, is the standard mechanisms to use devices, like the standard device plugin mechanism. So I think those are more or less the three we have to cover.
A: Those should not exist together. So if you use DRA for CPU and memory control, you should not also specify CPU resources the other way around. I don't know if it's doable in terms of implementation, but yeah, it would be nice not to specify it twice.
G: Yeah, sorry, I have a general question: is this solution going to be a kind of generic replacement for all plugin mechanisms? I mean, is it only about device plugins, or...?
A: The situation which we would like to achieve, basically, is that we have a general plugin solution which can cover all three cases.
G: Well, it's not three, it's four cases. There are at least the CSI plugins that I know about. Have you looked at those? Have you considered them, or are they kind of out of the picture?
A: This is why, in terms of resource management, you have several options. You can rely on the runtime to do resource management, but if you have the capability to write a custom driver which can do more, certain users will need more.
H: Is it good that Kubernetes manages PIDs? Should it aspire to be more granular in how it manages PIDs? Should Kubernetes manage block I/O? Should it not? I can't tell from this conversation whether it's a mistake that Kubernetes manages CPU and memory, and whether it should stop doing that and work towards a world where drivers do it. And related to that...
B: At least some people have been trying to talk for a moment; Sasha, please go ahead. So as we go forward with CPU and memory in Kubernetes, the whole point is pluggable infrastructure.
B: So when you have particular components managed internally in Kubernetes, we start getting the issues that we've had, where there's a bunch of different managers being pushed into the kubelet which the community has to update and maintain, and then more features have to be done there. With CNI and with CSI, having that pluggable infrastructure is really nice, and that's where we're trying to go: basically turn Kubernetes into a pluggable infrastructure, or kernel, I guess, going forward. Go ahead, Sasha.
E: Then I just want to answer a couple of your questions about the announcement of resources from the runtime to the kubelet. In my opinion, that's the long-term direction we should be heading in. The reason is that right now we have a situation where some of the resources are configured on both sides, so, for example, the cgroups driver, resource amounts and so on, and we have observed what many of us think are misconfigurations, where users have different configurations on both layers.
E: So right now all these activities, like container lifecycle events or stats for containers, are already moving to the runtime. So it might make sense for a bit more of this resource discovery to move to the runtime as well; at least it will reduce some misconfigurations and simplify some of the code inside Kubernetes. Regarding other things, like PIDs and so on, the question is open.
E: The answer, at least from my side, my personal opinion: I don't know. But having in mind what we have right now, like different runtime classes, we have Kata and we have Firecracker and so on, the assumptions we previously had in the Docker days, that we always have a cgroup, that we are always in the host namespace and so on, are not true anymore. So it's probably better to split the responsibilities between the "what", owned by the kubelet, and the "how", owned by the runtime. Where that borderline is drawn is a great question.
E: The thought is really simple: if you look at what containerd is doing, the CRI plugin inside of it nowadays is written such that the concept of a sandbox, or something similar to the concept of the Pod, is already a native internal data structure inside containerd. So it's not really Kubernetes-specific; it's more an abstraction of what is already the industry standard for how we are running containers. We know we need something common like a sandbox which groups containers, so you know what it should contain.
F: Yeah, so I think the challenge with pushing everything down to the runtime is, like Sasha said, that every runtime will have to keep implementing it. The second thing I worry about is that right now we have a dependency just on CNI; if we start depending on, say, five or six different plugins, how are those plugins running? How are they bootstrapped? They can't easily be containers, because the runtime depends on them. Are they native services? How are they delivered?
E: I'm not saying to move everything to the runtime; I was just answering the next question about resource discovery. But overall, about the plugins: if we look at the majority of Kubernetes installations, it's usually VMs in some cloud infrastructure, so the underlying cloud provider infra is doing the work.
A: Just to say, the objective of the plugins can also be to enable more complex optimizations for users in another way. So I think this is not completely true; it's only in the specific case, because it's very complex to configure that today.
H: That was where I was going. My question is: in our prior meeting on Tuesday, the slide you showed depicted a world where the manager of the resources, the thing that communicated to the container, was the container runtime, at least for some resources, you said, and maybe not for some others. I was trying to figure out how to evaluate that. And related to that, the vehicle by which the container runtime was communicated with was still always the kubelet in the flow you showed on Tuesday, and...
H: Let me finish my thought, if that's okay. For some resources and not other resources, that might make sense, but in some conversations it seems like there's a desire to maybe go one step further, and that's what I was trying to see: whether there's a real desire for it, if it could be vocalized, which is that the kubelet should not manage any element of the cgroup hierarchy.
H: It should not manage the QoS hierarchy, it should not manage the Pod cgroup hierarchy, and it should not manage the container hierarchy. And it could be that there are some resources to be explored that you think would be best served by having to meet a need at every level of that hierarchy; I was just trying to figure out...
H: Is that really the end state that you want to get to: that, as a design principle, the kubelet and Kubernetes generally do not try to manage or model any of the cgroup-managed or non-cgroup-managed resources that might constrain your application? Or is it just the exotic and advanced ones that we want to delegate out to drivers? I'm just trying to look for the lens by which we would make future decisions.
B: So when we spoke last time, we also spoke in terms of bootstrapping: what happens when it first starts up, and then what happens if there's a crash loop for one of the components. Whatever your components are, they could still end up in a crash loop, so we need to be able to recover to some degree. Or maybe we don't, I don't know, but going forward...
B: The way I would look at Kubernetes is, for the vast majority of the use cases, like the VM case. Actually, not the vast majority, because a lot of the customers we deal with, which is a significant component, including China, by the way, are doing on-prem. The on-prem environments are very different from the VM environments. For the VM environments, I would hope that Kubernetes as-is would be good enough, so then you don't need exotic plugins.
H: But really what you want in the end game is to be able to manage every layer in that tree where possible. If that's the case, then I'll just look at features we have in flight today, whether that's things like starting to expose more cgroup features as native concepts, like memory QoS. Would that even be a Kubernetes-native thing anymore? Would it be up to Kubernetes to define what a memory request is, or is it something that your third-party driver has to define?
E: But even in this audience, I think we have what, like 10 people, and maybe you will get 10 answers. I can talk about my opinion, answering your questions about cgroups and so on.
E: I think the kubelet is not the place for this to be done. I understand it arose for historical reasons; I've seen it in my dockershim times, because we needed that: Docker required a fat client to some extent. But we got different runtime classes, and we got cgroups version one and two; to support them both we needed to change both the kubelet and the runtimes, and the protocol between them, just to implement that feature.
E: So my personal thinking is: why are we tinkering with the kubelet? Why are we adding so much technical debt inside the kubelet for implementation details which can be hidden from it? The kubelet can still own the lifecycle of the pod, so it knows what needs to run and with which priorities. But how it's implemented, whether it's cgroups-based, or Windows container resources, or, I don't know, a wasm runtime or whatever which uses different sandbox mechanisms, can be hidden.
A: Yeah, I have a similar kind of feeling, more or less, basically from just touching some of the kubelet code. You see, if you want to develop some sort of plugin or something, what you need is only somebody to create the container cgroup for you and to destroy it when needed; the actual things below that you can manage outside.
H: ...block I/O improved relative to the Kube use cases. Those to me represent the essential food groups that 99% of users would benefit from. The exotic use cases, the specialization use cases, things like "I need pinning" or NUMA alignment or device alignment, I agree are very often vertical and workload-specific, and part of this discussion around the resource managers was how to accelerate those use cases. But where I'm struggling is to figure out whether you want to accelerate just those use cases...
H: ...versus "I want to take over total ownership of all resource management in Kubernetes". And if a new resource is brought forward that says, hey, does the Kubernetes community have an opinion on block I/O...
H: ...would the answer sort of be "no, it doesn't, go find a third-party thing to do it", or "yes, it does, because it might be in that 90% window"? Tuesday's conversation was making me think: much of the discussion was on how to support the specialized use cases without needing to get those specializations into upstream, versus the generalized use cases that the broader community can benefit from.
H: I think alignment and exclusive allocation of resources are what I've identified as, I think, the specialized scenarios. But the things that we already had done in Kube, whether that was general CPU fair sharing, or "you asked for an amount of memory, we're going to aspire to make sure you can get that memory", and the work we're doing right now with cgroups v2 to try to get memory requests to actually mean something, all those are good. And I think, related to these resources...
H: ...there's some reasonable expectation that the kubelet tries to responsibly respond to scarcity of those resources. So I think it's okay that the kubelet tries to handle memory pressure and resource eviction, or that it's okay that the kubelet tries to handle local disk exhaustion; I view those things as in that broad swath of function, I guess. But I'm really just asking; I'm not necessarily averse to restructuring how the code is done to support that, especially if it's in a way that allows people to do pluggability.
H: That's kind of what I was asking, right. Would people reasonably think, when I said what I said around CPU and memory and block I/O and PIDs and ephemeral disk, that that's basically what we have in flight now? Is that not a fair starting point? And is there something else that's not there, that would be missing, and would we think that that thing should be fulfilled as a first-class thing through this path or not?
H: That's what I was trying to figure out. I totally appreciate that a lot of this in Tuesday's meeting was "I want to do power management" and "I want to do, yeah, maybe more exotic affinity and pinning scenarios."
E: But regarding block I/O, as I mentioned, it means a different thing based on different runtime classes. Whatever we are assuming on the host, it's not the same as what we see inside Firecracker or Kata. So if we implement it in Kubernetes in any way, we need to make it abstract enough to actually hide the internal complexity of the runtime's implementation from the kubelet. And regarding what you mentioned about memory QoS, I don't see the reason why we need to say no to it or stop it.
H: Like you commented, there are 10 people on the call, and there might be 10 different opinions. I'm just trying to figure out how to...
H: ...figure out how big of a bite we want to take in this step, or this particular activity, or not. I'm not trying to make a value judgment on yay or nay either way, but I'm having difficulty getting a forward-looking perspective, I guess, for how to approach maybe more general resources. The things I enumerated were just things that every container runtime supports today, and so I was trying to think.
D: Yeah, I think I kind of share some of the sentiment there, Derek. I don't think we should try to fix what's not broken; we have resource management kind of working well in-tree in Kubernetes. But at the same time we want to enable some of these exotic use cases, like you said. I was wondering if there's a way that we have a separate pool of resources that we just hand over to these resource plugins, so they can manage them.
A: Just to interject quickly: I think that the state of the current management components is not satisfactory. It needs refactoring.
D: Yeah, so I completely understand that, and I'm okay with it. What I'm suggesting is that we can refactor it and make it pluggable in a way that adheres to the plugin-based APIs, but out of the box we give people what exists right now. So, for example, if they want very basic CPU pinning, you have your request equal to limit and you get exclusive CPUs. Things like that we can give them as-is, and if we want them as in-tree plugins, we can do that.
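The "request equal to limit gives exclusive CPUs" rule can be sketched as a small eligibility check, roughly mirroring the in-tree static CPU manager policy (Guaranteed QoS class plus an integer CPU quantity). This is a simplified conceptual sketch, not the real kubelet logic, which covers more cases.

```python
# Conceptual sketch: a container gets exclusive CPUs under the static policy
# only when it is Guaranteed (requests == limits for CPU and memory) and asks
# for a whole number of CPUs. CPU values are in millicores (1000m = 1 CPU).

def gets_exclusive_cpus(cpu_request_m, cpu_limit_m, mem_request, mem_limit):
    guaranteed = (cpu_request_m == cpu_limit_m) and (mem_request == mem_limit)
    integer_cpus = cpu_limit_m > 0 and cpu_limit_m % 1000 == 0
    return guaranteed and integer_cpus

print(gets_exclusive_cpus(2000, 2000, "1Gi", "1Gi"))  # True: 2 whole CPUs
print(gets_exclusive_cpus(1500, 1500, "1Gi", "1Gi"))  # False: fractional CPU
print(gets_exclusive_cpus(1000, 2000, "1Gi", "1Gi"))  # False: not Guaranteed
```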
D: We just refactor the code, have maybe the CPU manager static policy as an in-tree plugin available internally, but we get that out of the box.
A: This is feasible with this approach. You can have an in-tree plugin, basically, for exactly this basic stuff, and some of it you can maybe pull out, as we wrote, into a small core component. In some cases you don't even need pinning and stuff like that, so this will cover 90% of the cases, like the best-effort containers, burstable, and some memory limits.
D: Yeah, so I think the missing piece so far is that we haven't discussed how we make sure that these in-tree plugins, and say the new plugins that we create, don't take ownership of the same underlying CPUs, and that they don't conflict and, like Sasha said, track them multiple times so there are accounting errors.
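The double-accounting concern raised here can be illustrated with a small conflict check over CPU ownership claims. A hypothetical sketch with invented plugin names; a real coordinator would presumably enforce this at allocation time rather than after the fact.

```python
# Detect CPUs claimed by more than one plugin: if several plugins claim CPUs
# independently, a coordinator must spot overlapping ownership before it turns
# into accounting errors.

def find_conflicts(claims):
    """claims: mapping of plugin name -> set of CPU ids it took ownership of.
    Returns mapping of CPU id -> sorted list of claimants, conflicts only."""
    owners = {}
    for plugin, cpus in claims.items():
        for cpu in cpus:
            owners.setdefault(cpu, []).append(plugin)
    return {cpu: sorted(ps) for cpu, ps in owners.items() if len(ps) > 1}

claims = {
    "in-tree-static": {0, 1, 2},
    "vendor-pinning": {2, 3},  # CPU 2 is double-claimed
}
print(find_conflicts(claims))  # {2: ['in-tree-static', 'vendor-pinning']}
```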
B: So I'm going to go back to: topology management doesn't handle most of the use cases that we're looking for, even in specialty cases. So I would ask that whatever default plugin or default behavior we have going forward, we keep it as lightweight and simple as possible and not continue having to support fancy features, and that includes pinning.
H: Yeah, so the distinction I'm trying to draw is that there's a difference between supporting CPU and memory out of the box, let's say, where each of those resources is implemented in isolation, versus the use case...
H: ...that requires them to be co-scheduled in a way that is not representative of the default way, say, your init system would launch any other service on your host. So topology manager to me is a clear advanced use case, versus the question "is it right and proper that the kubelet is aware of, you know, memory pressure". So the only thing I was trying to draw out here is: as we offload resources to these specialized managers...
H: ...we have to keep in mind that that manager is responsible not just for assignment but also for handling scarcity and pressure, which means every resource manager then needs to become an evictor, or a potential evictor. That's what I'm trying to figure out: what function, as it's delegated out, also needs to go with that delegation.
G: What I'm trying to figure out is how the picture would look from the API point of view. Currently we have a pluggable device manager, a pluggable DRA manager, an unpluggable topology manager, an unpluggable CPU manager and an unpluggable memory manager, and the idea, as far as I understood, is to somehow combine them in one resource manager and make it pluggable. And I kind of cannot figure out what kind of APIs there would be.
A: You can have the three types of plugins. You can have a common one for the basic stuff, basically the things you have today, CPU manager, memory manager, topology manager, packaged together in one plugin; then you can have device plugins; and you can have DRA plugins. They just don't connect to 10,000 managers, they just go to one. And then the new thing here is that you have the capability to replace this one.
A: If you're a vendor or a company which wants to implement something custom, you can basically implement this one; also, devices and everything else should work as before. We don't want to be incompatible with device plugins here. But yeah, the coordinator is actually just resource management, the central coordinator.
A: Exactly, it's only communicating with plugins and coordinating this plugin communication. The actual allocations and stuff like that, the state handling and state machines, are done in the plugins, so they are externalized.
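The coordinator idea described above, a thin core that only registers plugins and routes allocation requests while allocation state and state machines live in the plugins themselves, might look roughly like this. All class names and the registration scheme are illustrative assumptions, not an actual API.

```python
class CountingPlugin:
    """Toy plugin owning its own state (here, a simple allocation table)."""
    def __init__(self, resource):
        self.resource = resource
        self.allocations = {}  # state lives in the plugin, not the coordinator
    def allocate(self, pod, amount):
        self.allocations[pod] = amount
        return f"{amount} {self.resource} -> {pod}"

class Coordinator:
    """Keeps no allocation state; only registration and dispatch."""
    def __init__(self):
        self.plugins = {}
    def register(self, plugin):
        self.plugins[plugin.resource] = plugin
    def allocate(self, pod, resource, amount):
        if resource not in self.plugins:
            raise KeyError(f"no plugin registered for {resource}")
        return self.plugins[resource].allocate(pod, amount)

coord = Coordinator()
coord.register(CountingPlugin("cpu"))
coord.register(CountingPlugin("memory"))
print(coord.allocate("pod-a", "cpu", 2))  # "2 cpu -> pod-a"
```

The externalized-state property shows up in the test: after dispatch, the allocation record is found only inside the plugin object, never in the coordinator.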
A: If we host that as a plugin repository, similar to sig-scheduling, you can basically have this as a separate repository, external from the kubelet. Then you get further options to reduce the code later; maybe some of those pieces, device manager and DRA, can be simplified and merged inside.
G: Okay, so the idea is that mostly the reduction of the code will come from the CPU, memory and topology managers, right? Because...
G: But I need to think about it. So maybe I just misunderstood the proposal; I thought it was kind of a general proposal for all kinds of managers inside the kubelet. But okay, so the device manager will stay the same, DRA will stay the same...
A: It would be really nice if we can extract from those three what's basically the most important thing for users and somehow put it in-tree. If they want all the things, they can take this default plugin as one option, but we can host maybe a small piece which covers what is basically needed for 90% of the use cases in-tree.
A: Yes, right, so this was a little bit about the bootstrapping part. We were thinking that, in any case, such a kind of small core component might be needed for fail-safety reasons when the plugin goes down, something to cover very basic functionality. And yeah, the other very good, quite open area is how we can, if it's possible, extend DRA to handle things like CPU and memory, because the interface is nice.
A: So basically, we don't want to cover the standard interface of resource requests and stuff like that through DRA; we just want to get the same interface for CPU and memory for more advanced use cases. When you have maybe more advanced definitions of how the CPU has to be allocated and how memory has to be allocated, then DRA can pass more details through that interface.
G: Well, I don't know. At least for now, for CPU and memory the allocation happens at the admission stage, but in DRA allocation happens before the workload is actually scheduled to the node. So I'm not sure; for me at least, they look like very different things.
G: Yeah, so that's the whole idea: the resource claim is actually handled by the DRA controller before the workload is actually scheduled to the node. The scheduling decision is made and the allocation is actually done before that, and the scheduling decision depends on the controller part of the DRA plugin, right? So I'm not sure that it's a good place to actually allocate CPU and memory. For devices, and maybe for storage, it would work, but CPU and memory are like local resources.
A: And you're scared about scalability mostly, or why are you worried about it? If the user actually wanted exactly those resources, why not allocate them? What's the issue?
G: So for me it's that it's local, but DRA is meant... well, it can also handle local resources, but it's mostly meant as a more generic approach that can also accommodate remote devices, like network-attached devices and stuff like that, which are not local to the node. That's actually why DRA...
G: Okay, because it's not only about local resources: some resources are not attached to the node at the moment of allocation. That's one of the ideas behind DRA, but...
G: By then the resources are already allocated; at the kubelet level, at the DRA kubelet plugin level, they are just prepared, and preparation is just a matter of writing the change to the runtime configuration. That's what the DRA plugin does, at least for now.
H: So I mean, I'm now also a little lost, and so I'm trying to think of ways to maybe get some clarity.
H: Is it possible... A clear piece of input in this call has been the perception that there are too many managers inside the kubelet today for CPU, memory and devices, and then their intersection with topology. It seems like the use case through which all of those managers reach some tipping point in this audience is when they need node-local scheduling awareness, and the fact that there are five of these instead of one means that, in the case where they all need to intersect as well, it seems like the desire is...
H: ..."I just wanted one, and it could work for my use case across CPU, memory and topology", and whatever else might be on our minds here. So maybe could we just do an exercise of saying: in a world where there was just a new CPU manager added following this model, what would be the bootstrapping flow, and what is the use case that this new CPU manager needs to satisfy?
H: That would give us confidence for the first question we had, which says: if it's down, we fall back to the built-in version.
H: What would give us confidence that that's the right outcome, or the right safe solution? That's what I'm trying to figure out, how we answer questions here. Do other folks have ideas on how this could be explored or thought through?
A: Maybe by showing it; it's not a bad idea. We can try to prepare some sort of short CPU manager plugin. We have a prototype in place, so we can basically try to implement a plugin which doesn't follow the standard functionality, let's say, of what the CPU manager covers today, and...
H: And I'm probably biased by a worldview that, because everything is composed as it is today, fail-open isn't actually possible, and that has biased users to expect that things are just going to work as they think they will, or hopefully they do. But I could see how, if you had a fail-open posture for these plugins, that would cause a lot of confusion, particularly for those users who are expecting the most optimal placement. So anyway, right.
H: I feel like having the kubelet manage these things as pods implies I'm okay with a fail-open or post-bootstrapping position, but I'm trying to understand if that's really where people would be okay or not. Versus the earlier point: for the bootstrapping problem, is deployment of these things as pods even the right approach?
H: Today, for example, I'm sure we are all doing our best to represent users in the community who are running workloads on Kubernetes nodes that desire, you know, pinning, right, and...
H: Fail-open would mean: okay, the CPU manager wasn't available, so I couldn't make a pinning decision, so I'm just going to not pin your workload; it's going to go on all the CPUs, or it's going to get some default decision. I'm not aware of a user that would be happy about that in the case where they actually defined or desired pinning.
E: If I may throw something on the table here: right now the topology manager and the rest of our managers are configured as a whole-node thing, so for a given node there is a particular configuration from which we can assume how the algorithm is working.
E: We, of course, had it in mind with a bit of a different scenario, like when you have a host with multiple tiers of memory, with different types of memory available; but it can be used for something like the scenario you mentioned.
E: A very simple example: if a container said "I want to have pinning", then until the bootstrap is finished on the node, the class supporting CPU pinning will not be available, so the scheduler will simply not put the pod on this node.
H: But if I need a pod to launch the pods that use CPU correctly on that node, that will cause conflicts.
H: What I can say here is that my current view is that if we pursue this, then as a user, or a provider of Kubernetes solutions, I personally would bias towards plugins that are not managed as pods themselves, to avoid this chicken-and-egg scenario. So maybe just as a design tenet.
B: With the time in mind: do we want to do another meetup next week, or should we wait for the new year? Because I...
H: I know a lot of people have stuff to do. I encourage everyone to meet and discuss; you don't really need to block on me. I will not be here anymore after this week for the remainder of the year, but I think good discussion obviously happens among the group even when I'm not.
E: It would be lovely to have Kevin on this call, but he is also away for the holidays.
H: So let's just get the meetings on the SIG Node calendar, and anybody who subscribes to that calendar will see the meeting. I think I myself definitely can do it; Sergey, I think you can do it too, so it should be fine. Am I mistaken on that, Sergey? You have that power as well, unless it was dropped.
H: Oh, there is a SIG Node calendar, and I'll look. I know I can add things to the SIG Node calendar; I thought other sub-project owners could add things to the calendar too. So let me quickly try to look that up, and I'll ping you and we'll figure out how to make that happen.
B: Okay, thank you. I will get you the recording, Sergey, so we can share it with everyone.