From YouTube: SIG Node Resource Management WG, 2022/03/07: Kubelet Plugin Discussion: CPU Topology management
Description
Meeting notes and Agenda:
https://docs.google.com/document/d/1ALxPqeHbEc0QOIzJ3rWWPpwRMRlYDzCv0mu2mR4odR8/edit#
A: Okay, I also have a live demo today, but just to give an update on where we are, I have two or three slides to share as well; after that we can switch to the live demo setup. Let me try to switch to full screen, if I can.
A: Right, just as a reminder of what we were trying to investigate lately: we are trying to introduce an attribute-based API, basically for more fine-grained configuration of resources, and we are thinking of using DRA to help us with some of the scheduling problems, and also to handle the user-interface piece, representing this more or less as a claim. We start by putting that in a JSON format.
A: I think Kevin mentioned that there are also other ways, through CRDs. For now we start simple, just with a JSON format; we can adjust that later to a CRD format. Also, as part of the KEP we got some questions regarding what is to be specced out in such a JSON or CRD format, and here I have a little bit more information. We tried to define it a little bit better. So, similar to DRA, we are thinking we have some sort of names or identifiers.
A: So basically, in this example we have three core pools: one pool with two cores, one core pool with eight cores, and pool two with four cores. These are just IDs which we can use to reference each pool in the claims, for example by a resource class argument or something like that.
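For illustration, a minimal sketch of how such named pools could be expressed as Go types serialized to JSON; the type and field names (CorePool, name, cores) are assumptions for this sketch, not the format from the KEP.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CorePool is one named pool of CPU cores; the name is the ID that a claim
// would reference, for example via a resource class argument.
type CorePool struct {
	Name  string `json:"name"`
	Cores int    `json:"cores"`
}

func main() {
	// The three pools from the example: two, eight, and four cores.
	pools := []CorePool{
		{Name: "pool0", Cores: 2},
		{Name: "pool1", Cores: 8},
		{Name: "pool2", Cores: 4},
	}
	out, _ := json.MarshalIndent(pools, "", "  ")
	fmt.Println(string(out))
}
```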
A: Then we are thinking, and this is also open to discussion today, whether we want to introduce something like capacity. Basically, we could have a capacity field and a cores field. Capacity is something which you could think about like a pool capacity, more in static-policy terms. So basically you have static, pre-allocated pools of maybe 10 cores, 20 cores, 80 cores, and then, when you are claiming out of the pre-allocated pools, we are using this cores struct basically to pull cores out of the predefined pools. We also added some options for how the cores field can be defined.
A: For how the cores field can be defined: we could have just a single number, indicating in the exclusive case that we want to pin, more or less, two cores; this is equivalent to pinning.
A: Then we could have a scenario where we ask for some range, very similar to a burstable case. So basically we could say we want a burstable scenario between six and eight cores. And here the question to the community would be whether it makes sense, in a static-policy way, if we have a burstable scenario like that, to map that as a cpuset back to the 20 cores which we defined in the capacity field.
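A hedged sketch of the capacity and cores fields discussed above, as Go types; field names are assumptions. A cores request is either a single exclusive count or a burstable min/max range:

```go
// PoolSpec optionally carries a static, pre-allocated capacity (static-policy style).
type PoolSpec struct {
	Name     string `json:"name"`
	Capacity int    `json:"capacity,omitempty"` // e.g. 10, 20, or 80 cores
}

// CoresRequest pulls cores out of a predefined pool: either an exact
// exclusive count (equivalent to pinning) or a burstable range.
type CoresRequest struct {
	Exclusive int `json:"exclusive,omitempty"` // e.g. 2: pin exactly two cores
	Min       int `json:"min,omitempty"`       // burstable lower bound, e.g. 6
	Max       int `json:"max,omitempty"`       // burstable upper bound, e.g. 8
}
```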
A: That is if we want to start with a static-policy alternative here. If we actually use this for an exclusive kind of access, as we define it here, we could think about it like this: usually, in a scenario where people are requesting exclusive cores, they would actually more or less like to pin. So we could think about requesting the upper boundary as a cpuset: we could request a cpuset of eight cores and then set the CPU shares according to the range. The other option, if we introduce this optional capacity field, is that we can set the cpuset back to the 20 cores of the capacity. And this becomes even more interesting with shared; for the shared pool this most probably makes the most sense.
A: Then just set a smaller kind of CPU share there, similar, yeah.
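As a sketch of that mapping, assuming the conventional cgroup v1 scale of 1024 CPU shares per core: pin the cpuset to the upper bound of the burstable range (or back to the pool capacity, if one is defined) and derive the shares from the lower bound.

```go
// cpusetSizeAndShares is illustrative only: it returns how many cores the
// cpuset would span and the cgroup CPU shares derived from the range.
func cpusetSizeAndShares(min, max, capacity int) (cpusetSize int, shares uint64) {
	cpusetSize = max
	if capacity > 0 {
		cpusetSize = capacity // e.g. map the cpuset back to the 20-core pool
	}
	shares = uint64(min) * 1024 // smaller share, proportional to the lower bound
	return cpusetSize, shares
}
```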
A: So this is one way we could do it. Another way to deal with it is if the shared pool can dynamically shrink: basically, if we don't have a capacity field, we could initially reserve the shared pool as all the available cores we have on the platform, and over time, as you add exclusive tasks, you shrink the shared pool so that you can fit the new exclusive tasks.
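A small sketch of that dynamic-shrink alternative (illustrative, not the prototype's code): the shared pool starts as every core on the platform and shrinks whenever an exclusive task is admitted.

```go
package main

import "fmt"

// CoreAccounting tracks a shared pool that shrinks as exclusive tasks arrive.
type CoreAccounting struct {
	Total     int // all cores on the platform
	Exclusive int // cores pinned to exclusive tasks
}

// AdmitExclusive shrinks the shared pool by n cores, failing if the new
// exclusive task does not fit.
func (c *CoreAccounting) AdmitExclusive(n int) error {
	if c.Exclusive+n > c.Total {
		return fmt.Errorf("need %d cores, only %d free", n, c.Total-c.Exclusive)
	}
	c.Exclusive += n
	return nil
}

// SharedPoolSize reports what remains for the shared pool.
func (c *CoreAccounting) SharedPoolSize() int { return c.Total - c.Exclusive }

func main() {
	acc := CoreAccounting{Total: 16}
	_ = acc.AdmitExclusive(4)
	fmt.Println("shared pool now:", acc.SharedPoolSize(), "cores") // 12
}
```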
A: So those are the first additions we are thinking to introduce in the attribute-based stuff, so that we have a bit more control. Then the other additions: we see the need for customers and users to have some sort of selection of NUMA affinity based on devices.
A: So this could also mean that you have some capabilities in the DRA driver which do device selection, but basically you then specify some sort of binary option, where you tell it that you need NUMA affinity with your device — you would like to have NUMA connectivity to your device, or you don't care. Very similarly, we are planning to introduce memory attributes, which would mean basically we get two types of affinity according to memory attributes. Bind affinity, which in Linux terms is very similar to a single NUMA node — basically, bind means...
A: ...we want the closest affinity to the memory controller. Required, let's say, means everything has to fit inside the socket which that memory controller controls; preferred means you start filling it in, and if the container runs over — basically, if you need more cores than fit — you go to the next one, so it's some sort of best-effort semantics. And then interleaved, which is a little bit similar to NUMA spread: if you think about it, it basically maximizes memory controllers for the container's cores, so that you can get more memory controllers. Here again we could have required and preferred: required means if you cannot get all n memory controllers, you error out; preferred means you get the maximum number of available memory controllers.
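The memory-attribute values described above could be enumerated roughly like this; the identifiers are assumptions for illustration.

```go
// MemoryAffinity selects between the two affinity types discussed.
type MemoryAffinity string

const (
	// Bind: closest affinity to one memory controller (like a single NUMA node).
	MemoryAffinityBind MemoryAffinity = "bind"
	// Interleave: spread across controllers, similar to NUMA spread, to
	// maximize the memory controllers available to the container.
	MemoryAffinityInterleave MemoryAffinity = "interleave"
)

// AffinityStrength applies to either affinity type.
type AffinityStrength string

const (
	// Required: error out if the affinity cannot be fully satisfied.
	AffinityRequired AffinityStrength = "required"
	// Preferred: best effort; fall back (next NUMA node, or fewer controllers).
	AffinityPreferred AffinityStrength = "preferred"
)
```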
A: Right, and then there are further attributes which we are thinking about. We will need something for huge pages; this is more long term, not for alpha, most probably for beta scope, as we wanted to deal with CPU first in the alpha stage, but when we start thinking about memory, those are the things we could pull in. Then, more interestingly, also CPU attributes in terms of siblings. So we could have core-siblings preferred, which means...
A: ...basically, if you get a pod and container requesting several cores, you try to place them so as to first populate all the siblings on the same physical core. Core-siblings denied is interesting: if your container requires one core, just as an example, you block the whole sibling with it. This defines that you don't want to share the sibling with another process, and there is a lower-priority option of that, preferred. And further, dedicated is basically if you want one full physical core: dedicated corresponds to having one process which you want to map to the whole physical core. So those are some of the options which I wanted to throw in for discussion today.
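Summarizing the sibling options as a sketch (names assumed):

```go
// SiblingPolicy captures the core-sibling attribute values described above.
type SiblingPolicy string

const (
	// Preferred: pack a multi-core request onto sibling threads of the same
	// physical core first.
	SiblingPreferred SiblingPolicy = "preferred"
	// Denied: never share a core's sibling threads with another process,
	// blocking the whole sibling even for a one-core request.
	SiblingDenied SiblingPolicy = "denied"
	// Dedicated: map one process onto one full physical core.
	SiblingDedicated SiblingPolicy = "dedicated"
)
```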
A: I don't know if this helps a little bit to get a better overview of what such an API would look like. Any thoughts?
A: I don't know if it's useful for the use cases you have; I'm wondering, yeah, if you have some ideas how we can improve it, or if something is not fitting.
B: [inaudible question]

A: Those will be explicit core-count asks if you are in the exclusive classes, or if you speak about the exclusive case — it's not CPU IDs, if that was the question. We are never asking for CPU IDs; we are asking for two cores, basically, or for between six and eight cores.
A: Yeah, I have a little illustration of how it works, or how we are thinking to make it — also a question again to the community. This is a small scenario illustrating it, starting with some request for shared cores: three shared cores.
A: So we have some information such that afterwards we can query how many cores are still available. If you are using a shared pool which does not have a capacity, you have all the cores available for it, so 10 cores, and if you allocate three, another application which wants to use shared cores can still allocate those, or can go on top of them.
A: So that's why we are saying 10 shared cores are available. But if you are an exclusive kind of application, the idea here is not to overlap with any other pods on the system, so it means the exclusive application gets three new cores, and after booking a shared application we get three cores less for exclusive applications.
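A worked version of that accounting, under the assumed semantics that shared claims may overlap each other while exclusive claims may not overlap anything: with 10 cores and 3 shared cores booked, shared availability stays at 10 while exclusive availability drops to 7.

```go
// NodeCores tracks booked cores per class on one node.
type NodeCores struct {
	Total     int // e.g. 10
	SharedUse int // cores booked by shared claims, e.g. 3
	ExclUse   int // cores booked by exclusive claims
}

// AvailableShared ignores shared bookings: shared claims can stack.
func (n NodeCores) AvailableShared() int { return n.Total - n.ExclUse }

// AvailableExclusive shrinks with every booked core of either class.
func (n NodeCores) AvailableExclusive() int { return n.Total - n.ExclUse - n.SharedUse }
```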
A: You see I have to use some index for exclusive, in this case zero. I don't know if it would be a useful feature to make it possible that, if another application comes later which has the same index — let's imagine the index is the claim ID — it can join this set; they can share the cores, similar to what you have with GPUs. That might be a possibility.
A: I don't know if there is a need for such kind of controlled sharing, but we could do it like that, so you could go directly onto the cores which a certain claim was assigned to. Another option, actually: if you look at the second application here, it requested four cores, and I could optimize a little bit to avoid the overlapping. So I could put this second core here, because this one is still free. I could do that, but it will use more resources on the system. So we could think about such kinds of options; I don't know if they will make sense, but it's something we could imagine supporting or not. Currently, in the initial scope, we are thinking to make that just exclusive.
A: Maybe with some possibility to share based on claim ID; and then the shared kind of pool, where you can put anything more or less — any kind of claim can use the shared pool if there are cores available. Right, that's a little bit more about the initial semantics we were thinking about with these shared and exclusive pools. So, in terms of a possible realization with DRA.
A: This is also very similar to what my demo today will actually be. The nice thing about the DRA side, regarding all the safety concerns and pod-spec things, is that we actually don't need any PodSpec extension with DRA: there is already a field called driverName in the resource class, which is sufficient.
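The point in miniature, as a trimmed-down sketch (the real ResourceClass type lives in the Kubernetes resource API group and carries more fields):

```go
// ResourceClass (abbreviated): the driver name alone is enough to route
// claims of this class to whichever plugin registered under that name, so
// no PodSpec extension is required.
type ResourceClass struct {
	Name       string // e.g. "cpu-pools"
	DriverName string // e.g. "cpu.example.com"
}
```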
A: We could basically provide a second socket for the CCI manager, where we can register drivers for CCI with exactly the same name as in the resource class, and then we can use this. It's completely the same mechanism; this is what I am doing in the demo, and we'll show it working.
A: The CPU manager is disabled, or set to none. Currently, in the prototype, I still did not implement the CRD communication of the state back to the scheduler controller; I'm running on just one node.
A: This is currently skipped in the demo, but basically we will see the DRA driver combined with the CCI driver, and I have two sets of callbacks: the node-prepare callbacks as part of the DRA driver, and then another set of callbacks, which are the admit and remove events coming from the CCI side.
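The two callback surfaces could be sketched as interfaces like the following; the signatures are simplified assumptions, not the actual gRPC service definitions.

```go
// draNodeCalls: the classical DRA node-side calls.
type draNodeCalls interface {
	NodePrepareResource(claimUID string) (cdiDevices []string, err error)
	NodeUnprepareResource(claimUID string) error
}

// cciCallbacks: the admit/remove container events coming from the CCI side.
type cciCallbacks interface {
	AdmitContainer(containerID string) error
	RemoveContainer(containerID string) error
}
```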
A: One thing here: if we have the two drivers — usually the way DRA works, if you want to take scheduling decisions with DRA, you need to provide state information through some sort of CRD.
A: Since our kind of drivers will more or less maintain the state, if those two things live in the same process — basically as two threads — we could build a channel between the two. That was just one thought. Currently it's not the case; currently I have another kind of communication, through a file, but it could be a channel or something else.
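A minimal sketch of that channel idea, assuming both halves run as goroutines of one process; it stands in for the current file-based communication.

```go
package main

import "fmt"

func main() {
	stateChanged := make(chan string, 1)

	// CCI-handler goroutine: signals after admitting a container.
	go func() {
		// ... admit the container and update local CPU state ...
		stateChanged <- "container admitted, shared pool shrunk"
	}()

	// DRA node-driver side: on change, republish state (e.g. via a CRD).
	msg := <-stateChanged
	fmt.Println("node driver would now publish:", msg)
}
```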
A: This is just an abstract box view of the whole thing. I have basically two binaries — sure, yeah, you could have two binaries: one controller binary, and I'm suggesting to put the node driver and the CCI driver implementation in the same binary. They can basically be two threads of the same binary.
A: It can basically do the pod admission and the allocation of CPU and memory resources; the CCI manager calls the admit and remove functions of this handler. So this handler will be called here, and if, let's say, I want to allocate some pods with the new kind of spec, I will get this event, and I could send a message through the channel to the node driver that the state changed. Then the node driver can share that back to the scheduling through the CRD, or something else.
A: Yeah, this was a slightly different view of it. The CCI resource driver is the second thread. We have these three functions defined inside: to admit a container; then to do some sort of mapping between the container ID and the actual container; and then to remove the container.
D: The way I was picturing this working, extending the concept of how DRA does things, is that this would now become your interface to your CCI manager, and there is no notion of NodePrepareResource / NodeUnprepareResource. This is your API. So I guess that's why I'm confused about the other slide: what are NodePrepareResource and NodeUnprepareResource doing?
A: So NodePrepareResource and NodeUnprepareResource are still the classical DRA driver. If you want to do some scheduling component for these config maps, or for this kind of attribute-based stuff, basically we can use those two functions, NodePrepareResource and NodeUnprepareResource.
A: Take that as a question: in the node drivers you're usually updating the state, like if you are claiming something. Where should we update the state? Of course we can update it in the CCI handler stuff and then send a message back to the controller, but isn't it the node driver that updates the state?
D: It advertises all the resources it has available through a CRD. The controller, when it's communicating with the scheduler and trying to figure out what node a specific pod should land on — a pod that has a reference to a claim that should go somewhere — consults the CRDs that have been advertised by all the nodes that have the kubelet plugin running on them, finds an appropriate node to put the pod on and to allocate the claim on, puts a reservation on that claim into the CRD, and...
D: ...sends that back to the scheduler so that it can actually schedule it. Once the pod lands on the node, the kubelet plugin that's running scans the CRD it has put out, sees what reservations are there, and makes sure that whatever claim is being requested to land on this node right now has a reservation for it that's been put in by the controller.
D: If there is, it goes off, configures the device the way it's been told to by the controller, generates a CDI device for it, passes that back to the kubelet, and then commits that this has been allocated by writing it back to its own section in the CRD.
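The per-node CRD protocol just described could have roughly this shape; field names are assumptions, not an existing API.

```go
// NodeCPUStateSpec is advertised by the kubelet plugin.
type NodeCPUStateSpec struct {
	AvailableCores int
}

// NodeCPUStateStatus carries the two writers' sections: the controller adds
// reservations, and the plugin commits allocations back into its own section.
type NodeCPUStateStatus struct {
	Reservations map[string]int    // claim UID -> reserved core count (controller)
	Allocations  map[string]string // claim UID -> cpuset, e.g. "0-4" (plugin)
}
```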
D: That's the CDI kubelet plugin for DRA, which is the only kind we have today, but it would be completely up to you to define what this API back to the kubelet is. Because of whatever driver name you put in your resource class, when your pod lands on the kubelet and it's trying to allocate claims of your type, it's going to know to call into the CCI manager, rather than the DRA manager, to handle these types of resources. And so the API you then define with your plugin can be whatever you want; you're not beholden to the NodePrepareResource and NodeUnprepareResource APIs like the CDI one is.
A: You're right, you're right. So you are suggesting to still create the controller and...
D: Use the controller for scheduling, in the same way that we do — and you can even, you know, continue using the helper library for the controller to help you do your allocation and communicate with the scheduler, to make sure that you get your resources — in your case, CPUs — allocated properly to a node. But once that process is done, you can define and decide whatever interface you want from your CCI manager to communicate out, however you decide to design your plugins here.
A: This might be feasible. We were relying on those two callbacks for now, just from the standard DRA implementation, but we could theoretically add callbacks which will do the corresponding kind of counting or communication back to the scheduler — yeah, basically callbacks to handle the scheduler data.
D: The plugin comes online and advertises resources through a CRD. The controller, when it is consulted by the scheduler to figure out where to allocate resources, looks at that CRD, makes decisions, and then writes the reservations for the resources it has allocated into the CRD. Then, when the kubelet has the pod land on it, it can call out to the plugin, and the plugin can see that these reservations have been made and do whatever is necessary to allocate them. I think that high-level process is completely doable in your world.
D: Yeah, because then at least what you have to change from the Kubernetes perspective is very small, right? All you end up having to do is add the logic in the kubelet that knows how to call out to a CCI manager instead of a DRA manager, based on what has registered itself with the kubelet.
A: This is as simple as it gets — basically a similar approach; we are calling these events already, so we need a little bit of additional work to call the events which we need for scheduling, more or less, but that's all. It should be easy to integrate, most probably, yeah. We can investigate that and see if we can get rid of...
D: Yeah, I missed the last part you said, about the scheduling something.
A: I was just saying that we can take that suggestion and try to implement it like that — yeah, basically to handle all the data calls that we need from the controller perspective in the same entity. For now we were just doing the admit and remove container calls, but theoretically we can handle the whole set of scheduling callbacks which we need for the attribute-based stuff besides that. It should be doable.
D: Yeah, I guess I'm confused by the word callback here, but yeah — the communication between the controller and the kubelet plugin, yeah.
D: Yeah, so I would highly recommend not having those communicate via gRPC, but instead asynchronously through a CRD. But that's an implementation detail.
D: A whole bunch of gRPC connections between a centralized controller and potentially thousands of nodes that have CPUs advertising through it is not scalable, and it's unnecessary as long as you have a proper protocol over the CRD for having them communicate. It's the way all the current DRA drivers that we've written so far operate: there's no gRPC connection between the controllers and the kubelet plugins.
D: The gray box you have here says gRPC, and then you talked about scheduling callbacks, but once you're on the kubelet there's no scheduling going on. So that's where, to me — if you're talking about callbacks that refer to scheduling, I would assume you're talking about some communication between the controller and the kubelet plugin, and then you said that would be via gRPC, and so I got confused.
A: Right, right, but we were thinking in any case first of the same path through a CRD if we are communicating between the CCI stuff and the controller. If we are calling between the CCI manager and the drivers, then it's gRPC, basically, yeah.
A: No, this makes sense; this is a good idea, okay. Right, so that was more or less the slide part, yeah. Maybe if there are some questions, further questions...
A: So what I have here is basically a custom-built Kubernetes which I started as one node. Currently I'm still not having one entity to handle it — we will implement that; it sounds like a good path to follow. What I will do basically — oops, sorry, I pressed the wrong thing — maybe as a short kind of explanation: I have a small test application, which is basically some resource class.
A: In the DRA resource class I have the driver name, which is some CPU driver, and this resource class is basically claimed for that pod through the standard DRA interface currently. And in here you see — currently it's a very simplified example, a similar kind of config-map struct — I want to claim five exclusive cores and 20 shared cores. That's the first thing.
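The demo's config-map-like parameters, sketched as a Go struct; the exact keys in the prototype may differ.

```go
// demoClaimParameters mirrors the simplified claim from the demo.
type demoClaimParameters struct {
	ExclusiveCores int `json:"exclusiveCores"` // 5 in the demo
	SharedCores    int `json:"sharedCores"`    // 20 in the demo
}
```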
A: That's what we want to try out. I have a small driver implementation with DRA, and currently my CCI driver is also part of the same package. Let me briefly show that. I did it very similarly to the standard examples of the DRA drivers, but now I have two routines — yes, currently I have two entities, but we will fix that: I have a DRA entity which I start with one goroutine, and then I have a CCI entity which I start with another.
A: Both are called by the corresponding managers. The difference is basically that the registration for CCI drivers will be done through this path — through the sockets put in this path — and the registration for DRA through the other one, the usual folder under /var.
A: The nice thing about it is that I can use the same driver name in that case, so there is no need to do something new in either spec. Currently the scheduling piece in the controller is not implemented, and it's just a one-node cluster, so the whole controller implementation is really just enough to be able to allocate a claim; as a next step it needs to become something more. So, my cluster is up and running.
A: You see that the bootstrapping actually works quite nicely, so I can start pods without problems. This is the first thing I'm going to try: let me close this window — a small pod which doesn't have a claim; I have to search for it, yeah. This is my small stress pod. If I start that, you see that I can deploy pods without issue, so this is basically going through with CPU manager set to none and CCI reacting to it now.
A: Now it starts and you don't see it come up on the cluster, and this is the actual effect which we are expecting. You will see under pods I have a pending pod, and hopefully the reason — yeah, it's currently complaining that the dynamic resource allocation driver is not up and running. We would have a similar failure with one entity.
A: Basically, when we have a CCI driver which handles the claims for us, it will fail very similarly to that. Now I can try to start the driver, or drivers, and you see the controller came up.
A: Something in my controller implementation is still not perfect, but this time you see that the first stage was the controller, and at some point it actually called the CCI driver. So it got the request, translated it into a more readable form for our driver, and after that it actually triggered a search for which cores are available and so on. So I got two sets which I can use.
A: The exclusive set is basically what I was requesting in the claim for a specific number of cores: this is the exclusive set, and this is the shared set. Those are returned back to the kubelet, and the kubelet can allocate them after that. This is basically the mechanism currently. I should skip ahead — yeah, and you see my pod came up.
E: Thank you for the demo. I think it will be helpful to start consolidating the last changes you made and the new direction you're following into the KEP, to help make sure there is a track record. It doesn't need to happen this week, but in general it would be good, so we can move forward more effectively as a community, I believe. Thank you.