From YouTube: Kubernetes Resource Management WG 20170613
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
All right, so this is the June 13th iteration of the Resource Management Workgroup. On the agenda today we have about four topics: an update on the resource class proof-of-concept work we've done here at Red Hat after our face-to-face, and then, time permitting, we'll get through the huge pages update, the device plugins proposal, and some updates on the CPU manager work that's been going on in the community.
A
Just some background on this work: we basically spun it up out of the resource management workgroup. We tried to come back here at Red Hat and figure out which things we could try to accelerate as a proof of concept before doing a full design. I think we had set out at the workgroup meeting that someone needed to prototype this to make sure the scheduler could make the decision, and that prototype has now been done, so he was going to walk us through his experience.
C
So we have two nodes, a 620 and a 730. On the 620 we have NVIDIA GPU devices with less memory, and on the 730 node we have one more device type, an NVIDIA GPU with a bit higher memory. Now we will try to create two resource classes: one requesting just any NVIDIA GPU, and another requesting devices that need higher memory, which can be satisfied only by the devices present on the 730 node.
C
That can be satisfied only by devices which are on the other node, the 730, not the first one. So let's see if the scheduler has computed this correctly. Yes, allocatable is eight for this resource class; we can see it has aggregated that in the status of the resource class. Now we are going to create some pods to test if the scheduler is able to correctly translate the resource classes into these devices.
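The two resource classes being demoed might look roughly like the following. This is only a sketch: ResourceClass was a proposal under discussion at the time, so the kind, field names, and property syntax here are illustrative assumptions, not a shipped API.

```yaml
# Hypothetical sketch of the two resource classes from the demo.
# The kind, fields, and property names are assumptions for illustration.
kind: ResourceClass
metadata:
  name: nvidia-gpu            # matches any NVIDIA GPU on either node
spec:
  resourceSelector:
  - vendor: nvidia.com
    kind: gpu
---
kind: ResourceClass
metadata:
  name: nvidia-high-mem       # matches only the higher-memory GPUs on the 730 node
spec:
  resourceSelector:
  - vendor: nvidia.com
    kind: gpu
    properties:
      memory: 16Gi            # minimum device memory; the value is a placeholder
```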
C
The node is also correct, the 730, for this pod, so now all of the requested devices are bound. Let's see if the status of the resource classes got updated correctly. Yes: for this high-memory resource class the request count has got updated correctly, and the request should get reflected in the other resource class also, because devices from both the nodes were satisfying that resource class; it was asking only for a type of device, any NVIDIA GPU, and not any specific constraint like memory.
C
It should fail; it should not be able to launch. And here it's saying insufficient nvidia-high-mem, so it has failed. Now, if we try to launch a pod which is requesting just an NVIDIA GPU, not the high-memory variant, it should get launched on the first node, which has NVIDIA GPU devices with less memory. So let's try to launch this pod, which is requesting the first, simple resource class. This got launched, and it got the correct device.
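The pod in this step might request the class as an ordinary resource. Again, only a sketch; the resource-name format used to reference a resource class is an assumption:

```yaml
# Hypothetical pod requesting eight devices via the simple resource class.
# The resource name format is an assumption for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: cuda-app
    image: nvidia/cuda
    resources:
      limits:
        nvidia-gpu: 8        # refers to the ResourceClass, not a raw device
```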
C
Now let's try to launch another pod with a similar request as the last one; it should fail now. Let me try to launch again the same pod spec, asking for the normal NVIDIA GPU, but it should fail because there are only four devices left, which are on the second node. So let's verify that it fails with the proper message.
C
Event message: yes, it has failed because eight are not available; we have only four left on the second node. So, since it has failed, let's try to get four units of this resource instead, not eight. This time let's launch the pod which is asking for four, and this has got launched. Now let's see the accounting of the resource classes; the requested count of both resource classes should be at maximum.
C
It should be equal to allocatable now, because we have consumed all the devices. So for this condition, note that for the first resource class, which is asking for any NVIDIA GPU, the request is at its maximum, and for the second class also, the high-mem one; both have reached their maximum. All these pods we launched got scheduled through the predicate, and these pods consumed the total devices. Now let's see, if we delete them, whether it releases back all the devices. And yes, this gets updated correctly.
C
Okay, so again the requested count is back to zero and allocatable is sixteen, which is constant, and for the other resource class also the requested count has gone to zero, because we have deleted the pods. That's all I have got working for now. I am working to test and cover other complex scenarios, such as grouped device resources and overlapping resource classes within the same pod request.
C
Yes, there is already one proposal out there, from me and from one NVIDIA guy. The idea is that every device vendor should come up with their plugin to advertise their devices. After a connection gets established between the kubelet and the device plugin, the device plugin's responsibility will be to discover the devices available on the node, and then objects for all these devices get created on the API server.
D
To pick up on that a bit: yeah, sure, it's an evolving thing. Some standardization is going to happen; vendors are going to standardize on some names, but that's not going to be enough. Depending on your deployment, you might want to have restrictions on what is available, or you might just open it up for all users. For a given deployment, at some point you want to use different combinations, whichever ones work best for your setup or cluster.
F
Sort of, and also, you know, it's not only that case. Imagine this: we have nvidia-high-mem today, and we have the whole cluster set up and everything. Then a year later NVIDIA comes up with a new version of the hardware, and if someone at the customer level wants to distinguish between the cluster's original hardware and the new version of the hardware, they have to come up with a new name again here.
H
Sorry to interrupt, but when I was building the device plugin proposal, that was one of the questions I was actually asking myself, and one of the solutions, or at least a partial solution, that I came to was that you should actually describe the type of hardware, or the kind of memory, in properties and not in names. The name should only reflect some kind of unique ID, and you should actually be able to select with the properties.
H
So, for example, you would say GPU, and then your property would say the hardware is Volta or the hardware is Pascal, or the generation is Kepler, the generation is Pascal. But I think we still have this problem where, if a vendor chooses to change the property name, then it becomes a problem; it somewhat defers the naming problem to the property. And you wouldn't say nvidia-high-memory; you would just say, I want an NVIDIA, and then with a property, hardware or memory equals 8 gig, exactly.
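The naming scheme being suggested here, an opaque unique ID plus selectable properties, could be pictured roughly like this. Every field name below is a hypothetical illustration, not a real API:

```yaml
# Illustrative device object: the name is an opaque unique ID, and
# selection happens on properties. All fields are assumptions.
kind: Device
metadata:
  name: gpu-0f37c6b2        # opaque unique ID, carries no semantics
spec:
  vendor: nvidia.com
  properties:
    hardware: pascal        # generation as a property, not baked into the name
    memory: 8Gi
```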
F
Sure, sure, yeah. I just wanted to say one thing quickly: I kind of like the properties approach, but at the same time the user should be able to omit all the details if they want. For example, a user may not care at all about the version or whatever; "just give me whatever GPU is available" is fine. That should be possible as well, and we should address both sides.
A
So, just as well, one of the things I was hoping we could maybe spend a minute or two on was the things that had to be added during this prototype that we didn't have previously captured. The sort of thing that stood out to me, in contrast to what we originally had written up, was that a status section was added to the resource class, and I wasn't really quite sure, if you can comment, why you thought that was necessary versus just the scheduler holding that status locally in its cache.
C
Because then the user can see what resources, and in what quantity, they can request in a pod. Otherwise how does the user know how much is remaining? A user may request it and it may fail, so it's just for the user's sake, so that she knows how much is left and what she can viably request in a pod.
A
So the scheduler is not necessarily using that to make the scheduling decision; it's still looking at the node-level individual counts. I guess what I'm worried about is it getting tightly coupled to things like the cluster autoscaler; I would expect that that number would go up and down all the time. So I'm not quite sure how critical the status section of resource classes is for driving actual system behavior, which is what we could potentially avoid.
A
In the interest of running through the rest of the agenda: okay, just to recap, here we said that with something like resource classes we needed to support heterogeneous device deployments. When we go back to our roadmap, I don't know if we end up saying that this needs to be here before or after GPU support ships; I think that's up for debate. But I think in the long run this is potentially important, and at least now we were able to prototype it, to know that we could differentiate devices in this way and that it's achievable.
D
It's that resource classes are essential if you want to have any more advanced scheduling. If we're okay with having primitive scheduling, where you just get one of N devices and you can't filter among those N devices, then the existing filters should be good enough. So it's up to us to decide when these additional scheduling parameters become necessary. I'll try to start a discussion this week as to where resource classes should sit in the roadmap.
I
So Derek previously presented the proposal for how to integrate huge pages as a first-class citizen in Kubernetes. Essentially there are two ways of using huge pages. One of them is directly, by syscalls, like the JVM uses, the transparent way, and the other way is to use them through a hugetlbfs mount.
I
This is the case, for example, for a DPDK application, which uses them in that way. So I am proposing another approach: basically creating a new volume plugin, hugepages, which creates a hugetlbfs mount and mounts it directly into the pod sandbox. Phase two would be implementing the admission controller, and I know the name is maybe not very cool, that translates the volume mounts into huge pages requests.
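Under this proposal, a pod using the new volume plugin might look roughly like the following. The volume type name and its fields are assumptions for illustration, not the final API:

```yaml
# Hypothetical use of the proposed hugepages volume plugin, which would
# create a hugetlbfs mount inside the pod sandbox.
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
spec:
  containers:
  - name: app
    image: example/dpdk     # placeholder image
    volumeMounts:
    - name: hugepage
      mountPath: /hugepages
  volumes:
  - name: hugepage
    hugePages:              # hypothetical volume plugin name
      pageSize: 2Mi
```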
D
It depends on what huge pages are, right? If huge pages are the kind of resource where you can have multiple sizes, and you then want more advanced scheduling primitives for them, then putting that in the volume substructure is going to complicate everything. The volumes section is great for the file-system part, because you're being given a share that gets mounted in, so that fits naturally, but the resource part doesn't.
D
I feel like we can follow that same pattern, where the node advertises what huge page sizes exist, and then use the same scheduling stack to schedule huge page volumes. Rather than going down the road of making volumes the abstraction, we could abstract them as regular resources and have the kubelet do what is necessary for that, I think.
A
You might need to do both. I don't know if we need to go too much further here, but I do not really want to have to pre-allocate huge page volumes for particular sizes, and I feel like that's kind of what you're suggesting, versus what Peter's demo was showing us: he was dynamically creating the volumes.
A
You're not dynamically allocating new huge pages; you had an existing reservation, an existing amount of pre-allocated huge pages. He was just saying that the volume mount backed by huge pages from that already pre-allocated pool can be created dynamically, but he's not dynamically creating new pre-allocated huge pages yet. Correct? Yes, yes.
I
I have pre-allocated the huge pages beforehand; that has to be done manually by the operator. But then I dynamically create the pods that are using those, so if I create another pod which requests the huge pages, it will just create another hugetlbfs volume and mount it directly into the sandbox. Yeah.
I
We can always go the route where you explicitly define, in the pod spec, the huge pages volume mount and the request size, if you want it to be accounted, because I just didn't want to force a user to specify it explicitly in their volume spec. So right here we don't specify the request, but you could. Yeah.
D
I mean, if you go down the volumes path, you could imagine using storage classes. For example, you can say that you have a huge page storage class of some size, and so by default you're going to get huge pages of ten gigs or whatever size you pre-allocated.
H
And I think it's more interesting too, because it seems like a lot of people want to go in different directions. I think I was mostly taking, in your case, driver installation, and I think there was one other part. But what I'm wondering is: is my PR sufficient, is its scope sufficient for everyone? Can I move forward with this PR, or should we build something more on top of it?
D
I would first try to nail down a solid set of requirements; I think that's what I would recommend: make sure that the requirements are agreed upon at the community level before you spend time on implementation again. I still feel like the requirements are not fully solidified in your proposal, so I think we need to attempt to finalize the requirements, because I don't think everyone got a chance to actually review it, given everything else that was happening.
K
Right. So what I meant is that we need to think a little beyond just GPUs; I'll try to make it as generic as possible to other types of devices with specific needs, like NICs, which I'm actually looking into this week.
D
And just to set expectations: I think this general proposal, and the work that you're doing, is going to be really critical for the project in general; it's going to lay the foundation for many new features that we want to implement. So it might take some time to finalize.
L
We don't have to do that, but it can be done, and it makes it easy to show right off the bat what the shared pool is. So there's kube-dns running here, and kube-dns is a Burstable pod. If I go down into the cpuset cgroup controller here, and then down into pods, first of all you can see there are three containers in the kube-dns pod, plus the pause container, so that's four.
L
So if I go in here and look at this, this is what we're calling a guaranteed container, which is kind of crossing the streams, since Guaranteed is a pod-level concept and QoS classes apply to pods. But in this case all I'm referring to is a container in a Guaranteed pod that has an integral CPU request; that's what we're deciding the static manager is going to lock on to right now. So basically requests equal limits, and so if I create this.
L
Okay, so now it's running, and if we go over here and look again, you can see that it's switched to only CPUs 2 and 3. That's because these Burstable pods are in the shared pool, and they've been shoved off of core 1, which was allocated to the guaranteed container. Is the bottom of my screen getting cut off? Can you see that bottom line with the cursor? Yeah.
L
To your question: if you don't use an integral CPU request, then you don't get it; you're not a guaranteed container. So, for example, a request of one will get you a core assignment like this. If you say one point one, you're in the shared pool and no part of your CPU request is fulfilled by a dedicated core, because it's a mess for CFS.
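As a sketch of the distinction: with a static CPU policy like the one described, the first container below would be eligible for an exclusive core, while the second would stay in the shared pool. The manifests are illustrative only:

```yaml
# Guaranteed pod with an integral CPU request: eligible for an exclusive core.
apiVersion: v1
kind: Pod
metadata:
  name: pinned
spec:
  containers:
  - name: app
    image: busybox
    resources:
      limits:              # requests default to limits, so requests == limits
        cpu: "1"
        memory: 256Mi
---
# Fractional CPU request (1.1 cores): stays in the shared pool.
apiVersion: v1
kind: Pod
metadata:
  name: shared
spec:
  containers:
  - name: app
    image: busybox
    resources:
      limits:
        cpu: 1100m
        memory: 256Mi
```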
L
Basically, you know, to have 100% of one core and then 10% of another one, having that asymmetric CPU set, the kernel scheduler goes nuts. I tried it, and it seems like it almost disables load balancing to some degree; especially if your workload is single-threaded, it doesn't make much sense. Yeah.
D
I mean, if it's single-threaded, then you just need a single hot thread. But if it's multi-threaded, and if your application is smart enough that it is able to figure out which exclusive core it gets, and then it pins the important threads to that core, the rest of its threads can run on any core and it still benefits. How would it know which?
M
For your pinned set, you can at least look at /proc/self/status; there you can see your CPUs-allowed mask. In terms of knowing what the exclusive core is, that's something that we could get pushed down into an environment variable by the static policy. That won't change during your lifetime; that's a guarantee that the static policy gives.
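A process can read its current pinning like this (Linux only; on a node running the static policy, an exclusively pinned container would see just its assigned cores here):

```shell
# Print the set of CPUs this process is allowed to run on.
grep -E 'Cpus_allowed(_list)?' /proc/self/status
```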
L
It's really a nightmare. I would have to delineate all the problems as I encountered them, but allowing the mixing of exclusive cores and shared cores really doesn't work very well from the scheduler standpoint. With CFS, for example, what I'm doing right now is: if it's a guaranteed container, I'm not setting CFS limits on it, because it's implicitly confined to one core and therefore cannot use more than one core, and so I don't have to use CFS quota.
D
But that's an application choice, right? If you really, really want CFS out of the way, then always request integral cores. An application may not get integral cores all the time, and some part of the application can survive on shared cores; it is a bit of an application-level problem.
L
There are applications that already exist today that look at /proc/self, like Connor is suggesting, and figure it out. And if you don't allow mixing of exclusive and shared cores in a pod, then the application can assume that all of the processors it's allowed to run on, if you configured it this way, are exclusive, so you don't have to do this guessing.
A
What I wonder about is, I think, a common application type: like, doesn't the JVM just look at the processor enumeration and assume it has full access to those? I'm wondering about some of these common middleware platforms, how they might decide how much they can slice and dice, where I don't know if the majority of applications are as savvy as the ones that you're working with, yeah.
D
All I'm saying is this: I'm not saying that the use case of getting CFS out of the way as much as possible is not valid. I'm saying that there is also an additional use case where you might want to run part of your application on shared cores. Even that aside, the other issue that has been brought up here is that of a lack of standards.
D
There are no standards in the container APIs around resources; we're adding new primitives and policies as part of Kubernetes, and probably other orchestration systems out there are doing the same, and applications in general are not container-aware. So we're going to have this problem for legacy apps. We have to set the path for modern Kubernetes apps that are being rebuilt for Kubernetes and give them a good story, and on the other hand also figure out a transition path for legacy apps.
A
I guess I wonder what people expect at the pod level: is it a bad choice for people to see the pause container get pinned or not, if that's not explicit? In a multi-container application, I didn't even know which container wanted to be pinned; I might just want to share a core across them as a conscious choice.
D
The reason I ask is that QoS is at the pod level, so I'm trying to understand how the policy looks as part of this proposal. If a pod is Guaranteed, and all its containers have integral core limits, then they will all get exclusive cores?
A
That was the other question I had. I don't know, my question is kind of answered now, but I think it's worth talking through. I know me and Connor met to discuss the idea of whether we needed to move everything off of the core before starting the pod if it had exclusive cores; maybe you can talk through the rationale for the decision we made at that time and why we made it.
L
Right. So we made the decision, and this is the way it is in the proof of concept, that the CPU set needs to be configured before the container starts. The cpuset is actually configured between CreateContainer and StartContainer in CRI terms. That way, when the application comes up, it can probe procfs to figure out which CPUs it's confined to.
A
I think right now let's just do it at the container level, because it sounds like Docker was passing on all allowed CPUs; we're asking if we can get that bug fixed so we can iron out that detail. Maybe we can only do it at a higher level, but you'd want to do it at the container level if you want to leave a sidecar container in the shared pool.
L
There are ways of doing it; that was kind of a detail, so I didn't implement it in the proof of concept. But you can look at core siblings in /proc/cpuinfo and check: if siblings is twice the core count, then you can assume hyper-threading is enabled. Okay.
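The heuristic described can be sketched in a few lines of shell (Linux on x86; assumes /proc/cpuinfo exposes the "siblings" and "cpu cores" fields):

```shell
# If "siblings" is twice "cpu cores" in /proc/cpuinfo, assume
# hyper-threading is enabled.
siblings=$(awk -F': ' '/^siblings/ {print $2; exit}' /proc/cpuinfo)
cores=$(awk -F': ' '/^cpu cores/ {print $2; exit}' /proc/cpuinfo)
if [ -n "$siblings" ] && [ -n "$cores" ] && [ "$siblings" -eq $((cores * 2)) ]; then
  echo "hyper-threading: enabled"
else
  echo "hyper-threading: disabled"
fi
```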
A
Okay, I think we're up on the hour; good to have had a full agenda. In the interest of getting people out on time, I will end the meeting now, if that's okay, and turn to next week: if folks want to add additional topics for next week, or follow up on the device plugin proposal, feel free. I will get this recording uploaded this afternoon. Any other last words that people have?