From YouTube: Mesos Containerization WG 06 01 2017
A
All right, I think we should get started. Today is June 1st, 2017. This is the Mesos containerization working group meeting, and just a reminder: this meeting is recorded, and the recording will be published later. So, can you see my screen?
A
Yes, okay, cool. So I think we have two agenda items today. One is: Dmitry is going to talk about his design proposal for CPU pinning; there's a link to that doc. And if we have more time, we're going to go through this planning sheet. So basically, we have the matrix; I put it into a spreadsheet, and I'm going to track the progress there in that spreadsheet.
A
So if we have more time, I'm going to try to update that spreadsheet with status updates. If you have any more items, feel free to add them to today's meeting agenda, or to a future meeting's agenda.
If you want to talk about something. So this is the tentative agenda today, and I think we should go first with the design proposal discussion from Dmitry. Dmitry, do you want to share your screen, if you want to present anything? Yeah.
C
For us, as stated here in the doc, people are probably most interested in memory locality, process isolation, and device locality; most rigorous work is on GPU-related stuff. That's not the current target for us; we're mostly interested in memory locality, and maybe in process isolation.
C
Why we need this: we did some experiments where we pinned processes to the memory bank, and this really helped compute performance. There was one interesting scenario where our CPUs were pinned to the first eight cores; that was related to the network, a sort of special configuration for us, so it's not addressed in this proposal. We're mostly focusing on memory-bound processes, which really can be improved with NUMA pinning. So here is a simple Python script: it uses NumPy to sort, and it generates random numbers.
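The script itself isn't shown in the recording; a minimal sketch of that kind of benchmark (the file name, sizes, and structure here are assumptions, not Dmitry's actual code) could look like:

```python
import time
import numpy as np

def sort_benchmark(n=10_000_000, runs=3):
    """Time sorting n random floats; for large n the sort is
    memory-bandwidth-bound, so NUMA placement shows up in wall time."""
    timings = []
    for _ in range(runs):
        data = np.random.rand(n)      # pages are allocated NUMA-local by default
        start = time.perf_counter()
        np.sort(data)
        timings.append(time.perf_counter() - start)
    return min(timings)

if __name__ == "__main__":
    # Run under numactl to compare local vs. remote memory, e.g.:
    #   numactl --cpunodebind=0 --membind=0 python sort_bench.py
    #   numactl --cpunodebind=0 --membind=1 python sort_bench.py
    print(f"best of 3: {sort_benchmark():.3f} s")
```

Running the same script with local versus remote memory binding is what makes the locality gap visible as a wall-clock difference.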
A
Okay, so can I ask a question? So you're saying, I see, that there's a huge difference in terms of node-local accesses. What is that, like a cache miss or something? So, basically...
C
If you have a processor which contains several cores, and two sockets, then if you access memory which is local to another node, it has to travel through the bus and all that stuff, so it's really slow. But if you access your local memory, you have a direct connection to it, and then it works faster.
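On Linux this topology is visible in sysfs; here is a small sketch for listing which CPUs are local to each NUMA node (the sysfs layout is standard, but the helper itself is ours, not from the proposal):

```python
from pathlib import Path

def numa_nodes(base="/sys/devices/system/node"):
    """Map each NUMA node to the CPU list local to it, e.g. {'node0': '0-7'}.

    Memory owned by a different node's CPUs is reached over the
    interconnect, which is the slow path discussed above.
    """
    nodes = {}
    for node_dir in sorted(Path(base).glob("node[0-9]*")):
        cpulist_file = node_dir / "cpulist"
        if cpulist_file.exists():
            nodes[node_dir.name] = cpulist_file.read_text().strip()
    return nodes
```

Taking `base` as a parameter keeps the helper testable without real NUMA hardware.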
C
I think basically there are some other workloads which are also going on, yeah. We maybe have some interest in process isolation, but that's a kind of tricky situation: if you assign a process to just a subset of CPUs, then that process cannot burst, and that could be an issue. So I'm not sure if we want this or not, yeah.
A
I think this is a discussion that we can have. Especially, I think the Kubernetes community has had that discussion; I think you have that link in the doc. I think it's really interesting to understand what their rationale is. If I go through the issues they have (there are a lot), and also the notes from their face-to-face meetings, there are a lot of interesting notes there. And by the way, we should take notes. Who can take notes? Gilbert, can you do that? Maybe take some notes.
A
The use cases are from memory locality, so it sounds like this doc's title should be "memory pinning" rather than "CPU pinning". I'm half joking, based on the content of this doc.
A
I think it's not just for memory; I think it's also for GPUs, if you have this topology. I think Kevin is here too, so he can comment more on this one. It's not just for CPU or memory; for example, maybe in the future there are some other devices, and you have some affinity there, so that you get some speedup depending on how you divide them across different NUMA nodes.
C
About the existing systems: I checked Kubernetes. There was a proposal, you know, to sort of expose cpuset masks, and people with Borg experience opposed that. I'm also not a fan of this, because, well, it would be difficult: if your system is running and a task finishes, you have a gap; basically, you have fragmentation of CPUs, and now it becomes difficult. The thing needs to defragment them, and how do you expose these resources to the framework?
C
They said this is too low-level: you have less control over things when you do it like this. What they wanted, basically, was some high-level requirement where you express the intent, like "I'm a latency-sensitive service", "I'm a memory-bound service", or "I'm a compute-bound service", and then, based on this, the system decides what the placement should be.
A
Exposing a cpuset Docker parameter: basically, some people want to say, hey, when I launch a pod, I can set cpuset equal to something, so that they can pin a worker to a given NUMA node or a given CPU. I think the pushback on allowing everything like that is that they don't want to expose that low a level of control, because they have some optimizations in mind, something very smart, and if they expose...
A
...the cpuset information at the API level, then they don't have the opportunity to do those optimizations in the future. Based on that reasoning, they said it's better not to expose those at the API level, and they will figure out how to do that later on. I think that's the main pushback from the Borg people, as I understand the thread of discussion.
C
I also selected a paragraph I found in the meeting notes that gives some insight into what they do: they collect some statistics, I guess, and based on the behavior they make some decisions in the system. So at some point this problem is really automated: you just give some hints, or describe your workload, and it just works.
A
Yes, it would be great if we can digest that documentation. I think, as I said, in the documentation, especially the notes from their face-to-face meeting, there are a lot of interesting points. I think we should do a synthesis of what the points are. I think they mention something about constraints on their resource model; they're discussing resource classes, things like that. So it would be great to have a synthesis of that discussion.
A
Somehow, because I think it's really very relevant to this discussion, and it's always beneficial to learn from others. So I'll suggest: let's do some exercise to go through the notes and their issues to figure out what the exact arguments against this are, and what their potential proposed solutions are, and then we can have a discussion to see how well it can fit into our model of resource management. I think that's my suggestion here.
A
That's fantastic. It sounds like there are multiple issues here. First of all, I think you care about locality, because you want to make sure that the workers are placed on those CPUs that are close to the memory node. And I think you also care about CPU interference: you don't want the CPUs used by those cache services being used by some other services. Is that something you guys want as well, like dedicated CPU cores, so that you don't allow other tasks to burst into those cores?
A
Well, that's not really the case, because this cluster runs a uniform class of workloads. Okay, so you don't want that? It's okay that some other work will burst into those cores, potentially affecting the performance of those cache services, but you guys are okay with that? What we...
A
I think it's not about the cache service itself; it's about whether you want to make sure that other services, or anything else on the host, will not be able to use the CPU cores reserved for the cache service. Do you want to guarantee that? (I think yes.) Okay, so you want both: you want memory pinning. Okay, yes. To me this is like a separate issue. One is locality: how do we expose it?
A
How do we expose the topology information? And the other thing is: how do we make sure, how do we reduce the interference, making sure some resources are dedicated and no one can burst into those resources? I feel these are two separate issues, but I could be wrong. Anyone else have any comment on this one?
H
It's only tempting to me just because, while you might solve that first one, we could do CPU sets such that you're not actually dedicated on those cores. It sounds like maybe what's being done for Dmitry is they're running on dedicated hosts for a particular service, and so they're kind of avoiding that problem; where someone bursts into the core, they seem to have a different solution for that problem.
C
In my experience, the cores were assigned exclusively to the process. So basically, I was running four processes on two NUMA nodes; each process was assigned to half of a NUMA node, and that worked pretty well. So the only interference that was possible is from the operating system, or some other stuff running outside of the cgroup, right.
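The mechanics described here map onto the Linux cpuset cgroup controller. A rough illustration of that kind of pinning (cgroup v1 paths; the names and values are illustrative, not the actual isolator code):

```python
from pathlib import Path

def pin_cgroup(name, cpus, mems, root="/sys/fs/cgroup/cpuset"):
    """Create a cpuset cgroup restricted to the given CPUs and memory nodes.

    cpus/mems use the kernel list syntax, e.g. "0-3" or "0,2".
    In the setup described above, each of four processes got half a
    NUMA node's cores plus that node's memory, keeping allocations local.
    """
    cg = Path(root) / name
    cg.mkdir(parents=True, exist_ok=True)
    (cg / "cpuset.cpus").write_text(cpus)   # e.g. half of NUMA node 0
    (cg / "cpuset.mems").write_text(mems)   # keep memory on the local node
    return cg

def attach(cg, pid):
    """Move a process into the cgroup; future allocations follow cpuset.mems."""
    (cg / "tasks").write_text(str(pid))
```

On a real host this needs root and a mounted cpuset hierarchy; taking `root` as a parameter keeps the sketch testable.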
H
Got it. And do you have any thoughts on, within the container (let's say you have four cores within the container), ensuring that for those four cores the container is not interfering with itself? Are you binding processes to cores within the container?
C
Or are you just... So there was just one process per container, and, okay, I think it was exclusively using the cores. Well, "exclusively" is probably the wrong word, because basically the cpuset under the Mesos hierarchy was the only one using them; other processes running on the host could still use those cores, right.
H
It would be helpful for me, when looking at something like this, to have the use cases. I actually was guessing that this would be more for low-latency workloads, like a cache, or high-frequency trading, or things like that. So, you know, it turns out, looking at this stuff, there are actually some memory-throughput workloads as well.
A
I think that makes sense. I think the other thing is: I kind of agree with the Borg people that probably exposing the CPU set itself is too low-level, and that prevents us from doing some future optimization, if possible, in the future. I feel like we should think about how we expose that at the API level, rather than just exposing that low-level CPU set directly to the user.
A
So that's my kind of feedback, and I think one of the things that we should definitely do is try to look through those notes (I think those are really valuable notes) and try to figure out what the thoughts of those people are, and see if we can borrow anything from them and what makes sense for us in our model.
C
From which unit do you want to select CPUs for assignment? If it's from the whole system, then you can land on too many CPUs; if it's from a single NUMA node, then you will use only CPUs in that NUMA node, for example. This also allows scenarios like: you want to bind to half of a NUMA node and use those cores exclusively.
C
No, there were no changes to the allocator; it's working based on the CPUs resource, and that's all. But then, when the workload lands on some node, the cpuset subsystem does all the job. (Okay, I see; go ahead, yeah.) So it looks at the assignments of all containers on this system and tries to balance them.
C
If there are free resources that satisfy the constraints, then it just assigns CPUs, and that's all. If it couldn't do this, then it needs to rebalance: probably try different assignments of processes to nodes, and it could find a solution. But the problem is that this is actually a bin-packing problem, which is...
A
NP-hard, yeah. Yeah, okay, I see. So you're saying it's not an allocation problem, because the allocator isn't aware of those things, but the isolator, or whatever agent is running on the node, will need to figure out the placement based on those constraints, and it's an NP-hard problem on top of all that. Yeah, for small...
C
It works for small systems, because if you have two NUMA nodes it's really easy to solve this, but for large systems, yeah, it could be left working slowly. I tried a heuristic, like best fit: assign a process to the node which would have the least free CPUs left after the assignment. But you know it's not an exact solution, and it could fail sometimes when an exact solution exists.
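The best-fit heuristic described here can be sketched roughly as follows (the data layout and names are our assumptions, not the prototype's code):

```python
def best_fit(requests, free_cpus):
    """Assign each CPU request to the NUMA node that leaves the fewest
    free CPUs afterwards (best fit). Returns {request_index: node}, or
    None if some request cannot be placed; as noted above, the heuristic
    may fail even when an exact bin-packing solution exists.
    """
    free = list(free_cpus)          # free CPUs per NUMA node
    placement = {}
    # Placing large requests first tends to reduce fragmentation.
    for i, need in sorted(enumerate(requests), key=lambda p: -p[1]):
        candidates = [n for n, f in enumerate(free) if f >= need]
        if not candidates:
            return None             # would trigger rebalancing / task failure
        node = min(candidates, key=lambda n: free[n] - need)
        free[node] -= need
        placement[i] = node
    return placement
```

For two NUMA nodes this is effectively exact; the heuristic's weakness only shows on larger node counts.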
A
Another question I have is: is it always possible to find a solution, or is it possible that there's no solution? (It could be the case that there is no solution.) Okay, how do you deal with that case, if there's no solution, if you cannot find a placement that satisfies all these constraints? Yeah.
A
So for my prototype, I just fail the task and rely on the scheduler to reschedule the task to a different node, so that hopefully it can be solved. (Got it.) That's a little unfortunate; the framework doesn't know what the solution should be, but this is a little important. I wish this kind of constraint could be surfaced to the allocator, so that when the allocator makes those kinds of allocations, it takes those things into account. I'm not sure; I'm just throwing out ideas and comments here. Oh yeah.
F
So there's another dimension, I think, in this binding stuff, which is hierarchy: NUMA and network resources. By the way, I thought about Dmitry's notion of distributing across NUMA nodes, as to why you'd want to do that. Well, one of the reasons you'd want to do that is because you have network cards attached to NUMA nodes. So if you're going to express topology in this very fine-grained manner, I think one of the things you want to express is IO resources as well.
C
You're referring to this discussion, right? Yeah, yes, yeah. So, as far as I understood, it was trying to calculate, to solve some problem to find common resources, and then expose them as oversubscribable resources, and then this could be used by whatever framework. I didn't like this solution, actually, because it seems non-deterministic to me.
A
It would be great to have some background on what they did, especially since I don't understand what they're doing. So, James, maybe you can take a look? But I haven't dug deeper into that article.
F
I think I haven't actually looked deeply at the code; I just glanced at it, so don't fully trust me, but my impression was that they look at the resources used by the task and they apply a kind of hand-wavy rule. They go: hey, this task needs, like, two CPUs, so let's unpin it and pin it to two CPUs.
F
So the idea is basically using what you know about the task resources to automatically pin to specific hardware resources. You're probably not going to get as strong results as if you're expressing something directly from a scheduler, but maybe you would automatically get better results as a rule, yeah.
A
I think that's the idea, I thought so, yeah; that's exactly the idea found in those notes. I think there's a high-level place where the node will figure out the assignment magically based on some signals, and that signal can be anything, hardware counters or whatever, and they don't allow... Sorry, I have to move to a different room; I got kicked out. Sorry about that, I have to move to a different room.
A
All right, sorry about that. Yeah, I saw that discussion. One approach is asking the node to figure things out and magically assign all those cores, maybe based on some high-level hint from the user, like a class of service, QoS classes, things like that. I think that's the approach that Borg took as well; that's my understanding of Borg.
A
Maybe, if that's possible; I don't know what they did, but yeah, that's one approach to solve the problem. The other is letting the user pick, versus the system just magically picking and doing the assignment based on some heuristics. And the downside with that is you don't have predictability, so you're not guaranteed anything, but in most of the cases the performance should be okay. So you have some benefit, but the cost is you don't have predictability; it's a trade-off, yeah.
F
I think the other downside to that is you're not able to possibly oversubscribe anymore. I know for a deployment we basically, yeah, we basically give everybody, most tasks, a very small amount of CPU shares from the resource allocator, and, you know, assume that it may be okay for them to burst into the free space.
A
To do that, I think Kubernetes does it like this: they define three classes and basically say, if it's best-effort, I don't set... I can set the CFS quota for the best-effort class, but I don't; I set a very small share, but they can burst into using a lot of CPU if there's no other workload using that box.
F
I think there's an argument to be made for a class of tasks which requires pinning (I believe the processes which require pinning need dedicated, pinned resources), and a separate class of tasks which is a lot more fuzzy in the head, worried about how much they're actually going to use, whether they really need it or not, right there, yeah.
A
I think, yeah, like someone mentioned, one idea is to have two pools. One is dedicated, where tasks have exclusivity on their cores, and the second pool is a shared pool where everyone can burst and interfere with each other. I think, with what was mentioned, except one thing: the Borg people mentioned that they did that at Google, and they very much regret it and desperately want to revert it. I don't know why; they didn't mention why, but that's what they say.
F
Very similar to, you know, the problems Dmitry's addressing with these two proposals, right? Yeah. So I wonder, too, how far you could get if you're able to express just the difference between a task that requires pinning, just a flag to say "this task requires dedicated resources" in some sense, where we'd definitely look at that and automatically do some pinning, as opposed to a task that is happy to just get burstable resources.
C
Yeah, that could be done relatively easily, but I don't have requirements like that. A lot of people like this idea of distribution, but I don't really have a use case for it. I mean, what we really need is hard pinning to a NUMA node, pinning processes exclusively onto NUMA nodes, but maybe someone else has different use cases. And if that is the case, then you can just have, like, one locality class in the task, and that would actually be sufficient, but this would need to be extended in Mesos, yeah.
A
And we do have a use case for GPUs, where we want to somehow expose the topology information, because for GPUs, if you use two GPUs over a PCI bus, it's going to be really slow compared to... It's very similar to memory: you have some topology of those GPUs, and if you use two GPUs close to each other, then that gives you a lot of performance improvement versus two GPUs connected by a cheap PCI bus, things like that.
G
The way that I was picturing it was to send along whatever topology information we could (I don't know what the format would be), but I was picturing sending that along with every offer, and then you could have a scheduler make some decisions about where it would prefer to schedule the jobs and what specific resources it would want. And if, you know, the scheduler can honor that, then, you know, that's... we'd...
A
We'd have to deal with that separately, yeah. But I think the question, I think, and the issue, is that Mesos needs to make allocation decisions as well. When Mesos is making an allocation decision, how does our allocator decide to give one framework resources that have close topology?
H
So one interesting thing about GPUs is that we're basically doing device-level allocation already, right? Like, we don't let you consume half a GPU; it's not a fractional resource like CPUs are.
H
The only thing is that, effectively, the decision about which GPU we give you is occurring in the isolator right now, and if there are different cards on the machine, that might not be what you want, right? You might want a specific card that matches your workload, but you can't currently choose which card, because we're doing that in the isolator. So maybe for GPUs it makes more sense to be exposing something like: this particular GPU device is connected to this other GPU device with an NVLink.
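A toy sketch of topology-aware GPU selection at the agent (the link data, function, and brute-force approach are invented for illustration; this is not Mesos code):

```python
import itertools

def pick_gpus(n, links):
    """Choose n GPUs maximizing NVLink connections among them.

    links maps each unordered GPU pair (a, b), with a < b, to "nvlink"
    or "pcie". Brute force is fine for the handful of GPUs in one box.
    """
    gpus = sorted({g for pair in links for g in pair})

    def nvlink_edges(subset):
        return sum(1 for a, b in itertools.combinations(subset, 2)
                   if links.get((a, b)) == "nvlink")

    return max(itertools.combinations(gpus, n), key=nvlink_edges)
```

With a selection like this in the isolator, a task asking for two GPUs would get an NVLink-connected pair when one exists, falling back to PCIe pairs otherwise.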
A
Yes. So are you saying that the device assignment for those GPUs can be done at the node, at the agent, in the isolator? I see. So then we can just put our logic there to make sure that when a task has asked for two GPU resources, we make sure they're close together.
G
For the MVP, we kind of ignored this, because for most workloads we looked at, they either wanted one GPU, where this doesn't factor in, or they wanted all the GPUs, which means they're getting all of them anyway; some of them are connected by NVLink, some aren't. But once you move to, you know, having lots of GPUs on a single machine, and people want to divide them up amongst their applications, that's when it really starts to matter.
A
Okay, okay. I think we only have 10 minutes left. I think that was a good discussion; I'm not sure how good the notes are, but I think the action item is, Dmitry, my suggestion is that we should definitely look at those notes that you posted, like the links that you posted, especially the discussion that the Kubernetes community has had, and see if we can synthesize those discussions into kind of a talk, so that everyone can digest what their solution is.
A
...whatever their proposal is, and we can move from there and see what's best for us. I think that's my proposal for solving this, as a first step. And, I mean, it would be great to list the pros and cons they see for each potential proposal they have right now.
A
The other thing that we should follow up on is collecting all these kinds of use cases. I think we have a concrete use case for GPUs, and you have a concrete case for CPU NUMA nodes, and if anyone has any other use cases, feel free to create a doc collecting those use cases, or maybe attach them to the JIRA ticket and collect those use cases there, so that we know what the problems are.
A
Cool, okay, sounds great. So we have seven minutes left. I do want to thank Dmitry for sharing this; I really appreciate that. I think we have two follow-up items: one is to collect use cases, and the other one is to try to synthesize the discussion in the Kubernetes community regarding this topic and see what the options are, and then we can go from there.
A
This is, for me, a spreadsheet to track who is working on what, and this is based on the discussion last time. So I think one thing we should do is, at every working group meeting, let's go through this doc and see what the status is of those things that we are trying to do, so that everyone is on the same page. Right now, for CPU affinity we're in design, and we just had a discussion today. And if you want to join one particular feature effort, feel free to put your name there and contact the owner.
A
So I think what I plan to do is try to have a link to a JIRA ticket, if possible, for those items; if not, I'll try to add some description there if there isn't an existing JIRA ticket. But I think, ideally, we should have a ticket for each of those items, so that I can have a description there, like what...
A
...what it is and why it's important, things like that. I think for ownership, maybe the owners should update this doc to have a link to the JIRA ticket. I'm sure most of the items have a JIRA ticket already; there might be some issues that don't have any JIRA ticket. I can try to do a pass before the next meeting to add a JIRA ticket where possible, but I want you guys, especially the owners, if you're working on that feature, to make sure there's a link to the JIRA ticket, like in the description here. Does that make sense?
A
Okay. So, I'm not sure if Todd is on the line; I'm just trying to go through this doc. I think the second item is the volume group ownership work coming up. I think I chatted with Vidya, and he agreed to do a presentation next week, at the next working group meeting, on this topic, so we'll see what we can do, especially doing some background work and seeing what we can do to clean up the volume group ownership issues. So the next item, I'll say, is the documentation, which has started. So, Gilbert...
A
What's the status of the documentation? Are you still working on that? (I have a couple of patches which were discarded, but I can reopen them; they contain the explanation for each isolator, and there's a matrix for all the isolators we currently have, almost around, like, 20 isolators. So yeah, that's my...)
A
We can ship it for 1.4. Okay, so if you're working on it now, do you need any help from anyone else? I think Tobias and some other people offered, at least offered to help review that documentation. I think we can break the documentation into tickets and make it a couple of separate items, so different people can pick up different pieces. (Yeah, I can definitely set up the sharing.) Okay, so you're still working on that one. So you still have...
A
Just recently we had a sync, and then we break the different implementations into pieces so that we can have different progress on which stage we are currently in. So for the whole feature, this still targets 1.4, yeah. I think another thing I want to mention here is that Qian is working on OCI support right now.
A
It's also interesting, maybe, if you collaborate with Qian on this one, because he'll do the same thing for OCI as well. I mean, maybe it makes sense to just consolidate with the OCI implementation, so maybe chat with him. (Yes, I will put Qian in the loop.) Okay, yeah, just make sure Qian is in the loop, because he's working on OCI support right now. Cool. So I think we just chatted about CPU affinity; we have some action items there, I think, on design.
A
That's good. And for support for registering Docker credential helpers, I had a chat with him; I think he's not on the line. I did not have the chance yet: he provided a link to a GitHub repo, but I did not have the chance to look at that repo yet. Okay, okay.
A
Okay, so I think we are out of time. I'll reach out to some of the people for those high-priority items and see what the status there is, and try to update this doc, and I'll take the action item to have links as soon as possible for those items, so that everyone is on the same page. All right, thank you very much for the discussion today; it was really great, and I'm so happy that you guys could join this meeting. Hopefully I'll see you again next time, in two weeks. All right.