B
Thank you, and thank you everyone for joining. I really intend for this to be a short session, mostly to report on the progress we've made towards providing better usability for virtual GPUs, and in KubeVirt in general. But before we jump into the new changes we've made, I wanted to take a step back and recap the already existing support for host devices and mediated device assignment in KubeVirt, simply because this new work builds on top of that.
B
Instead of letting the administrator deploy a dedicated device plugin for every device on every node, we've introduced an API with which the administrator can simply list the devices permitted in the cluster, with the corresponding resource name. KubeVirt will then discover these devices on the nodes and start a device plugin for this resource, and of course this configuration can be done outside of KubeVirt completely.
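The permitted-devices listing described here might look something like the following fragment of the KubeVirt CR. This is a sketch: the stanza follows KubeVirt's `permittedHostDevices` API, but the vendor selector and resource names are illustrative placeholders, not values from the talk.

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    permittedHostDevices:
      pciHostDevices:
      # PCI devices are matched by vendor:device ID and advertised
      # in the cluster under the given resource name.
      - pciVendorSelector: "10DE:1EB8"              # illustrative ID
        resourceName: "nvidia.com/TU104GL_Tesla_T4"
      mediatedDevices:
      # Mediated devices (e.g. vGPUs) are matched by their mdev type name.
      - mdevNameSelector: "GRID T4-1Q"              # illustrative type
        resourceName: "nvidia.com/GRID_T4-1Q"
```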
B
For anyone who is interested in running their own device plugin and taking care of the allocation there, all the administrator needs to do in this case is list the device and indicate that the resource is being provided by an external device plugin. In that case, KubeVirt will just permit the device to be used in the cluster, so virtual machines can still request this device, but all the handling will be done by the external device plugin. And of course, the removal of any entry in that API will lead KubeVirt to disable the relevant device plugin. So here we can see an example of a virtual machine instance requesting an existing resource. For example, on the left side, the virtual machine instance is requesting two of these devices, and it will land on the node where these devices are being advertised.
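The request on the left of the slide might look roughly like this VMI fragment. The `gpus` stanza is KubeVirt's device-request API; the resource name, device names, and memory request are illustrative assumptions.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-gpu
spec:
  domain:
    devices:
      gpus:
      # Each entry requests one device of the advertised resource; two
      # entries means the VMI can only land on a node exposing two of them.
      - deviceName: nvidia.com/GRID_T4-1Q   # illustrative resource name
        name: gpu1
      - deviceName: nvidia.com/GRID_T4-1Q
        name: gpu2
    resources:
      requests:
        memory: 2Gi
```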
B
So, to create a virtual GPU, a relevant driver needs to be installed on the host. Some vendors already have their drivers as part of the kernel, which makes it very easy to consume.
B
In the case of NVIDIA, for example, there's a need to install the driver separately, and the creation of virtual GPUs also requires a license. But generally speaking, every GPU card that supports virtual GPUs can be partitioned differently, depending on the driver.
B
Each of these partitions is presented as a different type, and these types are pre-created by the driver. The types differ from each other in frame buffer size, in the resolution they provide, and in density.
B
Yeah, so it's really up to the administrator to decide which types will become available in the cluster.
B
It's up to them to decide what exactly the cluster needs, depending on the density or perhaps the use case, whether it serves virtual GPUs for, say, training models or anything else. On the other hand, the creation of these virtual GPUs in general is pretty straightforward; the problem arises when you have a node with multiple cards on it, which may differ. There can be different cards on that node that serve different types of GPUs, and there can of course be multiple nodes that serve multiple cards. This is the problem that KubeVirt is starting to solve with the newly introduced mediated devices configuration API.
B
So with this API, all the administrator needs to do is list the desired types that they would want to become available in the cluster. These types have a priority by their position in the list, so the first one will have a higher priority for configuration than the ones listed below it, and this configuration can be expanded, too.
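A sketch of what such a priority-ordered type list might look like in the KubeVirt CR. The field spelling follows the mediated devices configuration API as introduced around the time of this talk; the type names are illustrative.

```yaml
spec:
  configuration:
    mediatedDevicesConfiguration:
      # Types are considered in order: earlier entries win when a card
      # cannot be configured with every desired type.
      mediatedDevicesTypes:
      - nvidia-230
      - nvidia-223
      - nvidia-222
```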
B
So here we have a configuration that the administrator has provided, and it has a mix of different types that belong to different cards.
B
Sorry about this. What KubeVirt will do in this case is that, for example, on node number one, it will create a ring buffer, and the ring buffer will be filled with the intersection of the desired types that have been listed by the administrator and what is actually possible to configure on the cards that are available on that node. For example, for the Tesla T4 there's a list of different types that it supports, and I just listed a few, but in general it can support up to 17 different types. The intersection here shows us that there are three types that can be configured, but there are four cards, so in that case we'll just iterate over the ring buffer and configure each card with a single type. In that case, you can see that the importance of the order comes into play, and the nvidia-230 type will be configured on a card twice. And on the other hand, on node 2...
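The ring-buffer assignment described above can be sketched in a few lines of Python. This is a simplified model of the described behavior, not KubeVirt's actual code: the function name, the type names, and the card count are all illustrative.

```python
from itertools import cycle

def assign_types(desired_types, supported_types, num_cards):
    # Intersect the administrator's priority-ordered list with what the
    # node's cards can actually be configured as, preserving priority order.
    usable = [t for t in desired_types if t in supported_types]
    ring = cycle(usable)  # the "ring buffer"
    # Give each card exactly one type, wrapping around when cards outnumber types.
    return [next(ring) for _ in range(num_cards)]

# Three configurable types but four cards: the highest-priority type,
# nvidia-230, ends up configured on two cards.
print(assign_types(
    ["nvidia-230", "nvidia-223", "nvidia-222"],
    {"nvidia-222", "nvidia-223", "nvidia-228", "nvidia-230"},
    4,
))
# → ['nvidia-230', 'nvidia-223', 'nvidia-222', 'nvidia-230']
```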
B
So I have a small demo that I compiled. I didn't have a lot of nodes, and I don't have a lot of cards on them, so it's just one node with one card, but yeah.
B
I just want to present the general process of how the administrator would configure these mediated devices and then list them for KubeVirt to discover and start a device plugin for, and then how a virtual machine can request to consume one of these devices. So, like I said, there is only one node that we have. By the way, I will share the link to this demo afterwards, in case the font is too small. So, there is only one card.
B
As I said, there are no mediated devices configured yet. We'll configure one of the types; it doesn't really matter which, so just for the sake of the demonstration we'll configure type 222, and there are 16 instances of this type available on that node.
B
So here we are requesting to create the nvidia-222 type, and we will also permit the use of this type in the cluster. These two entries need to correlate. Sometimes it's easier to refer to the type by its short name, but it's also possible to use the full name, GRID T4-1B, here as well.
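The two correlated entries from the demo might look like this fragment of the KubeVirt CR. Field names follow the APIs discussed earlier; the exact resource-name spelling is an assumption.

```yaml
spec:
  configuration:
    mediatedDevicesConfiguration:
      mediatedDevicesTypes:
      - nvidia-222                            # the short type name...
    permittedHostDevices:
      mediatedDevices:
      - mdevNameSelector: "GRID T4-1B"        # ...correlates with the full name
        resourceName: "nvidia.com/GRID_T4-1B"
```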
B
So we asked to create this mediated device, and we expect KubeVirt to discover it and start the device plugin once this type is created.
B
Yeah, we can see from the log that it discovered the device and started the device plugin for it. So now we can start a virtual machine that requests this type.
B
This virtual machine will start and run, and we'll connect to its console. During this presentation I didn't have pciutils installed, so I'm installing it here, but we'll save some time and fast-forward to when it's installed. All right, now it's installed, and we can use lspci to see that an instance of the T4 card is present, and this is actually what we want: we want to see the virtual GPU assigned to that virtual machine.
B
So here we saw the process where we configured the mediated devices and then advertised them for KubeVirt to consume, and then the virtual machine requested this device. Now we'll delete the virtual machine and do the opposite: we'll remove these mediated devices, and we'll remove this resource from being permitted in the cluster. The expectation is that KubeVirt will remove all 16 instances of these mediated devices and will stop the device plugin for them.
B
So in the allocatable section we can see that there are zero devices available, and we can verify that KubeVirt did that by looking at the log again.
B
Yeah, so this is the entry that shows that this device plugin has been disabled.
B
A
So,
thank
you.
Vladimir
awesome
presentation,
very
cool.
This
question
here
so
andre.
Well,
let
me
see
so
andrew
is
asking.
Can
we
have
one
gpu
like
nvidia
t4
is
licensed
in
different
size.
You
know
four,
you
know
slides
with
one
giga.
I
think
four
with
two
and
one
with
four
something
like
that.
B
So, specifically for this T4 card, we can configure only one type of vGPU per card. Yeah, we can use different types, and each type represents a different frame buffer size, but it's only one type per card.
B
Yes, so basically updating the KubeVirt CR will lead to the deactivation: we will remove all of these mediated devices and configure whatever else has been requested.
A
Ash is also asking a question here, so I'm going to read it: "I have a clarifying question. Can you specify multiple potential types that a VM can request, in priority order, so that if the ideal type is unavailable, it will be scheduled on the next available one? And if it cannot be scheduled due to a resource type being unavailable, can you just have it run on CPU?"
B
I'm not entirely sure how to answer that. In general, when we request a resource on the virtual machine instance, that request has to be met, so we will not schedule a virtual machine if we cannot meet all of the requirements. This is just how Kubernetes works, so I'm not entirely sure.
B
Actually, you can just email me if you would like.
A
Do you want to ask, do you want to speak? Or I can enable you, if it isn't answered.
B
In general, it's possible. For example, KubeVirt is open source, so we can have the drivers pre-installed. But if we're speaking about products, for example OpenShift, then NVIDIA has an OpenShift GPU operator, and this GPU operator provides the vendor-specific drivers and will deploy them on every node where they are needed. So this is currently in the works, but this is the goal that we are trying to achieve.