Description
This session introduces PCI device passthrough to containers and VMs managed by KubeVirt.
An overview of PCI passthrough and the Generic Device API is provided, illustrated with a specific practical use case that uses Intel QAT to accelerate VNF/CNF in edge computing.
Presenters:
- Vladik Romanovsky, Principal Software Engineer, Red Hat
- Le Yao, Intel SSE/CSE
B: Yeah, so my name is Vladik. As we said, this session will have two parts. First I would like to talk about what host device assignment in KubeVirt is, how it works, and some of the recent changes we've made; then we'll see a practical example of how the Intel QAT device is being used in KubeVirt, and then perhaps leave some time for questions. So, just a little bit of history.
B: Some time ago, in collaboration with NVIDIA, we introduced support for GPU and vGPU device assignment in KubeVirt, using the NVIDIA KubeVirt device plugin. Once this device plugin is installed on a node, it looks up all the NVIDIA devices on the host and, if they are bound to the vfio driver, advertises these devices to the Kubernetes scheduler. On the other side, a user may request to start a virtual machine with a GPU assigned to it.
B: The user has to post a virtual machine instance spec requesting the desired device by its resource name, and this results in a virt-launcher pod being scheduled on the node where the desired device exists. The device plugin, in its turn, will allocate the device on the node and move it into the container.
B: It will also create an environment variable referencing the allocated device: either the PCI address for PCI devices, or the UUID of the vGPU for mediated devices. The name of this variable is constant; it will be either the GPU or the vGPU passthrough-devices variable. virt-launcher will then consume this variable, extract the device ID, and form a configuration for libvirt to start the VM.
B
Now
this
model
works
fine
for
allocating
gpu
and
vgpu
devices,
or
it
doesn't
allow
us
to
allocate
devices
which
are
don't
call
themselves
gpu
and
we've
requested.
We
received
lots
of
requests
for
for
integrating
different
devices
into
keyboard.
I've
seen
that
every
time
that
we
would
need
to
integrate
a
new
device,
this
device
would
have
to
have
its
own
variable
name
with
its
own
prefix
and
its
own
handling
code.
That
would
configure
this
device.
This
would
create
a
lot
of
duplication
and
wouldn't
scale
for
us
going
forward.
B: So we had to make a few changes to position ourselves better for the future. We started by introducing a mechanism that lets administrators have better control over which devices are allowed in the cluster; I will talk about this in the next slides. We've also established an interface between device plugins and KubeVirt: we will require the device plugins to create uniform variable names for KubeVirt to consume. And then we generalized a lot of the device assignment process.
B: Some of the code is now shared across the project. Here's an example of how an administrator can create the list of permitted devices in the cluster: it is basically a list of device IDs, each followed by a resource name for the device.
B: This list has to be posted in the KubeVirt CR to get this functionality enabled. And we spoke previously about the interface between KubeVirt and the device plugins.
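For illustration, a KubeVirt CR snippet along these lines could look like the following. The vendor:device IDs and resource names here are examples, and exact field names may vary between KubeVirt versions:

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
      - HostDevices                    # enables host device assignment
    permittedHostDevices:
      pciHostDevices:
      - pciVendorSelector: "10DE:1EB8"           # vendor:device ID
        resourceName: "nvidia.com/TU104GL_Tesla_T4"
      mediatedDevices:
      - mdevNameSelector: "GRID T4-1Q"           # mediated device type name
        resourceName: "nvidia.com/GRID_T4-1Q"
```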
B: The environment variables start with a prefix indicating the type of the resource: a PCI resource prefix for PCI devices and a mediated-device resource prefix for mediated devices. This has to be followed by the resource name, incorporated into the variable name. This allows us to correlate the allocated device with the API object and post additional configurations, such as the ROM BAR for devices that require it, or even go further.
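virt-launcher implements this correlation in Go; as a rough sketch of the naming convention described above, with hypothetical resource names and device addresses, the lookup could look like this in Python:

```python
# Prefixes from the device-plugin interface: the variable name is the prefix
# plus the normalized resource name; the value lists the allocated device IDs.
PCI_PREFIX = "PCI_RESOURCE_"
MDEV_PREFIX = "MDEV_PCI_RESOURCE_"

def discover_allocated_devices(environ):
    """Map (kind, resource name) -> list of device IDs
    (PCI addresses for PCI devices, mdev UUIDs for mediated devices)."""
    found = {}
    for name, value in environ.items():
        if name.startswith(MDEV_PREFIX):
            found[("mdev", name[len(MDEV_PREFIX):])] = value.split(",")
        elif name.startswith(PCI_PREFIX):
            found[("pci", name[len(PCI_PREFIX):])] = value.split(",")
    return found

# Hypothetical allocation: two QAT VFs and one vGPU; names are examples only.
env = {
    "PCI_RESOURCE_INTEL_COM_QAT": "0000:3d:01.0,0000:3d:01.1",
    "MDEV_PCI_RESOURCE_NVIDIA_COM_GRID_T4_1Q": "f3e19c44-7a9b-4db6-8d43-2a0c5d5c8f11",
    "PATH": "/usr/bin",    # unrelated variables are ignored
}
devices = discover_allocated_devices(env)
```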
B: This basically allows us, without adding any additional code to KubeVirt, to use any properly configured device on a node that the administrator has allowed in the cluster, and assign it to a virtual machine just out of the box, without any additional steps. The way it works is that KubeVirt takes the list of permitted devices in the cluster, discovers these devices on the nodes, starts a device plugin according to the type of each device, and advertises them to Kubernetes.
B: These generic device plugins are basically everything you would expect a device plugin to be: they handle the allocation and the accounting for the devices and, of course, monitoring.
B
So
some
of
the
some
of
the
vendor-specific
device
plug-in
they
provide
the
additional
functionality
and
additional
features
such
as
advanced
monitoring
and
and
other
features
as
well
and
as
part
of
this
new
model.
B
Kubert
will
support
external
external
device
plugins,
and
this
simply
gets
enabled
by
the
administrator
indicating
that,
given
resources
being
provided
through
an
external
resource
provider
on
the
list
of
permitted
devices
and
what
it
will
do,
it
will
allow
the
the
device
in
the
cluster,
but
kubert
will
not
attempt
to
to
start
the
device
plug-in
for
this
device
and
we'll
expect
the
allocations
to
come
from
an
external
device
plug-in
and,
of
course,
cable
will
continue
to
support
the
nvidia
keyboard
device.
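Marking a resource as externally provided might look like this in the permitted devices list; treat the field names and IDs as illustrative of recent KubeVirt releases:

```yaml
permittedHostDevices:
  pciHostDevices:
  - pciVendorSelector: "10DE:1EB8"
    resourceName: "nvidia.com/TU104GL_Tesla_T4"
    externalResourceProvider: true   # KubeVirt will not start its own device plugin
```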
B: We'll just need to make some changes to that device plugin so it uses the new interface, but users who were previously using it will be able to continue doing so. And here's a slide that shows how everything may work together. It assumes that the administrator has already posted a list of permitted devices in the cluster, and that KubeVirt has started the generic device plugins on the nodes for the discovered devices.
B
It
doesn't
have
to
be
so
just
an
example,
and
we
also
see
that
the
users
has
already
posted
the
spec
requesting
certain
devices
to
be
attached
to
their
virtual
machine,
and
this
resulted
in
the
virtual
sorry
divert
launchers
to
be
being
scheduled
on
the
nodes
where
the
desired
devices
are
actually
exist
and
on
their
turn,
the
device
plugins
will
allocate
the
devices
and
move
them
into
the
launcher
compute
container.
B
But
we
could
see
the
the
environment
variables
that
these
device
plugins
will
form
according
to
the
new
interface
indicating
the
type
of
resource
and
the
resource
name
incorporated
inside
the
the
environment
variable,
this
will
be
consumed
by
by
virt
launcher
and
which
will,
in
its
turn,
form
a
configuration
for
liberty.
To
start
the
virtual
machines,
this
practical
compute
can
concludes
my
presentation.
I'm
sorry,
I
probably
forgot
half
of
the
things
that
they
want
to
say,
but
that's.
A: Okay. There is a second part of the presentation, which is Le's recording of his use case, which I'm going to stream here.
D: First of all, I would like to introduce the projects that I'm focusing on and why we chose to integrate KubeVirt. Next, I will introduce the use case, acceleration with Intel QAT as used in the project, and how I used the new API KubeVirt provides to implement PCI passthrough of QAT devices. Finally, I will present the performance results after integrating KubeVirt with QAT.
D: ICN can integrate infrastructure orchestration to ensure the requirements of edge cloud services are installed and controlled. It is a cloud-native compute and network framework that integrates edge applications following de facto standards, addressing 5G, IoT, and other Linux Foundation edge use cases. We have several use cases in the ICN cluster, and the reason we chose KubeVirt is that in the ICN cluster we need to manage VMs like pods: pods and VMs should be integrated into the same Kubernetes cluster.
D: We need to enable the standard Kubernetes controllers and commands to manage the lifecycle of VMs like pods. There are some workloads that need to run in VMs, where a legacy installation is required; in some use cases it's better to use a VM. So, after investigation, we chose KubeVirt as our VM manager in ICN.
D: Why do we need the QAT device for acceleration? Because of SDEWAN, the software-defined edge WAN. It is a project in ICN that I'm focusing on. It is a solution to enable SD-WAN functionalities, including multiple WAN link support, traffic management, firewall, IPsec, and traffic shaping, to address the challenges of applying them in an edge computing environment, such as resource limitations and no public IP, as well as traffic isolation and other cases.
D: It has the components shown below, such as CNFs, and we have a CRD controller and a central controller to control these edge clusters. Then, a brief introduction to the Intel QAT device: QAT is short for Intel QuickAssist Technology, and it is a generic PCI device; you can find more information at this web link. QAT can be involved in IPsec implementations to offload the CPU workload of data encryption and decryption.
D: It can be used through the DPDK cryptodev API, the Linux kernel crypto framework, as well as OpenSSL. It can offload specific workloads from the CPU to QAT, which makes it easy for developers to integrate acceleration into their applications. It can increase business flexibility and reduce the load on the platform, especially CPU utilization.
D: Where do we use it in ICN? In SDEWAN we need to build up kernel-based IPsec acceleration backed by QAT, both in containers and in VMs launched by KubeVirt. Establishing the VPN requires configuration, integrity and authentication, certificates, and so on; QAT can accelerate these workloads. The IPsec tunnel is used to connect different edge nodes, and the traffic varies because of the different scenarios and requirements, so we need QAT in ICN to maximize CPU utilization.
D: Before KubeVirt released version 0.36, if you wanted to use a generic PCI device there was a lot of work to do. For example, for QAT you needed to build the Intel QAT driver on the host, configure the host IOMMU groups, build and deploy the QAT device plugin in the cluster, deploy the QAT driver in DPDK mode in the cluster, and there was a lot of code work to do to add a QAT handler to KubeVirt to deal with the QAT device allocation.
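The host-side driver setup alone involved steps like the following sketch. The PCI address and the `8086 37c9` QAT VF vendor/device ID are examples; the paths are the standard Linux sysfs interface:

```shell
# Load the vfio-pci driver and hand a QAT VF over to it (run as root on the host).
modprobe vfio-pci
# Unbind the VF from its current driver, if any.
echo "0000:3d:01.0" > /sys/bus/pci/devices/0000:3d:01.0/driver/unbind
# Tell vfio-pci to claim devices with this vendor/device ID.
echo "8086 37c9" > /sys/bus/pci/drivers/vfio-pci/new_id
```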
D: This is the PR I had submitted; now we don't need it any more. We also needed to configure the KubeVirt CR, create a VMI with the QAT device specified, and then launch the VM with the QAT VFs attached.
D: That was a lot of work to do. After KubeVirt released version 0.36, there are two ways for us to use the QAT device. One is through a Kubernetes device plugin, and the other is KubeVirt's internal device plugin. Using your own device plugin, for example the QAT device plugin, the device plugin configures a specific container environment variable in the container runtime.
D: That variable, carrying the VF PCI addresses, will be consumed through the KubeVirt API. We configure KubeVirt to accept the external device resource; the resource name is the key, and it is advertised by the QAT device plugin. We then configure the hostDevices field accordingly.
D: The other way, which is very simple and is the recommended way, is using KubeVirt's internal device plugin. We don't need to develop any other PCI device plugin; all we need to do is configure the relevant pieces: bind the vfio-pci driver to the QAT device, configure the permitted host devices in the KubeVirt CR, and fill in the hostDevices field in the VMI. So, yeah.
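As a sketch of this recommended flow, assuming the QAT VF shows up as vendor:device 8086:37C9 and the administrator picked the resource name intel.com/qat (both are assumptions for illustration), the KubeVirt CR entry and the VMI could look like:

```yaml
# In the KubeVirt CR: permit the QAT VFs so KubeVirt's internal
# generic device plugin can discover and advertise them.
spec:
  configuration:
    permittedHostDevices:
      pciHostDevices:
      - pciVendorSelector: "8086:37C9"
        resourceName: "intel.com/qat"
---
# In the VMI: request one of the permitted devices by resource name.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: qat-vmi
spec:
  domain:
    devices:
      hostDevices:
      - name: qat1
        deviceName: intel.com/qat
    resources:
      requests:
        memory: 1Gi
```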
D: So we launched the KubeVirt VM to do some testing. First of all, OpenSSL: we can see the algorithms are RSA, DSA, ECDSA, and AES; "software" means run only by the CPU, and the other results are with the QAT device. In the 16-kilobyte case we get about six times better performance, and it is much faster with QAT.
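A benchmark of this shape can be reproduced with OpenSSL's built-in speed tool; a sketch, assuming the Intel QAT OpenSSL engine (qatengine) is installed in the VM:

```shell
# Baseline: RSA-2048 sign/verify in software, on the CPU only.
openssl speed rsa2048
# Offloaded: the same test through the QAT engine, with asynchronous jobs.
openssl speed -engine qatengine -async_jobs 8 rsa2048
```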
D: Here we see better performance than when running only on the CPU in software. For IPsec, we launched two KubeVirt VMs, one as the server and one as the client. We attached QAT VFs to the VMs and established the IPsec tunnel between them; the proposal is the one shown here. IPsec uses the Linux kernel crypto framework, so when we built the kernel we had to enable some features.
D: The testing tool is iperf3; this result is for the software-only case, and this one is for QAT.
D: This is the detailed result. Testing on CentOS and Ubuntu, we can see that with one QAT VF the bandwidth is almost two times that of the case without QAT. So we can say that we get better performance when using the Intel QAT VF in the KubeVirt VM.
D: And sorry for my absence again. If you have some questions, you can communicate with me using my email, le.yao at intel.com, for more detailed information.
A: Okay, so I guess now it's time for questions. There was a question raised in the chat from Nikola; because the chat will disappear after the event, I will repeat the question again here live, so Vladik can answer it and elaborate a bit more. Nikola was asking: in the example slides with two nodes, could we specify the GPUs as host devices rather than GPUs, and if not, why?
B: Do you want me to repeat the answer? Yeah, this will be honored in both of the fields; it's possible. It's just that some people are more comfortable with requesting GPUs directly, but yeah, it can be posted in both of these.
A: Okay, are there any other questions for Vladik here? Okay, I see a question from Shang: is there a performance difference between requesting the GPU directly instead of as host devices?
A: Okay, those were the questions so far. As a reminder, you can follow up with more questions if needed in the Kubernetes Slack, in #virtualization.
A: No? Okay. Well then, thank you very much, Vladik, and thank you very much, Le, who could not join us today but made a great effort to record and share the recording. Thank you both, and see you all in Slack.