From YouTube: Deploying VNFs with Kubernetes pods and VMs
Good morning, folks. Hi, my name is Pooja Gombre. I work as a software engineer at Platform9 Systems, and today I'm going to talk about how to deploy virtual network functions (VNFs) with Kubernetes pods and VMs.
The agenda for this session covers the following items. First, I'll talk about virtual network functions in general: what the term really means, what the advantages of running virtual network functions are, and what kind of application performance enhancements we can achieve using two technologies, one being SR-IOV; after that we'll talk about OVS-DPDK. For each of these technologies, I'll cover, at a high level, what it really means,
how you deploy it on a Kubernetes cluster, in terms of any configuration that needs to be done on the host, and, once that is done, how you actually deploy a VNF application using the KubeVirt virtual machine platform. I'll do a quick demo after that, demonstrating VNF applications, or rather VMs and pods, that can be created using SR-IOV and DPDK networks.
So let's get right into it. Firstly, what's a virtual network function? Network functions are basically all the networking capabilities that you typically expect to run on hardware dedicated to such applications, and virtual network functions are a new way of delivering them using a virtualization layer. There are several advantages to that, which we'll talk about next.
Before we get into that: we hear the terms NFV and VNF a lot, so I first want to give an introduction to network function virtualization, or NFV for short, and then move on to VNFs.
NFV is the entire architectural paradigm, and it has three basic framework-level components. The first is the actual application, which is your virtualized network functions, so that's the VNFs.
A
Secondly,
for
running
these
applications,
you
need
a
infrastructure
to
run
it
on,
so
that's
the
nfv
infrastructure
or
nfvi,
and
lastly,
the
management,
automation
and
network
orchestration
layer,
which
is
basically
abbreviated
as
manual
here
and
I'll
talk
briefly
about
all
of
them
right.
With the telco cloud, this is the new way of doing it: without dedicated hardware for each network function, the primary goal is to improve agility and scalability for the different telco service providers, so they can add new applications on demand without requiring additional hardware resources.
If we talk about VNFs, these are software applications that deliver functions such as file sharing, directory services, routing, and firewalling. There could be many different applications that you run as VNFs, and each could run either in a virtual machine or in a Kubernetes pod.
Apart from running these network apps, you also need a framework that is capable of managing the NFV infrastructure itself and, secondly, of handling the lifecycle of the VNFs being deployed. That's the third component, the management, automation and network orchestration (MANO) layer.
Now, we spoke briefly about what VNFs are. Primarily they are applications that run on top of this NFV infrastructure, like I said, and mostly they are deployed as virtual machines; that's the VNF part of it. Corresponding to that, there is also CNF technology, which runs the same functions in containers.
The common VNF applications I spoke about are just some examples of what you can run, and the goal here is not to run everything as one monolithic VM. Instead, VNFs use service chaining: you run different functions as independent network functions and put them together as building blocks in a process. Chaining all the network function components together is something you can simplify using VNF technology.
So what are the benefits of using VNFs? Network functions have always been around, and with proprietary hardware you could get good performance out of these applications.
The problem was that the setup becomes monolithic at a certain point, and scaling individual components is difficult. With virtual network functions, you get improved network scalability, because each network function runs in its own VM, has its own resources, and can be enabled, disabled, or managed as needed, independent of any other components in that service chain.
The power utilization of your data center also goes down drastically, so that's another side benefit. You can also implement better security policies with VNFs, because you have more fine-grained control over each of the functions.
Consider efficient memory access and resource allocation for an aggregated system: when you're dealing with a high volume of network traffic, the performance you would achieve with the native Linux kernel stack is not going to suffice, because the more VNFs you pack onto a single host, the higher the aggregate usage is driven, to a level that cannot be met by the standard Linux stack.
So first, let's talk about SR-IOV, which stands for single-root I/O virtualization. With SR-IOV, the benefit you get is dedicated PCI devices for each of your virtual machines: because you're creating multiple virtual functions out of a single PCI device, you can allocate them to independent VNFs without any overlap between them.
The performance benefit of SR-IOV comes from the basic fact that the network traffic bypasses your hypervisor: without any kernel interrupts for data that comes in or out of the VM, the guest can access the NIC directly, and that is primarily how it gives you faster packet switching.
To actually run SR-IOV on a host, you need support at the BIOS level as well as in the operating system; those are the two requirements. In terms of how it's presented to a VM, it doesn't really make a difference to the VM: it's at the host level that you slice a PCI device into multiple virtual functions, but from the guest's point of view it just sees a NIC, so the application running inside it does not really change a whole lot.
This is just a pictorial view to explain how SR-IOV helps you bypass the hypervisor. For instance, without SR-IOV, if you had a VM utilizing the OVS bridge (the Open vSwitch bridge) on the hypervisor, any data packets that come in or out of the VM would have to pass through that bridge in the hypervisor.
Now, when we talk about deploying VNF apps on the virtualization platform, the solution we are looking at here is KubeVirt, a virtual machine framework that runs on top of Kubernetes. With KubeVirt, there is built-in support for SR-IOV.
The SR-IOV device plugin is the component responsible for detecting and discovering any SR-IOV resources available on a host. Once it detects them, it advertises them to Kubernetes, and the Kubernetes resource manager will then allocate them as requested by pods and VMs.
A
So
this
is
a
read-only
kind
of
service,
in
the
sense
that
it
won't
modify
anything
on
the
host
in
actual
actuality,
and
the
only
thing
that
it
does
is
basically
advertises
these
resources
and
updates
the
capacity
section
of
each
cluster
node
so
that
that
can
be
used
by
the
kubernetes
scheduler.
When
allocating
resources
to
vms.
A
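As a quick illustration (not shown in the talk), you can verify what the device plugin advertised by inspecting the node object; the resource name intel.com/intel_sriov is the example used later in this session, and the node name is a placeholder:

    # Show the extended resources the SR-IOV device plugin advertised
    kubectl get node <node-name> -o jsonpath='{.status.allocatable}'
    # Expect an entry like "intel.com/intel_sriov": "7" once VFs are configured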
For any virtual machines that you run on KubeVirt, it allocates a VFIO device for each of those pods, and VFIO is the only driver supported with KubeVirt today.
Secondly, there's the SR-IOV CNI plugin. This is the plugin responsible for actually configuring whatever SR-IOV resources are allocated to a specific VM, and configuring means it modifies host resources to prepare them for use by a virtual machine instance. So this is not a read-only plugin in that sense; the commands it runs basically use netlink and so on to move these SR-IOV virtual functions into specific pod namespaces.
So, depending on the namespace in which you are creating a VM, it will run the netlink commands and move any virtual function, or physical function, into the respective namespace.
Lastly, Multus. Multus is more like a meta-plugin. When you create a network attachment definition in KubeVirt, or in Kubernetes generally, you use it together with the SR-IOV CNI plugin whenever you attach a VMI to an SR-IOV interface. This is part of the NetworkAttachmentDefinition CRD object that I'll show shortly. What it basically helps with is identifying the object annotations, based on the resource name that you configure when you set up the device plugin and whatever reference you put in there.
KubeVirt is automatically able to fill that into the virt-launcher pod that runs a specific VM, and it goes ahead and updates the requests and limits sections for those pods. So, by using Multus in combination with the SR-IOV device plugin, the resource-name annotation means the pod will receive a device from the pool allocated by the device plugin, and this allows us to avoid any manual intervention in passing in the required PCI address.
In terms of configuration on the SR-IOV host: the cluster host that you want to use for SR-IOV VNFs basically needs a NIC that is SR-IOV capable, and not all hardware supports SR-IOV by default.
Once you have that, there are also certain BIOS settings that you need to enable to actually utilize that NIC function. On some physical servers you would see a global enable flag for SR-IOV in the BIOS that needs to be turned on, and there is also a per-NIC setting that you can toggle to enable it for specific NICs.
So if it's an Intel processor, you would pass intel_iommu=on; if it's AMD, you would use amd_iommu=on. Similarly, you would also add the pci=realloc and pci=assign-busses command-line parameters to the kernel config.
A
If
you
do
make
that
change
in
the
graph
command
line,
then
you
need
to
kind
of
save
that
and
rebuild
the
ram
disk
and
reboot
your
host.
In
that
case,
if
this
is
the
first
time
that
you're
setting
up
the
host
for
sriv
cluster,
like
I
mentioned
before
for
qfword,
it
only
supports
the
vfio
uses
space
driver
to
pass
through
these
pci
devices
into
qmu
for
running
your
vms.
A
A
In the VMI spec, you would always have the Kubernetes pod network listed as the first interface, and there are two modes you can put that in; the default network here is in masquerade mode (I won't get into the details of that here). The second interface is attached to the SR-IOV network. The sriov-net entry, as you can see on the right side, specifies that the network type for this is Multus, and you can specify the network name there, which corresponds to a NetworkAttachmentDefinition.
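A sketch of the relevant VMI fragments, assuming the NetworkAttachmentDefinition is named sriov-net (names are illustrative):

    # under spec.domain.devices:
    interfaces:
      - name: default
        masquerade: {}            # first interface: default pod network
      - name: sriov-net
        sriov: {}                 # second interface: SR-IOV passthrough

    # under spec (same level as domain):
    networks:
      - name: default
        pod: {}
      - name: sriov-net
        multus:
          networkName: sriov-net  # matches the NetworkAttachmentDefinition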
A
So
this
is
how
a
network
attachment
definition
for
sriv
would
look
like.
What
I
was
referring
to
earlier
is
the
sorry
is
the
resource
name
that
you
specify
as
the
annotation
in
metadata
section,
and
here
this
this
is
a
custom
prefix
that
you
can
have
in
this
case.
I
have
it
set
to
intel
dot
com,
slash
intel,
underscore
sriv,
which
means
the
kubernetes
scheduler
will
look
for
any
nodes
that
have
resources
of
this
annotation
and
accordingly
schedule
parts
or
vms.
On
on
such
a
node.
A
A
You can optionally add a VLAN ID here, but that's something you can choose to leave out if it's a flat network. For the IPAM section, the example I have here uses the Whereabouts plugin, in which case you specify a range of IPs: a subnet, along with an allocation range start and end, and you can also specify a gateway.
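A minimal sketch of such a NetworkAttachmentDefinition, with illustrative values for the VLAN and the Whereabouts range:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: sriov-net
      annotations:
        k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "type": "sriov",
        "vlan": 100,
        "ipam": {
          "type": "whereabouts",
          "range": "192.168.10.0/24",
          "range_start": "192.168.10.10",
          "range_end": "192.168.10.50",
          "gateway": "192.168.10.1"
        }
      }'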
First, just to talk about Open vSwitch and what it really is: it's a production-quality, multilayer virtual switch. The main components of Open vSwitch are the forwarding path and ovs-vswitchd. The forwarding path is basically your data-plane packet forwarding, and that module is implemented in kernel space to achieve higher performance; the second one, ovs-vswitchd, is the userspace component, and it is the one that actually implements the switching logic.
The goal of DPDK is basically to allow faster packet processing for any telco apps that require higher throughput. Similar to SR-IOV, it tries to achieve that by bypassing the Linux kernel network stack: to implement switching in user space, it relies on poll mode drivers. DPDK is also something you can combine with Open vSwitch, so when you have Open vSwitch on a host and you combine it with DPDK,
you get accelerated performance, because packet processing stays in user space and the kernel is bypassed at both layers. In terms of comparing network throughput between DPDK and SR-IOV,
Intel conducted a study comparing the performance in both cases, and what it basically concluded was that if you have east-west traffic between VNF apps running on the same server, DPDK is the better alternative, whereas SR-IOV is more desirable if you have north-south traffic that exits the NIC, in which case you can actually take advantage of the virtual functions.
This diagram shows nicely how a poll mode driver is used in the OVS-with-DPDK case. On the left-hand side it shows plain OVS, with the forwarding plane running as a kernel-space module and vswitchd in user space.
What you lose out on there is that the interrupts that happen in kernel space add up and cause a bottleneck. On the right-hand side, by contrast, the userspace component includes a DPDK forwarding module which, like I mentioned before, relies on poll mode drivers to do the packet switching. In each case you'll see that the vnet device associated with every VNF still goes through OVS, but the packet processing, the forwarding, is done in user space instead of in the kernel module that was used for the forwarding path earlier.
This is still not fully merged upstream, so the demo will be based on these changes, which we have implemented on top of the latest upstream KubeVirt version. The main components you need for implementing DPDK apps with KubeVirt are the Intel userspace CNI plugin (again, you would use the Multus meta-plugin to attach interfaces to KubeVirt VMs) and, in terms of the packages needed on the host, OVS itself.
These are the host configuration steps you would need to perform. Firstly, install the appropriate DPDK and OVS packages on the host, based on whatever distribution you're using; the OVS-DPDK build is part of the OVS repository, so you can install it from there. You would typically need to configure a number of hugepages based on your physical hardware capacity, which you can do using sysctl, and once that's persisted you should be able to utilize hugepages in your VMI spec.
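A minimal sketch of the hugepage setup, assuming 2 MiB pages and an illustrative page count:

    # Reserve 2048 x 2MiB hugepages and persist the setting
    sysctl -w vm.nr_hugepages=2048
    echo "vm.nr_hugepages = 2048" >> /etc/sysctl.d/hugepages.conf
    # Verify the reservation
    grep HugePages_Total /proc/meminfo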
Once the module is loaded and you have the DPDK configuration done, you need to set up the OVS bridge and create a DPDK port in OVS as well. For that you can use the ovs-vsctl command line; I've given one example where you first add a bridge with the datapath type set to netdev.
So instead of using a host net device, it's going to use netdev, and then you add a DPDK port to that bridge for every physical DPDK device you want to associate with it. The device is specified using the dpdk-devargs option, where you pass the PCI address matching whatever ID you used in the earlier driverctl set-override command that bound the device to the vfio-pci driver.
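A sketch of those host-side commands, with an illustrative PCI address and bridge name:

    # Bind the NIC to the vfio-pci userspace driver (persists across reboots)
    driverctl set-override 0000:3b:00.0 vfio-pci

    # Create a userspace (netdev) bridge and attach the NIC as a DPDK port
    ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev
    ovs-vsctl add-port br-dpdk dpdk0 -- set Interface dpdk0 \
        type=dpdk options:dpdk-devargs=0000:3b:00.0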
Similar to the KubeVirt VMI spec for SR-IOV, here you would pass in the interface names and the network name (we used usernet1 as an example), and the type would be vhost-user for the second interface; the first interface is the same default pod network. In this case, the network name that you specify under the Multus section on the right is net1, and net1 is what gets matched against your NetworkAttachmentDefinition name. So here is a sample of the NetworkAttachmentDefinition for DPDK.
In this case, in contrast to the earlier YAML file that we looked at, the type here would be userspace. Again you specify a CNI version, and then there are host and container sections that you need to fill in; under the host dictionary you set the engine type to ovs-dpdk.
The vhost mode is client in this case, and the bridge name that we added with ovs-vsctl add-br is the one that you specify under the bridge section. On the container side, you specify the interface type as vhostuser, and that is the server side of it. So what does that really mean?
Those are the two different modes: either the host acts as the client, in which case it is the vhost-user client, or the container (or rather QEMU) acts as the client. All it really means is that one of them is responsible for creating the socket and the other one tries to establish a connection to that socket.
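A sketch of a userspace-type NetworkAttachmentDefinition along those lines, following the upstream userspace CNI plugin format and reusing the br-dpdk bridge from earlier (values are illustrative):

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: net1
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "type": "userspace",
        "host": {
          "engine": "ovs-dpdk",
          "iftype": "vhostuser",
          "netType": "bridge",
          "vhost": { "mode": "client" },
          "bridge": { "bridgeName": "br-dpdk" }
        },
        "container": {
          "engine": "ovs-dpdk",
          "iftype": "vhostuser",
          "netType": "interface",
          "vhost": { "mode": "server" }
        },
        "ipam": { "type": "whereabouts", "range": "192.168.20.0/24" }
      }'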
After this, let's get into the demo of running VNF applications using KubeVirt VMs, with both SR-IOV and DPDK. I'll first cover SR-IOV interfaces attached to VMs and show how you can create such a VM using KubeVirt, and then move on to creating a VNF using a VM that has DPDK interfaces. For SR-IOV, I already have a cluster created here.
I'll show the pods that run as part of KubeVirt, since I have it installed here already. For the cluster you'll see the virt-api, virt-controller, and virt-operator pods running, and for every host there is a virt-handler pod. Since I just have one node in the cluster, there's a single virt-handler pod running here, and it's responsible for launching any VMs that get scheduled on that specific cluster node.
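For reference, a typical way to list those components (the namespace may differ per install):

    # KubeVirt control-plane pods: virt-api, virt-controller, virt-operator,
    # plus one virt-handler DaemonSet pod per node
    kubectl get pods -n kubevirt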
I also have CDI (Containerized Data Importer), which is basically for creating data volumes and attaching them as disks to your VMs. Since that is already running, you'll see CDI pods for the API server, deployment, operator, and upload proxy. In terms of the other component here, I have Luigi, which is the open-source plugin for configuring SR-IOV virtual functions.
I created the virtual functions earlier; if you look at an interface, under /sys/class/net/<interface>/device there is a file called sriov_numvfs, which shows how many VFs have been created on that interface. Since I'm using Luigi, I'll also show the YAML file that you would create for configuring your device plugin, the one we spoke about in the CNI section.
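A quick sketch of checking and setting the VF count by hand, with eno3 as an illustrative interface name:

    # How many VFs exist on this interface right now?
    cat /sys/class/net/eno3/device/sriov_numvfs
    # Create 7 VFs manually (tools like the device-plugin configs below
    # normally do this for you)
    echo 7 > /sys/class/net/eno3/device/sriov_numvfs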
The resource list here is what specifies a resource prefix and a resource name. This is what we need later on in the NetworkAttachmentDefinition spec file, the resource prefix in combination with the resource name, and it decides which interfaces you want enabled with the SR-IOV device plugin.
There are different formats in which you can specify the selector section. The one I've used here is just straightforward physical function names: I have two NICs here that are SR-IOV capable, eno3 and eno4, and Luigi helps configure them with the vfio-pci driver, so that's the driver I'm specifying here in the config map used by the device plugin. Once that is done, on those selected nodes you can specify, for each physical function, the number of VFs, the VF driver to bind with, and the MTU size if you want to set that. This is the sample template I used to create the virtual functions, and that's why you see seven VFs present on this host.
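A minimal sketch of an SR-IOV device plugin config map along those lines; the resource prefix, name, and selectors follow the upstream sriov-network-device-plugin format, with illustrative values:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: sriovdp-config
      namespace: kube-system
    data:
      config.json: '{
        "resourceList": [{
          "resourcePrefix": "intel.com",
          "resourceName": "intel_sriov",
          "selectors": {
            "pfNames": ["eno3", "eno4"],
            "drivers": ["vfio-pci"]
          }
        }]
      }'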
The network attachment name here is sriov-network-eno, which would be part of your VMI spec file. The thing to note here is the type: sriov line; optionally you can set a VLAN ID, and for IPAM you could use Whereabouts or any other plugin as desired.
The VMI spec here is similar to how you would create a typical VirtualMachineInstance object in KubeVirt. I'll focus on the parts specific to SR-IOV, which are the interfaces section in the domain spec, with the first network being the pod network and the second one being the sriov type, and the networks section, which specifies the Multus network name; this is what matches the NetworkAttachmentDefinition that we looked at earlier.
In this case I'm just using a Fedora test image. This image is something you can replace with your specific VNF application, and you can put any associated cloud-init script into the cloud-init data here.
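Putting those pieces together, a condensed VMI sketch under the same assumptions (Fedora container disk, the sriov-network-eno attachment; the image and sizes are illustrative):

    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstance
    metadata:
      name: test-sriov
    spec:
      domain:
        resources:
          requests:
            memory: 1Gi
        devices:
          interfaces:
            - name: default
              masquerade: {}
            - name: sriov-net
              sriov: {}
          disks:
            - name: containerdisk
              disk: { bus: virtio }
            - name: cloudinitdisk
              disk: { bus: virtio }
      networks:
        - name: default
          pod: {}
        - name: sriov-net
          multus:
            networkName: sriov-network-eno
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              password: fedora
              chpasswd: { expire: False }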
If you look at the network interfaces inside this VM, notice eth0, which is attached to the default pod network, and then there is eth1, which is your SR-IOV interface. The IP shown here is again the IP of the VM that was already running, but that is on the default pod network.
With SR-IOV, the interface doesn't automatically get an IP assigned inside the VM, so you would need to configure that via a cloud-init script. The IP that was allocated by Whereabouts just gets assigned to the pod itself, not to the actual VM that's running, so the VM-side address needs to be set either by an external DHCP server or via the cloud-init script here.
So that's basically how you create a VMI using SR-IOV. It's really straightforward: the only thing that changes is how you create the NetworkAttachmentDefinition, and once you have that, you just need to use the network name in your VMI spec. That's the only difference.
For the DPDK part, let me just export the kubeconfig, and you'll see again that there is a single master node available here. It has the exact same deployments, with KubeVirt and CDI installed. Since Luigi doesn't support DPDK today (that's still a work-in-progress item to add to Luigi), I'll just showcase the OVS-level info in this case.
If you look at the ovs-vsctl show output, you will see that there is an OVS bridge created. The bridge here has two interfaces, basically two ports, eno2 and eno4; these are the two NICs being utilized for DPDK traffic. In terms of configuring or adding the ports, the option utilized is dpdk-devargs, to which you assign the PCI address.
Since I'm utilizing the entire eno2 and eno4 interfaces here as DPDK ports, I'm specifying their respective PCI addresses as part of the port config.
We spoke about this earlier: the host and container sections are the main ones that differ for the userspace-type network attachment definition.
The bridge name here again needs to match what you created in OVS; the host side should be in client mode and the container side in server mode. Same thing for the IPAM section: it could be anything, and Whereabouts is just the example here.
The network name net1 is what would be part of the VMI spec, and you'll notice here that this is a virtual machine named test-dpdk.
Inside the VM, I'm just running the 'ip a' command, and here again you'll see two interfaces. eth0 is the default pod network, and since it was using masquerade mode, this shows the internal IP address that gets NATed; at the pod level you would see the IP address shown up here. eth1 is your actual DPDK interface.
I just want to quickly show the packages you would need for DPDK on this host. Since I had already configured it, it's using the dpdk package itself, which provides the runtime, and even Open vSwitch needs the package from the OVS repository, because that's the one compiled with DPDK enabled.
So that's the only configuration you need here, beyond what you configured with ovs-vsctl; and with driverctl you would run the command to set an override for the vfio-pci driver, like we spoke about earlier. That concludes the demo on running VNF applications using KubeVirt VMs, with both SR-IOV and DPDK.