From YouTube: Advanced networking with Luigi and Hostplumber
Today, I'm going to be talking to you about advanced networking with Luigi and HostPlumber. My name is Arjun Bandur, and I'm a software engineer at Platform9, working primarily on the Kubernetes networking stack.
So you might be wondering: advanced networking, what does that mean? As you know, normally with Kubernetes you have one cluster-wide network. You generally have one CNI that all the pods connect to, and that might not fit all your networking use cases, for example virtual network functions or virtualization.
It also might not cover connecting to legacy networks in your data center, so we're here to help you with all of that. I'll be talking about two different operators that we have developed, called Luigi and HostPlumber. First, Luigi is basically a cluster add-on operator for advanced networking plugins. You've probably seen different types of add-on operators before.
They exist under different projects, but we decided to develop our own, because they didn't all fit our use cases and they were missing certain plugins, like SR-IOV or IPAM drivers, for example. So we decided to build our own, focused on these advanced networking plugins, with the goal of deploying Multus (which you will almost always need) along with all the other CNIs and plugins that you need to get set up.
You've probably heard about Multus; it's the first plugin that you'll need to support multiple networks in Kubernetes. If you want to go from one CNI to more than one, you will always need it. Luigi also supports SR-IOV and the device plugins needed to support SR-IOV.
It supports a particular IPAM plugin called Whereabouts, which will provide your pods with IP addresses in the absence of a DHCP server. It'll install Open vSwitch as well as the OVS CNI for you, and it'll install Node Feature Discovery, which can be a handy tool. And HostPlumber, the other operator that I just mentioned, will help you actually configure the nodes that you need in order to use all these plugins. You can't just install the plugins and then start using them.
You probably have to prep the nodes first, and you may be wondering how to do that. So HostPlumber is the second operator that I'll be talking about later today, and it's used to actually configure the nodes for these advanced networking use cases. For example, creating SR-IOV virtual functions: suppose you've installed Multus and you've installed the SR-IOV CNI and device plugin; you still need to go onto each node and create the virtual functions.
It can also do various other things, like VLAN interfaces and configuring OVS, and you can view the routing tables on all your nodes. This allows you to do it all from a Kubernetes custom resource, without having to go onto each node and configure it by hand. It's just a Kubernetes-centric way of configuring your nodes. So with that, I'm going to be running through a demo shortly.
You've probably heard of Open vSwitch, and you're wondering: how do I install these things? How do I configure my hosts to be able to use them? How do I get my IP addresses assigned on these other networks? How do I install an IPAM plugin, and does DHCP work? Luigi and HostPlumber are basically here to help you out in this regard and simplify the deployment process.
The first thing to note is that this requires an already working Kubernetes cluster. It should work on any kind of deployment; it's agnostic to which vendor your Kubernetes comes from, but you need some kind of working cluster with CoreDNS and a primary CNI up.
In my example cluster here, I have Kubernetes running Calico, and as you can see it's fairly simple: I don't have any deployments on it, nothing else really, mainly just Calico right now. So first, let me actually get into what Luigi is and what plugins it supports.
The first is Multus, which you've probably heard of; this is always required if you want to add multiple CNIs. It also supports SR-IOV and the SR-IOV device plugin. For Open vSwitch, it installs OVS itself, along with the CLI tools, and it'll also install the OVS CNI plugin. It'll install an IPAM driver called Whereabouts, which you may have heard about. It also supports Node Feature Discovery, and the one that I'll also talk about later: HostPlumber.
For now, HostPlumber is located within the same repo, at platform9/luigi, under the hostplumber directory. It's actually an independent operator, but it's mainly an operator that allows you to configure your nodes' networking prerequisites and also view your nodes' networking state. This is mainly to get your nodes set up and working with SR-IOV and Open vSwitch.
That goes for MACVLAN and all of these plugins too, because you might need to load certain SR-IOV drivers, configure the VF interfaces, create VLAN interfaces, create OVS bridges, view the routing and IP information of your nodes, and so forth. So unless you have your own automation and way to prep and bootstrap your nodes, you can use HostPlumber.
This is just the initial set of what's supported. For now, for each plugin you can choose which namespace to deploy it in and what the image override is. Right now Luigi will deploy a stable release version of each of these CNIs, but suppose you're in a dev or test environment where you have a custom bug fix, or you're working on the actual CNIs; that's what the image override is for.
So let's get into the first example of how to actually deploy all these plugins. Here's a very comprehensive example. With Luigi there's one CRD, and it's just called NetworkPlugins. You can set the base for a private registry; we're not using one right here. Then we have the plugins that are supported, and here I'm saying: deploy HostPlumber with the default options.
I'm not setting any of these configurations here. I'm deploying Node Feature Discovery, Multus, SR-IOV, and OVS with the default options. Whereabouts, on the other hand, as you can see, has some configuration overrides. I'm not going to get into the specifics of these; they're all specific to the actual plugin.
So if you're wondering what the IP reconciler in Whereabouts is, for example, you'd have to actually go to the Whereabouts GitHub and see what that feature does. Otherwise, like I said, if you don't know what you're doing, it's best to leave all of these blank, and Luigi will deploy with what we know is a good set of defaults. This is just to show what some advanced customizations may look like.
Here's another one that you may actually want to do. By default, it's going to deploy these in the default namespace; here I'm just overriding that and saying: deploy these in the kube-system namespace. It depends on which namespace you want to put them in. Similarly, Luigi will take care of upgrading these plugins for you: when you upgrade the Luigi operator, if it's managing any of these plugins, like Multus or Whereabouts or SR-IOV, it will then in turn upgrade those plugins to a new release.
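To make that concrete, here is a rough sketch of what such a NetworkPlugins custom resource might look like. This is reconstructed from the talk, not copied from the project: the API group, field names, and override keys are assumptions, so check the samples folder in the platform9/luigi repo for the real schema.

```yaml
# Hypothetical sketch of a Luigi NetworkPlugins CR; field names are illustrative.
apiVersion: plumber.k8s.pf9.io/v1
kind: NetworkPlugins
metadata:
  name: networkplugins-sample
spec:
  # Optional private registry base for all plugin images (unused in the demo).
  # registry: my-registry.example.com
  plugins:
    hostPlumber: {}             # deploy HostPlumber with defaults
    nodeFeatureDiscovery: {}    # deploy NFD with defaults
    multus: {}                  # deploy Multus with defaults
    sriov: {}                   # deploy the SR-IOV CNI and device plugin
    ovs: {}                     # deploy Open vSwitch and the OVS CNI
    whereabouts:                # Whereabouts with plugin-specific overrides
      namespace: kube-system
      ipReconcilerSchedule: "*/5 * * * *"
```

Commenting a plugin out of this list and reapplying the CR is also how you tell Luigi to remove a plugin it manages, which comes up later in the demo.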
So
let's
go
to
my
cluster
here
where
I
have
like
I
said
earlier,
I
have
calco
I've
already
deployed
luigi
and,
as
you
can
see,
it's
just
running
right
now,
it's
just
a
controller,
it's
a
deployment
of
one
replica,
I
deployed
it
and
you
can
deploy
it
either
way
anyway,
from
the
sample.
I
have
here.
Just
cube
cuddle
apply
luigi
plugins
operator
in
the
samples
folder
at
the
our
repo
here.
So if you go to samples, this is the one you want; just get the raw version of it and kubectl apply it. At a quick glance you can see it'll define a namespace for itself, the CRD for the NetworkPlugins resource that I just described (which will show what fields are actually supported, if you want to get technical), and the RBAC resources it creates for itself.
I have a sample CR here; I think it's the same one from the example on GitHub. For now, I'm actually going to deploy all these plugins right here.
Actually, first: I'm not going to deploy SR-IOV, because I need to create the virtual functions first, and then I'll deploy SR-IOV. So for now let's comment SR-IOV out, and I'm also going to comment out OVS, just because I'm not going to demonstrate OVS in this demo. But do know that if you wanted to deploy OVS, like I said earlier, this will deploy Open vSwitch itself as well as its CNI.
So it'll deploy both of those, everything you need to basically use OVS. For now the defaults should be fine, so I'm going to deploy all of these: Whereabouts, Multus, Node Feature Discovery, and HostPlumber. Let's save that.
Okay, let's watch our deployments and make sure that all the plugins that we deployed are coming up. Oh, before that, just to show: I have three nodes in my cluster.
I have a master node, and I have two workers where SR-IOV has already been enabled in the BIOS. So I have three HostPlumber pods running, one on each node. I have a Multus that's running on each node, I have a Whereabouts that's running on each node, and we have also deployed the Node Feature Discovery deployment as well.
So now that these are all running, your next question might be: how do I actually use them? We just have these plugins running; that's only the first step before you can actually use them. So now we come to HostPlumber.
Here's just an overview of what all it can do. It can configure VFs, configure VLAN interfaces, configure OVS bridges and attach interfaces, and configure and view the IP addresses and routing tables of your nodes, with more planned for the future, such as enabling sysctl configuration and bonding interfaces.
As for how to use this, here's a comprehensive example that shows a lot of what it can do. You just create a HostNetworkTemplate object and give it a name. You can use node selectors to select which nodes it will target. In this case, you can see that I am applying a template that's going to apply to every node that has the SR-IOV label set to true and the foo=bar label set.
So where did this first label come from? Well, this came automatically from Node Feature Discovery, and that's why this plugin is included. If you deploy it, it'll automatically discover node capabilities, everything from SR-IOV to CPU flags, and append them as labels to every node. So, for example, if we describe a node with kubectl:
We have a lot of labels here under the feature.node.kubernetes.io prefix that list a lot of flags, all automatically added. The ones that we are interested in are the SR-IOV ones. The reason for that is that we're obviously applying certain networking configuration, and not all of our nodes may support this topology.
In my example, my master node is not as capable, so we don't want to configure anything on it and have it error out. By default, if you don't have any node selectors, it will go and apply the configuration globally to every node in the cluster. So now let's look at what this template is actually doing: it's configuring SR-IOV virtual functions and VLAN interfaces on the selected nodes.
The VLAN interface piece is needed because some CNIs don't support VLAN tagging of their own, MACVLAN being one example, so this allows you to automatically go and create VLAN interfaces on everything. And then here we have an example OVS config, where we create a bridge named ovsbr01. This is again possible because we deployed OVS using Luigi. It's actually going to create the bridge on each node for you, and it'll attach eno2.1000, which we just created up here, as the NIC assigned to this bridge.
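As a sketch, the VLAN and OVS parts of the comprehensive template being described might look something like the following. The field names under spec are assumptions based on the talk (the repo's samples folder has the authoritative schema); the node label shown is the standard Node Feature Discovery SR-IOV capability label.

```yaml
# Hypothetical HostNetworkTemplate sketch; spec field names are illustrative.
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: comprehensive-example
spec:
  # Target only nodes that NFD labeled as SR-IOV capable, plus a custom label.
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
    foo: "bar"
  vlanConfig:
    - name: eno2        # parent link
      vlanId: 1000      # creates eno2.1000 on each matching node
  ovsConfig:
    - bridgeName: ovsbr01
      nodeInterface: eno2.1000   # attach the VLAN interface created above
```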
So this is obviously a very comprehensive example; we're combining everything into one, and we're using node selectors. You can split this up into as many templates as you want. I'm just going to skim through this, but here's an example where we've split it up into just the SR-IOV config, and we have a different template for each particular NIC: one template that configures eight VFs just on the f1 NIC here, and another template on just the f0 NIC that'll create four VFs.
In this case, it'll go and select every Intel NIC, because that's the vendor ID for Intel, and every NIC that matches this 1520 model number, and go and create 32 virtual functions. And as you can see, again we have no node selectors here, so this is going to hit every node in your cluster and apply this config.
You can also configure by PCI address, where instead of naming the NIC or the vendor ID, you give the PCI address of each NIC; this is just another way to do it. In my case, for example, these two addresses would be eno1 and eno2, so I'd be applying this to two separate NICs, with 30 virtual functions between them.
The interface config allows you to configure IP addresses on nodes, as well as VLAN interfaces. For now, as I said earlier, we might add support for configuring bonding and any other Ethernet-level and IP-level configuration here. But note that IP address configuration only makes sense if you're targeting a particular node; otherwise you will run into IP conflicts if you apply this to the whole cluster. Still, this is a way to configure, say, secondary IP addresses on nodes, or set an interface MTU.
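A per-node address and MTU template along those lines might look like this sketch, pinned to a single node by hostname so it can't cause cluster-wide IP conflicts. The spec field names and all values are again assumptions for illustration:

```yaml
# Hypothetical sketch: secondary IP and MTU on one specific node.
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: node1-eno1-addr
spec:
  nodeSelector:
    kubernetes.io/hostname: node-1   # target exactly one node
  interfaceConfig:
    - name: eno1
      mtu: 9000                      # example jumbo-frame MTU
      ipv4:
        address:
          - 192.168.100.10/24        # example secondary address
```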
The VLAN interface support, as I mentioned, is one of the more useful features here, just because some CNIs may not support VLAN tagging. So here I'm creating four VLAN interfaces across two separate links: I've created three VLAN interfaces on eno2 and I've created one on eno1, and this will again apply to every node that's SR-IOV capable.
So now, let's actually go and configure the nodes. As I mentioned, I have two worker nodes here that are SR-IOV capable, and I'm not going to be demoing OVS.
So for now, let me just remove the OVS config. And let's see: the NICs where I had SR-IOV enabled are eno2. I'm going to create 8 virtual functions, I'm going to be using the kernel driver i40evf, and let's just go and apply this template.
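The pared-down template being applied here, with just the SR-IOV piece, might look roughly like this. It's a sketch under the same caveat as before: the spec field names are assumptions, so check the repo samples for the real schema.

```yaml
# Hypothetical sketch: create 8 VFs on eno2, bound to the i40evf driver.
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: sriov-eno2
spec:
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  sriovConfig:
    - pfName: eno2      # select the physical function by interface name
      numVfs: 8         # creates VFs 0-7
      vfDriver: i40evf  # kernel driver to bind each VF to
```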
So now we have eight virtual functions here, numbered zero through seven, present under the eno2 device. I have three VLAN interfaces on my node here, and on both of the nodes that the template applied to (only the SR-IOV capable nodes), it created the eight VFs on eno2 and the three VLAN interfaces. As I'll show you, I'll be using the VLAN interfaces for my MACVLAN network, and I'll be using the VFs for an SR-IOV based network. Great. So now, how do I actually use this? We've used Luigi to deploy all these plugins: HostPlumber, Multus, and Whereabouts; the IP reconciler is actually part of Whereabouts.
We've also deployed Node Feature Discovery to automatically label our nodes, and we've gone and configured the nodes to actually be able to use these features. So how do we actually create a pod? If you go to our GitHub, we have some samples that might help you create the Multus networks, as well as sample pods that use these networks. The samples that I just deployed are in the top-level samples folder.
That one, again, is to deploy the Luigi operator itself, and then here are some examples to actually go and configure your network. You'd change these to match your host spec. But now let's navigate to our first one. Let's do a MACVLAN-based network, so here we have a definition for a network.
Again, these are all specific to the CNI that you're using, or the IPAM you're using. In my example, we're using Whereabouts and MACVLAN. So I'm saying: create a MACVLAN network on this VLAN interface that I just created, we're going to be using Whereabouts, and here's my subnet definition, just a simple /24 network going out this interface. So let's go and create this MACVLAN network; I've checked out the repo, obviously.
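This part is standard Multus rather than anything Luigi-specific, so the network definition is a NetworkAttachmentDefinition along these lines. The name matches the whereaboutsconf network referred to later in the demo, while the master interface and subnet are example values:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: whereaboutsconf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eno2.1000",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.20.0/24"
      }
    }'
```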
So create the network. It's not going to do anything yet; no pod will deploy. This is just creating the actual Multus network. So now let's go and create a StatefulSet, just because that shows more of it, and I'll show you what a pod spec would look like as well. Here's the pod; note that nothing in it is specific to Luigi.
This is the standard Multus annotation, where you create a Multus network and then just specify the annotation with the name of the network. My network was actually called whereaboutsconf in this example, so in my pod, the annotation just refers to that network as the one network to attach it to, and this is just a simple Alpine pod, nothing fancy.
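Such a pod looks like this; the only Multus-specific part is the k8s.v1.cni.cncf.io/networks annotation naming the secondary network (the pod name and image are example values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod-1
  annotations:
    # Attach one extra Multus network by name.
    k8s.v1.cni.cncf.io/networks: whereaboutsconf
spec:
  containers:
    - name: alpine
      image: alpine
      command: ["sleep", "infinity"]
```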
Here's a StatefulSet example I had that will create three replicas; it's nothing fancy, just a StatefulSet of three replicas with some anti-affinity parameters. And about the network annotation I was talking about: as you can see, it's part of the pod template. You don't want to add it to the metadata of the Deployment, ReplicaSet, DaemonSet, or StatefulSet itself, whichever you have; you want to add it under the pod template's metadata.
So it's one level lower, and this is otherwise identical to the pod spec that I had earlier: same network, just a simple annotation that says which networks I want to attach it to. So now we deploy it.
It's in the default namespace, so we have a sample pod running, and it got assigned to one of our nodes. Now let's describe this pod with kubectl.
You see some events here: Multus attached the two interfaces to it. The reason for that is that with Multus, if you weren't aware, the primary CNI, whichever one you're using, will always be present in your pods; there's really no way around that. It's kind of like a default interface that appears in every pod you have, and that'll be eth0, whereas any secondary networks you attach will be present as the remaining interfaces.
You can see it has two interfaces and actually got two IP addresses: eth0 is its Calico interface, and net1 is the MACVLAN interface on the Multus network, which got an IP ending in .32. And now let me show an example for SR-IOV. Again, as I mentioned, we have samples here, so we just navigate to the sriov folder.
The first thing is that you need to create a config map, and this isn't a requirement coming from Luigi or HostPlumber; it's just something that's required by the SR-IOV device plugin. We went and created eight virtual functions previously on our host, and this basically tells the device plugin what resources and virtual functions to watch, and then advertise to Kubernetes as usable. So let's go to our example.
It's the same example, and it's just saying: create a resource named intel_sriov_kernel_1 and watch eno2 virtual functions zero through seven, any of them using the i40evf driver. You can obviously leave the VF selector blank and it'll watch all virtual functions on eno2, but this gives you a way, if you want to reserve some virtual functions for other purposes and only have Kubernetes manage some of them, to do that.
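The SR-IOV device plugin takes its configuration as JSON in a ConfigMap; a version matching the demo's setup might look like the following. The resource name is my reading of the transcript, and the "eno2#0-7" pfNames syntax is the device plugin's way of restricting a selector to VFs 0 through 7:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriovdp-config
  namespace: kube-system
data:
  config.json: |
    {
      "resourceList": [
        {
          "resourceName": "intel_sriov_kernel_1",
          "selectors": {
            "pfNames": ["eno2#0-7"],
            "drivers": ["i40evf"]
          }
        }
      ]
    }
```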
If you want to know more, you can actually go to the device plugin's GitHub and see the full list of configuration options and selectors that you can have; they get a lot more involved and complicated than this, but this is just a very simple example to show how to do it. Okay, so now, as I mentioned earlier, we've created our SR-IOV config map.
We created the Multus network for our SR-IOV network, and as shown earlier, we used HostPlumber to actually create the virtual functions and assign the i40evf driver on these nodes. But as you may remember, we didn't actually deploy the SR-IOV plugin yet; we had to configure the virtual functions first, and that was only so the device plugin could detect them upon startup. So let's go and enable the device plugin, and this also shows how to add a plugin after the fact.
Just uncomment it (we're going to be deploying SR-IOV with the default parameters), and then we just reapply or replace the NetworkPlugins resource. Deleting a plugin works the same way in reverse: say you've deployed a plugin that's managed by Luigi, like HostPlumber, used it to configure your nodes, and you don't need it anymore; you could just comment that plugin out and reapply the template.
Great, so now we have both the CNI running as well as the device plugin for SR-IOV running. The CNI is what's actually going to be doing the plumbing of assigning a virtual function to a container, working with the kubelet, and the device plugin is more to monitor what virtual function resources are created, allocated, and assigned to pods; that's used more for scheduling purposes. But the sriov section here will deploy both of these in tandem.
So now we have that, and we have all the missing pieces before we can deploy an actual pod. We already created these two resources, so now let's go and create the pod.
As I mentioned earlier, we have two pod definitions here. One of them is basically similar to the MACVLAN one: you have an annotation for which SR-IOV network you want. The other is an example where I have a static MAC address, and this is just to show the format of the network annotation when you want to add a static MAC. So let me create these four pods that I have here.
Okay, so now we've deployed these four pods, and we can see that they all deployed here. Remember, sample pod one was my MACVLAN pod that I showed earlier, and these are the four SR-IOV pods that I deployed. You won't see the SR-IOV or MACVLAN network reflected in the kubectl get pods output, for example; that's only going to show the Calico primary CNI configuration. The Multus networks are kind of outside of Kubernetes.
The API server isn't really aware of them, besides the network annotation. But let's inspect this pod again.
We'll see that, similar to the MACVLAN network, we've attached two interfaces to it thanks to Multus: we have eth0 from Calico, and we have net1 from our SR-IOV network. We'll see a similar network-status annotation applied to the pod, describing in detail the PCI address, the MAC, the IP, and the network name. And similarly, if you exec into the bash prompt of this pod, you'll see eth0 on the Calico network and net1 from the SR-IOV network, and everything should work.
So now we have a pod that's using SR-IOV, and it's assigned a virtual function with this PCI address. Finally, I just had one more quick thing to show: I talked about the HostNetwork CRD.
Up to this point, we've used Luigi, HostPlumber, and the HostNetworkTemplate to basically configure our nodes with the plugins and everything required to deploy a pod on these Multus networks. So now, how do we actually view the networking state? We might want to verify some things, and we may not be able to SSH onto each node to inspect this detailed information.
So how do we see the networking state of each node? We have another resource here, called just HostNetwork (not HostNetworkTemplate), and this is not really meant to be used by you, the user; the HostPlumber daemonset is going to manage it. So if we do kubectl get hostnetworks, you see we have one created for each node in our cluster, and the name of the resource is going to be the node name. So let's inspect the one for the master node.
For the master node there's not going to be a lot of information. As I mentioned, we don't have any workloads scheduled on it, we didn't create any VLAN interfaces, and we didn't create SR-IOV on it, because it's not enabled there. You can see these are all the NICs, or all the interfaces, on the host; sriovEnabled is false, and this lists all kinds of information like the MAC, MTU, and PCI address.
(I always forget to do -o yaml.) Great, so this is actually one of our SR-IOV nodes, where we created the VFs, so you can see there's quite a bit more information here. If you want to parse through it, you'll see, for example, that sriovEnabled is true on this particular device at this PCI address, and this is the PCI address for the eno2 physical function here.
So you can see the physical function's driver, name, and PCI address, and because we've enabled SR-IOV, you can see detailed virtual function information for all of them: how many virtual functions were created, and the PCI and MAC addresses for each of these virtual functions, along with other link-layer information. A VLAN of 0 here just means it's untagged; it hasn't been assigned. But as you remember, for our SR-IOV network we had assigned VLAN 1000.
So just from looking at this output, I know that it happened to assign virtual function number six; it's one of the virtual functions that was assigned, because we see that it has a VLAN tag of 1000 at this particular PCI address.
So
I
think
that
concludes
the
demo
here
for
how
to
use
luigi
and
post
plumber.
If
you
want,
I
suggest
you,
you
know
you
can
poke
around
at
these
repos
deploy
luigi
yourself
and
then
use
that
to
deploy
host
bumber
and
all
these
other
plugins
play
around
yep.