Description
!!! Our THIRD antrea-LIVE show !!!
This time we're focusing on Multus, host-local IPAM, and Whereabouts - all household terms when it comes to what telcos are up to nowadays. If you've ever wondered *why* telcos are so into having multiple networks for a single container, and how the CNI community overall is adapting to these kinds of lower-level Kubernetes networking challenges, come hang out with us!
Hosts
- jayunit100 (VMware)
- yashbhutwala (staked)
Guests
- Vivek Seshadari
A
Empty networking pod stuff, where you've got a pod and it might have two different sorts of NICs attached to it, and all the rules. It totally changes all the rules for how you would basically think of, you know, all the non-Newtonian physics. Okay, so, news: Antrea 1.4 is out. Thank you.
A
Everybody, thanks for coming. Antrea 1.4 is out, and let's see - it looks like there's a few people watching, so feel free to say hi in the comments and let us know where you're from. The most exciting news in 1.4 is that it's got support for AntreaProxy as a full replacement of kube-proxy - a complete, total replacement of kube-proxy, right? So it does everything, including affinity, including NodePorts, LoadBalancers - whatever external load balancers - ClusterIPs.
A
Of course, ClusterIP has been there for a while; the really exciting thing is that it's for Linux and Windows both, so that's cool. And I think it's by default now - oh, it's disabled by default. So actually you have to turn on proxyAll in order to turn on all the proxying for all the endpoints. So, in the sort of continuing trend: now you've got Cilium, you've got Calico, and now you've got Antrea - all these different CNIs now are actually doing service-level networking for Kube. Antrea IPAM, okay.
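As a rough sketch of what turning that on looks like - the exact key layout has shifted between Antrea releases, so treat this as an assumption to check against your version's docs - the antrea-agent configuration carries something like:

```yaml
# antrea-agent.conf (embedded in the antrea-config ConfigMap) - assumed layout
antreaProxy:
  # proxyAll asks AntreaProxy to handle *all* Service traffic
  # (ClusterIP, NodePort, LoadBalancer), fully replacing kube-proxy.
  # It is disabled by default, as mentioned above.
  proxyAll: true
```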
A
So
a
lot
of
you
different
cni
version,
cni
people
are,
you
know
interested
in
things
like
well.
I
want
my
name
spaces
to
have
a
specific
ipool,
and
I
want
you
know
things
like
that.
That
kind
of
fine
love,
fine
grained
stuff
is
kind
of
now
flowing
into
andrea,
that's
really
important
to
enterprise
customers
who
want
tight
control
over
ip
spaces
and
things
like
that.
A
Configurable
transport
interface,
insiders
for
pod
traffic
configuration
barrier
entry
agent
to
choose
an
interface
by
network
siders.
I
think
this
is
important
for
windows,
because
at
least
in
vagrant
it
was
important
for
some
of
our
windows
testing
of
andrea,
I'm
not
sure
if
that's
related,
but
you
know
you
might
have
multiple
nixon,
multiple
gateways
and
stuff
like
that.
Udp
support
for
node
port
local,
that's
a
feature:
node
port
local
is
a
feature
that
allows
you
to
take
like
a
you
know.
A
If
you
have
a
single
pod
and
you
have
a
node
port,
local
is
like
one
of
these
right,
so
you've
got
a
pod
and
your
pod
normally
you'd,
expose
it
through
a
service
right
normally
you'd
expose
it
through
a
node
port
right.
Normally,
you'd
expose
this
pod
through
a
node
port,
and
then
you've
got
your
clients
over
here
that
access
that
node
port
and
then
that
node
port
service
forwards
you
to
the
pod.
A
And I guess there's reasons for that, when it comes down to wanting a more fine-grained definition of how you access a pod endpoint without having to make a global node-level service.
A
I think it's because the externalTrafficPolicy: Local stuff is sort of complicated and doesn't work in all use cases - like, there are certain types of load balancer scenarios where it doesn't do exactly what some people want it to do. There's a proposal; I can look it up, because I remember there was a lot of back and forth on it.
C
Yeah, this should also tie in with how you use your traffic policy, right? For example, if you're just going to do NodePort and you're going to have the traffic policy as Local - of course, you cannot have the pod on another host, because once you set the external traffic policy to Local, your kube-proxy iptables rules are programmed only to forward to the pods on the same host, right? And if you're going to do ClusterIP, you're going to have extra hops.
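For reference, the policy being discussed is the standard `externalTrafficPolicy` field on a Service; a minimal NodePort Service pinned to local endpoints looks like this (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc          # illustrative name
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
  # Local: only forward to pods on the node that received the traffic,
  # preserving the client source IP and avoiding the extra hop - but a
  # node without a matching pod will drop the traffic.
  externalTrafficPolicy: Local
```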
C
These are all things that we face in the telco world, where they want to avoid the extra hop. Even if you're using ClusterIP - say you use NodePort, you send your traffic to one of your worker nodes, but the pod is on, say, host two. If you hit host one, then it's going to reroute it again to host two, which is an extra hop. So whereas, if you do not set Local, you are going to directly expose it using a node port on the host where the pod is running.
A
Yeah, there was some debate about this, right? So there's external traffic policies - some people say externalTrafficPolicy: Local does kind of address this, right? But there are scenarios - I remember, because there was a proposal floating around where they said it didn't. And so externalTrafficPolicy: Local does, and hostPort too, right? Like, I guess, both of those - but in either one of those scenarios...
D
Yeah, and that's where what Vivek said makes sense, right? So even with externalTrafficPolicy set to Local, when traffic hits a node it would still need to translate into a pod IP through ClusterIP translation.
D
That's one extra hop, and in the telco world that's too much - too much delay. But hostPort just exposes the pod directly on the node. So I've wondered how hostPort...
A
I want to say hi to everybody. Pyrave is here - he's over from VMware. Doug Smith - hey Doug, what's up, I saw you on Twitter earlier. Scott Rosenberg - he's our friend, a very, very, very great partner of ours, and he's testing Windows on Kubernetes, so he's a very, very special person to me. Fry is here - what's up, how are you doing? Okay, from Texas; I'm from Oklahoma. Okay, and Andy Pierce from London, and Antonio from Italy. Cool - well, this is great.
A
I feel like we're starting to build a little bit of a community up here. It's the most people we've ever had on a show this early, so I really appreciate you all coming, and if you want to send me your emails, maybe we can send out some t-shirts - some Antrea t-shirts. So anyways, let's go on. So, hostPort - I don't know, this is an interesting thing.
D
You know, that was the exact same question I was thinking about - how is it different from hostPort? There might be a little more to it. Maybe something around moving pods between hosts and then keeping the OpenFlow rules in place so that pod movement can happen more easily, because hostPort is still specific to a particular host.
A
Well, I don't think hostPort works on Windows - maybe that's part of it. So, yeah, I don't think you can do a hostPort on containerd and Windows, and so if you use Antrea for your CNI on Windows and you use NodePortLocal, that means you can have the exact same behavior on both - you can use the exact same thing to get that behavior.
A
So, right, Luther - I'm right about that, right? Containerd hostPorts - you're not going to get a hostPort out of that, because there's no Docker bridge.
C
So the problem I was describing is: when you have, say, two pods for the same service running on the same host, somebody has to take the pain to go and make sure that the host ports they are populating in their definitions are unique. I don't think two pods can share the same host port, if I'm not wrong. Whereas with NodePortLocal you don't really go and configure the port at all - this is something that you get from that ephemeral range you have, between twenty-nine thousand and thirty-two thousand; it's automatically added by the proxy. So...
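For context, NodePortLocal in Antrea is opted into per Service via an annotation (the NodePortLocal feature gate also has to be on in the agent); a hedged sketch, assuming the annotation key as documented in Antrea's NodePortLocal guide:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc                          # illustrative name
  annotations:
    # Ask Antrea to allocate a dedicated node port per backing pod,
    # instead of one cluster-wide NodePort for the whole Service.
    nodeportlocal.antrea.io/enabled: "true"
spec:
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
```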
A
Yeah, so I'm going to go on the Antrea Slack once this meeting starts, and I'm going to see if we can get any of the Antrea engineers - because Jianjun wasn't able to make it today - to join us and give us a little tip on this. Now, let's jump to the NodePortLocal enable configuration - okay, we've already talked about that; we've beaten that dead horse. AntreaProxy skipServices: a configuration parameter for the Antrea agent to specify Services which should be ignored by AntreaProxy.
A
Okay, all right - so that's similar to what they have in kube-proxy, right? They have a similar thing where you can annotate a Service so kube-proxy skips proxying certain endpoints. So I guess you need to do the same thing for AntreaProxy. And: add support for toServices, which lets Antrea-native policy rules match traffic intended for Services.
A
User documentation for WireGuard encryption, and encap mode for EKS - so we've got some cloud-native stuff going into Antrea. Those of you that use other CNIs like Calico have probably thought a little bit about this whole encap-versus-no-encap thing. So I guess Antrea as the primary CNI of the EKS cluster is the major benefit. Bob is here.
D
You spoke too soon - I haven't played with Antrea as much, so let's see, let's read it slowly: when Antrea is the primary CNI, pods are connected to the Antrea overlay network, and pod IPs are allocated from the private CIDR that is configured for the EKS cluster, and so the number of pods per node is no longer limited by the number of secondary IPs per instance.
D
So when you use the Amazon VPC CNI, right - Amazon's CNI IPAM - there is a limit: for each pod interface that you create, it creates an elastic IP of sorts that is associated with an instance, and that's how the Amazon VPC CNI works. So I'm guessing there's a limit to that, and going this way allows you to get around that limit.
A
I don't know if we have any Antrea engineers with us today - it might just be me, and I'm not an Antrea engineer - so I'm assuming it does the same thing that the other CNI IPAMs do, right? Let's take a look. So if I have this IP pool here, I could literally define an IP pool, and then all the pods and namespaces that are annotated with that...
A
So if you annotate at the namespace level, it means all the pods in that namespace will come up that way, but I'm assuming you can also add that annotation to a pod - and the only reason I know this is because we had a customer that wanted to do this the other day. Where are the docs for this? Let's see if we can look here. So if I go here, and I go to my favorite thing in the world, grep.app, and I look and see what these annotations are - "a pod's IP will be allocated..." - oh, that's the changelog, we're already there. Okay, so we need the IPAM annotation key. So if we look in here and we look through the code, can we find where - let's see - and then let's go into the unit tests and see. So...
A
So I guess the question I'm going to have here is whether or not I can just leave it. Let's see if there's an issue for it - so, "IPAM pod annotation", closed. What happened, essentially? Okay, so the original feature was centralized IPAM based on namespace; Antrea relies on host-local IPAM. Based on Calico - yeah, because you could do this in Calico, it's pretty common, right? And in OVN I guess you'd do that too. "I propose an IPAM method based on namespace isolation."
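As a sketch of the namespace-scoped flavor being read out here - assuming the `IPPool` CRD and annotation key that Antrea's flexible-IPAM feature introduced (the exact API group, version, and field names vary by Antrea release, so check your version's docs) - it looks roughly like:

```yaml
# Assumed resource shapes from Antrea's flexible-IPAM feature.
apiVersion: crd.antrea.io/v1alpha2
kind: IPPool
metadata:
  name: ns-pool
spec:
  ipVersion: 4
  ipRanges:
  - start: 10.2.0.10
    end: 10.2.0.100
    gateway: 10.2.0.1
    prefixLength: 24
---
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  annotations:
    # Every pod created in this namespace draws its IP from ns-pool;
    # the same annotation can reportedly be placed on individual pods.
    ipam.antrea.io/ippools: ns-pool
```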
A
So I think we're ready to get into today's topic, which is, like, super exciting: Multus, and Whereabouts, and host-local IPAM, and all this weird stuff that only telco people know about. But we have two people that are, like, sort of telco - well, one person that's really telco, Vivek, and Arun, both, right? I don't know - I think, Arun, are you doing telco stuff all the time now too?
A
Oh, Doug - I didn't - okay, Doug, you work on Whereabouts, cool. Well, Doug, if you're upstream and you want to join the stream, just let me know - send me a DM and I'll add you to the stream; we'd love to have you talk us through some stuff. Hi Susan - Susan is my friend.
A
She helped me set up this show - she runs all the Antrea stuff, like all the sort of media and all that, so thanks, Susan, good to see you here. Okay, cool. Arun, I think we lost you, but that's okay!
A
Me and Xudong are on the same team - Xudong is - I'm sorry, no, that's Lubron; I thought it said ZR. Okay, it's Lubron! Yeah - I mean - wait, me? Yeah, this is Lubron. Okay, your name is Ziaron, but it says Lebron - okay, cool, that's the English pronunciation, I guess. Cool, so me and Lubron are actually also on the same team - me and Lubron work on TKG networking together, and he's here with us today, which is great. So while Vivek is setting up, does anybody have anything they want to talk about or ask us about?
A
You know, something we can look at, because it's kind of interesting: there was an issue I remember - Kubernetes Services with no proxy routing. There's an annotation - it's probably useful for people to know - a rule, an annotation you can put on Services where the Service is not proxied by kube-proxy, but I forgot what the annotation is. I learned about it the other day and I was kind of surprised to see it.
A
If you turn that on, then a Service won't get plumbed through the proxy - and I think, like - but I don't remember what it is anymore.
G
Okay, that's the wrong code base.
A
Yeah - service-proxy-name, yeah, this one. So if you set the service-proxy-name label - if you use this one, right - then, I think that's it, yeah, service-proxy-name. So if you set service-proxy-name, then you can skip proxying.
A
"Proxies should implement service.kubernetes.io/service-proxy-name." So we create Services, and when you create one with that label, it's disabled, right? So it's a little trick for disabling kube-proxy for certain Services: you can still create a Service, but it'll be skipped, and then you can use a different proxy to handle those requests on your node. So, yeah - Vivek, how are you doing, you ready? Good.
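That label is a documented Kubernetes convention; a minimal illustration (the Service and proxy names are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc                                    # illustrative name
  labels:
    # kube-proxy ignores Services carrying this label with a value it
    # does not recognize as its own; an alternative proxy implementation
    # can watch for Services labeled with its name and program them itself.
    service.kubernetes.io/service-proxy-name: my-custom-proxy
spec:
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
```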
A
Cool, I'll add that to the show notes. If folks want to help me with the show notes, or add questions to them - Susan says hi, Lubron - I'll put the link in the show notes here, so you can join us on this HackMD file and you can add notes or add questions in there.
C
Okay, you guys - okay, I can see the screen myself. Okay, so before we go into the testing, I just wanted to touch base on CNI, for folks who don't know what it is. This is the specification which is used for handling the network interface component of all your containers, and it defines certain CNI operations which are used during the different lifecycle stages of your pod to, you know, attach network interfaces to your containers or pods.
C
So what I would like to go over now is the need for these secondary interfaces in telco. Hopefully I just put up a three-slide thing. So, first: why do we need Multus in telco use cases? One is - this is what I was explaining last week - that there is a need for supporting multiple networks in most of the telco accounts.
C
They would ideally like to separate, say, their management and traffic networks, or they would like to separate things at an even more fine-grained level: a separate network for signaling, a separate one for the network functions which are running in the data plane - they want to carry the data-plane traffic on a separate network. So there's really a use case where the telco operators want to support multiple networks which are ideally isolated from each other. And then the second thing is, as I was saying, we are trying to avoid as many hops as possible.
C
I mean, as you know, when we use the CNI interface - for example, based on the way you place your pods and the way you configure your Services, there might be multiple hops which a packet takes. For example, it can land on one host, and then, if you are using a ClusterIP and the pod is on another host, you may have to take one more hop to get to the pod - which you would like to bypass as much as possible.
C
And of course, this is also the main way by which interfaces based on SR-IOV or DPDK - or EDP, something which is VMware-specific - these kinds of interfaces, if you want to present them inside your pods for, say, higher network performance, you leverage Multus to present those secondary interfaces inside said pods.
C
So these are some of the potential use cases. And how do we present multiple interfaces inside a pod? Mainly, we use Multus - it's mainly Multus, right - and, of course, CNI also provides you with a lot of...
A
Somewhere we can look at - because Jianjun said - the Antrea folks left us some notes here, and he mentioned - I don't know where the notes are - that the Intel and Antrea folks are working on some SR-IOV-related integrations, and he sent a PR for it. But I didn't really know how to describe it.
C
So if you see here, SR-IOV is a way by which you have a physical interface on your host, and when you are running VMs, or pods in your VMs, this is the way by which you present those interfaces directly inside your virtual machine. If you see here, you have your virtual NICs, and these are directly presented inside your virtual machine, with direct memory access - DMA - and all that stuff, so that you can...
C
The
applications
running
in
your
vms
can
directly
access
those
vms
in
the
user
space
and
they
can
handle
the
networking
operation
tag.
The
input
output
operation
there.
So
this
is
the
way
by
which,
especially
sriv
is
a
way
by
which
you
have
one
physical.
They
call
it
as
virtual
links
or
vfs,
and
you
present
these
vf's
directly
inside
the
virtual
function.
So
whenever
a
packet
is
sent,
it
doesn't
again
go
through
the
host
kernel.
It
will
directly
go
from
the
virtual
machine
to
the.
C
There are NICs from Intel which support this - a specific family of Intel NICs which support this technology - and it's becoming a bit more widespread.
C
So generally this will be used - of course I said telco, but anywhere they want to avoid the unnecessarily high latency of going through the host's kernel again, because you're directly bypassing it. So anywhere you need higher network performance, like reduced latency, you will generally go for this.
A
Yeah, all right - okay, sorry, keep going, keep going.
C
You can also sort of use macvlan or ipvlan - like, you still have one interface attached to your VM, and then you can use macvlan- or ipvlan-based interfaces to present secondary interfaces inside your pod. And this is also used by multiple telco customers, because each interface inside your pod is ideally connected to the different networks that we were talking about in the earlier slide - one for signaling, one for OAM, one for actual data traffic. So they will have multiple interfaces inside, and of course Multus provides the ability to add multiple interfaces to your pod, which, in terms of the CNI spec, is just chaining, right?
C
So
when
you
create
a
part,
you
say
you
create
your
network
attachment
definition.
This
is
basically
the
way
by
which
you
attach
the
secondary
interfaces
inside
your
pod.
This
is
a
crd
which
is
provided
by
multis
and
then
again
you
can
add
multiple
interface
secondary
interfaces.
If
you
need
inside
a
part.
So
let
me
show
that.
C
So what Multus provides is a CRD called NetworkAttachmentDefinition, and you define your secondary network's characteristics there; and then, when you annotate your pod with the name that you give here for the CRD, Multus will make sure that it is included in the CNI chaining, and it will add that secondary interface - and of course you can call multiple plugins.
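Putting that together, a minimal macvlan NetworkAttachmentDefinition plus the pod annotation that references it looks roughly like this, following the Multus quick-start guide (the master interface and subnet are placeholders for your environment):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  # The embedded CNI config that Multus hands to the macvlan plugin.
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "master": "eth0",
    "mode": "bridge",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.1.0/24"
    }
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
  annotations:
    # Multus reads this annotation and chains the named attachment in
    # as a secondary interface (net1) alongside the primary CNI.
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
```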
A
Doug's here - how are you doing, Doug?
E
I'm doing great, thanks for having me on - I appreciate it. I'll be back; I don't mean to slow your roll, but yeah, happy to pitch in where I can.
C
So, yeah, as we were saying, we use NetworkAttachmentDefinitions to attach secondary interfaces inside the pod, and Whereabouts leverages that, right? For example, if you're using Whereabouts, this is a typical example that they give, where you create a NetworkAttachmentDefinition - this is just a name for you, which you have to annotate your pod with.
C
It conforms to CNI version 0.3.0, and then here is where you say what type of interface you are adding - and the types which are supported are on the cni.dev page. If you go to the plugins, you will see them there: you have support for ipvlan, macvlan, ptp, host-device, and so on.
C
They use that type name to execute the binary. Yeah - so they have different types. I'll probably touch on these two today, the ipvlan and macvlan plugins; if possible I'll just cover macvlan, and then, if needed, we can go into ipvlan. And then - sorry, let me go back here - Whereabouts is all about this section: when you want to program your interface with IPs, and, say, some network characteristics like routes that you want to program inside your pod which wants to leverage the secondary interface.
C
That's where this IPAM section comes in, and Whereabouts is one of those. So - Doug, you can correct me if I'm wrong - the whole reason why they came up with Whereabouts is certain limitations we have in the IPAMs supported by the CNI community as such. As you can see here, we basically have three IPAMs which are supported out of the box from CNI: dhcp, host-local, and static.
C
Each one has its own scope. For example, if you say static, it's really just a single IP or a fixed set of IPs that you configure. And then this IPAM section you see here - this is the section you'd use if, for example, you are using static instead of Whereabouts.
E
Yeah, you nailed it. And a quick backstory, just to add a little bit of color: the original inspiration for Whereabouts is from the Multus quick-start guide. We have an example there that uses host-local IPAM, and basically people would use it and go, "Hey, this is great" - and then they would go use it on two nodes, and they would get a duplicate IP on the second node, because host-local only does allocation per node. And I'm like, geez, it seems like more and more people have this kind of requirement - they don't want to use DHCP, for whatever reason; maybe you don't have it on that isolated network that you're creating as a secondary.
E
But you want to dynamically allocate from a pool - so that was kind of the initial inspiration, and it actually started as a fork of the static plugin and just built on that. But yeah - that IPAM section is like a whole spec from the CNI maintainers themselves, so it's easy to just build yet another CNI plugin that's executed like other CNI plugins, and the API just says, "Hey, this is the stuff you pass back and forth."
C
Right, so he's talking about this. Okay, this is the configuration, but I can go to the spec here - this is the one - so each plugin, when it is called, has to give its output in this particular format. So, just to add to Doug's point: if you look at these three - dhcp, host-local, and static - and you look at the scope, static is really just for a single pod, right?
C
So if you are going to use static IPs, you are going to create one NetworkAttachmentDefinition per pod that you want secondary interfaces for. And if you take host-local, as Doug was mentioning, that is really at a single-node level: you create an IP pool which can be used for IP allocation within that particular Kubernetes node.
C
You cannot share this pool with another node, because then you're going to end up having pods with duplicate IPs on different nodes for the secondary interfaces. And Whereabouts is something that works at a cluster-wide level, I would say: for each cluster, you go and create a NetworkAttachmentDefinition, and then, once you create it and you let Whereabouts run, any pod which wants to get an IP for its secondary interface - if it refers to this NetworkAttachmentDefinition, Whereabouts makes sure that each pod gets a unique IP.
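A sketch of the Whereabouts variant, following its documented config shape (the range and exclusions are placeholders); the key difference from host-local is that allocations are tracked cluster-wide, so pods on different nodes never collide:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-whereabouts
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "master": "eth0",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "192.168.2.0/24",
      "exclude": ["192.168.2.1/32"]
    }
  }'
```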
C
So if you watch the allocations, they just go progressively higher. That's the way you can use these IPAM services. But of course each one has its own limitation - or, I should say, kind of a drawback - when you consider how they are going to be used in production. Say, going back again to the telco world, we have hundreds and thousands of cell-site nodes which need to leverage this, because most of the network-function providers in 5G - especially in the containerized world, on the RAN side - are leveraging secondary interfaces.
C
A cell site is where you have your actual radio tower, I would say, just to give an easy comparison - and they have containerized the software which runs there and made it run on top of Kubernetes. So now, when they run their software, they want to leverage secondary interfaces, because the network requirements at the radio site are very high - they really cannot take any latency; the latency requirements are very stringent.
C
So when you come back to this IPAM picture, the problem we have is: if you use the static way, for each software instance that is going to run on these sites, you need to create one NetworkAttachmentDefinition - and if you're going to talk about a thousand nodes, somebody has to go and create a thousand of these NetworkAttachmentDefinitions, make sure they are created with the correct IPs on the correct nodes, and then go program them.
C
If you take host-local, again, it's at a node level, but it will have the same limitation: when you take it to thousands of nodes, you need to configure all of them. And if you go to Whereabouts, it's a bit better, because you define it at the cluster level - so if a cluster has, say, a thousand nodes, you're defining it once. It depends again on the different infra vendors; some of them support one Kubernetes cluster per node.
C
Whereabouts is more of an IPAM, right - it's just the IPAM part. The secondary interface itself is handled by Multus, with probably one of these plugins.
C
Correct - so by default, if you have seen, mostly people use host-local. I think Antrea used to use this in an earlier release, where you define IP pools per node and then leverage the host-local functionality to hand out IPs. And if you take Calico - since they're trying to compare - Calico has its own IPAM, which it will use for that. So it's mostly about the IP address management for those secondary interfaces, right? The CNI spec as such is agnostic as to what you use.
C
It supports it, as of now, at the namespace scope - but probably we can extend it to provide something like that.
C
And then, of course, Whereabouts is again per-cluster, whereas DHCP is one potential solution where you have a centralized DHCP server which can handle your addressing needs. So this is possibly something that you can take out to the field when you deploy - this is something that is of interest to us. But yeah, in decreasing order of preference: DHCP, Whereabouts, host-local, and static.
C
Even though they are leveraging the Kubernetes solution as such, the way they deploy and manage the lifecycle of the software which runs at the cell site - or anywhere - is not really going to be ephemeral. They deploy a network function, and it's going to stay there for some time, especially when you're using secondary interfaces.
C
Your whole concept of a Kubernetes Service goes away, right? Because there's no Kubernetes Service that's going to map to your secondary interface - there's no programming which occurs at the kube-proxy level. So the problem we have is that these functions are going to be there for a longer time, and say we have multiple networks, and say the customer wants to extend the network with an extra subnet - he's adding an extra subnet to that particular network.
C
As of now, there is no way we can update that. And if you see here, there is a way you can program routes into your pod's secondary interfaces - some of the plugins support it, some don't. This is again one of the capabilities which is needed, so that's one thing we need to check in Antrea: whether it supports it. If it's not supported, this may not be really usable, because these routes are how you effectively say, "Hey, if you want to reach these network subnets, use this interface."
E
There's some work in progress as part of the network plumbing working group - the group that maintains Multus - and there's a discussion ongoing about hot-plugging and unplugging interfaces.
E
There's
at
least
one
fork
of
multis
out
there
called
cactus
that
has
done
this
for
quite
a
while.
It's
based
on
sort
of
an
old
version,
but
yeah.
If
you
check
out
the
multis
repo
there's,
even
recent
pull
requests
related
to
it,
but
keep.
E
CNI plugins are typically kind of a one-shot process: the binary kicks off, it does its job, and it exits. We refer to that as a "thin plugin", and that's the way that Multus works, and, like, all the reference CNI plugins that you mentioned - macvlan, ipvlan, bridge, all that stuff.
E
Also, there's another discussion that's ongoing about CNI version 2.0, and I'm curious what we'll find out from the CNI maintainers about what winds up in there - but I have a feeling that lifetime operations will probably be an inspiration.
E
Yes - so Casey Calendrello, who's one of the maintainers and the originators of CNI, gave a talk about it at the virtual KubeCon.
E
Yeah, so he's been going around the community to socialize it and see what people have to say, but I'm pretty sure that once the new year rolls around we'll start to see a lot more from the CNI maintainers on it.
E
So that's definitely been a discussion - I'm not exactly sure it's going to go in there, but it sounds kind of exciting to me. And the idea is that you would naturally get that kind of long-running-process type of communication, with gRPC, right? Right now - if we didn't mention it - CNI plugins just speak via standard in and standard out; that's the API over which they talk. So the improvement that's possible would be gRPC, first of all - you know, so, yeah, I think that would be awesome.
A
If you did that - yeah, I'm assuming, for now, okay. But I think it's probably a good question: I assume people taint nodes, or they add tolerations and taints, and they just use the scheduler for that the same way they do for other hardware - but I don't know. His question was: how do you schedule onto an SR-IOV-enabled node?
E
Oh, and you know what - for SR-IOV there's an SR-IOV device plugin, and what device plugins do is talk to the Kubernetes scheduler and give it awareness of what resources have been utilized, so that way, as part of the process before a workload gets scheduled, the scheduler knows.
E
No problem. You may find that you have to - I mean, depending on what you're doing - you might need some pre-setup on your nodes beforehand, to get all the drivers and kernel args and all that stuff right. But once it's all set up, during runtime it should be fairly seamless with the device plugin.
C
Yeah - so let me just take one more minute to finish. We have seen that, in telco at least, not all IPAM plugins support IPv6; some of the telco operators are trying to move to IPv6, and so we are again limited in the IPAMs we can use.
C
Okay, and then the third thing is: there is no IPAM now which has something like a lease timer, which you used to have in DHCP, right? Nobody has implemented something like that now - where, say, a pod is running on...
C
Stateful - so it depends, right? It depends on the network function: some of them use Deployments, some people use StatefulSets, but the majority of them, at least from what we have seen, use Deployments. So we have this potential issue too, because if...
C
Not for the primary - we're not talking about the primary, we're talking about the secondary, when we leverage one or another.
C
That's what I was saying - you have to manually populate it. kube-proxy doesn't do it for your secondary interfaces, because it doesn't even know about them, right? You can see them in the pod's status field, if I'm not wrong, but other than that they don't get reflected in the Endpoints. So there's no way your Service can work, or create any plumbing, for your secondary-interface IPs. And also...
E
Some of that functionality, the network plumbing working group - which specifies the stuff Multus is built on - is looking into, too. One thing for sure: one of the original goals with Multus was to have it just be an absolute sidecar to Kubernetes, so that you didn't need to modify core Kubernetes - you just use it.
E
You've got an extra interface - there you go. However, as you mentioned, yeah, some of the Kubernetes-like functionality that you might expect...
E
Isn't
there
so
network
policy
and
services
being
right
up
there,
there's
as
part
of
the
like
github
org,
the
kate's
network,
plumbing
working
group
you'll
find
a
network
policy
extensible
application
that
currently
works
for
macvlan
for
sure,
but
could
be
extended
to
other.
So
the
network
plumbing.
E
...working group - so we do a fair amount of that too, yeah. That's one thing; there's also a lot of traction on some work with services for secondary networks there.
A
Well, I mean, I hope you'll come back again, and we can do some hacking around on, like, an Antrea-plus-Multus environment. That would be cool, yeah.
C
I do have it - unfortunately, we didn't have time to go through it, so that's fine. You can call me anytime and I can come back, and we can do a live session where we try to use Whereabouts and Multus to assign these secondary interfaces. Yeah, Doug.
A
And, yeah - even though it's technically the Antrea live stream, it's, you know, hopefully more of a community event than anything else. So, let's...
A
And so, Vivek, thanks again - you've helped me out on a few of these, and I appreciate you always being willing to jump in. And Steph and Doug, thanks for coming at the last minute; and Arun and Luther, I don't know what happened to y'all, but thanks for coming. I hope Arun's okay - I don't know what happened; his computer crashed. And we will see you all - I guess, anything else you want?
A
Well, thanks, everybody, for coming today - and watch this show over again; there's a lot of stuff that Vivek and Doug went over that you can learn about. So thanks, Doug - you're over from Red Hat, right?
E
Yeah - Antonio is awesome. We used to be on the same team; I think he's moved over to a team that's more K8s-API focused.
A
Yeah, definitely - let's keep in touch, and maybe you and Andrew come and do a show with us sometime.
F
That sounds awesome, yeah - thanks for having me again, and yeah, great show. Thank you.