From YouTube: Episode 18: serviceType=LoadBalancer datapath (NodePort, ClusterIP, NodePortLocal), AVI VRFs on K8s
Description
A lot of folks are curious about how load balancers, NodePorts, and so on, affect the k8s datapath.
In this show, we'll:
- Look at the datapath for pods on a Kubernetes cluster
- Look at how load balancers work on clouds like VMware Tanzu (AVI) and GKE
- Do a quick KPNG project update
- Cover AVI, VRFs, and NodePortLocal vs ClusterIP vs NodePort configurations
- Cover how older GKE services (vs newer GKE natively routable services) work
A
There we go. Can you hear me? I see some folks are rolling in.
B
C
Okay, awesome. And what do we expect — Antrea, live? What's that? What are we expecting? What was his name? Someone else, I think. I don't know.
A
B
I am Eleanor Millman. I am a VMware PM in VMware Tanzu. The areas I cover are — I cover Velero, so I think lots about data protection, and I'm also the PM for Sonobuoy, which has brought me to this in the past, because Sonobuoy, I believe, is used to test networking connectivity. I don't know. So, all that to say is —
A
B
For all that I know Velero and Sonobuoy, and could talk your ear off about data protection, I know far less about networking. And Jay, I am so excited that you've agreed to let this be a beginner networking session, and I'm bringing my networking beginner questions. So I'm ready to grill you when you're ready.
A
So, but yeah — what's up, Vivek? Vivek is here, and he's like a total networking expert, like a really hardcore one.
A
So we got the Antrea people here, we got some telco folks. So what I thought I might do — and we are expecting our friend from AVI, so feel free to, you know, ping him, you're in that same channel, that upstream networking channel — GKE.
A
Yeah, please send him the StreamYard link. Tell him I'm sorry. Okay, so here we go, so I'm gonna connect. I just made a GKE cluster earlier, and this is not actually running Antrea — it's just running whatever the default CNI in GKE is. How do I do this? How do I open the running Cloud Shell? So I'm gonna open a Cloud Shell up and I'm going to go into it, and then — so, while I do this, you know, if the Antrea folks are in here...
A
Like, normally it's like happy hour for me and I'm like drinking an IPA on this show, but I kind of partied out, because I was in Miami for a week, so.
B
Whoa, that is truly adventurous.
A
So I'm just ready to get back to work and focus. So I wanna thank you for asking me this question, Eleanor, because I had no idea what we were gonna do on the show today, and I didn't have any ideas because I was in vacation brain.
A
B
Typically this is not part of the demo, right. This is a real-life unexpected situation, yeah, yeah.
B
D
A
I made this, and I assumed that I would not have problems if I just made a cluster in Google Cloud. I mean — so, kubectl describe pod. Let's see why these aren't up and running. But like, as we're getting started here, we can sort of start to learn some things about networking. Like, we can see here that in a standard, you know, GCE cluster, GKE cluster, we have —
A
We
have
cube
dns
running,
and
I
saw
that-
and
I
was
kind
of
surprised
because
coupe
dns
is
like
the
the
older
dns
and
core
dns
is
like
the
new
one.
That's
pluggable,
so
buchan
not
to
dump
a
totally
random
question
on
you,
but
do
you
have
any
idea
why
people
still
use
coop
dns?
Is
it
better
in
some
way
than
core
dns?
That
might
not
be
your
area?
B
Can I ask — so you're saying, are these Kubernetes defaults, or are you saying, when someone installs, how does someone decide between kube-dns and CoreDNS, like when they —
A
B
Okay, by the way, I can barely — I can't actually read the words on your screen. It's too pixelated, I don't know.
A
This is a standard kind cluster, so — this is an upstream cluster, and I have CoreDNS running in here. And the thing with CoreDNS is: if you do kubectl get cm -n kube-system, you can say kubectl edit cm, and I can say coredns -n kube-system, and it has this really nice Corefile, and I can sort of really customize the way DNS works in my cluster. And it's, like you know, the new DNS system that got put in place to sort of improve upon the configurability of sort of advanced DNS in pods and in Kubernetes.
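(For reference, a rough sketch of what's being poked at here — the exact ConfigMap contents vary by cluster, so treat the Corefile below as illustrative rather than what was on screen:)

    kubectl get cm -n kube-system
    kubectl edit cm coredns -n kube-system

    # A typical CoreDNS Corefile looks roughly like this; the plugins and
    # upstream forwarders are the parts you can customize per cluster.
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
    }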
A
So
it's
like
that's
kind
of
the
normal
thing
that
I
normally
see,
but
in
gke
that's
that's
not
the
case,
and
maybe
it's
because
they
don't
want
you
messing
with
gk
with
dns
in
a
gke
cluster,
because
it's
a
sas
right
so
anyways.
This
says:
there's
no
node!
So
now,
if
I
go
in
here-
and
I
I
there
should
be
a
way-
I
can
add
nodes
right.
A
So
if
I
go
in
here
chinchi,
I'm
back
from
pto
chinchi,
so
vivek
did
we
deploy
cni,
we,
I
I'm
I'm
gke,
I'm
doing
an
autopilot
gke
cluster
and
I
assume
that
gke
set
up
my
cni
for
me,
but
you're
right.
We
may
not
have
one
but
like
so
this
should
I
should
be
able
to
node
auto
provisioning.
A
It
says
it's
enabled
notifications,
maintenance
window.
I've
got
all
these
notifications
delete.
Okay,
I
deleted
those.
Just
try,
cube
ctl,
get
notes
and
see
if
yeah.
A
Definitely no nodes. Okay, so if I do kubectl get nodes -o wide: no resources found. So I think I can make this cluster bigger by — how about this, I'm going to work on this, and then, Eleanor, since Bhushan is here and he's like an AKO expert, how about you ask him a couple of those kinds of questions about the application data path? Right — like, I think one question Eleanor started with, which was a good one.
A
When I'm outside the cluster, right — so if I open my Miro board — where's my Miro — I have this Miro app that I use now. So if I open a Miro board — and where is it — yeah, let me sign in. Later, try again, here we go.
A
B
Can I actually translate that into a — to show the level — or actually, once you answer, then I may have to ask you to go more basic. So sure — that'll be the first question, sure.
E
Sure, maybe let me share my screen. I have it, yeah.
G
A
Even better, because then I can do some weird stuff on my machine while you do that, and try to solve the other problem I have. Scott's here.
A
Scott, why are all my GKE pods pending? Have you ever seen this? I can't figure out how to scale my nodes up. Or — do you see the slides? I do.
A
You got that done? Let me know, because that can take like a while — it's the whole macOS security thing. Okay, so in here, normally I'll give you some — so, are you familiar with the kube-proxy, Eleanor?
A
Yeah, so normally, in a normal cluster — and you don't see this, which is kind of the first interesting thing, and the reason you don't see this is because —
A
Well, I don't know why, but in a normal cluster, like a non-GKE cluster, you'll see — there's kubectl get pods -A — you'll see that there are these kube-proxy pods, right, and like you said, they run on every node, right. If I say kubectl logs — if I look at the logs for it, you'll see that it's in kube-system, right, you'll see that it's gone off and it's figured out what node it's on, and it's using the iptables version.
B
A
So it's a good question, right — like, where does it go off and get the node IP from, right? Like, we can poke around and see, right. So: tmux new -s eleanor — we'll name our tmux session after you. What's —
B
A
Okay, so if I do find . -name node.go, right — and is that — it's not even code, that's inside of — it must be this, right. It must be this file that's doing that, right. So if I go in here — okay, so how does it do that? So it looks like what it's doing is, inside of Kubernetes there's a utility function called GetNodeIP, and, you know, somebody's calling this. Let's see who's calling it, right. Well, let's see what it does.
A
First, right — how does it get that? So the way that it's doing this is, it's making a Kubernetes client and it's making a call to the API server, right. And then, as soon as it calls the API server, it goes off to the API server and it says, I want to get the list of all nodes. Okay, so it's doing the equivalent of this: kubectl get nodes. So —
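(For reference, a rough kubectl equivalent of what that lookup ultimately returns — the node list and each node's address from the node status; the jsonpath form is just one way to pull it out:)

    # List the nodes the way kube-proxy's helper does via the API server,
    # then pull out each node's InternalIP from the node status.
    kubectl get nodes -o wide
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'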
B
D
A
B
A
Yeah, and I have no idea why it's asking that question. So if I do git grep GetNodeIP, we can see who calls it. So this is interesting — oh, and actually, when the kube-proxy starts up, it takes its own hostname. So if I do — this has turned into an interesting show — I mean, kubectl —
A
— all right: kubectl get daemonset -n kube-system, right. So we'll do kubectl edit ds kube-proxy -n kube-system. Okay, right, and then let's go in here, and then let's look — is this an argument, like, am I telling it here? It is! So, okay: hostname-override, dollar sign NODE_NAME. So this $(NODE_NAME) thing is — so somehow that is getting plumbed in, that hostname override.
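(For reference, a rough sketch of the relevant piece of a kube-proxy DaemonSet — the flag and env names match what's being read off the screen here, but the surrounding fields are abbreviated and the image tag is illustrative:)

    # Excerpt of a kube-proxy DaemonSet spec: the node name is injected via the
    # downward API and handed to kube-proxy as --hostname-override.
    containers:
    - name: kube-proxy
      image: registry.k8s.io/kube-proxy:v1.27.0   # illustrative tag
      command:
      - /usr/local/bin/kube-proxy
      - --config=/var/lib/kube-proxy/config.conf
      - --hostname-override=$(NODE_NAME)
      env:
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName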
A
B
But is the answer that it's somehow using that to query the API server to figure out its own IP address? I mean — but really, my question is: why can't it just do that ip thing — you know, use ip or whatever you use just to get its IP? To me it's: why the heck does it have to go to the API server?
A
Why do we want to give it a name so it can figure out its IP address, right — I mean, those are kind of the two questions — or my version of your two questions. Yes — and sure, so one reason why I know that it might need its own IP address is because it's writing these low-level networking rules and it has to make these decisions. So if I go and I say docker ps — this is gonna work its way up into your original question. So, I don't know what happened to Bhushan.
A
Yeah, I'm in a node right now — make it bigger, okay — I'm in a kind cluster and I've just done a docker exec -ti into this cluster, right. Okay. So if I go /bin/bash and I look at these nodes — iptables-save. If I look at these rules, you can see — if I say kube — well, this might be the smoking gun right here.
A
A
So,
if
I
say
ctl
get
services
right,
then
there's
a
there's,
a
service
called
the
kubernetes
service
and
that
service
points
me
to
the
and
it
allows
me
it
allows
my
pods
as
soon
as
they
come
alive
to
be
able
to
ping
kubernetes
and
access
the
api
server
as
like
an
internal
thing
right,
so,
which
is
what
you
know
for
people
that
are
newer
to
kubernetes.
It's
kind
of
a
really
interesting
thing,
because
it's
like
being
able
to
like
reach
up
through
your
umbilical
cord
to
your
you
know.
B
A
No — I mean, I meant to say it is useful. It's okay! It's like the fundamental thing that makes Kubernetes different from Docker, right, is the fact that there's this DNS — to me at least, right. There's this innate concept of DNS in the cluster that's queryable and manipulatable right from the containers that you create inside of the cluster. So, discovery, right. It's the fundamental thing.
A
I
think
so
so
I've
got
1096.1
here
and
I'm
I'm
I'm
kubernetes
and
I'm
saying
this
is
a
service
and
anybody-
and
this
is
a
famous
ip
address
right.
Like
you
know,
this
is
the
first
ip
address
that's
allocated.
When
the
kubernetes
comes
online,
it
says
I'm
going
to
make
1096.0.1
the
you
know
the
the
the
start
of
my
service,
ip
ranges
for
new
services,
the
first
one
I'm
going
to
make
the
internal
one
and
that's
an
internal
kubernetes
thing.
So
kubernetes
makes
that
for
you
right.
A
B
A
Why doesn't it like that? kubectl create — I thought I could — oh, create service. I thought I could just create a generic ClusterIP service. You need to give it a whole YAML. Oh — kubectl create service example. Let me just grab a stupid, regular YAML service from somewhere. I have this — I know I have all my — all my examples that I use are here.
A
So
if
folks
want
to
follow
the
examples
that
I
use
on
this
show,
you
can
go
to
j
unit,
100
k8s
prototypes
tree,
and
I
have
like
all
these
like
stupid
examples
that
you
can
use.
So
here's
one
of
them,
here's
one
of
the
ones
that
I
always
use
four
of
us
and
it
creates
a
whole
bunch
of
pods
that
you
can
smoke
test
your
networking.
So
let
me
w
get
this
onto
this
and
I'll
create
this.
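(For reference, a minimal sketch of the kind of thing being applied here — a Deployment plus a plain ClusterIP Service to smoke test networking against. The names and image are illustrative, not the actual manifest from that repo:)

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: smoke-test
    spec:
      replicas: 2
      selector:
        matchLabels: { app: smoke-test }
      template:
        metadata:
          labels: { app: smoke-test }
        spec:
          containers:
          - name: web
            image: nginx          # illustrative image
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: smoke-test
    spec:
      type: ClusterIP
      selector:
        app: smoke-test
      ports:
      - port: 80
        targetPort: 80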
A
Okay — kubectl get service -A. So let's get all the services now. So now we can see we have all these services, right. So the first one, though, is 10.8.56.1, and then we have these other services, like — I just made default-http-backend, that's .57.253, and the other one's .56.10, and the other one's .57.92, and so there's all these new services being used.
A
But
the
first
thing
in
that
whole
range
is
that
and
and
that's
done
by
the
on
startup
there's
a
there's
like
a
there's,
a
you
know
the
api
server.
It's
such
a
fundamental
thing
to
kubernetes
the
api
server
sort
of
goes
off
and
makes
that
ip
and
reserves
it
as
the
internal
cluster
ip
and
then
in
this
iptables
rule
what's
happening
is
it's
saying?
Look
I
need
to
write
a
rule
so
that
if
somebody
hits
this
node
and
they're
trying
to
go
to
you
know
this
they're
trying
to
go
to.
A
Then
I
need
to
do
some
masquerading
of
the
traffic
or
whatever
right,
and
then
this
is
saying.
If
somebody
by
tcp
is
trying
to
access
the
internal
kubernetes
service,
then
I'm
going
to
write
a
destination
natting
rule
so
that
I'm
going
to
write
a
rule
that
says
your
new
destination
is
172.18.0.2,
so
anybody
who's
trying
to
access
this.
This,
like
this
service
right,
is
gonna
like
sort
of
get
sent
to
the
api
server.
That's
running
on
this
node,
and
this
is
the
node's
ip
address.
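(For reference, roughly what that looks like in iptables-save output on a kind node — the chain names follow kube-proxy's conventions, but the rules are simplified and the endpoint chain suffix and IPs are illustrative:)

    # Traffic to the kubernetes ClusterIP (10.96.0.1:443) is matched in
    # KUBE-SERVICES, jumps to a per-service chain, and is DNATted to the
    # API server endpoint (here, the control-plane node's IP).
    -A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
    -A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-EXAMPLE
    -A KUBE-SEP-EXAMPLE -p tcp -m tcp -j DNAT --to-destination 172.18.0.2:6443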
A
So
if
I
say
coop
ctl
get
nodes
dash,
oh
wide
right.
This
is
the
node's
ip
address,
so
these
rules
are
written
by
the
coupe
proxy,
and
so
this
is
kind
of
relevant
to
your
original
question
of
like
how
does
this
external
traffic
work?
Well,
external
traffic
still
return
requires
on
the
coop
proxy.
Even
if
you
have
something
like
avi
right
now,
buchan,
maybe
you
can
share
your
screen
and
show
yeah
show
that
I.
B
E
Can you allow my screen share?
A
Okay, now I can see your screen, Bhushan. Oh —
E
Okay, all right, all right. So yep, here are the slides that I was talking about. So, as Jay was saying, each node has its own kube-proxy running inside, which takes in traffic. So usually, when you expose the service out to an external client, you use either a NodePort or a LoadBalancer service.
E
It
internally
creates
a
node.
Even
when
you
create
a
load
banner
type
service,
it
creates
a
node
port
for
it.
So
the
service
is
basically
listening
on
some
the
nodes,
ip
address,
comma,
some
port,
so
that
port
will
be
the
same
across
all
the
nodes,
but
the
node's
ip
address
will.
I
will
be
different
depending
on
what
node
it's
on.
So
whenever
just
ignore
the
service
engine.
For
now
the
rv
services
are
not
here.
If
a
external
client
wants
to
reach
the
service,
you
can
directly
browse
to
the
nodes.
E
Ip
address,
colon,
the
port,
and
that
way
the
traffic
will
reach
the
node
and
on
the
node,
the
q
proxy
ip
table.
Rules
will
see
that
this
incoming
traffic
is
coming
on
this
port.
So
I
need
to
send
it
to
the
application
port.
So
it
will
make
the
last
hop
decision
of
sending
it
and
it
can
send
it
to
a
pod
on
the
same
node
or
it
can
send
it
to
a
pod
on
a
different
node,
and
this
is
where
the
external
traffic
policy
comes
in.
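(For reference, a rough sketch of what that looks like on the Kubernetes side — a LoadBalancer Service with the automatically allocated nodePort visible in its spec; the names and numbers are illustrative:)

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Cluster   # the default; see the discussion below
      selector:
        app: my-app
      ports:
      - port: 80            # the service port clients use
        targetPort: 8080    # the pod's port
        nodePort: 31234     # allocated automatically; same on every node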
E
No,
this
is
one
of
the
way
it
works.
This
is
the
way
most
of
your
layer.
4
load,
balancers
work,
but
with
avi
we
have
a
interesting
way
of
bypassing
the
q
proxy
altogether.
So
if
you
do
cube,
ctl
get
nodes,
minus
or
eml
right,
you
will
see
for
every
node.
There
is
a
mapping
between
what
the
pod
network
is
allocated
to
this
node.
So
each
node
will
have
a
slash
24.
E
You
can
change
it,
but
by
default
it's
usually
a
slash
24
network
and
allocated
to
every
node
within
your
cluster
and
ako
is
the
operator
that
rv
uses
for
integration
with
kubernetes,
so
it
syncs
all
of
these
from
kubernetes
api
server
and
then
programs
static
routes
on
av,
so
it
it
will
say
that
in
order
to
reach
a
pod
in
this
network,
the
next
stop
is
this
node,
so
that
once
the
service
engine
knows
all
of
these
static
routes,
whenever
it
needs
to
route,
I
mean
load
bands
to
any
pod
inside
the
in
in
your
cluster.
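(For reference, a rough way to see the per-node pod networks being described — the podCIDR that AKO would turn into static routes; the output format here is illustrative:)

    # Show each node's name and its pod CIDR (the /24 mentioned above).
    # AKO effectively programs routes of the form "podCIDR via nodeIP" on the AVI side.
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'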
E
What
it
will
do
is
it
will
send
out
a
packet
with
the
source,
ip
of
its
its
own
interface,
the
destination
ip.
Will
be
of
that
of
the
pod?
The
source
mac
will
be
of
the
service
engine
and
the
destination
mac
will
be
of
that
of
the
node.
So
what
happens
is
since
the
service
engine
is
on
the
same
l2
network,
the
packet
flows
to
the
node,
and
once
it's
on
the
node,
the
internal
os
networking
sees
that
the
destination
ip
is
actually
the
parts
ipad,
so
it
routes
it
to
the
destination
port.
E
So
that
way,
the
service
in
can
directly
reach
the
backend.
A
E
That's even how GKE works. So GKE has two modes. First is a routed mode, where GKE also learns about these routes and programs them in GKE networking, and the second is the VPC mode. VPC mode is completely different: it basically exposes these pod CIDRs into Google's network itself, so the pods become routable that way. But yeah, the older way GKE used to do it is the same way that AVI does, so even there they sync the routing tables.
A
Okay,
so
the
route,
but
that
next
top
is
because
I
got
distracted
and
I
was
answering-
and
I
was
doing
something
else
so
but
then
so
you
were
saying,
though,
that
the
next
hop
is
that
is
the.
Let
me
make
my
screen
bigger,
so
I
could
see
your
screen
a
little
better.
There
we
go
okay,
so
I
sub
oh
yeah,
that's
better!
So
my
I
come
in
and
I
come
in
and
I'm
a
user
and
I
want
to
access
my
app
and
my
app
has
a
vip
of
10.10.1.100..
E
A
E
This is how any proxy you use — any L4 proxy you use — will do it. Even in the case of other proxies, they usually use NodePorts, even like — yeah.
B
E
If you use ELB and all, it uses NodePorts on the Kubernetes cluster — so, in Amazon. So —
E
The service engine's — or the load balancer's — interface IP address.
E
You
we
can
there
that's
another
feature
that
we
can
do
so
we
cannot
directly
have
preserve
the
client
ip
on
layer,
3
packet,
because
we
want
the
return
to
go
through
the
service
engine.
If
we
preserve
the
ip,
then
the
pod
will
directly
answer
to
the
client
and
there
is
no
direct
tcp
connection
between
the
client
and
the
part,
so
that
will
fail.
E
A
So if you look at the Kubernetes documentation, it says "source IP for services with type..." — and so we had to read through this once — it was with Anusha. I had to read through this because we were working on this in KPNG, and we ran into this bug where there's this one test that fails, and it's 162 —
A
Right
and
the
reason
it
fails
is
because
there's
a
test
in
kubernetes
itself
that
confirms
that
you
always
preserve
the
source
pod
ip
for
traffic
through
the
service
cluster
ap.
E
Yeah,
this
is
a
cluster
ip
type
service
right.
We
were
talking
about
load,
balancer
type
service,
so
cluster
ips
are
all
internal
to
the
cluster.
A
E
A
I'm getting my service types confused — so this is only for cluster IPs, but for NodePorts, that's not a requirement; we don't have to preserve the source pod IP. That makes sense, yeah. Now this all makes perfect sense to me. Okay, cool. Okay, cool — so you can share again and keep talking through your data path.
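(For reference on the source-IP point above: the standard Kubernetes knob for preserving the client IP on NodePort/LoadBalancer traffic is externalTrafficPolicy: Local, which keeps kube-proxy from SNATting and only routes to pods on the receiving node. A sketch, with illustrative names:)

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local   # preserve the client source IP; only local pods get traffic
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080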
G
A
E
Yeah, so we talked about NodePort, how that works, and how AVI does direct routing to the pods. So most of the CNIs — whether you use Calico, Flannel, Antrea, Canal — all of those don't expose the pods as a routable entity outside the cluster. So —
E
Need
this
kind
of
a
mechanism
to
directly
reach
the
pod?
In
that
case
there
are
other
cni's
like
eks
uses
awscni,
and
that
makes
the
parts
routable
directly.
So
in
that
case,
any
load,
balancer
or
any
external
client
can
directly
ping
the
port
or
directly
reach
the
pod.
Without.
B
E
— source NATting or any kind of routing required; routing is built into the networking of the stack. So even the NSX-T CNI does the same: if you're running clusters over NSX-T overlays, you can have your pods routable using the NSX-T CNI. But without those routable CNIs, right, you are stuck with only two options — either NodePort or ClusterIP — and each of them has their own pros and cons.
E
So
say:
if
you
are
going
with
no
port
mode,
you
do
not
have
the
ability
to
select
which
pod
you
are
going
to.
You
are
at
the
mercy
of
the
cube
proxy.
So
wherever
you
proxy
sends
you,
you
have
to
go
there,
so
you
cannot
persist
your
request
to
a
certain
port
on
the
back
inside.
So
if
you
have
a
state
state
full
application
that
cannot
be
done
using
this
kind
of
a
method.
E
There are other ways to get around it too, but yeah — what you call — yeah, but are we still...
E
So
when
you
think
about
northport
right,
northport
is
exposing
the
service
outside.
It's
not
exposing
the
pods.
So
when
you
health
monitor
any
load-bencher
drive
which
help
monitors
an
output
type
service,
yeah
checking,
if
the
service
is
up,
it
doesn't
check
if
the
individual
parts
behind
the
services
up.
A
E
It's very AVI-specific — questions for other viewers.
E
F
A
So Vivek, I saw you joined — did you have anything specific, or did you just want to hang out with us? I'm really glad you came, by the way. Just —
A
Cool
all
right,
yeah
so
feel
free
to
interject
at
any
time.
So,
okay,
all
right
go
on
doug,
what's
up
hey,
so
where
were
we
bush
on?
So
you
were.
E
Just
I
was
talking
about
the
pros
and
cons
of
these
two
methods.
Yeah,
like
I
said,
the
persistence
breaks
down
when
we
we
don't
have
the
power
of
deciding
which
part
to
choose
when
we
are
doing
load
balancing.
Also.
E
— we can't health monitor all the way to the pods, because we are actually health monitoring the service, and that is what's being exposed. Those problems are solved by ClusterIP: since we can directly reach the pod, we can even health monitor at the pod level. But with AVI — this requires a separate SE group per cluster. No — now —
A
E
We did have — we did try out an egress proxy also, but that project got stalled, so right now we have not brought it back onto our roadmap yet. And yeah, we do have customer asks to do egress, but most of the customers who ask us are also exploring Istio, and Istio has a really good way to do not just egress, but even network policies to tie around it.
A
H
This is not something that you might see from too many enterprise accounts.
H
It's
it's
an
isolated
network.
Yeah.
You
have
multiple
isolated
networks
in
your
they
segment
your.
Why.
H
That
I
am
so
some
of
the
customers,
they
segregate
their
network
into
multiple,
isolated
ones
and
each
one
in
theory
can
have
their
own
subnet
and
it
can
be
overlapping
non-overlapping.
H
So
the
problem
we
are
facing
is
say:
you
have
paths
which
are
running,
and
one
part
is
a
part
of
the
vrf1
say
right:
the
isolated
network
one
and
there
are
some
parts
which
need
to
communicate
only
with
other
paws,
which
are
in
vrf2
right
now.
We
also
need
to
con.
So
one
thing
is
how
to
make
sure
that
you
send
out
the
packet
with
the
correct
source
ip
which
belongs
to
that
particular
vrf.
H
We can set up a separate session for that if you want — okay, it's a bit more complicated, and I don't have a solution. I was working with the NSX folks when they were doing some routable pod work — I was working with the PM — where we wanted to also introduce some kind of a VRF ID. They were trying to do something. I can try to work with the Antrea folks to see what's —
H
A
H
So we need to also consider how we make sure that this kind of communication is supported by Kubernetes, because, if I understand correctly, the fundamental construct in Kubernetes is that there's only one network, which it uses generally to talk to the outside world, right — and inside the network, it's like all pods can talk to each other.
A
H
And — well, you remember, Jay, when we were talking about Multus secondary interfaces — that's sort of this. So what some people do, the easy way out, is add secondary interfaces into your pods, and each of those interfaces connects to one of these networks, if needed.
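(For reference, a rough sketch of the secondary-interface approach being described — a Multus NetworkAttachmentDefinition plus a pod annotation. The names and the macvlan/IPAM details below are illustrative assumptions, not anything shown on the stream:)

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: vrf1-net
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "type": "macvlan",
        "master": "eth1",
        "ipam": { "type": "host-local", "subnet": "192.168.10.0/24" }
      }'
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-secondary-nic
      annotations:
        k8s.v1.cni.cncf.io/networks: vrf1-net   # attach the secondary interface
    spec:
      containers:
      - name: app
        image: nginx   # illustrative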
H
If you want to take it on — so probably Bhushan can help you, saying what are the VRF constructs available on the AVI side, if you want to do everything over the primary network — you want to use a single primary network to support multiple — oh.
A
And this is like a thing I didn't know I could do — so I can do this: ip — I didn't know I could do this, so I can just do this, ip vrf show; it's like a normal command you can run on any Linux node. Okay, cool. And OpenShift seems to basically be saying — is this what OpenShift is doing? Are they doing Multus here? I —
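(For reference, the Linux-side commands being alluded to — VRFs are a kernel construct managed with iproute2. A small sketch with illustrative device names and table numbers:)

    # Create a VRF device bound to routing table 10 and put an interface in it.
    ip link add vrf-blue type vrf table 10
    ip link set vrf-blue up
    ip link set eth1 master vrf-blue

    # List configured VRFs, and run a command scoped to one of them.
    ip vrf show
    ip vrf exec vrf-blue ping 192.168.10.1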
H
There is a cool thing that they are doing, Jay — you remember, once Doug joined us when I was also giving the session — from Red Hat, you know, the guy who was coming from a university, yeah — so they are working in the Kubernetes networking group on a project called multus-service. You can Google it if you want to, hopefully you get it. So they are trying to come up with a thing where they can provide some sort of services for secondary interfaces.
H
Yes,
but
still
that
will
cover
the
incoming,
but
we
still
need
to
worry
about
outgoing
if
you
want
to
do
it,
but
there
are
some
work
being
done
here
so
yeah
for
people
who
are
interested.
They
can
join
the
signaling
networking
group
and
that
they
are
talking
about
about
the
support.
A
H
E
Yeah, VRFs are basically a remnant of the legacy networking that people used to do on hardware routers. Now, with overlays and all, we don't usually come across that a lot, unless it's — it's still seen in environments where people use legacy networking, like in telco use cases, and also —
A
...about services, about why VRFs are not supported, and like — so we should probably — I don't think this is a bug report, but right now some folks are interested in multiple services that are on different networks — I'm sorry, pods — pods that are on different service networks, I guess, is what you're saying, right, in the same —
A
Like
you
want
orthogonal,
yeah
yeah,
add
a
doc
page
explaining
why
this
isn't
a
part
of
the
k-8s
service
model.
Maybe
it's
not
even
possible
because
of
the
data
model
for
service
ciders.
I
don't
know
not
sure.
A
I
think
there
are
fundamental
changes
in
kubernetes
services
that
would
need
to
happen
api
to
really
properly
support.
This,
like
you,
would
need,
maybe
to
add.
I
don't
know.
A
...issue. Cool — so, and then I'm going to reference your VRF issue that you were complaining about. I think this is hilarious, that you're the only person that complained about it. All right.
A
Yeah, cool. Let me add these notes with these links here.
A
Let me grab that link. Where is that — where is it? Where is it? Where is it? Where is it? Where is it?
H
A
E
Yeah, it's a networking construct. Basically, what VRF does is it breaks your router into parts — one router becomes two.
E
So
we
does
support
that,
but
whatever
it
does
not
support
is
bridging
across
vrfs.
So
your
if
you,
since
kubernetes,
has
only
kubernetes
by
default.
It
doesn't
support
vrfs
right,
so
there
it's
only
one
network
on
the
back
and
side
for
avi
and
if
you
have
multiple
whips
on
the
front
end
into
multiple
vrfs
that
won't
work
because
we
don't
pitch
across
vrfs.
E
A
E
So in telco use cases, we use BGP to advertise VIPs into certain selected VRFs. So everything on the Kubernetes side is in a single VRF, but once it comes to AVI, AVI makes a decision of where this VIP will be advertised. So say, VIP one for application one is sent to VRF one, VIP two for application two is sent to VRF three, and so on.
A
H
...CRDs, which say from which network you want a VIP, right, yeah. And then there are also knobs there to say, when you allocate a VIP from this — it's like an IP pool, right. So —
H
I didn't know you could do — you're taking it the other way. So it's the service which is annotated, because it's the service which needs to be exposed, and so, for that service, the VIP is then going to be advertised only to certain peers. Again, these are all controlled using some knobs. So you use, again, some label which you use to match between the value you have in the CRD and the value they have in the AVI controller —
H
— to say this VIP has to be advertised to certain peers, right, and it can be propagated upstream in that particular network. But, okay, this again only handles the incoming requests. So that's why, when you guys were talking about egress, I wanted to bring it in — that there is no native way to handle VRF egress, right, because when this pod wants to send out a request, we don't have an easy way to send it to the correct network.
H
A
H
There is only one network, looking at the VRFs and the networking, right — as Bhushan is saying, there's no concept of multiple networks at the Kubernetes layer, right. So there are only single interfaces on your Kubernetes nodes, yeah. So there are only multiple interfaces between the end user, or the client, and the service engines.
H
A
H
You're going to support multiple different VRFs, right. So your app one is exposed only to your blue network — like how you had in that initial diagram, right. So say that is your finance, so they can access app one; and if it's sales, which is your red app, they can access app two. The sales people cannot access app one, because the VIP belonging to app one will not be advertised in the red network.
E
Basically, you get VIP-level isolation: a client in VRF one can only access the VIP in VRF one, not the one in VRF two.
E
A
H
C
A
So we've gone off and done a bunch of — this is interesting. So I think we answered Eleanor's original question, but I want to do a general — and I don't know if we have a diagram — I think we do have a diagram in here, and I think we can just sort of show it to folks. Because — so this was supposed to be an introductory session, but it got really interesting.
A
So — well, no, AVI does layer seven, right, yeah, yeah — AVI is not just VIPs, AVI does layer seven — so we can call this AVI, right. So, normally, if you make a service — so this is for you, Eleanor, even though you're not here; I went through this with Matt Fenwick once, I don't know if Matt's here — but normally, you might make a service if you're a user, and then, after you make that service, that service has a status associated with it.
A
Okay, and if that service, like you know, has load balancer or ingress rules, right — then, if you look in there, these are associated with ports, and we're still in the services world, right. And so I can see this — so I can go over here and I can kubectl get service -o wide -A, and I can kubectl edit service kube-dns -n kube-system, and I can look at the whole thing, and I can see it has a spec and a status — and there's no load balancer, because it's not a service of type LoadBalancer.
A
It's
just
a
very
simple
cluster
ip
service,
but
in
any
case,
if
you're
making
a
service
of
type
load,
balancer,
then
you'll
have
this
status
in
here,
and
this
status
at
some
point
will
get
filled
up
by
your
cloud
provider
right
your
cloud
provider
in
in
obvious
case
avi.
Does
this
right
like
avi
goes,
and
it
writes
this
information
and
then
this
so
avi
literally
goes
in
here,
and
it
literally
goes
and
it
starts
adding
stuff
here.
It'll
add
an
ip
blah
blah
blah
it'll.
Add
all
that
and.
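(For reference, roughly what that filled-in status looks like on a Service of type LoadBalancer once the cloud provider — AVI/AKO here — writes it back; the names and address are illustrative:)

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080
    status:
      loadBalancer:
        ingress:
        - ip: 10.10.1.100   # the VIP allocated by the load balancer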
A
Avi
does
all
that
then,
when
you
do
coop
ct
you'll
get
service,
dash,
o
wide
you'll,
see
the
ip
address
of
that
service
and
then
buchan's
original
part
where
you
have
that
service
and
then
that
service
forwards
traffic
down
to
the
pods.
All
that
stuff
happens
and
that's
controlled
by
your
cloud
provider
right
and
so
then
obvi
will
then
go
off
and
it
will
send
things
to
the
endpoints
which
are
conveniently
down
here
because
of
a
different
diagram
where
you're
drawing
at
a
different
time.
A
But
ultimately
it
will
then
send
things
to
the
endpoints
of
of
the
services.
So
if
I
say
coupe,
ctl
get
endpoints
dash,
o
y
dash
a
you'll
see
that
this
same
kubernetes
dns
service
here
has
a
list
of
endpoints,
and
so,
if
I
say,
coupe
ctl
edit
end
points
well,
we
should
have
done
endpoint
slices
right.
A
So
I
so
yeah
forget
about
endpoints.
We
should
be
talking
about
coupe
ctl,
get
endpoint
slices
right
and,
if
I
say
coupe
ctl
edit
endpoint
slices
kubernetes
right.
If
I
go
in
here,
then
I'll
see
that
I
have
a
list
of
addresses.
This
is
an
optimization,
so
endpoint
slices
are
an
optimization
that
they
put
in
to
make
it
easier
to
do
large
scale
networks,
but
so
this
has
an
endpoint
right
172.18.0.1
and
we
can
see
iptables-save.
We
ran
it
over
here
that
endpoint.
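(For reference, a rough sketch of what an EndpointSlice object looks like — the shape follows the discovery.k8s.io/v1 API, with illustrative addresses and ports:)

    apiVersion: discovery.k8s.io/v1
    kind: EndpointSlice
    metadata:
      name: kubernetes
      labels:
        kubernetes.io/service-name: kubernetes   # ties the slice back to its Service
    addressType: IPv4
    endpoints:
    - addresses:
      - 172.18.0.1        # one backend (here, the API server's address)
      conditions:
        ready: true
    ports:
    - name: https
      port: 6443
      protocol: TCP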
A
This
destination
here,
it's
saying
people
who
want
to
go
to
the
service
endpoint
will
get
destination
added
to
this
ip
address
right.
So
that's
the
way
that
the
traffic
works
like
from
an
end
user
perspective
right
is
that
ultimately,
the
ultimately
your
endpoint
slices
get
like
there's
load,
balance
or
logic
that
gets
programmed
by
your
load
balancer
to
send
you
to
the
right
place
when
you
hit
nodes.
E
One
thing
with
the
cluster
ip
mode
that
we
talked
about
with
avi
these
ip
table
rules
do
not
come
into
picture
at
all
the
service
engine
because
of
the
static
routes
it
will
directly
send
it
to
the
pod.
E
F
G
A
E
I think we do it in the explanation, in the values.yaml.
H
E
This is the actual values file from the Helm chart — maybe open this one, I just sent it on private chat.
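(For reference, a rough sketch of the kind of setting being discussed in the AKO Helm chart's values file — the serviceType toggle between ClusterIP, NodePort, and NodePortLocal. The exact key layout here is an assumption and may differ between AKO versions; check the chart's own values.yaml:)

    # values.yaml excerpt (illustrative; key names may vary by AKO version)
    AKOSettings:
      clusterName: my-cluster        # illustrative
    L7Settings:
      serviceType: ClusterIP         # or NodePort, or NodePortLocal (with Antrea)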
A
E
C
E
A
Okay, that's more of a — okay, cool. So it's a licensing question. So, in general, you always would want to use ClusterIP, though, right — that's the better thing.
A
Always
want
to
use
cluster
ip
yeah
and
then
you,
you
still
need
coupe
proxy
on
your
cluster,
because
you
still
need
certain
things
for
egress
and
so
on
and
so
forth.
Right
yeah,
you
still
need
coop
proxy.
It's
just
that
for
all
your
external
services.
You
don't
need
coop
proxy
anymore.
That's
good!
Yep
cool!
A
I
didn't
know
that
this
is.
I
learned
a
lot
today
cool.
Well
thanks
for
showing
me
all
this,
so
we
were
supposed
to
be
showing
eleanor
bunch
of
stuff,
but
instead
you
got
to
show
me
a
bunch
of
stuff
but
anyways
I'll
sync
up
with
eleanor
later
and
help
her
talk
through
some
of
the
other
issues
he
had.
But
thanks
for
coming,
vivek
and.
A
And
bhushan,
this
is
great
and
let
me
get
rid
of
this
comment
here
and
thank
you,
everybody
if
you
are
enjoying
the
show
and
you're
learning
stuff
every
week
please
do
like
and
subscribe,
so
we
can
get
the
entry.
Keep
that
keep
the
keep
all
of
our
upstream
content,
like
you
know,
justified
to
our
management
chain,
so
we
can
keep
doing
it
every
week
and
not
working
so
all
right
thanks,
everybody
all.