Description
Come hang out with Xinqi and Bhushan! We're going to look again at Avi and the implementation details of the AKO operator that manages load-balancing infrastructure for you in Tanzu. We'll also talk about Avi in AWS and Azure.
Specifically, we look at the NodePortLocal, NodePort, and ClusterIP implementations of the Avi load balancer, how it integrates with services like Route 53 and NSX, and so on.
We also spend some time talking about the Contour vs. Envoy vs. service mesh question, which seems to confuse a lot of folks.
A
Hey, what's up guys, I'm Xinqi, and welcome to this week's Antrea Live show.
A
Okay, on this channel we've got a weekly live show talking about the latest updates on the Antrea CNI, but this channel is not just about Antrea.
A
We invite Kubernetes networking experts from VMware every week to share their knowledge with us, and I'm sure that you'll learn a lot about Kubernetes networking, as well as VMware products like Antrea, Avi, and Tanzu, by watching our videos, so do subscribe.
A
Yeah, for example, last week Zach shared an awesome tool to visualize service availability in a Kubernetes cluster using tables, so go check out that video if you're interested; you can find it on our channel.
A
So if you are new here and like this kind of content, consider liking our videos and subscribing to the channel. Okay, let's see who is joining us today. Everyone say hi in the comments, so I can know you're here.
A
Yeah, Zach is here, and Jay just shared the link to the Kubernetes service validator in the comments. We have Yang joining us today from the Antrea team, and we also have new friends, and Antonin. So Antonin, would you like to give us a short introduction about yourself?
C
Yeah, so my name is Antonin. I'm one of the maintainers for Project Antrea; I've been working on it since we open sourced it a while back, more than two years ago now. And I've also been helping the Avi team integrate the Avi Kubernetes Operator with Antrea, using a feature called NodePortLocal. So I hope we get to talk about this, and I'm happy to take questions on that as well.
A
Okay, thank you. Oh, and I see more friends are here, hi! Okay, so let's get started. First, let's invite Yang to give us a short update on what's going on with Antrea.
D
Yeah, I got an invitation, but I'm not 100% sure I'm the one who should be introducing what's going on in 1.5, because I'm only working on a fraction of the features there, but I've put up the notes. So essentially, Antrea 1.5 was released last week, and we have tons of new features packed into this new release, and the most notable ones include the multi-cluster functionality.
D
That's something we have been working on throughout the last release, and now it's part of open-source upstream Antrea. I don't know if you guys have showcased what the multi-cluster functionality can do in the Antrea Live shows before. Essentially, we are implementing the Kubernetes upstream multi-cluster APIs, where Antrea, as a CNI, can manage...
D
It can manage a lot of clusters together, so that a service deployed in one cluster can actually be exported to other clusters, where they can consume the service directly, if those clusters are logically unioned together through a ClusterSet concept. And there's also something really exciting we're developing in that regard, which is policy enforcement across multiple clusters; that's not part of the 1.5 release.
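For context, the service export flow described above follows the upstream Kubernetes multi-cluster Service APIs (mcs-api). A minimal sketch, assuming a hypothetical `nginx` Service in a `demo` namespace (the exact API version may differ by Antrea release):

```yaml
# Hypothetical sketch: in the exporting member cluster of a ClusterSet,
# a ServiceExport marks an existing Service for export; importing clusters
# then see a derived multi-cluster Service they can consume directly.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx       # must match the name of the Service being exported
  namespace: demo   # and its namespace
```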
D
But we are gradually trying to add more policy-related work in the multi-cluster space, so that's something to watch out for in probably 1.6 or 1.7; stay tuned on that. And on top of that, as I listed out, we also have the multicast support, and a lot of Antrea IPAM functionality that we introduced. Essentially, we have an IPPool specification for Antrea, and then for pod deployments...
D
Now you can have an annotation to assign IPs from a predetermined pool, rather than just having the pod get a random IP from the default pool. That gives you a little bit more control over the IPAM of the cluster. So those are some of the really cool features that, I must admit, I'm no real expert on, because I'm more focused on the policy side of things.
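As a rough sketch of the IPPool feature described above (the field names and the annotation key follow recent Antrea releases, but treat them as assumptions and check the docs for your version), you define a pool and then annotate the workload to draw addresses from it:

```yaml
# Hypothetical sketch: a predetermined pool of IPs managed by Antrea IPAM.
apiVersion: crd.antrea.io/v1alpha2
kind: IPPool
metadata:
  name: demo-pool
spec:
  ipVersion: 4
  ipRanges:
  - start: 10.10.0.10
    end: 10.10.0.50
    gateway: 10.10.0.1
    prefixLength: 24
---
# A pod (or a Deployment's pod template) then requests IPs from that pool via
# an annotation, instead of getting a random IP from the default pod CIDR.
apiVersion: v1
kind: Pod
metadata:
  name: demo
  annotations:
    ipam.antrea.io/ippools: demo-pool
spec:
  containers:
  - name: demo
    image: nginx   # placeholder image
```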
D
On the policy side, we don't have a huge bunch of features shipping in 1.5, but there will be a lot of features shipping in 1.6, so.
B
Antonin?
C
Sorry, well, I guess, yeah. It's interesting, because I was going to rebound on multicast, actually, and say that it's not something that CNIs out there have typically been prioritizing, and that's not something you have in Calico, I think.
C
And obviously it's brand new in Antrea, and that's definitely a feature that's been driven by user requests, right. You have a lot of applications, when it comes to video broadcast and video processing, where people have been asking about the ability to multicast traffic between pods in the cluster network, but also to bring multicast traffic into the cluster network.
C
So if you have an external multicast source, and you have multiple pods that want to subscribe to that video stream, for example, that's kind of the feature we've been working on, and the support we have now in this release of Antrea. So I think multicast is a brand new feature, but also something that's not really commonly found among CNIs today.
C
Well, I'm not the expert there, but I would say video, and people doing broadcast, is definitely the main use case that's been driving multicast support for us. I can't think of anything else off the top of my head so far; maybe other people will have insights. But yeah, that's the use case we've been focusing on. Obviously, the fact that video has been the driving use case doesn't mean that's all we support, right.
C
Multicast is how you can achieve that: the source sends everything as if it were one connection, and then the network takes care of distributing that traffic to multiple destinations. Okay, cool.
A
Okay, I think we have an audience question about Calico versus Antrea.
C
Sure, so it looks like I'm coming on the one week where we get the difficult questions. Yeah, I mean, it's a question that comes up pretty often, right, because the CNI space is an interesting one: you have a lot of, quote unquote, competitors. You have a lot of choice in that space, just because it's not something that Kubernetes is going to give you out of the box.
C
So you have all those people coming up with their own solutions, and I think the big differences between Antrea and Calico, the first one, would be our data plane choice, and that's something Jay was bringing up before the show too. We're using Open vSwitch, and there are two reasons for that.
C
I think the first one is that Antrea is a project that originated with VMware engineering, so we have a lot of Open vSwitch expertise at VMware, including OVS maintainers; OVS was started by a company called Nicira, which was acquired by VMware in 2012, I think. And I think the second reason would be the portability of Open vSwitch: we really wanted to build a CNI, from the get-go, that would run on both Linux and Windows, and I think Jay knows something about that.
C
So OVS was a good choice, because if you go with native Linux networking, then you have to build everything from scratch.
C
When you go over to Windows, maybe some features are just not available there. I think with OVS we have a layer that's portable across multiple operating systems, and at least you get a similar feel for how our networking works on both platforms, even though you still have differences, obviously. And interestingly, eBPF is going to be supported on Windows as well now; there's a lot of ongoing work on that, so maybe there's going to be a similar story and a similar experience with eBPF.
C
The second big difference would be the integration story: that corresponds to the fact that Antrea is mostly driven by VMware engineers today, even though, of course, it's a CNCF project. So if you're a VMware customer using other VMware products, you can get a whole bunch of integrations with Antrea, and I'm sure we're going to talk about Avi.
B
Cool. So, Bernard, I don't know if you ever got your mic working or not, but can you hear?
E
I caught 75 percent of it, because I was spending the other 25 percent trying to get this going: YouTube showed my earbuds as being the audio device, but then I said, okay, well, let me pull down the menu where it listed them again, along with other options; I clicked on it, and then it started working. So I don't really understand a system that displays the option that you want but doesn't actually use it, anyway.
E
I'm here now; frustrating, I guess. I had a specific question about Antrea versus Calico and Open vSwitch. Can I share my screen?
B
Okay, I give up, yeah. I think we'd better just move on with the show, if that's okay, Bernard, yeah.
E
Yeah, yeah, maybe I can send it to you; I just wanted to show a screenshot from a presentation you gave that displayed Antrea and Calico, and I had a question around it.
A
Okay, so today I've invited Bhushan Pai, my friend from our team at VMware, to tell us a little bit about Avi load balancing. Welcome, Bhushan.
G
Hey Xinqi, oh hi everyone, my name is Bhushan Pai.
G
I'm a senior technical product manager with the Avi team here at VMware. Avi was the old name, before we got acquired by VMware; we were a small startup, and now it's called VMware NSX Advanced Load Balancer, which is quite a mouthful, but yeah. And since you can share my screen, maybe we'll start with that, yeah.
G
So today we're going to see what the enterprise customers we have seen think about Kubernetes ingress, what they expect out of it, and how Avi solves their problems and provides a consolidated solution.
G
So basically, whenever you have your application running on Kubernetes, it's not just the cluster and the application you are managing. There needs to be traffic management, cert management, tying all of this into CI/CD; there is security around your application traffic; and there needs to be observability into your application as well as your infrastructure. And sometimes all of these tasks are done by different teams in larger enterprise organizations, which makes things a bit difficult.
G
That's the way it is traditionally deployed. So when we look at a traditional deployment of Kubernetes ingress and load balancing, it looks something like this: there is an ingress controller within your cluster, which obviously runs as multiple pods on multiple nodes, so it also needs an external layer for load balancing.
G
In addition to that, for security you might need an external WAF; that might be an external Palo Alto or some other firewall running outside your cluster, managed separately, for north-south traffic. You might have external DNS, and many times we have even seen people manually adding an A record every time a new service comes up. So all of these things are very cumbersome, and the same goes for the externally routable IP addresses for the load balancers.
G
So they lack end-to-end observability, and that makes operationalizing this difficult at large scale. This kind of setup does work in a small organization or in a lab environment, but what we have seen is that your business owners want a quick way to deploy apps and get going: they want a platform which will give them application availability, security, and observability, and simplify their operations across multiple clouds.
G
There are many customers that are deploying across on-prem as well as public clouds, with AWS, Azure, or even VMware's cloud offerings like VMC, etc., and that's what Avi brings to the market.
G
So Avi gives you a single consolidated solution which does everything: load balancing at layer 4, ingress at layer 7, web application firewalling, GSLB, integrated DNS and IPAM; and it also gives you end-to-end observability for each and every application in your cluster. All of this can be configured in a cloud-native way, right from your CI/CD pipeline, using your Kubernetes specs or some CRDs, and things like that, yep.
G
So before moving ahead into the actual technical details, just an overview of what the Avi architecture looks like.
B
So you've got Avi, and you run Avi on Amazon, and then you run it on Google, and you run it on VMware, or whatever, yeah. So what if I don't have any of that stuff? Do I have to have a proprietary cloud technology to run Avi on, or is there a bare-metal way I can run it too?
G
That's where I was getting to, actually. It's a completely software product, so you can run it in any kind of environment: you can run it on x86 hardware and use it as a bare-metal product; in a VMware or OpenStack kind of environment it runs as a VM; and on the public clouds, on AWS it will run as an AWS instance, and on Azure also as a VM out there. And we are now available on all the public clouds. Okay.
G
Yeah, say you are using something like Contour or the NGINX ingress: with Avi, you do not need that.
G
The same service engines, the data plane engines of Avi, perform both layer-7 and layer-4 load balancing for you, even though they are sitting outside the cluster, so you do not need another ingress controller within the cluster at all.
G
Actually, currently Avi supports running as a layer-4 load balancer in front of the Istio ingress gateway. We are working on a deeper integration into the Istio platform, effectively replacing the Envoy gateway, the Envoy proxy running as the gateway, basically.
G
So yes: no matter what kind of ingress controller it is, since it's scaled out in your cluster, it requires a layer-4 load balancer outside, right. With Istio, that's what Avi supports right now, so we can take the incoming traffic out here and do a pass-through to the ingress controllers, or you can get the SSL terminated out there, and then Envoy can do its thing and manage the Istio network.
B
I think he's asking the general question of where your service mesh starts and where your L7 stuff ends, and I think that makes sense to me. Okay, yeah: any service mesh is generally always in-cluster, right? It always manages traffic inside the cluster.
G
And we have it on the roadmap to have the service engines run within the cluster, to replace Envoy, basically, but only for the north-south part, not the east-west within the cluster; only for traffic coming into the cluster.
G
Cool, let's see what you've got next. So yeah, moving on; before I get into how this thing works on Kubernetes, just the general architecture of Avi. Like I said, it is primarily a load balancer, so even without Kubernetes you can use it with VM environments too, and, unlike any other load balancer in the market today, it has a distributed architecture.
G
What that means is there is a centralized Avi Controller, which is the brains of the product. All configuration is done on the controller itself, while the data path, the actual traffic load balancing, is done by data plane engines called service engines. Being software, they can run in any kind of environment, and the controller manages the scale of these service engines, so you do not have to pre-provision them.
G
Whenever a new application is created, or an existing application requires more performance, the controller can create new service engines as and when required, basically provisioning elastic load balancing for you. And as the traffic flows through these service engines, Avi also gathers hundreds of different metrics and analytics, so you have a great dashboard on the controller where you can view all of these metrics and analytics for your applications, and the controller internally also uses these metrics to make intelligent decisions for scale-out and scale-in.
G
Yeah, so to build an analogy here: just like NSX has its NSX Manager, which works as a controller, similarly both the management plane and the control plane for Avi load balancing are on the controller. You can configure policies, your virtual services, pools, any kind of load-balancing algorithms, etc., here on the controller; that gets pushed to the service engines, and the service engines do the actual magic of load balancing.
G
So, like I said, both the Avi Controller as well as the service engines run as regular VMs; they sit outside your cluster. In order to integrate this with a Kubernetes cluster, we have two operators.
G
The first one is AKO, the Avi Kubernetes Operator, which manages the local load balancing for the cluster. It's a simple, stateless pod running within the cluster, and it is not in the data path. All it does is translate the Kubernetes objects to the corresponding Avi APIs and configure them on the controller.
G
The controller in turn places the VIPs for those virtual services on the service engines, and the service engines then do the load balancing. So AKO is basically just a translator from Kubernetes specs to Avi APIs. Similarly, we also have AMKO, for multi-cluster operations.
G
So if you have the same application deployed across multiple clusters, for DR purposes, you can use AMKO, which syncs those across the clusters and configures GSLB, global server load balancing, on Avi. We'll see both of these in the demo, which will make it a bit clearer.
G
Okay, yep, so going ahead with the deployment now. Speaking of Tanzu: Avi, being a VMware product, has really good integration with Tanzu, and this AKO component that we were talking about is automated out there. So when you are creating a new Tanzu cluster, AKO automatically gets deployed and connects back to the Avi Controller, and your load balancing is pre-provisioned and pre-configured for you.
G
But just to understand how these things work, we'll take a look at how it goes if you are deploying this on a general upstream Kubernetes cluster. Day zero, you have your Avi Controller installed, and you have to configure the infrastructure cloud, which gives the Avi Controller a way to sync with your environment.
G
It syncs all the objects in your infrastructure: for example, if you are running your cluster on vCenter, you configure a vCenter cloud on the controller, and it will sync all your hosts, clusters, and port groups, so that it can automatically provision your service engines and give you the load balancing.
G
Yeah, so the next step, once the Avi Controller is up and running: the controller still doesn't know anything about Kubernetes running in your environment, so you install AKO. This can easily be done using a helm install command with some bootstrap parameters given to it, and it will subscribe to the events on Kubernetes as well as connect back to the Avi Controller. With that, your cluster is ready to take new applications, and for Avi to provision load balancing for them.
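The bootstrap just mentioned is a plain Helm install. A sketch of the values involved (the keys mirror the AKO chart's layout, but the controller address, cloud, and network names here are placeholders, not taken from the demo):

```yaml
# Hypothetical values.yaml fragment for the AKO chart, passed to something like:
#   helm install ako <ako-chart> --namespace avi-system -f values.yaml
# AKO needs to know which cluster it is, where the Avi Controller lives, and
# which Avi cloud and VIP network to use.
AKOSettings:
  clusterName: my-cluster          # unique name for this cluster on the controller
ControllerSettings:
  controllerHost: "10.0.0.10"      # Avi Controller IP/FQDN (placeholder)
  cloudName: Default-Cloud         # the infrastructure cloud configured on day zero
  serviceEngineGroupName: Default-Group
NetworkSettings:
  vipNetworkList:
  - networkName: vip-network       # network the controller should use for VIPs
    cidr: 10.0.1.0/24
```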
G
Similarly, if you have multiple such clusters, each of them can have AKO running on it, but they can all home back to the same Avi Controller, and they can either share the same pool of service engines or have dedicated service engine resources allocated to each cluster; it depends on how you want to design it.
G
Yeah, so AKO does not sit in the path of the traffic at all; it's only a management plane, obviously, it's only configuring things on the controller. So say you have an Ingress configured, and you have rules in it saying that for this path, send it to such-and-such service in the back end.
G
So yep, that's what I was going to show: whenever a new service is created (AKO supports both Kubernetes as well as OpenShift, so it can be either an Ingress or a Route for L7, as well as a service of type LoadBalancer for L4), AKO is automatically notified, because it subscribes to the events.
G
It also polls every so often, so that just in case it has missed something, it can act on it. So whenever a new service is created, AKO translates it to the corresponding virtual service on Avi, and Avi runs its internal DNS and IPAM: it fetches an IP address for the VIP, publishes the FQDN to its internal DNS, and places the VIP on the service engine. The next step is to see how the actual traffic flows, but before we go there...
B
You know, you can expose something as a LoadBalancer service, or as a NodePort service, or as an Ingress, easily, just using that one command, and then the problem always winds up being, like, okay, you exposed something as a LoadBalancer: who the hell is going to create that external IP, right? So Avi does that. But also, if you create an Ingress, if you do kubectl create ingress, then you have to have an ingress controller...
B
That's watching that resource too. So Avi does both of those, yeah. This is always...
B
People always get tripped up: you'll get a cluster in GKE or whatever (I don't know if they've solved this now, but it used to be like this by default), and you'll create an Ingress and nothing will happen, because there wasn't a default ingress controller. So this is always confusing for people that are new to Kubernetes.
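The situation described here is easy to reproduce: `kubectl create ingress` happily creates the object, but it stays inert until some controller acts on it. Roughly, the one-liner produces a manifest like this (host and service names are hypothetical):

```yaml
# Roughly what `kubectl create ingress demo --rule="demo.example.com/*=demo:80"`
# creates. Nothing happens for this object until an ingress controller, or AKO
# on behalf of the Avi Controller, watches Ingresses and programs a data plane.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo
            port:
              number: 80
```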
E
Yeah, so AKO sits in the cluster, per node, and reads what gets put in by Kubernetes, and then it uploads that to the Avi Controller, so that the controller knows what the policies are for the overall connectivity, yeah.
E
Then, to push it down to the data plane, to the Avi data plane objects...
G
The service engines, that's right. The only thing is that there is only one AKO running for the whole cluster; it's not required per node, and anyway it's a stateless pod, so even if it dies, it restarts, yeah.
B
What was the... I might have been distracted. What is the underlying, you know, like Contour uses Envoy as the underlying data plane thing, yeah? What is the underlying generic thing that allows Avi to do the part where you connect the VIP to the actual service IP that's inside the cluster?
G
Yeah, so let me do one thing: I'll open up a few slides.
G
That will make it a bit clearer. Basically, AKO and Avi automatically manage some routing rules in order to let the service engines connect directly to the backend pods, and that's where the NodePortLocal feature in Antrea, which we were talking about at the beginning of this Antrea Live, also comes in.
G
Okay, yeah, so let's get started with the demo, and we'll touch on the routing part as well. For the demo, what I have is an Avi Controller and a TKG cluster installed in my vCenter environment, and this is the controller UI. Right now I only have a DNS virtual service running, which acts as an automated DNS server for all the applications that we create on the Kubernetes side. Like I said, the day-zero operation is to install this Avi Controller VM, log into it, and configure your infrastructure cloud.
G
What Xinqi is going to talk about next has this as kind of a prerequisite too. So you need to have the controller up and running, and basically give it a username and password for your vCenter environment, configure your IPAM and DNS profiles in here, and select which data center within the vCenter, what your management network looks like, and those kinds of things.
G
I won't go too deep into what that is, but basically this allows the controller to discover everything on the vCenter side; for example, it will discover all the networks there are in the vCenter environment, and automatically create service engines on those networks whenever required, yeah.
G
So right now we do not have any applications, but we have a Kubernetes cluster already running, and like I said, since this is a TKG cluster, it already has AKO running on it; in the case of regular Kubernetes, you can easily install it using Helm too. So let's go ahead and create our first application.
G
And I'm going to do a really simple application. Let's take a look at what we just applied: it's just a simple Deployment spec running two replicas of this demo server, and I'm exposing it using a LoadBalancer-type service.
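The manifest applied in the demo isn't shown on screen; a minimal equivalent of what's described (names and image are placeholders) would be:

```yaml
# Hypothetical stand-in for the demo manifest: two replicas of a demo server,
# exposed through a LoadBalancer-type Service that AKO syncs to the Avi
# Controller as a virtual service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-server
  template:
    metadata:
      labels:
        app: demo-server
    spec:
      containers:
      - name: demo-server
        image: nginx        # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-server
spec:
  type: LoadBalancer
  selector:
    app: demo-server
  ports:
  - port: 80
    targetPort: 80
```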
G
So it's of type LoadBalancer, and as you see out here, AKO automatically syncs that service getting created, and creates the corresponding virtual service on Avi. Let's go to a different view, which will give us more details: as you see, for this LoadBalancer-type service we have an IP address automatically assigned.
G
The IP comes from the IPAM, and it also has a domain name automatically created and published to the DNS, so any client browsing to this will automatically resolve to this IP address. Right now it's showing red, because this is the first application we created and there were no service engines running, so it's creating one. And if you take a look at the vCenter side, it's actually creating those service engine VMs.
G
Yeah, there are multiple VMs out here, right: the controller itself is a VM, and then the service engines are the actual data path VMs again, but...
G
Avi creates them only when required; since there were no services earlier, there were no service engines. Okay.
G
By an order of magnitude, but yeah, so that's how it works. This is the first service we created, so it's trying to create a new service engine: it will automatically create the VMs, and it will figure out which networks it needs to connect to, because it discovered all the networks and the subnets in them. So it will connect to the VIP network as well as the network where your cluster is running, and then your virtual service should go green in a little while.
G
So yeah, while we are waiting for this, let's talk about the way the traffic works in this scenario. Like you saw, the VIP is already published to the DNS, so whenever a client browses to that FQDN, it queries the DNS.
G
The DNS resolves it to the correct VIP address, the client sends the traffic to the VIP, and that way it gets to the service engine. Now, this VIP can be placed active-active on multiple service engines at the same time, so if you are running some massive application which needs massive throughput, the VIP can be scaled out horizontally a number of times to give you that kind of performance.
G
Do you generally get a new VIP for every L7 rule? No: for layer-4 LoadBalancer-type services you get dedicated VIPs; for layer 7 we do something called sharding. In AKO you specify how many shards you want, and that many VIPs will get created; we'll create an L7 application in a while, we just created a service now. So say you want four VIPs: we will create a maximum of four VIPs, and your Ingresses will get mapped onto those VIPs based on their host names.
G
Yeah, so right, as you see, the service engine is connected back, and if you go to the dashboard...
E
You showed the slide of the data path earlier; could you bring that back up?
E
So a client is something outside, like somebody on a PC somewhere who says, I want to get to this, and it comes in.
B
And you skip the Antrea proxy too, then? Yeah? So yeah, and if the...
C
Yeah, it's kind of a replacement for that. If the pods are directly routable, Avi can just set the destination address to the pod's IP directly. But if the pods are not reachable in that network, only the nodes are (maybe because you're using an overlay with Antrea in Tanzu, for example), then you need to send the traffic to the node, to a specific port that Antrea, working with Avi, has allocated, and then we do port forwarding so that it actually gets to the pod.
C
So we still skip kube-proxy and the Antrea proxy, so you have a single load-balancing step, which is done by Avi, which was the whole point of that feature. But you need collaboration between Avi and Antrea in that case, because the pods are not directly routable, because you may be using an overlay network.
G
That's a good segue into these slides. Avi supports three different modes: ClusterIP mode, NodePort mode, and NodePortLocal with Antrea. The demo we are doing right now was using ClusterIP mode, in which each node has some pod CIDR mapped to it; say this node has a /24 mapped to it.
G
So the service engine will send the traffic with the source IP of the service engine interface and the destination IP of the pod, but the source MAC is that of the service engine interface and the destination MAC is that of the node, not the pod. So at L2 the packet flows to the node, and within the node the CNI takes care of routing it to the actual pod. That's how it reaches the pod.
G
Yeah, the controller programs them. The flow is always from AKO to the controller to the service engine objects: AKO learns about these routes and programs them on the Avi Controller, and the Avi Controller programs them on the service engines, wherever the application is placed, basically.
G
It's a bit difficult to understand, but yeah. We also have other modes that do the same thing, because in some cases doing this kind of routing is not possible: for example, AWS networking does not allow this kind of mismatch between the destination MAC and the destination IP in a packet, so out there we can also do something like NodePort mode.
G
It's the general NodePort, in which case the service engine load balances to your NodePort service on the Kubernetes side, and kube-proxy takes care of the final load balancing to the pod.
G
[inaudible]
G
Yeah, so the third mode is NodePortLocal mode, in which Antrea exposes every pod as a node-plus-port pair. The port can be different on every node, and there can be multiple pods on the same node too, so each pod is exposed as its own node and port combination, and that's what gets added to the pool.
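As a sketch of how the NodePortLocal mode just described is wired up on the Antrea side (the feature gate and annotation names follow the Antrea documentation for recent releases; verify against your version):

```yaml
# antrea-agent.conf fragment: enable the NodePortLocal feature gate so the
# agent allocates a dedicated (node IP, port) pair per backend pod.
featureGates:
  NodePortLocal: true
---
# Then opt a Service in with an annotation; Antrea records each pod's
# allocated node port in a pod annotation, which AKO reads to build the
# Avi server pool.
apiVersion: v1
kind: Service
metadata:
  name: demo-server
  annotations:
    nodeportlocal.antrea.io/enabled: "true"
spec:
  selector:
    app: demo-server
  ports:
  - port: 80
    targetPort: 80
```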
B
So with NodePortLocal, Vivek is saying, the real core thing is that it allows you to move the node port handling out of the cluster, so that it's now managed by Antrea, and you don't have collisions. Is there another reason why you'd use NodePortLocal? Is it faster? Are there other reasons why you'd use it, or is it just...
G
So from a load-balancing point of view, these are the actual reasons. ClusterIP is really good because it gives direct pod connectivity, so you can health-monitor all the way to the pod: if the pod is down, the Avi Controller knows about it, which is not possible in NodePort mode, because there we are health-monitoring the NodePort service, not the actual pod. Persistence is also possible with ClusterIP mode, because we can persist all the way to the pod IP address, which again is not possible with NodePort.
G
The client request, basically: the client connection can be persisted all the way to the pod. AVI has multiple persistence modes, so you can persist based on the client IP address, for example.
G
Yeah, okay, yep, cool. The only downside of ClusterIP is that the service engine must be directly connected to the node network, your cluster network, in order to do the routing.
B
C
B
G
No, no, okay, I haven't got any requests there, yeah. So basically the service engine must be directly connected, which is not the case in NodePort, because the service engines can route to the node IP. So NodePortLocal gives the best of both worlds: we do not have to connect to the node network directly, but you can still have everything like health monitoring and affinity all the way to the pod.
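In AKO's Helm chart, this choice between the three modes is a single setting. A hedged sketch of the relevant values fragment (the section and field names are my assumption; verify against the chart version you deploy):

```yaml
# Sketch of the assumed fragment of AKO's Helm values.yaml that selects
# the traffic mode discussed above.
L7Settings:
  # One of: ClusterIP | NodePort | NodePortLocal
  serviceType: NodePortLocal
```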
B
G
Many times, if a customer wants something like persistence, then ClusterIP is a better choice. But many times they are just your regular, general web applications which do not require that kind of persistence, and they are not stateful services, basically, so NodePort would be an easier option, in which case you can even share the... yeah, there is another thing with ClusterIP.
G
B
B
G
Yeah, but we do have some licensing magic on top of that to reduce the cost, just in case it grows too much.
B
G
G
So we created an application, but we exposed it using just a regular load balancer. An enterprise application will need SSL termination, and we can just create a TLS ingress. Let's take a look at that.
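A minimal sketch of that step (names are hypothetical): the certificate and key live in a `kubernetes.io/tls` Secret, and the Ingress references it for the host, which AKO turns into an SSL-terminated virtual service:

```yaml
# Hypothetical TLS Ingress. The Secret "demo-tls" is assumed to hold
# tls.crt / tls.key; AKO terminates TLS on the service engine.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress          # assumed name
spec:
  ingressClassName: avi-lb    # assumed AKO ingress class name
  tls:
    - hosts:
        - demo.example.com
      secretName: demo-tls
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80
```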
G
We have a secret, and we are using that secret in our TLS-terminated ingress. And as you see, we created... like I said, we do sharding, so it created one shard and then mapped this virtual service onto it. Again, we are load balancing all the way to the pods, even though the service engine is sitting outside, and in doing this there is no ingress controller involved in between. So on the AVI side again, the IP address, the FQDN...
G
G
So if you take a look at AVI right there, we have a lot of configuration options or knobs in here on the virtual service side. You can configure policies, analytics, etc., which are not directly available through a regular Ingress spec. So in order to configure those, we have certain CRDs, custom resource definitions.
C
G
It's definitely possible, if the pod IPs do not overlap; you can use the same one.
G
It works, so yeah... right, I was just going to show: using the CRDs we can configure many things, on both the virtual service side as well as the pool side. We have the HostRule CRD to configure things on the virtual service. It says that for the ingress with this FQDN, this hostname, configure all of these things. So we are changing the certificate, changing the SSL profile, adding a new policy, changing some WAF policies, etc. And if you take a look at the virtual service out here...
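A sketch of what such a HostRule might look like (the exact schema varies by AKO version, so the field names below are assumptions, and all object names are hypothetical):

```yaml
# Hypothetical HostRule: attaches AVI-side settings (certificate, SSL
# profile, WAF policy) to the virtual service created for one FQDN.
apiVersion: ako.vmware.com/v1alpha1
kind: HostRule
metadata:
  name: demo-host-rule            # assumed name
spec:
  virtualhost:
    fqdn: demo.example.com
    tls:
      sslKeyCertificate:
        name: custom-cert         # name of a certificate object in AVI, assumed
        type: ref
      sslProfile: custom-ssl-profile
      termination: edge
    wafPolicy: custom-waf-policy  # name of a WAF policy in AVI, assumed
```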
G
Yep, it can be either layer 7 or layer 4. We have multiple health monitors available, and you can even create your own custom ones.
B
G
G
This is more like data-plane-level health checks, not the liveness probe that Kubernetes does, if that's what you mean, yeah.
G
The pool, automatically. So if your liveness check fails, it will automatically remove this endpoint from the pool out here; okay, that's taken care of by AKO. But doing data-plane-level health checks is what the service engine does. So you can configure multiple things out here: you can change the load balancing algorithm, change the persistence, the affinity thing which you were talking about earlier, okay.
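Those pool-side knobs map to the HTTPRule CRD. A hedged sketch (field names and the AVI profile/monitor names are assumptions; check the schema for your AKO version):

```yaml
# Hypothetical HTTPRule: pool-side settings (LB algorithm, persistence,
# health monitors) for one FQDN/path of an Ingress.
apiVersion: ako.vmware.com/v1alpha1
kind: HTTPRule
metadata:
  name: demo-http-rule        # assumed name
spec:
  fqdn: demo.example.com
  paths:
    - target: /
      loadBalancerPolicy:
        algorithm: LB_ALGORITHM_ROUND_ROBIN
      applicationPersistence: System-Persistence-Client-IP  # AVI profile, assumed
      healthMonitors:
        - System-HTTP         # AVI monitor name, assumed
```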
G
Similarly, on the virtual service side, you can enable WAF, you know, have extra custom policies applied to this. So we enabled the WAF policy out here, and we added some custom policies: blocking embargoed countries, rate limiting DDoS attackers, etc.
G
It's like countries from where you think attackers mostly come in.
B
G
So yep, this was the one-site solution. Now that we have this application running on one site, you might have a DR site. For example, this one is running in San Francisco; I have another similar setup running in the New York site, and you want to tie these together for DR purposes.
G
We also have AMKO running on the cluster, so it will automatically configure a GSLB service. So if any client browses to this FQDN, it will automatically be resolved to either the New York IP or the San Francisco IP, based on whatever algorithm you have. You can change the priorities of these, you can do different ratios between them, and the GSLB will take care of making sure that if something is down, it will not resolve to that, basically.
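A sketch of how that cross-site ratio might be expressed with AMKO's GlobalDeploymentPolicy (schema written from memory here, so treat every field and name as an assumption and verify against your AMKO version):

```yaml
# Hypothetical GlobalDeploymentPolicy: selects which apps are federated
# into a GSLB service and splits DNS traffic between the two sites.
apiVersion: amko.vmware.com/v1alpha1
kind: GlobalDeploymentPolicy
metadata:
  name: global-gdp             # assumed name
spec:
  matchRules:
    appSelector:
      label:
        app: demo-app          # assumed label on the federated Ingresses
  matchClusters:
    - cluster: san-francisco   # assumed kubeconfig context names
    - cluster: new-york
  trafficSplit:
    - cluster: san-francisco
      weight: 8                # ~80% of resolutions
    - cluster: new-york
      weight: 2                # ~20% of resolutions
```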
B
A
B
A
Yeah, we'll invite the team to tell us more about AVI supporting Tanzu, and I'm really looking forward to their demo. Okay, so before we end this episode: if you are new here and like this kind of content, consider subscribing to the channel and sharing this with your friends that might benefit from this kind of content.
A
A
Okay, everyone, see you next Wednesday.