From YouTube: Kubernetes VMware UG 20200507
Description: Meeting of the Kubernetes VMware User Group, May 7, 2020. The agenda focused on load balancer options when running all forms of Kubernetes on premises on VMware infrastructure.
A: Maybe today's presentations in this user group will help that along, where you can come out learning new things, and even if you're suffering Zoom burnout, maybe pay attention. Hopefully this is good enough that you're not diverted, and you come out of this learning something. We're gonna start. This was based on a request from Bryson at the meeting last month that we cover load balancing for an on-prem situation. Bryson doesn't seem to be here, but maybe he'll join late or catch the recording later, so we're gonna start. I'm just gonna rip through some slides and turn this over to my co-host today, Steve Sloka, who's an expert on this subject. But let me just share a 101, for people who maybe need this, on what is a load balancer. So excuse me, I'm gonna have to find my...
A: Got too many windows open; I'm gonna find something. Okay, so what's a load balancer, and why do you need one? Well, you can host apps and services on Kubernetes, and at times you want to expose these to the outside world: people coming in over the internet, or people outside the Kubernetes cluster itself. You may or may not need that, but it's potentially available if you do. Now, when you're in a public cloud like AWS, Azure, DigitalOcean, GCP, or any of the others, this is relatively easy, because the public cloud itself will host a load balancer as a service. You pay for this based on usage, typically, but it's there. When you move to an on-prem situation, whether you're on a hypervisor like VMware infrastructure or on bare metal, this load balancer as a service isn't there unless you put it there.
A: This is based on typical diagrams of network infrastructure. What we're talking about here is principally going to be north-south, meaning inbound connections from outside your Kubernetes cluster. Those might come from the internet, or might come from some LAN connection that isn't technically the public internet, but something from outside. That's versus internal traffic, where your Kubernetes-hosted services within the cluster are talking to one another, which in a typical network diagram would be designated as east-west.
A: Kubernetes itself has something called a Service resource that handles the east-west and maybe can get that done. But even if you use that, you would typically build up a Service that you would then need to potentially expose north-south, if you wanted to allow users outside the cluster to consume it, and you would use a Kubernetes...

[Audience: there are no slides being shared at this point.]
A: Ok, sorry about that. I'm gonna post this deck after, so I'm not gonna go back in the interest of time, but you get the idea: north-south, east-west. Can you see it now? Yes? Ok, great. Kubernetes has something to handle the east-west, or at least a facade resource called the Service that can do that, and I might get into the actual implementation of that when we get to load balancers. There's also something called Ingress, and Steve is going to cover that in just a few minutes when I get off the stage.
A: But potentially you don't need a load balancer in the sense of a network load balancer; maybe something called the Kubernetes Ingress could do it for you. The distinction can be a little bit smoky, if you will, because there are some load balancer or ingress solutions that take on both of these jobs, but Ingress was initiated in Kubernetes years ago, back in 2015. I think one surprise here is that even though it dates back to around 2015...
A: ...it's still marked as beta, and it was designed to work with these things called reverse proxies that had been popular before Kubernetes. The distinction is that an ingress will dig into the contents of a packet, like a URL, to sniff out a hostname or something in the stream and use that to steer traffic, whereas the L4 load balancer is looking at things like IP address and port to steer traffic. So one is inspecting the content of the data stream; the other is inspecting just the wrapper.
A: You can have both at the same time, and in many instances you would need or want both at the same time. There are a lot of these solutions, and they vary in feature sets. You can see some of them here, but there are a lot of differences in these load balancers. They come in categories: hardware-based, like an F5, if you've got the money, the power, whatever. They might be potentially expensive, but an F5 for a big on-prem deployment might be viable, desirable.
A: Once again, if you're cloud hosted, the cloud provider typically offers one. If you're on-prem and not using hardware, there are a lot of software solutions, both open source and not. I've tried to survey these, and these things are links when you get the deck, so you can learn more. I may have miscategorized some of these: some of them I found links saying they were open source, but if I couldn't find the actual source within the top two links, I began to get suspicious.
A: Some of them come in bifurcated versions, where they have an open source one with fewer features and then a commercially supported one. One element here with load balancers is high availability. With regard to one of these load balancers, MetalLB on Kubernetes, they say for high availability you want a load balancer in front of the Kubernetes API, but this presents a chicken-and-egg issue of what goes first, and I posted a link here to this MetalLB issue.
A: The other consideration when you're going big on-prem is that you can potentially use DNS to accomplish some of this, and if you're geo-distributed and want to cache things close to where the user is, this becomes very complicated. I've got a deck link here that goes into way more detail, but I just wanted to throw that out there.
A: So at this point I'm going to turn it over to Steve Sloka. Steve will introduce himself, but I can say that Steve and I did a presentation together at KubeCon North America back in November, on a subject related to this. You might want to see the KubeCon recording, and I'll put a link to that in the notes later. So I think I've stopped sharing. If you want to take over and share, Steve, you now have the podium.

C: Sure.
A: I mean, we can probably say that Steve is one of the maintainers of Contour, which is an open source upstream ingress controller started at Heptio, acquired by VMware and maintained by VMware, and we're actually in the final step of donating Contour to the CNCF. So it's going to become an ingress controller governed by the CNCF. But yeah, Steve, take it.

C: Cool.
C: So, a quick backdrop as to what ingress is. We'll chat about what an ingress controller is, again just to take Steve's intro one step further and level-set the audience. I'll bring up this CRD that we wrote for Contour, just so you understand what it is when we go use it later, and then we'll just do a bunch of demos and stuff. We can skip this: I do Kubernetes networking at VMware. Come find me; I'm Steve Sloka everywhere. Go!
C: So what is ingress? When I think of ingress generically, I think of it as just incoming. We want to think about getting traffic from outside the cluster into the cluster, and the generic example is always: I have this internet traffic and I want to send it to some sort of service or application in Kubernetes. Ingress itself is just a configuration spec, so it's just a way to define how this traffic should route from point A to point B, and then an ingress controller is what actually implements that configuration.
C: So it looks at that configuration on your cluster and then actually, you know, wires the bits together when requests come flowing through. So why use ingress? Why not just use load balancers all over the place? Well, the first thing is you get traffic consolidation: all of your traffic can now filter in through a single entry point. You don't have to deal with lots of different, you know, NodePorts on your cluster, or lots of load balancers; things can get expensive, and things just plain get difficult to manage. The more complex the system is, the harder it is to maintain. With the consolidation, you can also consolidate all of your TLS in one place, so your certificates and all those secrets can now live in one place. You don't have to deal with distributing them out across many different applications, and when it comes time to roll them and stuff, again, it's easier because it's all in one spot. You can also abstract yourself away from your environment.
C: Just like Kubernetes abstracts itself away from different components, the goal here is to abstract yourself away from your environment. So you shouldn't have a deployment file that says this is my ingress for my laptop, my ingress for AWS, etc. Now, there are places where that contract breaks, but for the most part I think this is a great step forward, to have this abstraction. You also get this idea of path-based routing, which Steve mentioned.
C: That's the L7 routing, the layer 7, application-layer routing. We can inspect the requests coming in, and we can do more than just route on IP and port: we can look at, you know, /blog in the request and take action on that, which is cool. All right, so this picture here explains visually what we just said; the picture is a bit easier to understand. So again, traffic comes in, and we're gonna have some sort of load balancer in front, as Steve mentioned. We just need this load balancer to basically send traffic to a set of Envoys. In Contour's case, we use Envoy as our data path component. Envoy is a CNCF project; it came out of the folks at Lyft, and it's a high-performance C++-based proxy, and that's why we use it. Contour, in this case, is the ingress controller, or the xDS server, the controller for Envoy.
C: So we give Envoy enough information to basically go find Contour in the cluster, and Contour's job is to look at the cluster and find things like Services, Endpoints, Secrets, Ingress objects, and HTTPProxy, which is our CRD object: all those different bits of information. When it finds them, it builds a new configuration and passes that information down to Envoy, and we can do that without restarting Envoy, which is a big win of Envoy.
C: So basically, a request comes in, hits some sort of load balancer, Envoy gets it, and then it routes it to your application. This is the generic example. You can also take Contour out of this picture and swap in any other ingress controller; I think this picture would still hold up pretty well. Maybe different components under the hood, but essentially the same idea.
C: Cool. As Steve mentioned, there are different ways you can attract traffic to Envoy; I'll skip this since we talked about it, and he actually had similar things in his slides, so that's probably the better slide. I'll just go on through this. All right, so HTTPProxy. This is our custom resource definition, and it's our way of looking at some of the limitations or shortcomings we have with Ingress today. If you've followed the project for a while...
C: ...you know we came from this idea of a thing called IngressRoute, and this CRD is the successor to that. So why did we do this? We did this because we wanted to help support this idea of multi-tenancy in clusters: letting lots of users in a cluster self-manage their own ingress objects. It's very easy today, with the v1beta1 Ingress, to have two different, you know, teams, different users, take down production, and we've seen that happen before, just based on how the spec is designed.
C: So this CRD is written around allowing users to solve that problem. We can do that through this idea of delegation of routing, and more details on this can come in a different talk, so I won't spend too much time here. But the idea is that we can, you know, grant permission sets over a path or a domain and then let users self-manage based on this permission set that we can configure.
C: Another big thing we wanted to help solve is having a common configuration place for certain things: things like service weights, load balancing strategies, request timeouts, all those sorts of parameters that don't have a place in the spec today. They typically end up as annotations on your objects, so we wanted to find a way to put them into the configuration object without having to annotate everywhere. And the last thing that's kind of cool is this idea of delegation of secrets.
C: If you're familiar with Ingress today: if you want to terminate TLS at your ingress controller, you've got to have a certificate, a secret, to reference, and if you have that object across many different namespaces, you've got to copy that secret there many, many times. As well as being a maintenance nightmare, there's also a security issue, because you're exposing that key to users that maybe shouldn't have access to that secret. So instead, we can have this way where users can self-manage their routes, their fully qualified domain names, but not have physical access to the secret: we can delegate that secret out across the cluster, which is kind of cool.
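In Contour, that secret delegation is expressed with a TLSCertificateDelegation object. A sketch with hypothetical names: proxies in the marketing namespace can then reference the wildcard certificate without anyone there being able to read the key material.

```yaml
apiVersion: projectcontour.io/v1
kind: TLSCertificateDelegation
metadata:
  name: wildcard-cert-delegation   # hypothetical name
  namespace: root-proxies          # the namespace that actually owns the secret
spec:
  delegations:
    - secretName: wildcard-cert    # the TLS secret being shared out
      targetNamespaces:
        - marketing                # namespaces allowed to reference it
```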
C: Cool, that's enough of the CRD; I just wanted to introduce it so that when we look at it later, you're all familiar with it. Cool. So let's bring in another thing here: Contour, again, is an ingress controller; its job is to proxy traffic, you know, north-south into your cluster. Gimbal is a tool we wrote that helps extend that functionality out over multiple clusters. So instead of just having traffic hit a service or a set of pods in your cluster, we want to now route traffic out across many different clusters. Originally we targeted Kubernetes and OpenStack for this, and then I have some code (I actually need to push a PR) so that we can do this now with vSphere as well.
C: So we can target VMs running in a vSphere cluster. Here's kind of how Gimbal looks. The idea is that we have these discoverers, and discoverers are just simple controllers: they go look for services and endpoints in these upstream clusters. In this case, I guess, the upstream tier is OpenStack.
C: So this discoverer knows how to talk to the OpenStack API; it knows to go find all the VMs and load balancers that exist in there, and it'll copy those endpoints into this Gimbal routing cluster. So it will actually create Services and Endpoints in the Gimbal cluster, which is running Kubernetes, and it will map those to teams. So we'll say a namespace is a team: all of the things that match, say, app team one in this example, from each different cluster...
C: ...whether it's across VMware or Kubernetes or OpenStack, all those endpoints should live in the same namespace, because the idea is that we're categorizing them together as the same team. Once they're there, we can create some ingress configuration here in this Gimbal cluster, and then we can send traffic all over the place. So we have our load balancer sending traffic to Envoy, and now, once it hits this Gimbal cluster, we can send traffic back out to whatever endpoint it could be.
C: Okay, cool. So here in my house I have UniFi, so I'm just proving that this is all real. This is actually my real UniFi gateway controller: I'm running a security gateway which connects to my garage, which then runs to my bookshelf, which then runs to my desk. So now I've got three different hops, and I've got some wireless stuff. So that's the networking I've got running here. What I've done is, I have vSphere running here as well, under my desk; here's that machine.
C: So this one's running a couple of things: I've got a couple of VMs. These are my VMs, web01 and web02, and also this Windows Server. And down here I've got some clusters: I actually spun these up using Cluster API. However you do this is up to you, but this is what I use, just again to be transparent.
C: So what I want to do is send traffic into the Gimbal routing cluster, and then we'll do some routing out to different places. All right, so to get traffic into my cluster, I have to then, you know, set up some sort of load balancer, so again I'm using MetalLB. So on the Gimbal cluster I can show you, if I get pods in the namespace...
C: There we go. So here's my MetalLB setup in the metallb-system namespace: I've got the controller running, and then the speakers, and a speaker runs on every host in the cluster. So these match the nodes I have running in my Kubernetes cluster, and here you can see again I've got those three different nodes.
C: Four, I guess, if you count this one. And then out here in my UniFi system, I can show my BGP: I'm using the BGP routing method, and here are the three nodes in my cluster, the .100, .110, and .140. So this is how I'm directing traffic to MetalLB, and MetalLB is going to dish out my IP address. So let's get rid of this real quick, and then I will show you: I have Contour deployed, so if I get services in the projectcontour namespace...
C: So if I do an nslookup on that address (my domain is pixelproxy.net), you'll see here I get a response. I guess I've got to work on my reverse DNS there; the reverse isn't working, but the forward works. Cool. So this domain name, this address, now resolves to this IP address, which is pointing to Envoy. That's what I really need: I need to get all of my traffic driven to Envoy. Once it's there, the hard part's over, and then I can just configure routes however I like to do it. Okay, cool. So now that I've got my infrastructure and my cluster running, I'll set up some routes and stuff. So if I go ahead and take a look, if I get proxies in all namespaces, you'll see I've got nothing on here; my cluster is completely empty. So if I did a request to that load balancer, it's just gonna fail, because right now Envoy doesn't even have a listener open, because there's nothing configured. So let's go configure something.
C: So let's start here with this idea, this thing called a root proxy, which is basically the fully-qualified-domain-name proxy. You'll see here I have that; this is the pixelproxy.net one. So what I'm gonna do is create a namespace in Kubernetes called root-proxies. I'm just gonna tuck this root proxy away in an admin namespace, so you just can't get to it, and then what I'll do is delegate off portions of this path. Again, that's a different talk.
C: We can dig into that later if you're all interested, but just so you can, you know, follow what's going on: create a namespace, create this root proxy, and then we'll say, hey, marketing team (this is an example), they're gonna own this domain name. I'm gonna give them slash, so they can own the whole thing, and then in that marketing namespace we'll go ahead and send traffic off to some application.
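As a sketch, the root proxy plus its delegation to the marketing team would look roughly like this (the FQDN spelling and object names are guesses, not shown on screen):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: root-pixelproxy      # hypothetical name for the demo's root proxy
  namespace: root-proxies    # admin-owned namespace users can't touch
spec:
  virtualhost:
    fqdn: pixelproxy.net     # assumed spelling of the demo domain
  includes:
    - name: marketing-app    # an HTTPProxy owned by the marketing team
      namespace: marketing
      conditions:
        - prefix: /          # delegate the whole path space to that team
```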
C: What we're gonna do is send traffic basically to these two websites; these are my two web servers. We need a way to have this cluster know what those services are, and the problem is that they live in vSphere. So let's go ahead and create a namespace.
C: If we get services and endpoints in the namespace marketing, you'll see these things popped in there, four seconds old. So what happened is, I've got Gimbal running in this cluster, and what it does is go out to the vSphere API and query for all the things that exist.
C: So now that we have that, we can send traffic to it. So we'll hop back here: this proxy lives in the marketing namespace, and this is the service we're gonna send traffic to. So now we've got Gimbal running; I'll just show you here, again in marketing, you'll see that this service matches the service here. Now, the service is called web...
C: If you were quick on that, you'll see here that in Gimbal and Kubernetes it actually has this prefix on the name; that's just a name that we gave the cluster. So you have to give a cluster some sort of identity, because in theory I could have, you know, a hundred different clusters running behind Gimbal, and they could all have a service called web, so this helps make things unique. That's where this little prefix came from, and it's all user-defined; you can come up with whatever you see fit. All right, cool. So let's go create some things. We'll go ahead and kubectl apply our root proxy. So now, if I come back here and get proxies, you'll see I've got two of them, and they're valid. So again, this was the root one, which has my domain name, and this is the one in the marketing namespace. So now what should happen is, if I curl this pixelproxy.net, I should get directed to the VMs running in vSphere.
C: Here, let's go ahead and do that, and I do. So this is a simple little echo server that I wrote, and it prints out basically the thing you hit: it'll print out web01 if you hit the VM running web01, or web02 if it's hitting web02. If I refresh here a few times... Chrome will sometimes cache that connection for you, but you get the idea. So now we're getting traffic.
C: The traffic is coming in through MetalLB, hitting the Envoy pod; the Envoy pod then looks and routes based on this domain name, and then it hits the Service in Kubernetes, which then redirects it back out to my vSphere cluster. So there are no applications running in Kubernetes at this point, short of all the infrastructure for Kubernetes itself to run; in terms of user applications, it's all running right now in the vSphere infrastructure. So here I can just proxy out.
C: So essentially, we've written our own little software-based load balancer with Gimbal, and now we can direct traffic wherever we see fit, you know, within the network. Cool. So let's go ahead and take that one step further; let's go in and add another site. So again, if you saw here (I know I'm bouncing around; I hope this isn't confusing), we have this Windows box, and this Windows machine is running some old blogging software that we can't move to containers or anything. It's just got to run in Windows.
C: So let's go ahead and send traffic to it, and let's give that team /blog. So here we'll add a new condition to our proxy definition, and we're doing path-based routing now: /blog is going to route to this Windows machine, whose service is called blog. So any traffic to slash will hit the vSphere web VMs, and /blog will redirect to this Windows machine. Let's go ahead and apply this one.
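The delegated proxy with the new path condition would look something like this (service names and ports are assumptions, not read off the screen):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: marketing-app        # hypothetical delegated proxy
  namespace: marketing
spec:
  routes:
    - conditions:
        - prefix: /          # default route: the two vSphere web VMs
      services:
        - name: web          # Gimbal-discovered service (name assumed)
          port: 80
    - conditions:
        - prefix: /blog      # path-based route to the legacy Windows box
      services:
        - name: blog         # Gimbal-discovered Windows service (name assumed)
          port: 80
```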
C: So I intentionally left this here, just because it's easy to know what this is. So that's that machine: now I've got path-based routing, so /blog is hitting the vSphere VM running over there, and then if I hit slash in my request, you can see it still hits web01. All right, cool. So let's go ahead and containerize those now. web01 is a simple app; I can containerize it. Let's move that over, off of our VMware infrastructure and into Kubernetes.
C: So what we'll do is go ahead and make a deployment. So this is my app, containerized, and this is that same echo server that I've got running, and the text I'm gonna define just says, hey, this is Kubernetes, so it's clear which application we're hitting. And there's a simple service here that we'll deploy as well, so we'll go ahead and apply this. So again, just to be clear: I don't have any pods in the marketing namespace; it's really empty.
C: There's nothing running there, but when we apply this, we'll have some stuff. So we'll go ahead and apply our deployment. All right, so now if I get pods, I've got my application running in Kubernetes, but nothing points to it yet; no traffic's going to hit it, because we haven't configured that. So let's go ahead and do that. What we'll do now is extend our proxy: before, we just had a single service for the vSphere VMs.
C: Now I want to hit the VMs and Kubernetes at the same time, and what I want to do is a canary-based deployment: we're gonna send some traffic and taper it off slowly from the vSphere VMs into my Kubernetes infrastructure, just to be sure, because I'm not sure if it's gonna have a problem or not. And what I can do is add a weight here; this is kind of where the proxy comes in. I can add a weight.
C: All right, cool. So now we have that. Now, if I go ahead and curl this, I should get web01 mostly, and eventually I'll get... there it is, the Kubernetes infrastructure. Here we hit Kubernetes, so we went from the VMs into containers. Sometimes it's easier if I do a while loop here: we'll just curl it every half second, so you can see the requests routing back and forth a little easier.
C: This way, again, 90% of the traffic should hit the VMs and then 10% should be hitting the Kubernetes infrastructure, so about every ninth request should hit Kubernetes. There it is; there's one going through. Cool. Let's leave that running, and we'll change this around; let this run and we'll switch the weights. So maybe we'll go down to 50/50, just to speed it up.
C: If I apply this again, now we're gonna give half and half, so it should load balance about the same between Kubernetes and the VMs. There it goes: you can see it hitting web01, web02, and then also hitting Kubernetes. If we're happy with that, we can watch our canary; I will just jump ahead and we'll go 10 and 90. So now we can do, you know, 90% of my traffic to Kubernetes, and there it is.
A: One thing I wanted to point out, Steve, and I put it in the chat: I noticed you used MetalLB as the load balancer, and since we're talking about load balancers, you mentioned you were using it in BGP mode. So for anybody doing that... we do have a couple of other speakers, but if you can quickly point out the difference. I want to mention it to people just because I got burned by it.
C: Yeah, that's cool; I didn't know that. So I'm not a huge... I'm not a BGP expert, so this is gonna sound silly, but yeah. With BGP you just configure the peers (actually, there's a good article I can send you): you configure the peers you want to send traffic to, and then the BGP requests will get spread out, and the healthy nodes will get requests. I think in L2 mode only one node can take the incoming traffic, maybe.
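For contrast, the layer-2 variant of the same (pre-CRD) MetalLB ConfigMap drops the peers section entirely; one elected speaker node answers ARP for each address, which is the single-node ingress behavior being described. The pool below is made up:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2            # one node at a time owns each address
        addresses:
          - 192.168.1.240-192.168.1.250
```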
C: Yes, it is confusing, because they overlap a lot. So Contour is an ingress controller; its job is to route traffic from outside the cluster into the cluster. So you create, you know, that Ingress configuration, or in our case, as I showed you, the HTTPProxy configuration, and its job is to actually route traffic: you can have a load balancer take traffic, and it will actually go and route to an application. That's its job. Envoy is the component that actually does the routing.
C: So it's a reverse proxy; it actually routes the data around. Contour is just a configuration server for Envoy. Sometimes we just kind of bundle it all together as "Contour", but technically Contour is just a controller which sends configuration changes to Envoy, so when you update your config, we can route the next request properly.
A: So that's why Steve used MetalLB, I believe, because in some of these scenarios you need the L2/L3/L4 things going on in addition, or you want them. So Contour is a wrapper, and I think Gimbal was originally an external project that was sort of a control plane for Contour, but correct me if I'm wrong; I've heard that that functionality has gotten merged into Contour.
C: Yeah, sort of. So again, Contour is just an ingress controller; its job is to route traffic. Gimbal is essentially a service discovery component, in the way it exists today. So in the example that I showed, we had all these different VMs running out here, and Gimbal's job was just to go find these VMs and then replicate, give service discovery into, the Kubernetes cluster, so we were able to reference them and send traffic to them. That's all it did: Gimbal's job is basically just to set up this service discovery.
C: If I had another Kubernetes cluster that I wanted to proxy to, I could do that and have Gimbal go find all the services and endpoints in that cluster. So it's basically just setting up service discovery for me, so that I have those endpoints in my routing cluster. But Contour, like Michael said, its job is just to route traffic, so you don't have to use Gimbal; Gimbal is just layered on top to let us do this multi-cluster kind of routing out of the box.
D
E
Yeah
we'll
do
it.
This
is
Ashish
and
some
be
a
part
of
VMware
and
the
acquisition
we
ever
made
in
July
last
year
of
Adi
networks.
So
we
are
a
commercial
load-balanced
at
an
ingress
solution
for
communities
as
well
as
VM
bare-metal,
public
cloud
workloads,
so
think
of
it
as
a
as
a
solution
that
is,
gives
you
a
uniform
and
universal
applicability
across
all
use
cases.
It's
a
software-only
solution
and
I
can
I
will
go
into
the
details.
If
you
can,
let
me
share
okay,
I
think
I.
A: Let me jump in a minute here too, because I think you guys are new to the Kubernetes user group meeting, but this group, operating under the auspices of the CNCF, is under ground rules that we don't use this for product pitches. But to some extent, you know, you can acknowledge this: the whole group is about running on VMware infrastructure, which obviously is a product.
E: I just wanted to make sure that was clear. Absolutely, and this will be a completely technical discussion. Okay, great. So thanks for the introductions, team; I think it's a perfect setup. What I'm going to talk about is the commercial option: we talked about Contour and Envoy and potentially MetalLB; we have had a commercial solution in this space for a while, and we're making some enhancements, and so that's the perspective I will give you, what the solution is and how it works.
E: At any point, if you have any questions, please stop me and ask questions, of course. So we think from three points of view when we look at Kubernetes: at the end of the day you're providing a set of application services. There's traffic management: we talked about load balancing, and ingress is definitely part of that; we talked about multi-cluster with Gimbal, and even multi-environment; that's part of traffic management, along with IPAM, DNS, certificate management, etc. So all of that we talked about is part of traffic management. And then there's security.
E
Security comes from encryption and certificate management, but also authentication, network-level policies, whitelists and blacklists, etc. And then, of course, observability. So this is the set of capabilities that, in our view, are relevant for any large-scale deployment. When we started, and when we looked at this over the last year or two (let me just build this up), there are a lot of open source solutions out there, which are good and great to begin with.
E
Then there is a separate solution, potentially, for DNS, and for multi-cluster, and then IPAM, and then maybe security; each could be commercial or open source. And these create challenges around management and operations, a lack of consistent observability across all of them, and then automation becomes a challenge as well, because there are so many moving pieces that you have to automate. So what we have is a solution that solves some of these problems in that it's a unified, integrated solution. I think we were discussing this earlier, right: some solutions have a unified LB and ingress.
E
We have one of them. The official name is NSX Advanced Load Balancer, but you don't need NSX to run it; you can run it in any environment, that's just the product name (I'll use Avi for now). It's a unified LB and ingress in the same solution, also providing multi-cluster, built-in IPAM, DNS, and security capabilities. There's built-in observability, and it's a fully automated solution, so you control it through kubectl; you don't have to do anything outside of what you're doing today.
E
So all the ingress and LB objects, as well as the CRDs we'll be adding: you just use Kube APIs to configure and manage it. So you get the cloud-native automation (I'll spend a few minutes on how it works in a minute), but you also get all the features and observability. So that's the basic premise of why we have a solution in this form. Now, the high-level architecture, right?
E
So this is the same product that works across both container workloads and non-container workloads, so it'll be good to just understand the high-level architecture. There is an Avi controller, and there's a data plane called service engines. Think of the data plane, the service engines, as like Envoy: they are the actual proxies, right, that's our load balancers. But the control plane is the Avi controller, and that's a single REST API endpoint.
E
Everything that you do to manage it goes through the controller, and it automatically spins up these data plane service engines, scales them out, scales them in. So it's fully elastic; it's an ELB-like solution where it horizontally scales out and in and provides active-active deployments, and it supports BGP, for example for RHI. And this service engine is both an LB and an ingress in one, and it can take the form of a bare-metal machine, a VM, or a container.
E
From now I'm going to focus the next 10 minutes on just the container piece. There are a couple of components, right: there is the control plane we talked about, and additionally there is a very lightweight operator called AKO, the Avi Kubernetes Operator, which is the one that is similar to Contour.
E
So if you want to map it to the Contour and Envoy discussion we just did: AKO, the Avi Kubernetes Operator, is the Kubernetes ingress controller, and it is listening to all the Kubernetes notifications. When a service gets deployed, when a route gets created, or when a Service of type LB gets created, it is the one that receives the notification. It's deployed one per cluster, and it then translates those into the equivalent Avi APIs, which then deploy the data plane. So the slight difference here in the architecture is that the data plane is deployed in the underlying infrastructure cloud.
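(As a concrete sketch of the kind of object AKO reacts to: a standard Service of type LoadBalancer. The names and ports here are invented for illustration, not from the talk.)

```yaml
# Illustrative only: a Service of type LoadBalancer that an in-cluster
# ingress/LB controller such as AKO would watch and realize on its
# external data plane. Name, namespace, and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

Once the controller has allocated a VIP for it, that address appears under the Service's `status.loadBalancer.ingress` field, the same place a cloud provider's load balancer controller would publish it.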
E
They are not deployed as pods inside the cluster. We actually had that deployment model, where the data plane, the proxies, were deployed as pods, but there were some performance challenges with that. We have some customers running at 50,000 to 60,000 pods and 4,000 to 5,000 services at high transaction rates per second, and there were some performance issues running as pods.
A
E
E
A
E
In fact, we have one customer, Deutsche Bank, who's using us for OpenShift, and they are using it on-prem in two datacenters and in Azure in two other datacenters. They've been running about 50 to 60,000 pods for the last three years, so yeah, it's a very large-scale deployment solution. Okay, thanks. Right, so now let's double-click on how this works. Let's talk about a single cluster, a non-multi-cluster deployment, first, and I'll bring in the multi-cluster later.
E
So there are three components, right: AKO, which is providing the translation of Kube APIs into Avi APIs; the Avi controller, which is managing the actual policies, the brain of the Avi solution; and the data plane, the service engines, which are the actual proxies. So on day zero, you deploy, through any automation (Terraform, Ansible, your own), a controller as a VM, a cluster of three. It could be on vCenter with vSphere.
E
It could be bare metal, it could be public cloud; again, all of them are supported. And whichever cloud you deployed in, you configure certain parameters. If you're deployed in vSphere, you configure the credentials for vSphere so that the controller can automatically spin up VMs and so on. If you're in AWS, you configure EC2 credentials so that it can spin up the VMs, for example. And that's it; that's the infrastructure deployment. The next stage happens every time a cluster gets deployed.
E
We are working with the team within VMware to do other forms of deployment as well, including supporting built-in packaging, but for now it's a Helm-based deployment, and it's deployed per cluster. What AKO does is establish a secure connection with the Avi controller so that it can start provisioning what it needs to, and now it's ready for app deployment. So you do this once per cluster, at the beginning of a cluster deployment.
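(To make the per-cluster Helm step concrete: a minimal values sketch for such an install would carry at least the controller endpoint, a credentials reference, and the cluster's identity. The key names below are illustrative assumptions, not the chart's actual schema; consult the AKO chart's values file for the real keys.)

```yaml
# Hypothetical values.yaml for installing AKO via Helm in one cluster.
# Key names are made up for illustration; check the real chart schema.
controller:
  host: avi-controller.example.com    # Avi controller AKO connects to
  credentialsSecret: avi-credentials  # secret holding controller credentials
cluster:
  name: onprem-cluster-1              # identifies this cluster to the controller
```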
E
Let's get that out of the way. Now imagine, as Steve was showing earlier, you're in your deployment, deploying your services as a developer. When you deploy, let's say, your web services, you configure a Service of type LB or you configure an Ingress. Remember, Avi is unified, LB and ingress, so it is a single-hop solution, not a two-hop solution. It has both performance and latency advantages, as well as manageability advantages.
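(The developer-facing side of this is plain Kubernetes. For illustration, with a made-up hostname and service name, a standard Ingress object like the following would trigger the flow described here.)

```yaml
# A standard Ingress; a unified LB+ingress controller terminates HTTP
# and load balances on the same hop. Host and service names are made up.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: demo
spec:
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

At the time of this meeting the Ingress API was still `networking.k8s.io/v1beta1`; the `v1` form shown here is the one that later went GA.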
E
It automatically translates that through the Avi APIs and programs them. What the Avi controller is going to do is first decide if it has enough service engines, enough proxies. If not, it's going to call the appropriate cloud infrastructure, whether it's vSphere or AWS or whatever it is, to spin up a set of VMs for proxies. Then it's going to place those VMs in the right network by talking to whichever underlying networking infrastructure is there.
E
It's going to allocate an IP address for the VIP, because remember the load balancing functionality needs a virtual IP, so it's going to talk to whichever IPAM you have configured. Avi comes with a built-in IPAM, but it also works with third-party commercial and open source versions, including public cloud IPAMs and DNSes as well. So it allocates the IP, and then it programs the underlying service engines. It then also programs DNS.
E
So again, Avi comes with a built-in DNS, but you can use a third party, Infoblox for example, whichever your favorite DNS is. It auto-programs the FQDN, it auto-programs the underlying proxies with the appropriate VIPs, and you're done; you can start running traffic. Now, the earlier discussion about BGP versus L2: we support both modes. Of course, the scalable model is BGP, so the controller will program BGP so that the service engines will inject BGP routes.
E
A /32 RHI is injected to the upstream router that each service engine is paired with, and with active-active scale-out to multiple service engines, the same VIP will reside on multiple service engines. They share state around SSL state and cookie state and all of that, so it's a fully distributed load balancer in an active-active form. And then the traffic: as an external client, you first look up the FQDN with your DNS, which has already been populated, and you get the IP.
E
It goes to the service engine, which terminates (again, it's both load balancer and ingress in one): it terminates the TCP, SSL, and HTTP, makes both the load balancing and the path routing decisions, and the traffic goes directly to the back end, depending on the underlying CNI. The most common deployment models are CNIs where the pods are routable, and we can go into the details, but we also support NodePort-based proxying. The most performant is if the pods are routable. Questions? This is just the high level of how it works.
E
Now I do want to show one more thing here. Again, please ask questions if you have any, but I have a demo setup, and while I don't have time to go through the detailed setup demo, I can show you some of the analytics. One of the key differentiators for us is the built-in analytics. Here is what the dashboard looks like; let me get this out of the way, sorry. So this is our UI, and it's a fully multi-tenant system.
E
So in Kubernetes the namespaces, actually the projects and namespaces, would map to a tenant in Avi. It's fully role-based access control, fully API-driven, with built-in Swagger and all of that. And then for every application that you load balance, we provide a significant amount of visibility and analytics. So in this case, this VIP is load balanced active-active across two service engines; there are three path-based routing rules that have been set up here, pointing into three different pools; and then on each of the applications...
E
We also give you visibility into the latency breakdown over the last six hours, a day, or a week. This is built in, by the way, without any agents. It tells you, for example, what's the WAN latency, what's the local latency, what's the application latency, and so on, and you can get that at the individual transaction level as well, so I can open up any transaction.
E
A
E
No, you're right, so you could have some services running in one cluster and different ones in a different cluster, or it could be running in a VM or on bare metal, and you can do load balancing based on DNS or BGP, either way. Yes, and you can have consistent policy across all of the different applications.
A
E
I don't know, I'm not an expert in Kubernetes federation, but okay: we are working on actually contributing the code back to the community, in terms of AMKO especially, the multi-cluster piece. We would like to contribute back to the community in terms of how we define those standards and what type of objects should be exposed, and how we make it native to also extend this across multi-cluster and multi-environment, whether it's VM or bare metal.
A
E
AKO, the Avi Kubernetes Operator, will be open-sourced; the proxy piece is commercial. But I've also had a chat with Michael Michael, who's on the call, around how we unify some of the APIs between AKO and Contour, and whether there is a way to have consistency so you can easily replace one with the other when you go from open source to commercial. So we are engaged in those kinds of discussions as well, on how we unify some of these things.
F
I can take the previous question. As part of KubeFed, Kubernetes Federation v2, there are federated ingress objects, federated deployments, and federated services that try to accomplish multi-cluster ingress in some sense, but I'm not sure how widely adopted and used that is. So we are doing a solution: we already had a GSLB solution, and we are doing AMKO to just integrate with that directly.
G
A
Great. So with that said, we're one minute past; I guess we can cheat a little because we started a few minutes late, but let's wrap this up. We had an item on the agenda where I wanted to talk to Robert about upcoming activities at the virtual KubeCon Europe, but I think we're out of time. If there's a rush we can talk about that in our user group Slack channel; otherwise we'll roll it into the next meeting. Has anybody else got any last-minute questions?
A
So if there's something you've been curious about, or you want a session on, challenge me, because I'm willing to go out there and get speakers on whatever topics you want to hear about. Just put it in the agenda or mention it on the community Slack channel and I'll make it happen. Thanks for coming by, everybody. Thank you.