From YouTube: Kubernetes UG VMware 20200402
Description
April 2, 2020 meeting of the Kubernetes VMware User Group. This meeting covered using a service mesh with Kubernetes running on VMware infrastructure, plus an unfinished discussion of load balancing and other mesh-related topics that will carry over to a future meeting.
A
And we're recording. So a carryover from last week was a request to cover service mesh, and I think people specifically brought up the names of Avi and NSX Service Mesh, which are VMware products. Now, this is delicate territory, because under Kubernetes and CNCF rules we cannot use this meeting for product pitches.
A
Okay, so the whole concept of service meshes: we're dealing with a new world and the move to microservices. We've got a lot more items than we used to have with monoliths. There are a lot of these components, whether they be services or app components, that are very short-lived, and they tend to be deployed in non-deterministic places, scattered anywhere from on-prem to public cloud to multiple clouds.
A
So this is one example of a service mesh. I'm going to focus on Istio, just because I think it seems to be the popularity winner when you're talking about running a service mesh in a Kubernetes environment. There are a couple of others that are also popular: Consul by HashiCorp, and Linkerd. But this slide is specifically about Istio. The other ones are quite similar.
A
There's a lot behind each of these four categories. Not everything is listed here, and you can just read them, so I'm not going to regurgitate these to you. What's new? Why would you use Istio? Well, the old model was that you would integrate into your app the actual code to engage in network communications, along with the associated security.
A
These were typically not something you'd do from scratch; on platforms like Linux you'd link in libraries to do this. But bad things, or at least labor-intensive things, would happen when you'd have version creep of these libraries. You might have one for SSL and another for whatever your network protocol is, and each of these has code dependencies on versions. Each of these has security patches coming out all the time. They're not necessarily easy to use, and you'll find that your typical app developers are not going to be networking or security experts.
A
Because of this, mistakes are going to get made and bad things happen. So the new model with a service mesh is to get the network code out of the app or service. On Kubernetes, the technique commonly used is what's called a sidecar, where it's running in a container alongside your app or service, but not in the app or service code itself, and the code in the sidecar is maintained by networking and security experts. With Istio, this sidecar would be Envoy, and in Kubernetes, like I say, it's injected into a pod.
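[Editor's note: a rough sketch of what that injection looks like in practice with Istio. The namespace and deployment names are hypothetical, and this assumes Istio and its automatic injection webhook are already installed on the cluster.]

```shell
# Label a namespace so Istio's mutating webhook injects the Envoy
# sidecar into every pod created there (assumes Istio is installed).
kubectl create namespace demo
kubectl label namespace demo istio-injection=enabled

# Deploy an app as usual -- no mesh-specific code in the app itself.
kubectl -n demo create deployment web --image=nginx

# Each pod now reports 2/2 containers: the app plus istio-proxy (Envoy).
kubectl -n demo get pods
```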
A
There are ways to use this outside Kubernetes. Not everybody has the luxury of starting greenfield and having all aspects of your services in containers running under Kubernetes. So if you've got legacy things, what can you do about that? Well, it turns out, at least with Envoy, you can have a sidecar-like option where it's installed as a service running in a Linux VM. You still have this separation, and you'll see the categories here that you can still do.
A
This is called mesh expansion. It's not as well-documented, certainly, as using it with standard Kubernetes containers, but it is possible. So you can keep your microservices, your apps, in Kubernetes, have legacy stuff in VMs, and have these things interoperate with all the features of service discovery, security, observability, etc.
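[Editor's note: one way a VM workload is made addressable inside the mesh is an Istio ServiceEntry; a minimal sketch, with a hypothetical hostname, port, and IP.]

```yaml
# Hypothetical example: make a MySQL server on a legacy VM
# (10.0.0.12) resolvable as a mesh-internal service.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: legacy-db
spec:
  hosts:
  - legacy-db.example.internal
  location: MESH_INTERNAL   # treated as part of the mesh
  resolution: STATIC
  ports:
  - number: 3306
    name: tcp-mysql
    protocol: TCP
  endpoints:
  - address: 10.0.0.12
```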
A
Some challenges with service mesh. Some of these challenges are being taken on, to varying degrees, by collateral open source projects, or maybe by the meshes themselves. But a service mesh really kind of presumes some underlying infrastructure, such as a load balancer or a firewall; it doesn't do literally everything from point to point, so this integration is something that's an open subject. Federation across meshes: you might have a mesh located in vendor A's public cloud, a second one in vendor B's, another one on-prem. How do you tie these together?
A
Supporting apps run by independent organizations. By that I mean (and this is maybe not that new in the financial sector) you might have an app written by a retailer who needs to deal with a financial institution, and you don't have a common organizational domain with integrated trust, but there are still techniques, possibly, to get those things to interoperate. How would you do that with a service mesh? Moving on: the Istio service mesh originally took on HTTP protocols, but the same concepts of a control plane, observability, monitoring, and security apply to non-HTTP protocols.
A
Another issue is integration with the lower-level network tier. I think by design Istio is meant to work with whatever CNI you happen to bring to Kubernetes, but in many instances it's useful to have integrated policy and observability between the underlying network layer and the higher-level HTTP protocol stuff that's going app to app or app to service.
A
Also, managing this when not everything is hosted in Kubernetes. And finally, it's non-trivial with many or most of the current service meshes to do the initial install, keep it running, and keep it updated, so supporting the full lifecycle management of the mesh control plane itself can be a challenge. You can meet it maybe by improved designs, or by coming up with as-a-service type offerings.
A
So that's it for my deck. I think what I'd like to do is turn it over to Deepa to make any corrections to what I just said, and maybe add a little more color, and then open it up for specific user questions. I know, Bryson, I think you had a few. But Deepa, can you introduce yourself? Okay, yeah.
B
At VMware specifically, I initially started off as one of the early engineers helping out with the product and recently moved on to product management. Basically, it's just following what Steve was talking about, and yeah, I think it was pretty accurate in terms of what he was describing. I believe there were specific questions around Tanzu Service Mesh, so maybe the way we could go about doing that is if users in this group ask them. Yeah.
A
Maybe, but before that, let me point this out: when we ended our meeting last month, this question was actually about NSX Service Mesh, but that's since been renamed. You used the term Tanzu, but maybe people in this meeting aren't even familiar with that rename that took place. Yeah, so I had asked about Avi service meshes as well. Bryson, why don't you take over here? This is a delicate subject, and I'd rather have a user kind of drive this at this point.
C
B
So we integrate with open source: our data plane is pretty much Istio today. Basically, how we are different is that we try to extend beyond the current capability, bringing in the concept of users into the mesh as well as extending to data services. So one of the things that we did, I think around a year and a half ago, was contribute to the open source, into Envoy, a MySQL filter.
B
So basically, the idea behind that is to not only capture the metrics for database activity, but also be able to apply policy based on the transactions that are flowing through the microservices. That's basically how we differentiate from open source.
B
Can you see my screen? Yes, we can see you. Okay, so basically, let me go through the architecture of Tanzu Service Mesh. We have a global controller that's essentially a SaaS, and it could also run on-prem; this piece is part of the product and is delivered accordingly. But on the cluster side, the Tanzu Service Mesh data plane is open source, and whatever we deploy on the clusters is pretty much what's out there in Istio.
B
In fact, one of the capabilities that we support in TSM is a multi-cluster solution, where you can have services extending beyond just a single cluster, and that falls well into the multi-cluster solution that Istio has, which VMware contributed to in open source. So yeah, the local control plane is all Istio. Just a question: so could this integrate?
B
Yeah, so for that: if you have Istio deployed on Anthos, the way it would operate with our service mesh solution is through federation. So basically, again, I'm not sure if the users are aware of Hamlet, which is an open source project that VMware is actively working on. The idea is that with other service mesh offerings, like Anthos or Consul or even OpenShift, you could federate with Tanzu Service Mesh using the open-source Hamlet protocol.
A
I don't want to hijack it, but it brought to mind a question myself: what's the trust assumption here? If I had Consul and, say, Anthos owned by different organizations, is it safe to integrate with this, or does this assume that everybody's kind of in the same organization and well trusted? I see.
B
That's a good question. The general idea is: not necessarily. There are two models that you could think of. One is where you're a single organization and you have a single trust domain, I mean a trusted root, and basically both meshes are trusted by that single root.
B
That's one solution, where you can federate using a single trusted root. But we are also working with Scytale on using a federated identity model. This is another open source initiative that we are in discussion with the Scytale folks on, and those folks are now in HPE. So using identity federation, if you have organizational silos, you can essentially federate different administrative boundaries and establish mutual trust using that protocol. Hamlet will integrate with that as well.
A
So, bringing this back to this user group's focus of running Kubernetes on top of VMware infrastructure, things like vSphere or maybe the desktop hypervisors: I believe it's fair to say this, but you're the expert on service mesh, so you answer. This is agnostic as to the network implementation through the CNI, so as long as vSphere supported a network implementation down at the VM level, this stuff should just work. You're not locking yourself into a particular L2 implementation of your networking. Is that true?
B
That's a potential, yeah. The way I'd put it is that at this point, no, but it's not something that we are not looking at. Because, like I mentioned, with the open source bits that we have, there's no difference in terms of the service mesh data plane between what we ship and what is there in open source. But we do recommend using TSM because, basically, we are actively managing it and blessing it, just like TKG is just Kubernetes.
B
There's absolutely no reason why it wouldn't work, that's what I'm trying to say. Yeah, you could potentially take vanilla Istio and try to integrate it; there's no reason it wouldn't work. Is it a supported model from VMware right now? The answer there is no; it's something that will come in the future.
B
If that answers your question. Okay.
C
But it is set up so that, if I'm building my cluster myself and running vanilla Kubernetes on top of VMware, where everything in my cluster really has nothing to do with VMware other than some storage providers and things like that, this could still integrate with that? Yeah.
B
Basically, we would need to work with folks on the Red Hat side to build a Hamlet agent for Red Hat, and then that could integrate with NSX SM. Right now we have integrations with Anthos and with Consul, but we don't have anything with Red Hat OpenShift yet. But if you build a Hamlet agent, there's no reason it wouldn't work.
D
So I have a question. I'm trying to understand the relationship between this and NSX-T. Might I see this as replacing NSX-T inside VMware's Tanzu Kubernetes Grid, or is it complementary? How do they relate to each other?
B
It's complementary; the solution, like I mentioned, doesn't replace it. So I think anything else, like any of the policies or connectivity for services, TSM will enable that, but anything Layer 4 and below, NSX-T on-prem will enable that. And there are also some future capabilities where the two would be integrated together as well, to kind of provide a unified policy solution, if you will. But right now, and I mean in the end, they will always stay complementary.
A
I think you raised the issue: there's the concept that, in theory, your app developer doesn't even write code as if they're talking securely. They can act as if all their traffic is coming from localhost, if you will. This is maybe a bit of an oversimplification, but you can just throw all those concerns out the window and assume that something else magically took care of it.
A
It can also manage deployments that are popular in a microservices architecture, where you might do canary testing when rolling out a new version of an app. You don't want to do a hundred-percent flip-over from version one to version two; maybe you want to test it to make sure it's good before you fully roll it out. So Istio, and I think all the other service meshes, let you specify a percentage that goes to the new version versus the old, and take care of rolling that out.
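[Editor's note: in Istio, that percentage split is expressed declaratively in a VirtualService. A sketch with hypothetical service and subset names, assuming a DestinationRule already defines the v1/v2 subsets.]

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90      # 90% of traffic stays on the old version
    - destination:
        host: reviews
        subset: v2
      weight: 10      # 10% canaries onto the new version
```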
A
They'll even approach that at a second level, with something like a use case where a new app uses version two of an API, making sure it only gets connected to the service that actually implements version two of that underlying API. Those have always been kind of tricky things to manage manually, especially at scale, and the service mesh can address that for you. Now, if you're a newbie just trying to learn it, the concept of Kubernetes also having the CNI plugin for networking is often confusing, and there is interaction there.
A
So you kind of want to learn the basics of Kubernetes first, in my opinion. That's just the path I went down anyway, because they didn't have service mesh when I started on Kubernetes, and that kind of worked for me: get that foundation of understanding how networking works in a pod first, and then take on something like Istio as your next undertaking in your path to learn things.
A
Some good resources are probably past presentations at KubeCon conferences, and maybe you have to go back as far as a couple of years ago to get the 101s, because I think a lot of the talks in the recent KubeCons have moved on to deeper-dive topics and less of the 101. But if you're looking for 101 material, even though Istio has been changing, I think a lot of it is perfectly fine.
B
Steve mentioned a few: basically, trying to enable blue-green deployments tied to your CD pipeline, or access policies as well. The question that you specifically asked was around how Istio helps you with having on-prem clusters and maybe a few in the cloud as well. That specific piece is solved by the multi-clustering solution that I mentioned. Basically, the general idea is that you should be able to discover services across different clusters and have them connect to each other.
B
So the end-to-end encryption is all TLS, and the idea is that Istio has the capability of inserting an ingress and egress gateway as attachment points to your cluster. Generally, how the traffic flow would go is that your sidecar service would try to connect to a globally named service in a different cluster, and the traffic will be routed, all TLS-encrypted, through the ingress and egress gateways down to the other cluster. So Istio enables that, and TSM operationalizes it as well.
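[Editor's note: for reference, the in-mesh TLS piece can be required mesh-wide with a single Istio policy; a sketch, assuming `istio-system` is the install's root namespace.]

```yaml
# Require mutual TLS for all sidecar-to-sidecar traffic in the mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # mesh-wide when placed in the root namespace
spec:
  mtls:
    mode: STRICT
```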
A
I wish I could tell you it's easy. I mean, it's easy in that it relieves you of a lot of the hairy issues of getting stuff to work in the new world, but Istio itself, I think, has been criticized a bit for being fairly complex to set up. I have managed to set it up myself: if you have a powerful laptop with a lot of RAM and you're just trying to learn it in a home lab, it is possible.
A
Endpoint-to-endpoint connection. But from an ops perspective, there are some things that are a little tricky when you try this on-prem. Maybe it's a well-trodden path when you're in a public cloud, but in reality these network connections might arrive at a physical location.
A
Getting a firewall going to a load balancer, and maybe, in the traditional non-service-mesh world, through an ingress on to pods with some routing: Istio will take on a piece of that, but not all of it. So when you deploy it on-prem, you may have some work to do from an ops perspective of getting a load-balancing solution in place for Istio, and I think you and I have discussed that a little.
C
You can use parts of it without using all of it. It's quite complicated. I would throw the question, like Robert said, back to you: why should you use it? I think that's the first question you should ask: do you really have a need to use it? Service mesh is quite a big buzzword right now, and it does a few different things. So that's the first question.
C
I'd ask: do you need it? And then there may be some parts where you need part of it, but you don't have to use all of it. One of the things where I thought you were going, which I've brought up a couple of times, is that there are still some things that you have to.
C
Have somewhere. So you still have to have a load balancer kind of pointing to your cluster somewhere. That's one of the things that I want to bring up as a proposal for one of our future meetings: just talk about load-balancing options in front of Kubernetes on VMware, because we have different types of load balancing on our public cloud than we do on our VMware Kubernetes clusters.
D
This gets to exactly where the confusion kind of starts to come in, because right now, if I do PKS, or if I use Kubernetes on vSphere with NSX-T underneath it, NSX will do things like load balancing for me. But in most of the designs you make, it's all very cluster-specific. So it sounds like a possible use case for service mesh to say: well, I have multiple Kubernetes clusters, and I have bits of my app spread across these clusters.
A
In a way, it's an abstraction, but it does attempt to reach out and control external things. It's sort of like if you look at Kubernetes itself with regard to ingress: there's an abstraction for ingress, which opens up a path from the outside world into things you've got running in Kubernetes. But Kubernetes inherently doesn't implement the ingress; it simply puts in place a structure where you make a declarative statement of what ingress you want.
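[Editor's note: that declarative statement is an Ingress resource; a minimal sketch with hypothetical host and service names. This `networking.k8s.io/v1` form is from Kubernetes 1.19+; clusters of the meeting's era used `v1beta1`.]

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: app.example.com      # hypothetical external hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web          # in-cluster Service that receives the traffic
            port:
              number: 80
```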
A
Some code has to run separately that monitors what your statements are and makes the world comply with what you said you wanted. Now, with regard to ingress, a lot of the solutions that are popular can actually be self-hosted in Kubernetes, but they don't have to be. You can have ingress based on traditional legacy, let's say hardware, big-iron solutions, and I believe with load balancing it's the same scenario.
A
There are some options that could be self-hosted on Kubernetes, but there are others that maybe you've had in place in a data center for years, where somebody would still have to write, I think, the adapter code to make this happen; but it can be done. I had personal experience with using a load balancer called MetalLB, and I'm by no means an expert.
A
The only reason I even got into this was that I did a talk on Istio mesh expansion, using it to connect Kubernetes workloads to VMs, at an Open Source Summit last year, and I wanted to get a demo going from the talk that I could run on my laptop. I went down this path of using a desktop hypervisor to stand up Kubernetes plus Istio, and found I also had to stand up a load balancer. Bryson's right, there are a lot of moving parts here.
B
So yeah, there are options there: there's using Contour as ingress, or an Istio gateway as the ingress into your mesh. These are the kind of load-balancing options that you could potentially integrate with, and I believe, through the Cluster API, eventually there should be a way to launch a cluster and specify which ingress you want to use.
A
But if anybody does get to playing with it: I got it to work, and one of the hints for running it on a hypervisor was that, at least for me, I found I had to enable promiscuous mode in the networking infrastructure to get the thing to operate, because it does MAC address aliasing and things like that as part of the way it works. There's another mode where you can use Border Gateway Protocol instead, where you don't have to do that, but I don't have stuff using BGP.
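[Editor's note: for reference, MetalLB of that era was configured with a ConfigMap like the sketch below; the address range is hypothetical. Swapping `protocol: layer2` for `bgp` plus a peers section gives the BGP mode mentioned above.]

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2               # ARP-based mode discussed above
      addresses:
      - 192.168.1.240-192.168.1.250  # hypothetical pool of external VIPs
```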
D
The funny thing is, from the world I'm coming from, there really is only one answer: that's NSX-T! So it's really interesting to hear all you guys mention every single other way of doing ingress load balancing without mentioning the product that I think VMware kind of pushes as the solution for that kind of stuff, and I can't imagine that.
A
Okay, we've got three minutes to go before the top of the hour. Any last-minute topics people want to bring up? Robert, I'm just curious: I know KubeCon Amsterdam was rescheduled, let's hope that works out well, but it was due to be going on now, and I know you had your social planned as a venue for a face-to-face meeting if the user group members had been in Amsterdam. Did any of the locals actually meet for that, or is that kind of thing now banned?
D
It's cancelled completely. All cafes and restaurants are closed, and we're not allowed to gather, so we're all on lockdown; it wasn't going to happen anyway. But I think Keith suggested I do it virtually. I would, if I had a corporate Zoom account that doesn't kill you after 40 minutes. I might still organize something like that in the coming month or so, just for fun, just a Zoom version of it. Yeah.
A
Sounds great. Well, if nobody else has anything, we'll close the meeting then. Thanks for joining. We've got a few people who are going to add links to the agenda notes document, so you might be able to find a little more there in a day or so if you go look. And when Zoom gives me the recording, I'll put it up on the YouTube channel, if you want to tell your friends about whatever happened here. So thanks for joining, everybody, and see you next month. Thanks.