From YouTube: Webinar: Ranga Rajagopalan @ Avi Networks - “How-To” of Cloud-native Traffic Management
Description
Full details here: https://www.cncf.io/event/webinar-cloudnativetrafficmanagement/
In this webinar, we will discuss how cloud-native traffic management must be approached differently and how dev, app, and network teams can work synchronously to deliver microservices applications.
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
In the meantime, if you're kicking your heels, I would encourage you to check out the previous webinars on the YouTube playlist via the link that I just shared in the chat, or you can go ahead now and sign up for the next webinar we're going to have, which is an introduction to cloud-native storage; that's the second link that I just shared in the chat.
Okay, it looks like the numbers are starting to equalize now, so I'm going to get started. Welcome to this, the fifth or sixth (I can't remember any more) CNCF webinar. As I just mentioned to most of the people in this chat, though perhaps you joined at the last minute and missed it, I put two links in the chat.
One is a link to the YouTube playlist that has all the previous CNCF webinars. They're getting huge numbers of hits, so I would advise you to go and take a look at those. The second link that I put in there was the link to sign up for the next webinar in this series, which is an introduction to cloud-native storage. It's one of a whole host of webinars we've done; the playlist lists them all.
You can find all the important information on the CNCF website, and that will take you through to a link which will allow you to sign up now.
Today we have Ranga, a co-founder of Avi Networks. Over the last 15 years, prior to co-founding Avi Networks, Ranga has been an architect and developer of several high-performance distributed operating systems, as well as networking and storage data center products. Today, Ranga is going to be telling us about cloud-native traffic management.
A
B
In
Waunakee
guys,
thanks
for
being
here
today,
as
Mark,
said
we're
going
to
cover
traffic
management
in
a
cloud
native
cluster
today
my
name
is
Ranga
and
I'm,
a
co-founder
and
CTO
at
RV
networks
and
for
the
past
four
years
have
been
exclusively
focusing
on
tech
management
and
how
to
do
that
effectively
at
scale
in
a
way
that
you
can
actually
deploy
in
production
and
keep
applications
running.
So
that's
what
I'm
hoping
to
share
today
with
all
of
you.
So
first
we
will
start
by
just
talking
about
what
exactly
is
traffic
match.
We'll peel back the onion a little bit on the anatomy of a request as it comes into a cluster that's running your applications, and then we will go deeper into things like scale, availability, security and so on. A recent survey at KubeCon Berlin listed the top concerns in deploying and then taking a cloud-native compute cluster into production.
B
Usually,
there
is
a
lot
of
work
and
heavy
lifting
that
needs
to
be
done
from
taking
to
that
into
a
fully
production,
ready
system
and
a
cluster
to
start
with
user
accesses
and
application,
and
this
is
where
everything
begins,
and
this
is
the
purpose
of
the
cluster
and
the
applications,
of
course,
that
we
are
deploying
on
the
cluster.
So
first
thing
the
user
does.
Is
he
or
she
does
a
request
through
that?
Here
she
refers
to
the
application
by
name
facebook.com/,
and
this
request
is
served
by
a
DNS
server.
If an application is like Facebook or Amazon, then it is globally distributed, and we will talk about that in a little bit. At this point the IP address that's returned may be from a variety of locations; regardless, the user is returned an IP address, and he or she then actually connects to the service. As part of this connection, which is usually over TCP, a connection is established, and the first thing that happens is that the request, which is usually encrypted with SSL or TLS as an HTTPS request, is terminated.
Once the request is terminated, there is a function called layer 7 routing that actually directs this request to a given application, and this is usually done based on some portion of the URL in the request. Note that this is different from traditional layer 3 routing, because this happens at the application level, based on the URL that the user is trying to access. In this specific case, let's go ahead and assume that this request is routed to an application called photos.
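The layer 7 routing step described above can be sketched in a few lines: the proxy inspects the request path and picks a backend pool by URL prefix. This is a minimal illustration; the route table and pool names here are made up, not from the talk.

```python
# Layer 7 (URL-based) routing sketch: map a path prefix to a backend
# application pool, the way a reverse proxy routes /photos/* to the
# "photos" service. Routes and pool names are illustrative.
ROUTES = {
    "/photos": "photos-service",
    "/videos": "videos-service",
}
DEFAULT_POOL = "web-frontend"

def route(path: str) -> str:
    """Return the backend pool for a request path (longest prefix wins)."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix]
    return DEFAULT_POOL
```

Unlike layer 3 routing, which only ever sees IP addresses, this decision is made on the application-level URL after TLS termination.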
Once the request is routed, it is actually sent to one of several instances of the application. Why are there different instances? For the simple reason that one instance may not be sufficient to service all the requests; also, if one instance fails, we need backup instances. So usually there are multiple instances of applications, or servers, that provide the service, and the device that actually performs all of this (we'll talk about it a little bit more) is called a load balancer. Now, once the request actually hits one of the instances of the application, it doesn't stop there.
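A minimal sketch of the load balancing idea just described: spread requests across multiple application instances, and skip an instance that has failed so the service stays up. The instance names are illustrative.

```python
import itertools

class RoundRobinBalancer:
    """Spread requests across several application instances; skip any
    instance that has been marked down so the service stays available."""

    def __init__(self, instances):
        self.instances = list(instances)
        self.down = set()
        self._cycle = itertools.cycle(self.instances)

    def mark_down(self, instance):
        # A health check would normally drive this.
        self.down.add(instance)

    def pick(self):
        # Try each instance at most once per call.
        for _ in range(len(self.instances)):
            candidate = next(self._cycle)
            if candidate not in self.down:
                return candidate
        raise RuntimeError("no healthy instances")
```

Real load balancers add health checks, weights, and session persistence on top of this basic rotation.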
The request may need access to other services that run within the cluster. Some surveys of traffic patterns have clearly shown that only about 20% of traffic is actually ingress traffic coming from users outside the cluster; the remaining 80% is intra-cluster traffic that remains within the cluster itself. This is usually applications or services accessing other services, which in turn access other services, and so on. So now, the first function is the one that mapped the user's request for a domain name to an IP address.
This is commonly called service discovery, and DNS is one of the most common functions that provides service discovery. The second set of functions, including SSL/TLS termination, the actual layer 7 routing, and directing the request to a specific instance of an application, is performed by devices called load balancers, or reverse proxies, or application delivery controllers.
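As a concrete illustration of the DNS-based service discovery step, the standard library can show the name-to-address mapping directly. This is only a sketch of the lookup itself; real service discovery layers add health checking and policy on top of it.

```python
import socket

def discover(hostname: str, port: int = 443):
    """Resolve a service name to the IP addresses that can serve it --
    the DNS step of service discovery described above."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    # Each entry's sockaddr starts with the IP address; dedupe and sort.
    return sorted({info[4][0] for info in infos})
```

For a globally distributed application, the set of addresses returned here is exactly where a GSLB-aware DNS layer gets to apply its policies.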
Fast-forward a little bit and let's assume that the application is very successful, which is what we all want, right? We want to develop an application and we want it to be very successful. That means more users are accessing the application. So that was sort of the 101 of how users access an application; now we go into the second level: how do I make this application scalable? How do I make it perform better?
This is the problem everyone knows: more users come in, but you also want to develop more features and functions for the applications, you want to fix bugs, and you want to be able to roll out new versions of the applications. When you roll out a newer version of the application, you definitely don't want users to see glitches. Almost all applications are globally deployed these days; they need to be available 24 by 7.
So when you roll out a new version of an application, you want it to go through, of course, a lot of testing in the CI/CD pipeline. After that, you want it to be deployed in the production cluster without disrupting existing applications. Sometimes you also want to be able to test that the new version works fine for a small fraction of users before you deploy it to everyone. The other side of this is that you usually don't have just one cluster, right?
You have your production cluster, you have a development cluster, and you may have an integration and test cluster. Depending on the number of developers and on the number of application instances being developed at any time, there can be a lot of churn in your cluster. This includes things like newer versions of applications.
So there are two dimensions of churn that you need to keep in mind when your application starts becoming popular: one is how many users are accessing the application, which drives the traffic pattern and load, and the other is how many developers are developing these applications and how often they are actually churning the system. Which brings us to the next topic: how to scale your application. As more users start coming in, the first function that starts becoming the bottleneck is the SSL/TLS termination.
This is typically a CPU-intensive task, and if you don't have enough server resources dedicated to it, it can become the bottleneck. There are a variety of techniques for improving its performance, some of which include scaling out by deploying multiple instances of the device that performs this function. But regardless of that, you need to make sure that this functional layer scales and is able to perform adequately when the number of users and sessions starts to grow.
For performance-based scaling, you monitor things like the latency of requests and the load on these instances. If you see the latency starting to go up, or you see the load getting big, you increase capacity by creating more instances. In the opposite direction, when the actual load goes down, you also want to decrease the number of instances, and you want to be careful in doing that, so you don't remove instances that still have active sessions or active connections on them.
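The scale-out/scale-in loop described above can be sketched as a simple policy: add an instance when observed latency rises above a target, and remove one when load falls well below it. The thresholds and replica bounds here are illustrative, not from the talk.

```python
def scaling_decision(latencies_ms, replicas, target_ms=100.0,
                     min_replicas=2, max_replicas=10):
    """Return the desired replica count for the next interval.

    Grow when average latency exceeds the target; shrink when it is
    well below it. Real systems also drain connections from an
    instance before removing it, which this sketch omits.
    """
    if not latencies_ms:
        return replicas
    avg = sum(latencies_ms) / len(latencies_ms)
    if avg > target_ms:
        return min(replicas + 1, max_replicas)
    if avg < 0.5 * target_ms:
        return max(replicas - 1, min_replicas)
    return replicas
```

The asymmetric thresholds (scale up at the target, scale down only at half of it) are one common way to avoid flapping between sizes.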
So I have scaled my cluster, I have avoided degraded performance, and it's elastic. Is that all? No. Once you have deployed it and it's scaled and running fine, the next thing you need to worry about is availability. This includes availability within a cluster, and it also includes availability across clusters. If one of the instances here, for example instance one, goes down, then there are other instances to cover for it, so requests still continue to work; the service is still up because it's actually provided by other instances that are running within the cluster.
If it is a mission-critical application, you need global availability, so you need to deploy these applications across availability zones and across regions. That's where the initial service discovery comes in handy: because the user's request first hits the DNS service, the DNS service, or service discovery, can decide whether it wants to direct the user to the data center on the left or the data center on the right, and this decision can be made depending on a variety of policies.
It can be made depending on the availability of the regions or the clusters: if one is unavailable for maintenance or it's down for some reason, then all traffic can be sent to the other one. It can also be made depending on the proximity of users: for example, if the user is closer to the cluster on the left, then that user is connected to the cluster on the left.
The decision can also be made based on load: for example, if the cluster on the left is overloaded, then new users can be directed to the cluster on the right. So the service discovery mechanism is critical in directing users to the appropriate region and data center, and having an application, especially the most mission-critical ones, deployed across multiple AZs or multiple regions is very important for the availability of your application. So now you have been able to deploy your application, it's working fine, and you have scaled it for your users and your developers.
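The DNS/GSLB policies just described (availability, proximity, load) can be combined into one small selection function. This is a sketch with made-up site records; a real global server load balancer would feed these fields from health checks and telemetry.

```python
def gslb_pick(sites, client_region):
    """Pick the data center a DNS/GSLB layer should answer with.

    Policy order: only healthy sites are eligible; prefer a site in
    the client's region; among candidates, pick the least loaded.
    The record fields (name/region/up/load) are illustrative.
    """
    healthy = [s for s in sites if s["up"]]
    if not healthy:
        raise RuntimeError("no site available")
    local = [s for s in healthy if s["region"] == client_region]
    pool = local or healthy          # fall back to any healthy site
    return min(pool, key=lambda s: s["load"])["name"]
```

Swapping the policy order (say, load before proximity) is just a matter of reordering the filters, which mirrors how GSLB products expose these as configurable policies.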
If the worst does happen and the data center gets breached, then you also need to make sure that you're able to protect your most critical assets, in this case, for example, your database. Techniques like micro-segmentation provide policies on which set of containers or services is allowed to talk to which other set of containers or services. So we need to have micro-segmentation policies in place so that only authenticated, intended clients are allowed to talk to the protected services.
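Micro-segmentation, as described above, amounts to a default-deny allowlist of which service may talk to which. A toy policy check, with illustrative service names:

```python
# Micro-segmentation sketch: an explicit allowlist of permitted
# service-to-service flows; anything not listed is denied, so a
# breached front end still cannot reach the database directly.
ALLOWED_FLOWS = {
    ("web", "photos"),
    ("photos", "database"),
}

def is_allowed(src: str, dst: str) -> bool:
    """Default-deny policy check between two services."""
    return (src, dst) in ALLOWED_FLOWS
```

In Kubernetes this kind of rule is typically expressed declaratively (for example as NetworkPolicy objects) rather than in application code, but the default-deny principle is the same.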
The set of techniques that we use to provide these services, the fundamental techniques, are still the same, except that cloud-native applications are decomposed further into smaller sets of services. So the requirement for automation is a lot greater for cloud-native microservices, and that is built in with cloud-native cluster management solutions. Cloud-native microservices require a lot more automation, whereas traditional applications have been built in sort of a monolithic way, with slow-moving, manual change-control policies. That's sort of the distinction between the two. Thanks.
Thanks, Ranga. I hope that answers your question, Ariel, and if it doesn't, please drop a follow-up back in the chat and I'll ask Ranga at an opportune moment. The second question we had coming through is from Dan Lamont. He asks: how do most companies do rolling updates of their load balancers at scale (without users' TCP connections being terminated)? That is, updating DNS records and waiting for the TTL, or something else?
Yeah, so that's one way you can do it. The other way you can do it is connection mirroring: every TCP connection is actually mirrored to a second load balancer, so when a load balancer goes down, the traffic can be moved to the other load balancer and still be preserved, because it mirrors the connections. There's also an intermediate way.
You can do this where you actually move the virtual IP addresses to another load balancer, so that it starts responding to the virtual IP address. When you do that, all the new connections go to that load balancer; for the existing connections, you wait until all of them drain (usually these are short-lived connections), then you upgrade this load balancer and you move the VIPs back. So you are sort of migrating VIPs around.
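The VIP-move upgrade just described can be sketched as a small state machine: quiesce the old balancer so new connections land elsewhere, wait for existing connections to drain, and only then upgrade it. Everything here is simulated for illustration.

```python
class LoadBalancer:
    """Sketch of a drain-then-upgrade cycle for a load balancer.
    Connection tracking is simulated; a real balancer would count
    live TCP sessions and own/release the virtual IP."""

    def __init__(self, name):
        self.name = name
        self.accepting = True        # owns the VIP, takes new connections
        self.active_connections = 0

    def quiesce(self):
        # Move the VIP away: new connections now land on the peer.
        self.accepting = False

    def drained(self) -> bool:
        return self.active_connections == 0

def safe_to_upgrade(lb: LoadBalancer) -> bool:
    """Only upgrade once no new connections arrive and old ones drained."""
    return (not lb.accepting) and lb.drained()
```

In practice the drain step has a timeout (tens of seconds for typical short-lived connections), after which any stragglers are dropped and retried by the client.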
It's a great question. In fact, all these functions and techniques are actually common across almost all the major container orchestration systems, including Kubernetes, Mesos, Docker Swarm and so on and so forth. The functions are all needed, and the techniques are all needed; what of course varies is the detail of how they are implemented.
However, most of these are actually provided at the infrastructure layer by the orchestration systems, and so they are somewhat hidden from the users of these clusters and from the developers. But if you are an infrastructure provider, either trying it out for the first time or running it already and wanting to scale it out, then you need to be aware of these techniques and also how they actually work in these systems.
So let's walk through it: your application is up, it's scaled well, it's available and it's secure, each of which is not very easy to do. But let's assume that you have done all these things. Is that all that's needed? Not really. You also want to know how your applications perform. Some aspects of that are things like: what latency are all these users seeing?
Is a percentage of them seeing errors? Is a percentage of them seeing higher latency or having performance issues? When you are being attacked, you also need to know what kind of attack it was, who did it, and whether something was actually breached or not. So, last but not least, you need a lot of visibility to monitor what's happening in your cluster, to see what is happening with respect to performance, with respect to security threats, and so on and so forth.
You need a good monitoring system that provides insights into performance, infrastructure, any errors, a configuration audit trail, and security threats: what attacks were prevented, what attacks were let through, and so on. This is in addition to any application-level logging that you may need in order to troubleshoot application-level problems.
You need a reverse proxy or load balancer to do things like TLS termination, layer 7 routing, load balancing, and things like circuit breakers and so on. You need a good security implementation that includes micro-segmentation, authentication, firewalling, authorization, auditing and logging to make sure your application is secure. And you need a system that provides good visibility into what is happening in your cluster and allows you to monitor it, combined with an alerting system.
All these services can and should be implemented and integrated natively into your compute cluster. The more integrated these services are in your cluster, as part of the infrastructure layer, the better they will work and the easier it will be to port them across different clusters, or when you move from a data center to an instance in a public cloud. Couple that with automation for providing all these functions, and that's a key requirement for successfully deploying these services and making sure that they run well in production.
Ranga, thank you so much for the enlightening talk. We've got one more question coming through the chat, and I've also got one of my own; one of the privileges of hosting these things is that I get to ask all the questions I've been storing up for a while. Swana asks, as a follow-up to Ariel's previous questions: how does this change if I have my cluster split across a public cloud, like, for example, GCP, and then on-prem?
Great question. All these functions are definitely needed; the only differences will be that some of them are natively provided by the cloud provider. For example, for the DNS service, AWS provides Route 53, and so do the other major cloud providers like Azure and GCP. So when you run natively in the public cloud, you have the choice of using that cloud provider's infrastructure, such as its DNS service.
When you are running in your own data center, you have to deploy such a service of your own, and you have a variety of choices that you can choose from. So the provider of each function can vary depending on where the service is deployed, but other than that, all the functions are the same, the needs are the same, and you need to follow the same guiding principles for deploying them, regardless of where they are deployed.
Thanks, Ranga. My follow-up question is that you touched very briefly on the idea of what would now be called a service mesh. Linkerd is one of the most recent projects in the CNCF, and I'm wondering what else should be considered; I mean, previously I think people used to maybe stick nginx in front of a few things and that was the end of it.
It did, although it sparked another one, so I'm going to jump straight in, since we still have a bit of time left over. A long time ago I was a continuous delivery consultant, and, well, let me flip the question around. A lot of the time now the conversation, and I understand why, is about how things work. But, and I believe Dan Lamont alluded to this earlier, and maybe this also includes Swana's questions.
Now we have perhaps load balancers on-prem, we have GCP, we have new updates of our own software, and we have new updates of Linkerd and other things coming through. From a software delivery point of view, what best practices do you see emerging? Because clearly the benefits are clear, but it also brings extra complexity at an orchestration level: how do we keep new versions of things moving through to keep this whole stack running?
So almost all of them follow a few principles that make a lot of sense and that are used to solve this all the time. Chief amongst them is that you almost always have to have multiple clusters: some for development, some for testing and some for production. You always stage it in such a way that you first test things out, perhaps in development, then do integration and unit testing, then move to production, and you also automate the heck out of it, right? We've all been there.
So you want to do a bit of canary. A lot of this, again, we used to do with bits and pieces of homegrown scripting and tools and so on in the past. The good news is that a lot of these capabilities are built into cloud-native clustering technologies these days, so it makes things repeatable and makes it easier from an infrastructure point of view to roll this out. Those are definitely some trends that I'm seeing if I look back over the past seven or eight years at how things were then, as I remember them.
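The canary step mentioned above is often implemented as weighted routing on a stable hash of the user, so a small, consistent fraction of users sees the new version. A sketch (the version names and the percentage are illustrative):

```python
import hashlib

def canary_route(user_id: str, canary_percent: int = 5) -> str:
    """Send a stable small fraction of users to the canary version.

    Hashing the user ID keeps a given user on the same version across
    requests, so their experience is consistent during the rollout.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "v2-canary" if bucket < canary_percent else "v1-stable"
```

If the canary shows no regressions, the percentage is ratcheted up until the old version can be retired; if it misbehaves, the same knob rolls everyone back instantly.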
So the first solution is based on DNS; then there were sort of two variants of the second kind of solution, both of which are about the following: I have a set of IP addresses sitting on one instance of my load balancer, and I want to upgrade this load balancer, so I want to move these IP addresses to another instance of a load balancer so that I can upgrade instance one. There are two ways you can move IP addresses around between load balancers.
One: you mirror every TCP session from load balancer one to load balancer two. So when you simply move the IPs, all the sessions are available on load balancer two, all the existing users are preserved, and traffic continues. In essence it doesn't really matter that load balancer one goes down; you can upgrade it, and then once it comes back up, you mirror the TCP sessions and move all the IPs back.
You need to do this if your sessions last for a long time. In the real world, less than 1% of applications actually have this need. Some examples are things like backup, or, I've seen some of these in financials, where a trading company opens a connection to a large broker and it remains there for weeks, months maybe. But most real-life applications, 99% of them, do not have long-running sessions.
Most TCP sessions are measured in tens of seconds, and even if you drop a TCP session, it's no big deal, because the client will simply retry the request; that's what most browsers and apps do, and it's built into the client stack. Assuming that is the case, the upgrade becomes a lot simpler. What you do is you still move the IPs from load balancer one to load balancer two, and then all the new connections will be handled by load balancer two.
All the existing connections will still be handled by load balancer one, but load balancer one is not accepting any new connections, and you wait for the existing connections to drain, waiting for, let's say, 30 seconds, 60 seconds, maybe a few minutes. Now load balancer one has dropped all existing connections and is empty, so you go ahead and upgrade it; load balancer two is handling all the new connections. Once load balancer one comes back up, the opposite happens: load balancer one, which just got upgraded, handles all the new connections.
Dan has one follow-up question there, which is: are there any open source software projects that help you to do that mirroring, to automate the process you were talking about just now?
Thanks, Ranga. So then, Ariel and Swana, I hope that all your questions were answered. If not, I would encourage you, if you haven't already, to join the CNCF Slack; you can catch Ranga there, or on Twitter, or at the email that you see down at the bottom. All that's left for me to do is say two things. One: I'm dropping a link into the chat right now.
That's the link to the next CNCF webinar that we're going to do, which is going to be an introduction to cloud-native storage, something we've been working on for months, so I hope that everyone is going to turn up. And second, of course: Ranga, thank you so much for your time today, for putting these slides together and for explaining something about cloud-native traffic management to us. Thanks.