From YouTube: How to Design For a Multi Cloud Deployment
So it's everything from co-location to true multi-cloud to hybrid cloud, to public and private cloud, and everything involved in the idea of having more than one provider for the services you use, for reasons ranging from redundancy to picking the best-of-breed service and everything in between. So really, multi-cloud to us, and enabling multi-cloud, and the challenges of multi-cloud, are around these things: different platforms with different apps, at different stages, in different locations around the world.
So what are we going to talk about today? The main focus of the webinar: what to use multi-cloud for, how to choose between multiple cloud providers, how to deploy to multiple cloud providers, how to secure them and think about security across different providers, and then a little bit of insight into how Snapt does it in our organization.
So the first step: what to use multi-cloud for. What are the benefits of multi-cloud, and what do you need to be looking at it for? What are the low-hanging fruits, if you will? Obviously we're of the opinion that all organizations should really be preparing for, or already using, a multi-cloud strategy; as you saw, some 81 percent of organizations are already in multiple locations and clouds. But the common benefits, the easiest upsides, are what I'd like to focus on first.
By far the biggest, where we see clients going to multi-cloud first and with the most immediate return, is redundancy and availability.
You can see in the figure on the top right that out of some 1,200 respondents, only 27 percent said they have no downtime per month from cloud providers. So the vast majority experience some amount of downtime each month from cloud provider outages. And when you look at large organizations, you might not just be looking at outages in terms of availability zones or data centers going down, but also plain system outages: losses of virtual machines or instances or services.
They do happen: clouds have a one to two percent failure rate per year. So if you've got hundreds of systems, they will be failing throughout the year, almost on a daily basis if you're large enough, and that's the critical component here. In the last year we've seen a lot of this: a lot of outages in clouds.
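To put rough numbers on that point, here is a quick back-of-envelope sketch; the 1.5 percent rate and the fleet sizes are illustrative assumptions, not any provider's published figures.

```python
# Back-of-envelope math for the "one to two percent failure rate" claim above.
# The 1.5% rate and the fleet sizes are illustrative assumptions.

def expected_annual_failures(num_instances: int, annual_failure_rate: float) -> float:
    """Expected number of instance failures across the fleet in a year."""
    return num_instances * annual_failure_rate

def prob_at_least_one_failure(num_instances: int, annual_failure_rate: float) -> float:
    """Chance that at least one instance fails this year, assuming independent failures."""
    return 1.0 - (1.0 - annual_failure_rate) ** num_instances

for fleet in (100, 500, 2000):
    print(fleet,
          round(expected_annual_failures(fleet, 0.015), 1),
          round(prob_at_least_one_failure(fleet, 0.015), 3))
```

With a few hundred systems, at least one failure during the year is a near-certainty, which is exactly the speaker's point about designing for failure rather than hoping to avoid it.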
From fires at providers to network outages to power problems. I think a real high-availability strategy today must include a multi-vendor strategy. It's very hard to say that your application can guarantee 100 percent uptime if you're dependent on one single provider or one single network, or even worse, one single data center.
There are challenges around redundancy and availability too. It's very common for us to see clients that have one deployment in Europe and one in the US, where if the US goes down, American users should be sent to Europe. But that still doesn't come for free; there's a design element there that's really important.
The next biggest driver for multi-cloud that we see is geographic considerations. More and more workloads today are very latency sensitive, and getting traffic to consumers in the lowest amount of time possible is critical for all sorts of different use cases.
From finance to online gaming to gambling and sports betting, even just websites really. When you look at latency of 50 milliseconds or below, you're basically talking about geographic considerations. To give you an idea: getting to the west coast of America from South Africa takes 300 milliseconds. Even with no processing delay, it's just that far. So there's simply no way an Amazon deployment on the west coast of the US can provide a 200-millisecond SLA to clients in southern Africa.
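The 300-millisecond figure is mostly physics. Here is a hedged sketch of the minimum possible round trip, using approximate city coordinates and the common rule of thumb that light in fibre covers about 200 km per millisecond; both are assumptions, and real cable routes are much longer than the great circle.

```python
import math

# Rough physics behind the latency numbers: a hedged sketch, not a network model.
# City coordinates and the in-fibre speed of light are approximations.

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def min_rtt_ms(distance_km, fibre_speed_km_per_ms=200.0):
    """Theoretical minimum round-trip time if fibre ran the great circle."""
    return 2 * distance_km / fibre_speed_km_per_ms

# Cape Town (-33.9, 18.4) to Los Angeles (34.1, -118.2), approximate coordinates.
d = great_circle_km(-33.9, 18.4, 34.1, -118.2)
print(round(d), "km, min RTT ~", round(min_rtt_ms(d)), "ms")
```

The theoretical floor alone is around 160 ms; real undersea routes are nowhere near straight lines, which is how you end up around 300 ms in practice.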
They would have to deploy in South Africa. Until recently, the only public cloud of the big three available in South Africa was Azure. So you could be an Amazon shop, bring on a big client in southern Africa who requires a sub-200-millisecond reply on an API of yours, and all of a sudden you have to deploy into Azure as well, because there's simply no Amazon data center there. We see this kind of thing a lot.
It's not just a global problem; it's an in-country problem as well, down to a specific state, for example: how do you deploy this workload in that state? That really drives people towards multi-cloud, and remember, the way I talk about multi-cloud is deployments in many places. You might find that there's no feasible cloud in a state you need to deploy in, but there is a data center.
So you ultimately wind up with the same problem, but the need is rising big time to have systems within what we used to call, in the old days, the last mile. It's obviously not a physical mile, but being within the last mile of the consumer is a big trend and a big concern for a lot of application types.
The next one is cost and feature options: the best possible cloud for each workload. That's easy to say, but really it means using the most optimal solution for each problem you have, the problems typically being your applications. And an application is many things: it can be a SQL database plus a key-value store plus a bunch of web servers, or it can be serverless, or just general-purpose workloads. You can see the costs at the top right.
No one cloud is cheaper than the others across the board; it depends on what you need, what instance size you're launching, what services you need to consume, and so on. And really, if you have a truly multi-cloud workload or application delivery strategy, there's no reason why you can't always be deployed in the cheapest cloud.
A
It's
like
something
of
a
shocking
example
to
people
often,
but
we
have
a
client,
for
example,
that
spends
five
to
six
million
dollars
a
month
on
cloud
and
they
will
move
their
workloads
between
the
big
three
clouds
multiple
times
a
day,
depending
on
the
rates
that
they
get
or
you
know
their
spot
pricing
or
the
instance
requirements,
or
things
like
that
right.
So
that's
obviously
an
extreme
example,
but
there's
really
no
reason
why
you
shouldn't
be
able
to
benefit
from
that.
The
other
thing
to
remember
is
that
cloud
can
become
a
platform
for
you.
You might have an entirely containerized workload that you really can deploy anywhere, and then there's no reason not to simply run more containers wherever it's cheaper. And the final point, and this is an important one, is avoiding vendor lock-in.
When you design for a single cloud, you tend towards a vendor lock-in problem. That doesn't mean you definitely have one by working in one cloud, but you start to utilize local cloud services and applications: products that only Amazon has, or an API that's very specific to Azure or GCP, or whatever it might be, and it then becomes very difficult to get away from them. So what happens then?
Your business is looking to be acquired by someone, but they need it to run in a different cloud and it's not easy to migrate. Or that cloud's pricing changes, and all of a sudden it's not the best option for you anymore, but you cannot move your workload off because it's become so dependent on it. These are lessons that we as an IT community learned long ago, with SQL vendors and the like that used to lock you in so that you could never leave.
There are deployment models that help with this natively, like being able to deploy into Kubernetes, which makes things much easier to lift and shift. But designing your application with the intention of running it in more than one provider really forces you not to become locked into a vendor. So this is more of a side benefit, I think, but it is a big concern.
So the next topic is choosing cloud providers: how to choose the cloud providers for your business when you're looking at these multi-cloud strategies. Often you will already have one, but how do you choose the second, or how do you choose many? My first, slightly tongue-in-cheek point would be: don't design for a specific cloud. Don't choose two. That doesn't mean you can't pick two to run in today, but don't choose two to build for.
A
You
know
in
such
a
way
that
you
could
run
in
any
environment.
Is
the
key
really.
I
think,
because
that
helps
you
to
avoid
these
lock-in
dangers,
and
then
you
know
it
also
helps
you
to
build
for,
like
non-restricted
environments
and
platforms.
Right,
like
I
said,
like
kubernetes,
where
you
can,
you
know
ultimately
design
your
workload
for
anywhere
or
containers
or
even
virtual
machine
images.
It
all
depends,
you
know,
can
you
use
relational
database
services
in
a
public
cloud?
A
Yes,
because
almost
all
public
clouds
have
them,
but
as
you
go
down,
that
kind
of
you
know
vendor
provided
set
of
services.
You
obviously
become
more
and
more
locked
in
so.
The
first
thing
is,
I
think,
don't
choose
cloud
providers
as
a
part
of
your
application,
design
or
work
or
deployment
architecture.
You
know
you
can
choose
them
based
on
what
the
cheapest
one
is.
What
the
closest
one
to
your
you
know,
location
is
what
the
latency
is
like.
The next point is to plan your architecture. This is the big, important part I mentioned: location and requirements. Where do you need to physically exist? Where will it benefit you most to exist? Maybe you're looking at a multi-cloud strategy for redundancy, reliability, and high-availability reasons, but your business has clients primarily on the west coast and east coast; then you should deploy on the west coast and east coast.
If you've got a lot of clients in the UK, maybe it would be a good idea to have a data center in Europe. A lot of these benefits come for free: you need two data centers for reliability, but you also then get better performance in Europe, because you're sending Europeans to a local data center. So I think that's the big thing. And then, depending on your workload, you really should look at the costs.
There are wildly different costs between cloud providers, especially when you start to look at an enterprise workload. Egress fees can be extremely expensive; the fees on load balancers can be extremely expensive. This example isn't specific to Amazon, but by way of illustration: if you had, say, a hundred thousand new connections a second to an ALB on Amazon, just the load balancing component could cost you 20, 30, 40 thousand dollars a month.
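You can sanity-check that figure against the AWS ALB pricing model, where capacity is billed in LCUs and one LCU covers 25 new connections per second, at roughly $0.008 per LCU-hour in us-east-1. Verify against current pricing before relying on this; the sketch also ignores the other LCU dimensions and the fixed hourly charge.

```python
# Rough ALB cost estimate from the new-connections LCU dimension alone.
# Rates reflect AWS's published us-east-1 ALB pricing at the time of writing
# (one LCU = 25 new connections/second, $0.008 per LCU-hour); other LCU
# dimensions and the fixed hourly fee are ignored for simplicity.

HOURS_PER_MONTH = 730

def alb_monthly_cost(new_conns_per_sec: float,
                     conns_per_lcu: float = 25.0,
                     usd_per_lcu_hour: float = 0.008) -> float:
    """Monthly LCU charge driven purely by new connections per second."""
    lcus = new_conns_per_sec / conns_per_lcu
    return lcus * usd_per_lcu_hour * HOURS_PER_MONTH

print(round(alb_monthly_cost(100_000)))  # ~23360 USD/month
```

At 100,000 new connections per second, this one dimension alone already lands in the speaker's 20 to 40 thousand dollar range.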
So as you look to scale up, you can get a fright, basically. You should really look at the underlying costs: not how much your VM is per month, but how much data you transfer and whether you'll ultimately get charged for it, how much database storage you need, and where your backups will be kept.
Now, of course, you will typically want some sort of pre-production environment alongside your actual production environment, but there are many companies who will deploy their staging systems, CI/CD infrastructure, and things like that somewhere like DigitalOcean, where the infrastructure might be two to three times cheaper than AWS or GCP or Azure. Or maybe in other clouds: there's Linode, there could be VMware on your own infrastructure; you could even do it on local hardware.
A
Next
thing
is
to
ensure
your
requirements
are
met.
I've
mentioned
some
of
these
things
already,
but
you
need
to
look
at
databases
right.
Many
clients
will
rely
on
the
cloud
provider
to
provide
relational
database
services
as
well
as
nosql
key
value
stores
things
like
that,
and
they
vary
quite
a
lot
between
them.
You
know:
there's
a
good
20
to
30
percent
variance
between
them.
Egress
data
fees
can
be
very
different
right.
So
that's
the
cost
of
sending
data
out
of
your
instances
and
it
costs
a
lot
in
cloud.
A
If
you're
sending
a
lot
of
data,
it
can
catch
you
by
surprise
your
staff
abilities
and
training.
This
is
something
that
people
don't
talk
about
enough
when
they
talk
about
multi-cloud,
in
my
opinion,
but
you
know
if
your
staff
are
familiar
with
azure
and
gcp,
then
that's
the
two
clouds
that
I
would
pick
first.
Those
would
be
the
two
low-hanging
ones.
There's
no
need
to
deploy
to
the
cloud
where
troubleshooting
will
be
hard
or
where
you
need
to.
More than anything, you want a neutral experience where you can use whatever clouds are appropriate; there's just no need to pick the one that nobody on your team understands. You also need to consider, especially at larger enterprises, regulatory requirements. For example, maybe you're working with government, and you need a cloud provider that has an acceptable environment for you to deploy into for federal work.
A
Another
big
thing
that
we're
seeing
lately
is
data
sovereignty
right.
American
companies
want
to
know
that
their
data
is
stored
on
servers
that
are
in
america.
European
countries
might
want
to
know
that
their
data
is
stored
in
europe,
australia
you
may
need
to
keep
it
in
australia
and
so
on
right.
So
a
lot
of
the
time
where
we
say
you
know
this
last
mile
deployment
style
can
also
be
because
you
know,
if
you
want
to
do
business
with
the
australian
government
as
an
american
company,
you
may
well
need
servers
that
are
deployed
in
australia.
So consider the regulatory requirements around that if you plan on doing government business in particular. You may also have regulatory requirements on your business that only certain clouds satisfy, like PCI DSS, or practical questions like how easy it will be to get SOC 2 compliance on a given cloud. All of those types of considerations are important too.
It's another good reason not to be tied into one cloud: if you ultimately do need to move for some outside consideration like that, it makes the whole process much easier. That takes us to the commercials and the differences: contracts and commercial terms between the different clouds.
A
It's
very
common
for
large
consumers
to
get
better
deals
with
different
clouds
by
signing
contracts,
and
things
like
that,
but
also
the
various
types
of
support
you
get
per
cloud
is
very
different
right,
so
slas
might
be
different,
their
fees
might
be
different.
You
know
you
can
see
that
they
can
scale
up
a
lot
right.
This
particular
cloud
example
is
fifteen
thousand
dollars
a
month
for
15-minute
support.
Right
now,
if
you've
got
a
hundred
percent
sloa
guarantee-
or
you
know,
five
nines-
you
need
15-minute
support.
So
then
you
need
to
account
for
that
right.
What liability and performance guarantees do they give, and how does that affect the SLAs you ultimately have to cover? Because, as you saw on one of my very first slides, only 27 percent of clients had no outage this month in public cloud, and those are enterprises with big workloads. So you really need to consider that; I think it's quite important.
We see this a lot, and it went through a bit of a wave, or should I say a valley: in the beginning, as people consumed more and more cloud, hybrid was very common, hybrid meaning that you had physical data centers or on-premise workloads and you also had public cloud workloads.
What I mean by the valley is that hybrid started to shrink, and now we're seeing a big uptick in cloud plus X, where often it's more like A, B, C, D, E, F, G, much more than just one X, because people are putting workloads on edge deployments, or they need custom hardware for certain types of workloads where hardware is much better, or they need GPU workloads, or whatever it might be.
A
That
might
be,
but
you
know
like
I
said
I
think
multi-cloud
and
the
story
around
multi-count
is
a
good
one,
because
you
know
70
of
people
will
just
be
in
multiple
clients,
but
don't
necessarily
restrict
yourself
to
to
just
clouds.
When
you're
looking
at
deployment
types
right,
data
centers,
app-based
services
right
places
where
you
can
deploy
apps
as
a
service
and
they
scale
them
for
you.
You
know
infrastructure
is
code,
stuff,
serverless
things.
There
are
many
different
ways
of
deploying
an
application
in
you
know
in
multiple
locations
and
and
handling
that
kind
of
thing.
So the next thing is how to deploy to multiple clouds and what to think about. The first thing, again, is to consider your architecture: VMs are very different from containers, Kubernetes versus monolithic environments. When you're deploying VMs to multiple locations, they obviously need a lot more consideration than containers do.
A
Sometimes
containers
can
be
actually
much
more
work,
I
suppose,
but
it
really
depends
on
what
you're
deploying
into
what
environment
and
a
lot
of
the
time.
You
know
we
got
told
this
kind
of
story
long
ago.
That
hardware
was
dead
and
everything
was
going
to
be
virtual
machines
and
then
the
next
part
of
the
story
was
that
virtual
machines
were
dead
and
everything
was
going
to
be
cloud.
The reality for most organizations is a mix: they've got hardware workloads, VMs, mainframes, containers, Kubernetes workloads, serverless, cloud instances, and I think that's here to stay for a long time. Like I said, we're seeing more and more hardware workloads and more and more edge workloads, so it's not just that people are slow to get to cloud. The real key is: what does your infrastructure require, how does it get deployed, and how do you ultimately deploy to it?
The next point is to automate absolutely everything you can around deployments. For us, for example, we have a CI/CD environment, so we continuously deploy, and it drives our entire deployment chain for all of our applications, systems, servers, and sites around the globe.
All of it is automated from our repositories. Now, that's not always feasible for every business, but as best you can, look for tools that let you automate the deployment process into your environments, especially where you can define those environments through config files, code, and APIs, integrate them into your test and continuous integration environments, and deploy automatically into the different cloud providers.
Things like Terraform, for example, make it very easy to take complex systems, like full-stack VMs, deploy them into two different cloud providers, and configure them to use the local services. So it's not just containers that are the answer: the automation process matters in its own right, and knowing that everything has to be automated pushes your team towards building and designing your application infrastructure, and even your developers towards writing the application, in a way that supports it.
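The spirit of that workflow can be sketched in a few lines: keep one cloud-neutral service definition and render provider-specific parameters from it. The instance-type mapping below is an assumption for illustration; in a real setup this lives in Terraform modules and variables rather than application code.

```python
# A minimal sketch of keeping one cloud-neutral service definition and
# rendering per-provider deployment parameters from it. The provider
# mappings are invented for illustration; a real setup would express
# this in Terraform modules and variables.

from dataclasses import dataclass

@dataclass
class Service:
    name: str
    cpu_cores: int
    memory_gb: int

# Hypothetical mapping from a neutral size to each provider's instance type.
INSTANCE_TYPES = {
    "aws":   {(2, 8): "m5.large"},
    "azure": {(2, 8): "Standard_D2s_v3"},
}

def render_deployment(service: Service, provider: str) -> dict:
    """Produce the provider-specific parameters for one service."""
    itype = INSTANCE_TYPES[provider][(service.cpu_cores, service.memory_gb)]
    return {"service": service.name, "provider": provider, "instance_type": itype}

svc = Service("api", cpu_cores=2, memory_gb=8)
for p in ("aws", "azure"):
    print(render_deployment(svc, p))
```

The point of the indirection is that the service definition never names a cloud; only the rendering step does, which is what keeps the workload portable.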
Running our business around the world, as you'll soon see, we have hundreds of systems now, and when you need to scale up rapidly, that automation really helps a lot.
The next point is to utilize auto-scaling functions where you can. One of the cost problems with high-availability and multi-cloud strategies is that if you deploy the same environment into two places, generally speaking you'll be paying twice for it, and it gets much worse when you deploy into 50 places. We've got clients, for example, with somewhere between 12 and 14 thousand applications running in 60 countries; they've got 450 ADCs, and behind each ADC they average between 10 and 20 servers.
So do the math: it becomes a real nightmare in terms of cost and scaling and management. Auto scaling is something that exists in all orchestration platforms and most cloud platforms, and where it doesn't exist it can be scripted and tooled in quite easily. Take a simple example: you've got a website, and how busy it is determines how many servers you need. Now that website can no longer afford to fail.
You can't afford an outage, so you decide to deploy it into two different locations, but in this example you send all of your traffic to your New York data center, and only if it's down do you send traffic to the New Jersey one. There's no need to have the same number of servers running in New Jersey as in New York while the New York system is online, and this is a key consideration of multi-cloud deployments.
Even if you've got an active-active environment like we do, where all of your servers could receive traffic at any point in time, the load moves: our workload right now is largely in the US, but two or three hours ago it was split between the US and Europe.
Six or seven hours ago it was largely Europe, so your systems can scale up and down as needed during those times. A simple example is something like Black Friday or Cyber Monday: do you really need to run, all year, as many servers as you need on Cyber Monday? Obviously not, and clouds let you scale for the peak only when it arrives.
So it's really a high-availability function as well, very useful, and it ultimately saves you a huge amount of money, because most of the time your environment hasn't failed.
The next learning and advice of ours is to route traffic with GSLB. GSLB is an acronym for global server load balancing. It's quite a simple concept really; what it comes down to is having an intelligent DNS server or service. Many companies provide one, and most clouds provide one.
Snapt provides it too, for example. It routes people to different locations around the world, or data centers, based on some information about them or about the place they're trying to reach. Normal DNS is the process of saying, okay, I'd like to go to www.snap.net, and you leave your house or office and go directly to the IP address that snap.net resolved to, in a data center somewhere.
Managing traffic at the destination is like trying to manage a traffic jam at the toll gate. At the toll gate it's very difficult to move people to a different road or interstate or highway; if there's been an accident on the road, by the time people get to the front of the traffic jam, it's very hard to move them.
A
What
gslb
is
is
about
doing
is
changing
where
they're
going
when
they
leave
the
house,
so
it's
telling
their
gps
to
send
them
on
a
different
road
right
and
it
lets
you
respond
to
dns
queries
for
a
site
or
many
sites
based
on
the
health
of
the
destination.
So
you
can
say
well.
Is
this
data
center
online?
How
busy
is
it
yeah?
Is
it
saying
that
I
should
shift
traffic
elsewhere?
You
know
think
considerations
like
that.
Is
it
dead
right
and
also
based
on
the
source
of
the
user?
Where
are
they?
Where
are
they
from?
Then we might also ask: is the west coast US data center online? If so, cool, we're good; if not, send them to the next closest data center, New York maybe. Or we might say: this user isn't from America, so where do we want to send them? Things like that. GSLB is a very powerful system for being able to move between multiple clouds.
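That decision process can be sketched as a tiny resolver: answer each query with the nearest healthy data center, falling back down a preference list. The data-center names, addresses, and health map here are all illustrative.

```python
# A hedged sketch of the GSLB decision described above: answer a DNS query
# with the nearest healthy data center, falling back down the preference
# list. Data-center names, IPs, and the health map are all illustrative.

from typing import Dict, List

# Preferred data centers per source region, nearest first (hypothetical).
PREFERENCE: Dict[str, List[str]] = {
    "us-west": ["us-west", "us-east", "eu-west"],
    "us-east": ["us-east", "us-west", "eu-west"],
    "other":   ["eu-west", "us-east", "us-west"],
}

ADDRESSES = {"us-west": "203.0.113.10", "us-east": "203.0.113.20",
             "eu-west": "203.0.113.30"}

def resolve(source_region: str, healthy: Dict[str, bool]) -> str:
    """Return the IP of the first healthy data center for this user."""
    for dc in PREFERENCE.get(source_region, PREFERENCE["other"]):
        if healthy.get(dc):
            return ADDRESSES[dc]
    raise RuntimeError("no healthy data center")

# A west-coast user while the west-coast site is down fails over to us-east.
print(resolve("us-west", {"us-west": False, "us-east": True, "eu-west": True}))
```

A real GSLB folds in live health checks, load, and latency measurements, but the shape of the decision is the same.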
It's a great enabler for doing that, and it lets you do things like test deployments very easily, and weight traffic very easily. You might be saying: okay, how do I get to this multi-cloud dream that Dave keeps going on about? I'm going to set up my infrastructure in Azure as well, so we're not just in AWS anymore, we're now launching in Azure. How do I send some traffic there?
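Weighted answers are how you send "some" traffic to the new cloud: a hedged sketch where roughly 10 percent of DNS responses point at a hypothetical Azure endpoint and the rest at the existing AWS one. The endpoints and weights are invented for illustration.

```python
# A small sketch of weighted GSLB answers for gradually shifting traffic to
# a new cloud: ~10% of queries get the Azure endpoint, the rest get AWS.
# Endpoint addresses and weights are illustrative.

import random

ENDPOINTS = {"aws": "198.51.100.10", "azure": "198.51.100.20"}

def weighted_answer(weights: dict, rng: random.Random) -> str:
    """Pick an endpoint address in proportion to its weight."""
    targets, w = zip(*weights.items())
    return ENDPOINTS[rng.choices(targets, weights=w, k=1)[0]]

rng = random.Random(42)  # seeded for repeatability
answers = [weighted_answer({"aws": 90, "azure": 10}, rng) for _ in range(1000)]
print(answers.count(ENDPOINTS["azure"]))  # roughly 100 of the 1000 answers
```

Raising the Azure weight over a few days turns a risky cutover into a gradual, observable shift.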
The next thing to talk about is how to secure multiple clouds, and this is hard. It's hard when your security is advanced; it's easy when your security is basic, but all security is having to become much more advanced. Why do I say it's so hard? Because security solutions are no longer about layer 4 traffic control. Layer 4 is the IP layer: is this IP address allowed to talk to that IP address on that port?
That's classic firewalls. Much more now, security is about layer 7, which is what the user is actually asking for, and above that are the kinds of techniques that companies like us employ now: machine learning, anomaly detection, and so on over traffic patterns. As you scale to multiple clouds it becomes extremely difficult to do that kind of thing. Think of the very big client I mentioned with 450 ADCs around the world.
A
It's
almost
useless
for
them
to
operate
as
individual
islands
looking
at
just
the
data
that
they
see
they're
only
seeing
1
450th
of
the
data
right
and
when
you
move
to
multiple
clouds.
This
can
be
a
big
problem
and
especially
as
you
scale
up
right.
So
the
first
point
that
I
have
is
to
look
for
cloud
neutral
solutions
for
layer,
7.,
specifically
layer,
seven
you'll
see,
I
suggest
cloud
for
layer
four.
So
what
I
mean
there
is
the
application
layer
right.
Things like web application firewalls, intrusion detection systems, SIEMs, all that kind of stuff, should be able to run in any cloud, container, or edge device.
Or on a hardware solution, anything. If you want to actively manage the threat profile and risk of your network, it's critical that you are running that solution in every location in your network. It's not much good having a very intelligent solution in one cloud when the other cloud doesn't have it, or when the two clouds cannot communicate with each other about threats, or when no one knew that half of your office was sending data to ransomware sites for the last week. You really need something neutral, able to run anywhere.
People either go to the more tried-and-trusted vendors in the industry, who are more likely to run monolithic environments, or to the new shiny solutions that only really work well in containers, for example, or in Kubernetes. The reality of corporate workloads is that they're all types of things. And I think it's very important that the solution also supports your CI/CD environment, your disaster recovery and test environments, and so on; like I said, it should be able to run in all of those locations.
That brings me to what I think is the biggest learning we've had building a large-scale multi-cloud organization: centralized monitoring, the ability to track, visualize, and verify everything in one location.
Think of the nightmare, when you have a large network like we do, of someone saying we've got a user in South America who gets a 500 error when they try to log in. You think, okay, that's kind of odd, and then, lo and behold, two more tickets come in from two more users in South America having errors. But then you look at the system.
A
Try
and
looking
through
the
system,
and
you
notice
that
there's,
like
you,
know,
300
south
american
users
online
that
seem
to
be
working.
Fine,
where
you
know
where's.
Where
are
those
users
even
going?
Are
they
you
don't
have
servers
in
south
america?
Are
they
going
to
the
west
coast
of
the
us
or
the
east
coast
of
the
us
like,
and
then
you
begin
grabbing
through
logs?
And
it's
just
it's
completely
unmanageable
right
and
it
leads
to
things
like
what
we
call
tribal
knowledge.
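This is the kind of question centralized monitoring answers in one query instead of a log hunt: pull every site's access logs into one place and break errors down by client region. The records and site names below are invented for illustration.

```python
# A small sketch of why centralized monitoring helps here: merge per-site
# access logs and break errors down by client region in one place. The
# records and site names are invented for illustration.

from collections import Counter
from typing import Iterable, Tuple

# (site, client_region, http_status) records pulled from every deployment.
Record = Tuple[str, str, int]

def errors_by_region(logs: Iterable[Record]) -> Counter:
    """Count 5xx responses per client region across all sites."""
    return Counter(region for _site, region, status in logs if status >= 500)

merged = [
    ("us-east", "south-america", 500),
    ("us-east", "south-america", 500),
    ("us-west", "south-america", 200),
    ("us-west", "north-america", 200),
]
print(errors_by_region(merged))  # Counter({'south-america': 2})
```

With the per-site view only, the us-west operator sees nothing wrong; the merged view shows immediately that every South American failure is landing on us-east.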
That goes all the way up to CIO level in our industry. In the Valley, for example, among our client types, staff retention is about five and a half months, so it's very risky to depend on people holding scripts or tools that are not accessible across the business. Central monitoring is key across all these environments, which really lends itself to a cloud-neutral solution.
A
Because,
ultimately
you
know
you
know
you're
going
to
be
deploying
to
multiple
environments
and
locations,
so
you
need
something
they
can
monitor
across
all
of
it
right.
So
that's
a
key
learning
of
ours
and
then
I
believe
that
you
should
use
the
local
cloud
services
for
layer,
4
solutions.
There I'm talking about firewalling and things like that: which security groups can access which security groups, which IP addresses are allowed in on which ports, which ports are never allowed in no matter what, and so on. Your standard layer 4 firewalling is really a commodity, and it's very cheap to use the public cloud's own tools for it.
A
It's
very
easy
to
do
with
security
groups
and
services
like
that,
and
I
don't
think
that
it
requires
so
much
maintenance
at
a
at
a
layer
forward,
an
ip
level
that
you
have
problems
between
clouds.
What
I
mean
by
that
maintenance,
is,
you
know
the
worst
is
when
you
have
a
new
ssl
certificate.
Now,
let's
say
you're
gonna
expiry
on
your
ssl
certificate
and
you
have
to
update
it
and
where,
in
all
the
places
is
this
thing
used?
A
You
know
which
cloud
do
you
ultimately
forget
to
put
this
thing
in
or
which
application
do
you
find
out
at
1201
tonight
it's
got
an
expired
ssl
certificate.
You
put
it
there.
The
beauty
about
layer
4
is
that,
typically,
you
are
basically
just
saying:
deny
outbound
access
from
my
systems
out
and
there's
someone
send
something
in
or
they're
doing
an
update,
allow
in
port
80
and
port
443.
That's web HTTP and HTTPS; maybe you allow in some other port from your office IP range, and it's pretty static, so you kind of leave it at that. It doesn't become a nightmare to manage. Layer 7 really does: take that large client again, with 450 devices around the world, who decides that as of the beginning of next month they're not supporting TLS 1.2 anymore.
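The static layer 4 ruleset described above fits in a handful of lines, which is exactly why it stays manageable per cloud. A toy model, with a hypothetical office CIDR and SSH as the example office-only port:

```python
# A toy model of the static layer-4 ruleset described above: allow 80/443
# from anywhere, one extra port from the office range only, deny the rest.
# The office CIDR and the extra port (SSH) are invented for illustration.

import ipaddress

OFFICE_NET = ipaddress.ip_network("198.51.100.0/24")  # hypothetical office range

def allow_inbound(src_ip: str, dst_port: int) -> bool:
    """Evaluate the inbound rules for one connection attempt."""
    if dst_port in (80, 443):                      # public web traffic
        return True
    if dst_port == 22 and ipaddress.ip_address(src_ip) in OFFICE_NET:
        return True                                # SSH from the office only
    return False                                   # default deny

print(allow_inbound("203.0.113.7", 443))    # True
print(allow_inbound("203.0.113.7", 22))     # False
print(allow_inbound("198.51.100.9", 22))    # True
```

Compare that with a layer 7 change like retiring a TLS version across 450 devices: the rule above never needs to know anything about certificates or protocols.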
It can be a huge process to disable TLS 1.2 across an entire organization. So that's the layer 7 consideration, where I say you should use cloud-neutral solutions, as on the slide; for layer 4, pure IP, I really recommend using the native cloud solution. The next thing I want to do is give a small preview of how Snapt does it. The first thing is our live network: this is from yesterday, but these are our live systems and deployments for our organization.
There are about 500-something online servers or containers there, managed directly by us in order to deliver the services that we do, so you can imagine the complexity. You can see South Africa, for example, at the bottom middle of the map, in three different locations with three different providers, only one of which is a cloud provider. Across Europe a lot of those are data centers; in the US there are a lot of data centers and a lot of public cloud as well.
So, our key takeaways, and I've mentioned a lot of this already in the presentation: we use automation for absolutely everything. There are no manual deployments of any websites, applications, API services, anything we provide at all; everything goes through a CI/CD environment and is then deployed automatically.
A
We
do
obviously
in
our
product
because
we
provide
that
service
for
clients,
but
our
deployment
process
is
not
tied
to
any
specific
cloud,
so
it
is
possible
to
do
that
relatively
easily
with
the
right
design
right
and
we
secure
everything
with
the
same
product
and
then
we
use
that
product
for
monitoring
visibility,
etc.
So
we
have
a
central
dashboard
where
we
monitor
basically
our
entire
infrastructure.
A
I
think
the
most
important
lessons
that
we
have
the
most
important
kind
of
advice
to
give
here
is
really
that
we
do
not
develop
anything
that
has
a
specific
cloud
provider's
naming
or
platform
provider's
name
and
right.
We
don't
want
to
use
their
direct
apis
wherever
possible
and
that
we
automate
absolutely
everything
that
we
can.