Jonathan: Hi, and welcome to this webinar. My name is Jonathan Selig. I am the executive chairman and co-founder of Ridge, and I'm here with Nir Sheffie, who is also a co-founder of Ridge, as well as the co-CEO and CTO of the company. Today we'd like to spend some time talking to you a little bit about the distributed cloud paradigm, which is an alternative to the centralized cloud model that most of us are familiar with. We also want to talk about how managed Kubernetes powers this model of a decentralized, distributed cloud and makes it possible to deploy and scale cloud-native applications virtually anywhere in the world, without some of the latency and data residency challenges that we currently see when we look at the large public clouds.

So, let's dive into this and talk a little bit about what this new paradigm looks like and how it works.
Nir: Thank you, Jonathan. As you know, most public cloud services today are provided through centralized cloud platforms, such as the big hyperscalers. Clearly these work amazingly well, and they provide many services that I'm sure many of you are using today. But we're seeing that sometimes the centralized public clouds leave what we call coverage gaps. In other words, they can't provide the level of service that you need: for example, if you need your workloads to run in close proximity to your customers and the public cloud is not there.

That's a physical coverage gap. More and more applications and use cases need to be in close proximity to end customers and offer good performance, and as time goes on, with new communication technologies such as 5G, we're seeing more and more demand for high throughput and low latency.
Jonathan: In many cases, being distributed can offer benefits which may outweigh the size and scale of the hyperscale clouds, so the benefit of being distributed can be very, very significant. Depending on the application type and the application requirements, the ability to deploy and to scale where you need to be, even the ability to add a PoP if one isn't available in a particular geography that you want to run in (which is something we've seen from some of our customers at Ridge), can be a really big deal.

And of course, the distributed paradigm of infrastructure is also very relevant to both hybrid-cloud and multi-cloud models, both of which are very much in active conversation with a lot of enterprises and a lot of companies out there. It's become a fundamental part of any company's hybrid or multi-cloud architecture to understand that they're going to need, effectively, a distributed cloud capability.
Nir: You know, Jonathan, now that we've discussed the public cloud coverage gap challenge and raised the idea of a non-centralized cloud, or what we call a distributed cloud, let's discuss how it's done. Our vision when we founded our company was that we wanted to run our cloud on any underlying infrastructure: on any heterogeneous physical servers, on any underlying OS, on physical or virtualized systems, or on bare metal machines.
So we could achieve a cloud that hypothetically could be expanded to hundreds and thousands of locations, or regions, for lack of a better word, in which we could offer fast integration and capacity all over the world to users. It would feel exactly like a public cloud that they're used to and familiar with. For this to work, we've built a platform based upon cloud-native building blocks.

The first of these is a fully managed Kubernetes solution, which enables users to run whatever they would like to run, since it's based on the de facto spec for deployment on a cloud: Kubernetes. Any application on AWS, GCP or Azure running on EKS, GKE or AKS can run on Ridge without needing to change a single line of code, except that with Ridge it can run in hundreds and thousands of locations.
The second building block is our container service, which allows users to deploy containers. If you don't want full-blown Kubernetes, or sometimes you just don't need it, then you could just say: I want this image, run it in hundreds and thousands of locations, and we take care of all of the heavy lifting of the physical infrastructure. And the last building block that we've deployed is our object storage: a fully S3-API-compatible object storage solution.

In this case the de facto spec is S3, but the difference is that it can run globally, in hundreds and thousands of locations across the Ridge network.
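Because such a store speaks the standard S3 API, existing S3 tooling can usually be pointed at it unchanged. A minimal sketch with the AWS CLI; the endpoint URL and bucket name below are hypothetical placeholders, not real Ridge values:

```shell
# ENDPOINT is a hypothetical placeholder; use your provider's real URL.
ENDPOINT="https://objects.example-pop.example.com"

# The same commands you would run against AWS S3, redirected
# at the S3-compatible store via --endpoint-url.
aws s3 ls --endpoint-url "$ENDPOINT"
aws s3 cp ./backup.tar.gz "s3://my-bucket/backup.tar.gz" --endpoint-url "$ENDPOINT"
```

The `--endpoint-url` flag is the AWS CLI's standard mechanism for targeting any S3-compatible service; credentials, commands and SDK calls otherwise stay the same.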
All this works on top of any underlying physical infrastructure. Ridge doesn't own anything: we use the amazing data centers and telcos that are already out there. That's why we can scale across endless public or private regions.
As a customer, you just need your credit card, pay as you go; you don't need any prior commercial agreements with any of our data centers around the world. The Ridge distributed cloud enables developers to describe the required resources as they deploy their Kubernetes clusters, containers or object storage, and as a managed Kubernetes service, the distributed platform will adjust workloads automatically by spinning up computing instances wherever they're needed.
Jonathan: I think one of the things to talk about here is that the flexibility and the functionality that you've described is becoming more and more essential, with the increase in cloud-native activity and cloud-native application development, and the need to be able to do this anywhere that it's needed. The big promise in cloud computing was always the abstraction of infrastructure complexities, meaning that developers were going to be freed up to focus on writing great code.
But in a lot of conversations that we've had with folks, we find that today's advanced, containerized, microservices-based, cloud-native applications are often so complex that developers are finding themselves actually spending a lot more time dealing with infrastructure configuration and design than, sometimes, with coding. So it's kind of not what the promise was going to be.
Nir: However, the full potential of many cloud-native applications, often with strict latency and throughput requirements, cannot be realized until they can be deployed anywhere to ensure superior performance, and we're seeing applications today that need that performance. For example, we have a customer offering a remote desktop and a VDI, and they need extremely low latency.
Jonathan: That's for sure. We keep seeing, more and more as we engage customers, that more and more applications are being developed that are latency-sensitive, and they care a lot about that. Before Nir starts a demo of how managed Kubernetes is used in the distributed architecture that we're describing here, I want to discuss just a couple of our current deployments with customers who are running applications on Ridge, and these are applications that really were made possible because of Ridge's distributed cloud paradigm.
One is a cybersecurity solution that is basically built on the idea of replicating desktops to make sure that they are malware-free. As you can imagine, end users who are using these remote desktops can't sense any delay or lag in their browser, and need to feel like the experience is a real desktop experience.
The company that we were working with on this deployment has told us that when they had a set of users in Paris connecting to a hyperscale data center in Frankfurt, the latency level on that communication path was simply unacceptable: people using that virtual desktop offering felt like there was lag. So the ability to find a distributed solution that gives them a point of presence right close to those Parisian users was critical for the functionality and the customer satisfaction of their offering.
Another deployment I can describe, which is a pretty interesting one: we have a customer that created eyewear simulation software that enables you to try on glasses virtually through an app. It's a large omni-channel eyewear retailer, and users love this functionality, but this functionality depends on having a GPU in proximity to that end user. All of this is being handled with Kubernetes as the management platform for these workloads, and the customer's workloads are running on local data centers in lots of different places.
Moving this capability to a public cloud really wasn't an option that was going to be effective for this company: it would have added a lot of latency and would have degraded the app experience. So they came to us, because they knew that we had the ability to easily give them these cloud-native services, with GPUs on the back end, in lots of localities where they needed that capability. So those are just a couple of examples of places where we've found this real embracing of, and in fact this requirement for, a highly distributed cloud.
I'll stop sharing this screen, and Nir, you'll bring up your desktop and take folks on a tour of how the Ridge cloud operates.
Nir: Thank you, Jonathan. So that's Ridge: that's the UI, and obviously there's an API. You're welcome to go ahead to our website; there's a link to our developer portal, where you'll be able to see that all of it is a fully RESTful API. You can download our OpenAPI spec and try it out. And this is the UI through which the end user (developer, DevOps and so on) can interact with our cloud.
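As a sketch of what trying out an OpenAPI spec usually looks like: the commands below download a spec and list its documented operations. The URL is a hypothetical placeholder (the real link is on the developer portal), and `jq` is assumed to be installed:

```shell
# SPEC_URL is a hypothetical placeholder; get the real link from the developer portal.
SPEC_URL="https://developer.example.com/openapi.json"

# Fetch the spec, then print every documented HTTP method and path
curl -fsSL "$SPEC_URL" -o openapi.json
jq -r '.paths | to_entries[] | .key as $p | .value | keys[] + " " + $p' openapi.json
```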
Once you log in (we are connected to external identity providers such as Google, GitHub and Microsoft, or external auth services), you are in a context of an organization and a project. Through our identity and management system you can manage members, give them permissions and so on, very similar to what you might find on a public cloud. I'm not going to get into too much detail here in this demo; just bear in mind that we do provide that out of the box. What you see here are data centers.
We can connect to more and more locations as time goes on and as customers demand. We can connect to public data centers, similar to what you might call or use as zones or regions, and you can see here that we show those public data centers that Ridge has integrated with and has commercial agreements with to you, the end users. As you can see, we don't hide the fact that a location is operated by a specific data center provider.
For example, this one is operated by Catalyst from New Zealand, and you can see everything in a transparent way: certifications, hardware, obviously the location, and pricing. You can see pricing here, and, for lack of a better word, these are the instance types. But as you might imagine, each data center, each location, has its own heterogeneous underlying infrastructure, so we transparently show it to you, and you can see different providers here.
Everything is shown in a transparent way, so you can choose the best location, the certifications, the best SLA and, obviously, the best price. For example, this is one of our partners in Hong Kong, and you can see that the instance price is a little bit different, because they offer the ability to have flexible resources.
So those are public data centers that we manage. As you might imagine, we are also able to connect to on-premise or private data centers. So if you, as a customer, have some internal data center, a private installation in your own data center or on top of one of the data centers that exist out there, we can connect to that, on any underlying IaaS technology, based on VMware, OpenStack or whatnot.
All of the different kinds of flavors and versions. It then becomes a PoP, or a region, in our system, obviously fully private to your organization. It's a fully multi-tenant system, so nobody else is able to connect to it, and you can deploy anything there that you could deploy in the public regions using Ridge. So those are the public data centers, and we also offer an on-premise solution that we can connect to.
So that's the data centers. On top of all of those data centers we have developed web services: we take legacy infrastructure, like basic IaaS solutions, and turn it into fully cloud-native web services. And our flagship, as I can show you right now, is our fully managed Kubernetes solution.
We offer a fully managed Kubernetes solution with the same features and the same capabilities as you might find on AWS, GCP and Azure with EKS, GKE or AKS. The only difference is that our solution can run in hundreds and thousands of locations, across all of the data centers that we are integrated with.
We made a lot of effort, as you will be able to see, to make this a very, very simple experience to onboard. You will hopefully see that we can, for example, spin up clusters in a few minutes, like three to four minutes. We manage the cluster end to end, with auto-provisioning, auto-scaling, auto-healing and auto-upgrades, and obviously we manage all of the underlying infrastructure, like load balancing, persistent volumes and so on.
So let me show you how easy it is to create a Kubernetes cluster in one of the Ridge points of presence around the world; again, this could be in hundreds and thousands of locations. Let's call this cluster "demo". We support both highly available and non-highly-available control planes, which determines the number of master nodes.
Obviously, if you don't need to be highly available, for example for development or QA, you can uncheck this and we only create one master. We support Kubernetes versions as part of the CNCF: we are a member of the CNCF and comply with all CNCF conformance testing. That means that we are a fully certified Kubernetes distribution and a certified hosted provider, in the same way that AWS, GCP or Azure are. So if something runs on Kubernetes, it can run on Ridge seamlessly; you don't need to change one line of code.
Now we can choose the location. So let's choose a location; let's choose something, for example, in Paris here. As you can see, this is one of our partners, Orange, in Europe, so I'll choose this one. And now I'll add a node pool. For those of you not familiar with the term, a node pool is a group of worker nodes, and worker nodes are the machines that actually do the work. So I can give it a name.
We support full auto-scaling capabilities. That means that we could say: I want a minimum of two worker nodes in this node pool and a maximum of maybe three nodes, and we automatically scale it up in case Kubernetes cannot allocate a pod due to a lack of resources; we add a node and scale up automatically.
In this demo I'm not going to auto-scale anything. And then, as you can see, I chose a location in Paris, and you'll be able to see that all of the resources here were populated according to that location. So, for example, these are the instance types available to run in this specific location. If I choose another location, let's say the one in Bangalore here, you'll see it's a little bit different, because this one offers more flexibility in the instance types.
Let's go back to our Orange data center and choose a small machine, with two CPUs and four gigabytes. As you can see here, there's an estimation of the cost. You can add labels and taints to this node pool, and add more node pools with different sizing and different capabilities, but basically that's it.
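Labels and taints set on a node pool surface as ordinary Kubernetes node labels and taints, so the standard scheduling constructs apply once the cluster is up. A minimal sketch with plain kubectl; the node name, keys and values are made-up examples:

```shell
# Label and taint a worker node (name and keys are illustrative)
kubectl label node worker-1 pool=gpu
kubectl taint node worker-1 dedicated=gpu:NoSchedule

# A pod then targets the pool with a nodeSelector and a matching toleration
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  nodeSelector:
    pool: gpu
  tolerations:
    - key: dedicated
      operator: Equal
      value: gpu
      effect: NoSchedule
  containers:
    - name: main
      image: nginx
EOF
```

Pods without the toleration stay off the tainted nodes, while the nodeSelector pins this pod onto the labeled pool.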
Once I press "create", you can sit back and relax. It takes about three to four minutes, so we're pretty fast in auto-provisioning the cluster, and we create a fully isolated cluster here: similar to a VPC, we create an isolated VLAN.
We create the machines, and on each machine we install the operating system. As you can see here, it has already allocated a NAT gateway IP for the VPC. We install Kubernetes, certificates and security, and we configure all of the underlying infrastructure so you don't need to: we configure the load balancing, the persistent volumes, everything. So at some point the cluster switches from a "creating" into a "running" state.
That means that all worker nodes have been provisioned correctly and that all the worker nodes and master nodes are in a ready state, so they can accept incoming deployments and you can start deploying an application.
You can see here, under the node pools, that there are two worker nodes being created right now; that's auto-provisioning. We support auto-scaling for each and every one of the node pools, and we also monitor the integrity of the cluster 24/7. That means that if one of the nodes fails, for whatever reason, we know how to auto-heal it: we know how to gracefully kill the unhealthy node and create another one.
So we showed auto-provisioning, auto-scaling and auto-healing. We also have auto-upgrades between Kubernetes versions at the click of a button: once an upgrade is available, it will appear here in this menu, you can click it, and the cluster will be upgraded. We intend to release versions 1.22 and 1.23 in the next weeks.
What I'm about to show you right now is that once this switches into a running state, hopefully soon, if the gods of the demo like me, I will create an access key and a configuration file for it, and we'll deploy an application. We'll deploy something simple, using standard Kubernetes tools such as Helm: from the Bitnami Helm repository I'm going to deploy WordPress, which is a website. The chart deploys a WordPress application; that's a container.
It will also deploy MariaDB, which is a MySQL-compatible database, and it will require a load balancer, because we want our customers to have ingress traffic into the cluster. It will also use persistent volumes, so you'll see that disks will need to be created. And the cluster is running; that took less than four minutes.
What I'm about to show you demonstrates how we configure everything seamlessly, all of the resources, all of the physical resources, so you guys don't need to. Let me just create an access token here. This is like giving permissions to one of the members of my organization, and in this use case, that's me: I'm going to grant myself access. Let's call this one "demo". I could associate this with an RBAC role internally in the cluster, and I can create this.
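With the access token's configuration file downloaded, the cluster behaves like any other Kubernetes endpoint. A sketch, where the local filename ridge-demo.yaml is an assumption:

```shell
# Point kubectl at the downloaded access configuration
export KUBECONFIG="$PWD/ridge-demo.yaml"

# Should list the master and worker nodes once they are Ready
kubectl get nodes -o wide
```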
I hope it's this one (sorry), and we do "kubectl get nodes". If everything works fine, you will see that we have three master nodes in Paris right now and two worker nodes, as we expected. Let's deploy our WordPress. This is done from Bitnami's stable Helm repository, for those of you not familiar with this.
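The deployment itself uses the standard Bitnami chart; there is nothing Ridge-specific in it. A sketch of the usual commands, where the release name demo-wp is an arbitrary example:

```shell
# Add Bitnami's chart repository and refresh the index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Installs WordPress plus a MariaDB dependency, a Service of type
# LoadBalancer, and PersistentVolumeClaims for both workloads
helm install demo-wp bitnami/wordpress
```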
So let me clear this, and let's see what we have deployed. If we're looking at the pods right now, you'll see there are two pods running: WordPress and MySQL. And let's look at the services.
You'll see that there is a service of type LoadBalancer that brings ingress traffic in. If we switch to our UI, you'll see that the cluster switched into a "configuring" state pretty fast (so fast that we missed it, and it has already switched back into a "running" state), and you'll see that a load balancer was created for us here.
So, as you can see, we take care of all of the underlying configuration: we automatically knew that there's a load balancer requirement from the application, with the protocols and ports that it requires. We also take care of firewall configuration and health checks (as you can see here, those are the health checks on the nodes for each port), and we take care of the public IP. You can see that in this use case we allocated a public IP, and this public IP is propagated here internally and wired up into Kubernetes.
If we look at the persistent volumes, you can see that we have requested two disks: eight gigabytes that should be connected to MariaDB, that's the database, and ten gigabytes connected to WordPress. Persistent disks, or persistent volumes, mean that if the node goes down or the pod goes down, no worries, the data still persists: Kubernetes can allocate the new pod on another node, and the system can keep functioning with no data loss and minimal downtime.
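The resources described above can be checked with standard kubectl queries. A sketch, assuming a Helm release named demo-wp (an arbitrary example name):

```shell
# Which node each pod landed on
kubectl get pods -o wide

# The LoadBalancer service and its external IP
kubectl get svc demo-wp-wordpress

# The persistent volume claims (8 Gi for MariaDB, 10 Gi for WordPress)
kubectl get pvc
```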
By the way, before I do that: if we look at our pods and ask for some more information, you'll see that Kubernetes has decided to allocate WordPress on this node and MariaDB, the database, on a different node. And, as you can see here, if we go to the persistent volumes, our system knows that these were required by Kubernetes and knows how to create those disks and attach those disks to the specific nodes.
Here, as you can see, they're on different nodes and wired up internally in Kubernetes. In case a node goes down, or a pod goes down and Kubernetes reallocates it, we know how to detach the disk and attach it again, so you guys, as application developers, don't need to do anything other than deploy your application, as I showed you here. Let's see if our application runs, and as you can see, it does. Let's copy the IP here, and if we go here and browse, we should get our website. Yay!
So, in under five minutes we created a fully running, managed Kubernetes solution, very similar to what you might find on the hyperscalers, in a very, very simple way, but we can do it in hundreds and thousands of locations across the Ridge network. So that concludes the first part, on Kubernetes.
On that note, I'd like to thank you all for joining this webinar. Feel free to contact us at any time.