From YouTube: 2021-10-06 GitLab.com k8s migration EMEA
A
B
Welcome everyone. Today we're going to use this time; I have a quick agenda, and I'm assuming there's nothing extra on it today. We're going to use at least the majority of this time to let Skarbek run through his KubeCon talk. We'll do this as a full run-through with questions at the end, and this is as much for everybody here to learn about the topic of the talk as it is for us to provide feedback on the talk itself.
B
So hopefully this gives a bit of a real-life rehearsal for Skarbek before KubeCon next week. I will record this; if you want it beforehand, then let me know. Otherwise, after you've done your talk next week, I'll publish it and we can put it up with your talk title, in case people want the topic as well.
B
C
Thank you, Amy. Good morning, everyone. As you know, my name is John Skarbek, and today I'm going to have a discussion with all of you regarding how GitLab.com has reduced its cloud bill by deploying additional Kubernetes clusters. I am John; feel free to just call me Skarbek. I'm a beekeeper, but I'm also a Site Reliability Engineer on the delivery team at GitLab. I've been here for about three years, creating all kinds of fun incidents, and everyone loves me for it.
C
In today's conversation, I'm going to start with a little bit of how GitLab.com got started with Kubernetes. I will discuss one of the problems we came across specific to cloud costs and the use of highly redundant clusters. I will then highlight the solution we chose to alleviate that problem, and then go over some of the bonus wins we were able to achieve with that solution.
C
So, firstly, what is GitLab.com? We are the SaaS variety of the GitLab product: a full-blown DevOps platform built into one application.
C
D
C
A few places, but for the most part Ruby on Rails is the primary driving part of the application. In order to make GitLab.com easier to manage as we've scaled our SaaS application, and just with growth over the course of time, our infrastructure team has had to break up chunks of the application into different components.
C
With all those pieces now put together, in 2019 our infrastructure team started to look into what it would take to get Kubernetes in place and start migrating parts of our infrastructure into it. At this point we have a hybrid architecture. We've been running Kubernetes for a little while now, and about 90% of our front-end workloads are now deployed into Kubernetes. Much of our stateful components, such as our databases and file servers, remain as virtual machines, all of that managed via Chef and Terraform at the moment.
C
So because of that, we wanted to choose something that was going to be very quick and allow us to configure a Kubernetes cluster in a very simple manner. We utilize Google Cloud Platform as our infrastructure provider, so it was very natural for us to check out Google Kubernetes Engine, or GKE, and doing so allowed our infrastructure teams to focus on what we deem more important to us.
C
That included things like ensuring that we could run Kubernetes alongside our virtual machine infrastructure, because we knew migrating into Kubernetes was not going to be a short process; this is a multi-year project for us. We needed to ensure compatibility for an application moving from a virtual machine into Kubernetes and back again, in case we ran into problems and needed to roll back during the migration period.
C
So we wanted to make sure that our observability was in place and was backward compatible with whatever infrastructure the application was running on. We wanted to make sure that our deployment mechanism worked the same way: when we deployed to our virtual machines, we could also deploy the application to Kubernetes without impacting the delivery team unnecessarily.
C
So with this we chose to utilize GKE, and we followed Google's best practices for setting up Kubernetes clusters. We wanted the best redundancy possible, so we made a regional cluster and we ran with it.
C
So we were able to again make sure all of our observability was working as we desired, fix any bugs that we introduced in our deployment mechanisms, and test and validate that changes we make to our virtual machine infrastructure land inside of our Kubernetes infrastructure as well. This was very important for us because we wanted to be able to set the expectation for how we run Kubernetes and determine what we need to look forward to when it came to future migrations.
C
So inside of a regional cluster, where you have nodes that span across all zones, a deployment technically consists of two primary things: the Deployment of the web service itself, which is going to run the application, or in our case some component of GitLab, and the NGINX ingress controller, or the configuration for an existing ingress controller. We utilize NGINX ingress in our particular case. Again, because we use Google Cloud Platform, we are also provided with a single ingress endpoint, or internal load balancer, inside of the Google network stack. Doing so creates a happy path and a potential bad path that traffic can take, which changes the behavior of how the application works, and it is important for us to understand what that might look like. So when a client request comes in through our front door (in this case we use HAProxy), that traffic gets filtered into our regional cluster, and we're going to land on an NGINX ingress controller.
C
The only purpose the ingress controllers serve here is a little bit of buffering and then some load balancing. With the way that a regional cluster is configured, and the way that an ingress controller learns about its environment to understand where to load balance traffic to, there's the possibility that traffic might get load balanced into a pod that exists inside of a different zone.
C
In cases like this, where we stay inside the same zone, we do not incur any additional network latency: there are no additional network hops, and because no traffic has egressed from that zone, we have not hit a new charge that gets added to the cloud bill.
C
The ingress controller has some sort of algorithm, but if it chooses a pod that lives in a different zone, you're going to incur a minor amount of network latency, because we're adding additional network hops to get to that node, and because we are egressing traffic from that zone, you're going to incur a charge: a network cost for the bandwidth that gets utilized for that connection.
C
So let's look at how bad that could potentially be as far as costs go. Prior to moving our next large service, we did a quick investigation. Now, keep in mind all cloud providers do this, so insert your favorite cloud provider's calculator and network pricing schema here, but for Google in our particular case, if you're moving about 500 terabytes of data across a zone boundary, that's going to cost you a little over five thousand dollars over the course of an entire month. That's not something to blink an eye at.
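A minimal sketch of that arithmetic, not from the talk itself; the $0.01 per GiB figure is an assumed intra-region, cross-zone egress rate, so check your provider's current pricing:

    # Rough cross-zone egress cost estimate (illustrative only).
    CROSS_ZONE_RATE_USD_PER_GIB = 0.01  # assumed rate per GiB leaving a zone

    def monthly_cross_zone_cost(terabytes_per_month: float) -> float:
        """Estimated monthly charge for traffic crossing a zonal boundary."""
        gibibytes = terabytes_per_month * 1024
        return gibibytes * CROSS_ZONE_RATE_USD_PER_GIB

    print(f"${monthly_cross_zone_cost(500):,.2f}")  # -> $5,120.00, "a little over five thousand dollars"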
C
That is tenfold more expensive than our estimation, so this is a problem we're solving. But this also gets into one of the problems related to cloud costs: predicting what your cloud bill is going to look like. Again, we use a managed cluster, and Google tells you precisely how much they're going to charge for that cluster for a given hour.
C
We know the instance costs because we choose which instances we want to run inside of our clusters, so we can predict how many nodes we need to run to serve our workloads, and we can go to our finance person and say: hey, it's going to cost us this amount to run this cluster. What's harder to predict, especially for GitLab.com, is the network traffic, because it is very much driven by our customer base, the workloads that they have, and the amount of data that they store with us.
C
So one thing to keep in mind in regards to predicting network costs is that you're not going to be able to prevent those zonal boundary crossings from happening everywhere within the stack.
C
So here's just a diagram showcasing what happens below a web service pod, where, due to the way that we have deployed our infrastructure, there is the possibility of crossing a zonal boundary, since eventually we'll have to talk to a file server or database server.
C
So now, if we look at the network stack specifically, as this chart showcases, when your traffic goes through an HAProxy node located in, say, zone B, it's going to favor sending that traffic to a cluster located in that same zone.
C
Therefore, that request can only go to a web service pod located on a node somewhere inside of zone B. So we've eliminated the cross-zonal traffic happening in our front-end layer, and because of that we've eliminated the network costs associated with that bandwidth, and we've also eliminated the network latency that might get incurred as we potentially would have been crossing zonal boundaries. However, below a web service pod we will still incur the zonal bandwidth charges that happen as we talk to file servers or database servers. But again, that's an unavoidable cost that we've learned to accept at this point.
C
So there are a few benefits that we gained from having all these additional clusters. One: it's a lot less expensive. We use managed clusters, and it costs us a few hundred dollars per month to maintain these extra clusters, in comparison to the tens of thousands of dollars it would have cost to manage the network traffic for just a single web service being pushed into a regional cluster. We also get the benefit of being able to spread out cluster upgrades.
C
And lastly, this was a recommendation by Google, so we took advantage of that for the ultimate redundancy: HAProxy is aware of all of our clusters. That way, if we do suffer some sort of failure inside of a cluster, we can isolate it, and that traffic will get load balanced to our two other clusters automatically.
C
So there are many to choose from, and at the time we were evaluating what we wanted to accomplish, we didn't have a service mesh installed anywhere inside of the GitLab application. Again, we're using our Helm chart, which also does not include any sort of service mesh, so we already have all the puzzle pieces that bring our applications together from the get-go; there was no need to add this new service.
C
However, we saw the benefits of using more than one cluster very quickly and decided to use that over a regional cluster. But you've also got options in the way that you deploy your node pools, which would require either additional Deployment and Service objects or modifications to existing Deployment and Service objects.
C
And the last potential option that I know about is a Kubernetes alpha feature called Topology Aware Hints. This is still an alpha feature, so even we at GitLab can't yet take advantage of it, but there is more available today than there was when we were first looking into this problem.
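For reference, a hedged sketch of what opting a Service into Topology Aware Hints looked like around the time of the talk (Kubernetes ~1.22, behind a feature gate); the Service name and namespace below are hypothetical, and this is not something GitLab was running:

    # Annotate a Service so its EndpointSlices carry zone hints,
    # letting kube-proxy prefer same-zone endpoints where it safely can.
    from kubernetes import client, config

    config.load_kube_config()  # assumes local kubeconfig access
    core = client.CoreV1Api()

    patch = {
        "metadata": {
            "annotations": {
                "service.kubernetes.io/topology-aware-hints": "Auto"
            }
        }
    }
    core.patch_namespaced_service("webservice", "gitlab", patch)  # hypothetical names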
C
And at this point, we're taking cluster configurations and essentially copying and pasting them; we're just stamping out new clusters as necessary. The amount of difference between our clusters is very minimal: which zone it's deployed into, the network space it utilizes, and maybe the name, but it's effectively the same.
C
The deployments, or the workloads running on these clusters, are pretty much identical; they all serve the same purpose, which is serving our front-end traffic. So both the configurations for the clusters and the configurations for the deployments are pretty much identical, very well defined, and easily manageable. At this point, we're treating our clusters more like cattle than pets.
C
So, some bonus items now that we've got multiple clusters: we have fun ways to mitigate incidents. This is a chart showcasing the fact that we ran up against a limit in our NAT port usage when we first created all of these additional clusters, and then, over the course of time, as we started pushing more things into our Kubernetes clusters, we ran into a situation where we did a deployment.
C
One of the more important items I want to highlight is the ability for us to tune specific workloads. Here is a chart showing how many pods are being run for the API workload; here's the pod count across all of the clusters. We can see, right in the middle of the chart, that we deployed some sort of change that lowered the number of pods the API service needed to run.
C
So this enables us to compare how either Kubernetes or the workloads are behaving on a per-cluster basis. We can make whatever changes we want here, and we have the ability to compare a test change in one cluster to our two existing control clusters, so to speak. If we see a positive change in behavior, we can push that change out to all of our clusters; if we see negative behavior, we would obviously roll that back, or if it's incident-inducing...
C
We can make changes to how those pods request resources from the nodes, like the resource requests and resource limits, and we can also change the horizontal pod autoscaler: we can change the metric that we scale on, and also the target metric value that we utilize for determining when it scales. And we can also make changes to the application itself.
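As an illustration of that per-cluster tuning workflow, a minimal sketch assuming the kubernetes Python client; the kubeconfig context, namespace, and HPA name are all hypothetical:

    from kubernetes import client, config

    TEST_CONTEXT = "gke_example_us-east1-b_zonal-b"  # hypothetical test-cluster context

    config.load_kube_config(context=TEST_CONTEXT)
    autoscaling = client.AutoscalingV1Api()

    # Change the CPU target on this one cluster only, then compare its pod
    # counts and behavior against the untouched control clusters before
    # rolling the change out everywhere.
    patch = {"spec": {"targetCPUUtilizationPercentage": 75}}
    autoscaling.patch_namespaced_horizontal_pod_autoscaler(
        name="webservice-api", namespace="gitlab", body=patch
    )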
C
C
And the last item I want to highlight is the ability to introduce fun maintenance procedures when required. Here is a chart showing web requests over the course of time across three clusters during a maintenance event. In this case we're fixing a bug in our Helm chart; the fix would have been incident-inducing if we did not roll it out in a very controlled manner. So what we can do now is remove all the traffic coming into a single cluster.
C
So to wrap up the conversation: look at your cloud bill and determine if this is a problem that you might be suffering from. For us, we knew this was going to be a problem, so we were effectively predicting the future before we migrated our next big service that took a lot of client workloads. Also keep in mind what solutions are available.
C
So, thank you all for listening to me blabber on for quite a long time. At this point I'll take questions.
B
So I have one: you mentioned that when we made this decision, it was some time ago, and things have changed; there are some other options and things like that now. So if we were facing this problem today, how do you think we would have decided whether multi-clusters was still the way forward?
C
B
And then on calculating costs: as someone who wants to go away and do this, when calculating costs, how much did you factor in the potential growth and the kind of increasing cost when trying to decide when to do this? Have you got any tips for how to actually decide when to make this change?
B
I suppose a simpler one, then: how much did we have to do that when we made this decision? Did we just take a kind of "here's our bill right now" and assume it's going to increase in the future, or did we try to do any projections on scale as well?
C
D
C
For us, because we knew one service itself was going to be expensive enough, we knew it was worth looking into and solving that problem pretty much immediately, without having to look at the future. Just looking at one service was enough to say: hey, this is going to be problematic, we should solve this problem.
B
I see, okay, yeah, that makes sense. So having that prior knowledge of our services and how their different traffic bases behaved was kind of key for us.
B
And then I feel like I had one more question. Oh yeah: so we have three, well, we have four clusters, but for the sake of this talk, we have three clusters. Is that the right number? What would be the decision making behind having five clusters, for example? How do we pick the number of clusters?
C
I don't think there's a magic number of clusters. I think it's primarily based on what traffic is doing and what kind of workloads you need to be running. We've effectively created a front-end layer: a set of clusters that handles front-end traffic, and then we've got a single cluster that handles all the back-end work, and so far that has worked well for us.
C
We haven't had any problems with the Kubernetes API blowing up because it's handling too many operations, and I think that might be one of the governing factors: how you've configured those clusters. Do you have enough network capacity, for example? And then, for clusters that span a lot of nodes, it might be a question of what limits you are running into within the Kubernetes API itself.
C
Even though GitLab.com is a massive application, we aren't yet hitting the limits of Kubernetes itself; we're closer to running out of the available IP address space, just due to the way that we configure things, so we'll probably run out of that first.
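A minimal sketch of why the pod address space tends to run out first; the /16 pod range and one /24 per node are assumptions reflecting common GKE defaults, not GitLab's actual allocation:

    import ipaddress

    def max_nodes(pod_cidr: str, per_node_prefix: int = 24) -> int:
        """How many nodes fit when each node is carved a /24 from the pod range."""
        network = ipaddress.ip_network(pod_cidr)
        return 2 ** (per_node_prefix - network.prefixlen)

    print(max_nodes("10.0.0.0/16"))  # -> 256 nodes before the pod range is exhausted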
C
So no, I don't think there's a magic number to determine how many clusters can be run. I think you're fine as long as you're observing the Kubernetes API and its performance, to make sure you're not unnecessarily bringing Kubernetes down, and as long as you're not running into limitations, whether that be your cloud quota limits or the limits of Kubernetes itself, because I know there are some limitations within the Kubernetes API that may prevent you from doing certain things.
C
B
There's one thing, and this is not relevant to your topic at all, but you've probably thought about it: in the future, clusters could be an interesting break point in the deployment process, like a phased rollout per cluster, and I wonder at that stage whether there's actual benefit to having more, smaller clusters that give us a bit more of a granular deployment flow, versus the obvious overhead of running more clusters.
C
It takes an excruciatingly long period of time for our git deployment to complete, because it takes something like 260 seconds before a pod will terminate, and we run nearly 100 pods across all of our clusters for that deployment, so that one deployment itself takes an excruciatingly long period of time to complete.
C
A
C
E
Yeah, I was thinking about whether we are in a special position compared to others. Usually people run Kubernetes on clouds, so probably the billing will be similar, but if you are going to run Kubernetes at home, let's say on premise, then you have to make the API server highly available, which means at least three nodes: you need three master nodes, three etcd nodes. It really depends on how you want to make it highly available, right?
E
The workload over several clusters; but that's probably a good point to make: if you're running on a cloud provider where the billing is similar, let's say, then you can go this route, and if not, you can do some of those things using multiple Deployments instead of multiple clusters.
C
I am curious as to how other cloud companies charge for that. We deployed zonal clusters, so you get one API node per zonal cluster, whereas with our regional cluster you get one API node for every zone you deploy to, so the regional clusters are more redundant in terms of the Kubernetes API and our zonal clusters are less so. So any time there's maintenance occurring on our zonal clusters, it kind of puts everything on pause, and that happens during APAC hours, so it's only Craig and Graham who see that problem.
C
B
Awesome, well done, Skarbek; that's a good talk.
F
I have one thing which I just thought of during the talk.
C
F
By the way, awesome talk, I really liked it, and as I told you last time already, I think I can see clearly what you could do by adding some pictures to the slides, because I think you added...
F
Yeah, I think you are telling a story, which is really nice, and I think it would be cool, at least at the beginning, when you say: first we were at, you know, GitLab with VMs, then we wanted to switch over to Kubernetes. There's a lot of room on your slides besides the text.
F
You could have something showing a very simple diagram, maybe the VM deployment structure first, and then, when you go to the next slide, you could see something like an arrow showing: okay, now this is where we are going and how it looks, because I think if you get a visual representation of what somebody is talking about, then it's easier to memorize and to get a picture of it.
F
C
F
E
Yeah, I was going to make a similar suggestion. I was thinking especially of a specific moment in your slides, when you show the multi-cluster architecture, and then, before going to... I think the slide after that is the one about other alternatives, if I remember correctly. So you just show the multi-cluster architecture, you talk a lot about that, then you talk about service mesh and the other alternatives, and then you move on. So right before that, yeah. Thank you, Skarbek. So, not this one.
E
D
C
E
No, it's not! No, I think two slides before; slide 12, I think. Is that correct? Yes, okay. So you spent a lot of time on this slide, and then, before moving to the next one, where you start talking about alternatives that may be viable for others, you mentioned three or four, I don't remember, key benefits from this. So if these are points that are important, that you want to make, they probably each deserve their own slide, with nothing else other than the text and an image.
E
This slide serves you well, because when you start talking about this thing it is amazing: it gives you, okay, we were going crisscross across all the zones, and this is what we are doing now. Fine, perfect, this is really understandable. But then you kind of change topic, because after you explain this you're moving into "and we were able to do X, Y and Z", and there's nothing in this slide pointing me, in the audience, to that. And there's also...
E
Let's call it this: people tend to say they fell asleep during the presentation, not that they really fall asleep, but basically they think, I already watched this slide, so you're just...
E
Yeah, when you change slides it's kind of a refreshing moment, right? People get that "what's happening now?" feeling. And again, if you don't have time to change this slide, it doesn't matter, but instead of making a list on one slide, it's better to have three slides in a row, each one with one point and only that point, because it gives you context and helps you guide the audience through the points that you're making.
A
B
C
Excellent. Well, thanks for listening to me, thanks for the feedback, and if I don't see you all until I get back, I guess I'll see most of you once half of October has passed. And Alessia, I might not see you until November, probably.