From YouTube: 2021-04-22 GitLab.com k8s migration EMEA
B
Yeah, we used to live in Richmond, so everyone would just descend.
B
On the river, you know, just near Richmond. Not in Richmond itself, but people just descend on the river as soon as the weather gets nice, yeah.
A
Hey everyone. For those who haven't met, Can is joining us in Delivery for a few months; he's interning, but backfilling for Uric as well. So, Andrew, you probably haven't met yet.
A
Everyone doing all right? Nice. So, Skarbek, you've got the first agenda item.
D
Indeed.
D
Okay, all right. So: nginx ingress. As we all know, we run nginx on our API nodes, and they have a specific configuration.
D
We enable proxy buffering. I'm not entirely sure exactly what it does; I know it buffers something, and we have it enabled on our virtual machines by default. The Kubernetes nginx ingress disables that feature, but we can configure it back on. However, the other hurdle we run into is that we explicitly disable proxy buffering for one specific endpoint related to jobs in the API.
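For orientation, nginx has two separate buffering directives in play here, and the distinction matters for the rest of this discussion. A minimal sketch of the configuration style being described, with hypothetical paths and upstream names:

```nginx
# Response buffering: nginx reads the upstream's response into buffers
# (memory first, then temp files on disk) and feeds slow clients itself.
proxy_buffering on;

# Request buffering: nginx reads the whole client request body before
# handing it to the upstream, shielding the workers from slow uploads.
proxy_request_buffering on;

# The kind of per-endpoint exception described (hypothetical path/upstream):
location ~ ^/api/v4/jobs/request$ {
    proxy_request_buffering off;
    proxy_pass http://gitlab-workhorse;
}
```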
D
So I cannot determine whether or not I could do this same configuration inside of Kubernetes. I'm wondering if Andrew or Henry by chance have any context. Well, I know the context: at some point we were filling up the temporary disk space on our API nodes, due to nginx buffering large files. So with us wanting to enable proxy buffering, I'm hoping to go down the route where we disable buffering to disk and instead buffer in RAM.
D
That has the consequence of eating up a lot more RAM, at the expense of not running out of disk space, which is fine. But if I can't disable proxy buffering for this one specific endpoint, I'm worried it's going to cause other issues. We disabled proxy buffering for that specific endpoint because it was causing an incident: the servers were running out of space. Now I'm concerned that we're just going to run out of RAM on our machines instead, because I don't think I can exempt this particular endpoint.
C
Even with this buffering enabled, I think it will still buffer the first 8k or 4k bytes in memory; only what goes over that will then go to disk. That's what this buffering is for, as far as I understand from the documentation. So I think that regardless of whether we enable or disable this buffering, it will always go into memory first.
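Concretely, the behaviour being described maps onto a handful of nginx directives; a sketch of the memory-only variant being floated, with illustrative sizes:

```nginx
proxy_buffering on;           # keep response buffering enabled
proxy_buffer_size 8k;         # first chunk of the response, always held in memory
proxy_buffers 8 8k;           # additional per-connection in-memory buffers
proxy_max_temp_file_size 0;   # 0 = never spill the response to a disk temp file
```

With proxy_max_temp_file_size set to zero, anything beyond the in-memory buffers is passed to the client synchronously instead of being written to disk.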
C
The only bad thing is that we explicitly disable buffering entirely for certain endpoints, because it doesn't make sense for big objects, I think, and raw objects and stuff like that. But the question is: would we have so many of those that we would fill up thousands of 4k or 8k pages and fill up memory? I hope we do not.
C
I mean, it would have to be a lot of them, and a lot of concurrent connections that stay open long enough to overlap, right? Maybe we should just look into how many connections we have open on nginx.
D
Here's the merge request itself where we introduced which endpoints we disabled this on. In this case, the only ones we really care about are the API jobs artifacts, anything that ends in .git, and LFS objects included.
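The exclusion list being described would look something like the following location match; the patterns here are illustrative rather than copied from the merge request:

```nginx
# Streamed endpoints where request buffering is switched off (illustrative)
location ~ (/api/v4/jobs/[0-9]+/artifacts|\.git/git-receive-pack$|\.git/info/lfs/) {
    proxy_request_buffering off;
    proxy_pass http://gitlab-workhorse;   # hypothetical upstream name
}
```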
D
I don't believe I can set this style of configuration inside of our nginx ingress; I'm hunting through the documentation. My next step is to go to the Kubernetes Slack channel associated with the ingress and ask there, in case I'm just overlooking a configurable option, but I fear I may not be able to accomplish this.
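One workaround sometimes suggested for ingress-nginx, since its buffering annotations apply per Ingress object rather than per location, is to split the special-cased path out into its own Ingress. A sketch, with hypothetical names and ports:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitlab-jobs-unbuffered              # hypothetical
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
spec:
  rules:
    - host: gitlab.example.com
      http:
        paths:
          - path: /api/v4/jobs
            pathType: Prefix
            backend:
              service:
                name: gitlab-webservice     # hypothetical
                port:
                  number: 8181
```

Whether this plays nicely with the chart-generated Ingress objects is exactly the kind of thing worth confirming in that Slack channel.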
D
So with that in mind, I'm concerned about memory usage, and I'm not really sure how to test this quickly or accurately in staging. I could create a massive artifact, but I won't know whether I'm causing a problem when requesting that object back out of our system unless I'm watching the specific nginx controller my request went through, and I'm not going to be able to monitor that quickly enough; I imagine just downloading one artifact is going to happen too fast.
C
But I don't think this will be buffered and cached entirely in memory, because it will only buffer 8k, right? So it doesn't matter how big the object is. What matters is how many concurrent connections are open on nginx, because as long as they're open, as long as the client still has a connection to nginx, it will keep 8k of each request in memory.
C
I read the documentation of this configuration option in nginx, and it sounded like it will buffer the first 4k of the response, and everything going over that will be buffered to disk, if buffering to disk is enabled. If we disable buffering to disk, then my honest assumption was that it just takes the first 4k in memory and not more than that, which would make sense, I guess, but I'm not sure.
B
"Location tilde", so, request buffering. The reason it's important is that you can denial-of-service a server.
B
Well, there are lots of reasons, but the main one is that you can drip-feed a server a really slow request, and if Puma's picking it up, then you can... it's actually much more important with Unicorn, because with Unicorn you've obviously got more limited processes, so you can saturate Unicorn very quickly, and so forth. For normal requests we want the client to put it all into a single thing, and then we trust that nginx will deliver it to the back end as quickly as it can, right? And most of this was probably set up when we were still entirely on Unicorn.
B
There we go. So this is what we've got in... whatever this file is called. What's it called again? There: gitlab-http.conf.
B
And then we've got this match over here. The other part of it that I just wanted to point out, the part that's important for when you're testing, Skarbek, isn't downloading big files. For the same reason: it's not about downloading big chunks, it's about uploading them. The reason we allow artifacts through is that Workhorse actually offloads that work. So Workhorse will basically... Workhorse is fine!
B
You can't DDoS Workhorse, because it's got green threads and everything, so if you send it a request slowly, it's just going to upload it to object storage slowly, right? That's why we allow an exclusion on that, and the same for the Git endpoints, because we're just doing a stream to Git, and none of that stuff touches Puma.
B
But I know that probably the person to speak to about this is Jakub, because one option, if this is really problematic, might be to say: let's just do the request buffering in Workhorse. But obviously that's going to require a change, and then Workhorse needs to be able to manage files, and stream stuff to file, and all that clever stuff. So it's not ideal.
B
So if we wanted to drop it, something would need to do that request buffering for most things, but not everything, because for Git it would be a terrible thing. So it seems as though the natural thing might be to actually think about putting this into Workhorse, because then we have control: Workhorse knows exactly what needs to be buffered and what doesn't, and we can build it as a middleware.
B
So basically anything that's routed to Puma passes through a middleware that does the buffering. That would probably be the best option for that, yeah, but it would require application changes.
C
nginx gives a lot of control over how much we buffer, whether we buffer to disk or not. I mean, it's a lot of functionality that we would need to re-establish. I'm not sure; maybe it's better to use something that was built specifically for that, instead of trying to reinvent it with all the consequences, right?
D
Then again, this is the kind of risk we need to take into account if we're going to try to force ourselves into production with this. I don't know if it will induce an outage per se, but it might induce some degradation of some sort, and we don't have... Andrew, unless you know where these are? I don't see dashboards with the word "nginx" in the title, so I wonder if we don't have any metrics at all for nginx.
B
We have... so the problem is actually that nginx is pretty stingy about what it emits, and I think it's got very, very basic metrics unless you pay for the enterprise product. So they've got some very... if you go look in Thanos, I'll show you my screen again.
B
That's none of those ones. Oh, that's quite strange. But basically, the nginx exporter gives you five metrics and they're not useful at all, so that's kind of why we've never used them. But I'm not sure why the exporter's not running at the moment; let me just check on that machine that I was on.
D
I wonder if the ingress gives us any differing metrics. But alongside that, we're also lacking dashboards specific to the ingress as a whole. The performance metrics that come from kube-state-metrics and such, we currently don't have dashboards for, and we're not capturing the logs either.
B
Is that in the epic for Kubernetes monitoring? Probably not, the...
D
If I recall. And with that, we would probably get a better idea of what kinds of problems we may run into. So the sooner I can get into canary, the happier I'll be, because we can quickly shift traffic to and from Kubernetes, you know, just a single command to stop it if we're noticing awkward behavior. And that will be a good place for me to test as well: test with real traffic, not artificial testing on staging, for example.
A
Cool, okay. So, on Monday I have the multi-large working group. If there's anything from this that we particularly want, stuff to go into the charts, or indeed if we want some Workhorse investigation time, then let's make sure we sync up before I go into that. It's later on in my afternoon, so I can put the requests in.
A
Cool, okay. And then the other one, kind of related to the API service, that I just want to mention: service discovery sometimes fails inside Kubernetes, which is the blocker Graham has been working through. He's got updates on the issue, but just as a mention for this one, he's looking pretty good to hopefully have a solution. We have something on staging, and he's going to get it out to production.
B
Yeah, so where is that thing? So I've started adding some observability where I can, and the first thing I added was the HPAs. In order to do that I needed the labels, and we've got the labels now because of all the labeling work we did, which is great. Out of the back of that we got this dashboard, and as you can see, it's not very exciting.
B
At the moment it's got very little on it, but this is our saturation monitoring for Kubernetes HPAs. The reason it's got no data on it is that there's a piece of work we still need to get done: we break all of our infrastructure down by environment, service, and stage, and we don't have stage labels on our HPAs yet. We need to get those on there.
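For context, the convention being described attaches those dimensions as labels on the Kubernetes objects themselves; a sketch of an HPA carrying the missing stage label, with hypothetical names and values:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: gitlab-webservice          # hypothetical
  labels:
    environment: gprd              # environment dimension
    type: web                      # service dimension
    stage: main                    # the stage label still to be added
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: gitlab-webservice
  minReplicas: 2
  maxReplicas: 10
```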
B
Yes, it works. So we have specified a maximum number of pods that can be running for an HPA, and we measure a saturation score: zero being we're not running any pods, and 100 percent being bumped right up against the maximum number of pods. I should actually put the name of the configuration item in. And so you can see it sort of goes up and down.
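As a rough PromQL sketch of that saturation score (the series names are the kube-state-metrics ones, which vary by version, so treat them as illustrative):

```promql
# Current replica count as a fraction of the HPA's configured ceiling
kube_horizontalpodautoscaler_status_current_replicas
  /
kube_horizontalpodautoscaler_spec_max_replicas
```

A value of 1 means the HPA is pinned at maxReplicas and can no longer scale out.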
B
You know, this is kind of interesting to me. So here, this is gitlab-shell: we scaled it up at 10:21, and then at 10:26 we decided to scale it back down to where we were. I'd be interested to know how long it actually takes for that pod to start, but it probably takes about five minutes. And how many did it add? Well, it's difficult to know, because we're using percentages here, but we went from... if...
B
Oh, there. Yeah, it's just flatlining. Awesome. Okay, so we've got that as well, just with no stage label as well. What about Sidekiq? Now, I haven't really...
B
That's one of the things I'm working on at the moment, but it's a little bit more difficult. So, I mean, should this fire? Does anyone see why this is not firing? I need to look into why. Oh, because we... no, we do have, look: there's the ceiling, we've set it at 95 percent, and this Sidekiq catch-all is pinned at 100, and I haven't seen any alerts for that.
B
I don't know if anyone else has, but I'm normally quite good at keeping track of alerts, so I need to figure out why that hasn't fired. Also, what's interesting is that I don't see the solid line, which is kind of our... you know, our reading. Just something else that's kind of interesting about the way that we...
B
Because we actually include, in the metrics catalog... this is pretty difficult to read, but you can see here we've got this urgency: throttled on the shards that are deliberately throttled. So effectively I just pull that out from this definition over here and include it, so that the HPA saturation metric doesn't include any of those throttled shards. I don't know why "throttled shards" is just such a hard phrase to say. Because otherwise it would just be firing all the time, so I've deliberately removed those.
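The shape of that exclusion is essentially a label filter on the saturation query; purely illustrative, with a made-up matcher standing in for the list pulled from the metrics catalog:

```promql
# Exclude deliberately throttled shards from the HPA saturation signal
kube_horizontalpodautoscaler_status_current_replicas{shard!~"throttled_.*"}
  /
kube_horizontalpodautoscaler_spec_max_replicas{shard!~"throttled_.*"}
```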
B
Okay, that's good to know. Yeah, so the list that we're going to filter through, as I said, I need to figure out, because this should be firing and I think it's valid. So let's just look over a longer period. And obviously, like I said, we also get this on all of our other dashboards. Oh, you won't be able to see it here because of the labeling problem again, but it will appear in this saturation graph over there.
B
You won't see anything until we fix the labels, the stage label, but it will appear there when it's fixed. Cool. And so that's that. Henry, with regards to the node pools, I'm working on that at the moment and I should be able to do a demo by next week.
B
Yeah, yeah, it's working well. The build's not working yet, but it should; it's really close.
B
I think I'm having some problems talking to the Prometheus exporter, but yeah, basically what it does is query the Terraform data and then turn it into Prometheus format. And because we have so many different node pools, at first I was going to say let's just manually keep them synced between the runbooks project and this other project, and then I realized they're all quite different and there are a lot of them, so it's just going to become problematic; people are going to forget to do it. It feels like a bit of a tangent, but I also think that otherwise it's just not going to get used. And also, I think some of those node pools are totally saturated already and we are probably hitting the limits on them, like you suggested, Henry.
A
What do you want to do with that epic? We opened it up a few months ago, and it was sort of intended to get us through the next couple of stages of the migration, but it isn't particularly bounded.
B
I can do that. I always prefer that, rather than... yeah, I'll... I haven't done anything with it yet, but I feel like I'm still trying to learn the lay of the land: what we can do, what we can't, and what's possible. Some things I'm like, oh, this will be really easy, and then it turns out it hasn't been easy at all; and then, while I've been looking through the data, I'm like, oh, that looks really interesting.
B
We need to be monitoring that. So I'll take some time and try to set it aside, like: this is phase one, phase two, phase three. Yeah, and then...
A
No, no, it's totally fine, absolutely. We've been kind of just rolling it along for like seven months or so anyway, so it's completely fine. I think, if you don't want to set any hard boundaries, or you're not quite sure what the hard boundaries are, we've been using stateless versus stateful as almost a proxy for that. So hopefully stateless kind of wraps up after the web nodes.
A
We've got a bit of Pages work, but we haven't got super tons, so like in the next three or four months, and then there's a whole other phase afterwards.
A
Yeah, that was a whole other project, so yeah, okay, that makes sense. Feel free to scope it as you want; that makes sense, and we can add other epics in as we need to. Awesome. Just on the stateful migration: you'll hear bits and pieces being mentioned about this.
A
I think it's fairly expected right now that after we finish the stateless migration, Gitaly will be the thing that gets migrated next, and at the moment that's really a product decision, because Gitaly is the one thing that's very, very hard to do anything with if you're self-managed. Having said that, it's not going to be trivial. So the work happening right now is that Josh and Mark, from Distribution and Gitaly, are spending some time to actually think about what the product requirements are.
A
How will Gitaly need to act as a product in order to do this? And then from there we can start looking at what technical options we actually have. It's sort of an assumption that there will be some Gitaly changes that have to happen before it's possible. So you'll hear things about this, but it's certainly not a planned epic that's ready to roll out where you just haven't caught up on it; it's just people starting to think about this for later in the year.
C
This sounds a lot like it will overlap with ideas about disaster recovery and cost savings, and also how we set up Praefect and things like that. So I think a lot of these things need to be decided before we start doing this, right?
B
The other thing I'm sort of realizing is, I don't know if we've looked recently at the numbers, but the number of nodes we're running now, compared to before, say for the Git fleet, seems to be a lot more. That's my take: many multiples. And I think that matters before we take on a fleet like Gitaly, which is probably our biggest fleet already.
B
We can't be multiplying that by three times the number of nodes, you know? We need to figure out how to bring those numbers down, and maybe even have a kind of project between the finish of this and the start of that, where we right-size all of it. We need the metrics first, and we need to understand how to look at those metrics as well, but I think at the moment we're not running super efficiently.
C
There are champions and initiatives to try to bring down the costs and bring up efficiency and things, right? But I think the people who work on this are constantly being pulled into other projects, and so...
B
You know, how many pods are being allocated... there are two things: running gitlab-shell and gitlab-http on separate node pools definitely doesn't help. Before, shell and http just used to run alongside one another on the VM and it was perfectly fine, and now I think we are, at a minimum, running each of those on their own nodes, so it's kind of already double there.
B
So I think a lot of it is just learning how to allocate things properly, or tune the allocations of the pods in Kubernetes. Obviously there are also all the storage cost efforts, but I'm talking very specifically about a Kubernetes tuning exercise that brings down the bin-packing kind of allocations.
A
Yeah, I think that makes sense. Once we get to the end of stateless, it's a really good point to stop and review some of these things. Graham was also mentioning, when I spoke to him earlier in the week, the complexity around deployments, and whether we want to consider doing anything to make that simpler, or look at HAProxy, or any of these other kind of post-migration projects we have.
B
In the scalability demo this morning there was a brief discussion, also related to how we deal with Sidekiq canary, which is something we've wanted for a very long time and don't have. And then, you know, the complexity around regional versus zonal clusters. One question we had, and I don't know if this has been discussed: has there been a discussion around setting up a fourth zone, which is a canary zone?
A
Great, thanks. Because, yeah, I do think this stuff becomes increasingly more useful now that we have more stuff on Kubernetes.
B
It's kind of the same as shutting down canary, right? We drain canary all the time, so, you know, there's no state in canary; it's only workers that are in canary. So if you lose it, which can happen, you don't lose... you know, we shut down canary relatively regularly, so it would be the same as that. No.
B
Yeah, yeah, but it also solves the problem of deploying. You know, I think there's a problem with Helm and stages, like multiple stages running in the same cluster, and this sort of circumvents that problem as well. And also the other thing...
B
...that's really nice is that canary workers generate jobs that are processed by canary Sidekiqs, and we don't get the problem where, first of all, there's just no worker because it doesn't exist yet, and then the second problem, where the worker is one version behind and doesn't know about some of the stuff in the request that's in the Sidekiq job.
A
Fantastic. Is there anything else anyone wants to discuss today?
A
Fantastic. Thank you so much for the demos and discussions, and good luck with the investigations, Skarbek.