From YouTube: 2020-10-29 GitLab.com k8s migration APAC
A
Yeah, well, it's also earlier, but yeah, we're at about 50% capacity for us and it's dwindling. Every day there are fewer people that come in. Yeah, lockdown is crazy; they just extended it for another week. It's pretty bad here.
B
We're very fortunate. I actually don't really know why, probably due to a lot of luck, but Australia, and especially my state, is basically coming out of lockdown now. We're not completely done, but more or less we can do things with groups of up to about 30 people now, which is quite a lot by our standards.
A
Yeah, but who knows, you may just be a little bit behind us and get a second wave, because...
B
That's true, yeah, and I think that's the big key. We're very fortunate in that, obviously, being an island nation it's a lot easier to police our borders and control who comes in and out of the country, but we still haven't completely, 100%, eradicated it from the country. So all it takes is, you know, one 30-person gathering and then it just goes out from there. So yeah, I wouldn't be surprised if we're back in lockdown again by the end of the year or early next year.
A
Yeah, let's do it. I can share my screen here. Awesome, so it looks like build logs is done, 100% in production. I want to make sure that we're actually not seeing any NFS writes on the shared server, because there should be nothing left. We do see them now based on that Thanos link, but it could just be that we're writing to temp files, because we have the temp directory mounted on the shared server.
A
So after I've confirmed that we're at 100%, and it sounds like we are, then I'll start the change issue to unmount, maybe on a single server first. Or, I'll check to see if we did this on staging already; if we haven't, I'll do it there first, and then we'll do a single server and roll it out slowly. Then we'll be done with the shared server. So that's really exciting.
A
I can... I mean, I'm happy to take that up, Amy, but I don't know, does this belong to our team, to unmount the shared server? Should we just do it?
C
I think we should probably just do it, yeah. I think we can clean up that one thing. Let's just check; maybe we'll give them a couple of days, just in case they hit any problems.
A
You mean the NFS server? Yeah. No, it's separate; there are two virtual machines, one for the shared mounts, which included build traces, and the other one dedicated to Pages.
C
Okay, great. The only other thing, actually, let me follow up as well: we've got three or four Sidekiq queues that we don't know what they're doing, so maybe we should just double-check on those as well.
A
Okay, yeah. If you look at this Thanos query, you can see that we have reads going from web, Sidekiq, and API, and writes going from Sidekiq and API. So that's interesting. I'm not sure why we have writes on API and we don't have writes for web.
C
Okay, yeah, interesting. So let's just wrap up those little bits, and then, maybe if we unmounted it in staging, we could actually find out what's in those writes.
A
I did notice, and I'll show a little bit later, that we're using a lot of compute and memory for NGINX ingress. It's surprising; I wasn't expecting to need so many pods for NGINX, but there were, you know, seven or eight, I think, the last time I looked.
B
This is an Ingress thing; NGINX implements it one way, but it applies to any Ingress object. It's part of the specification for Kubernetes Ingress. So I'm not saying we get rid of it, but I'm just pointing out that we should be able to do this with any ingress controller that conforms to the specification, I think, if I'm understanding how we're doing the path-based routing there. Yeah.
B
Yeah, so I'm curious how it's actually done by different ingress controllers; the GCP one would actually create a GCP HTTPS load balancer, which is a layer 7 load balancer, to do it. You know, if we go down the path of eventually wanting to look at other proxies, I guess those other proxies that implement Ingress would do it as well. The question here is, so, that block you've got there...
B
Sorry, so are we saying we don't actually route this at the ingress level? Is that what we're saying at the moment?
A
What we're saying here is that right now we have these paths for the NGINX ingress... although, sorry, this is conceptual. We have these paths for... I think, no, actually, this is also conceptual. We don't have this right now; we're not doing anything.
A
We have a very basic configuration now, and the idea was to extend it to have different pod groups for different request paths. I'm not sure whether those groups will be totally configurable, so you can add more, or whether we're going to hard-code preset paths. It'd be nice if it's configurable, but I'm not sure how the configuration will look, and I think Jason is working that out.
B
Okay, so that makes sense to me. So yeah, at the moment our Ingress object is very simple: it just routes everything to one backend service. We want to explode that out for different endpoints, and that makes sense, and in theory, if done right in the Ingress object, that should be independent of what ingress tooling, what ingress controller, we use now or in the future. Once again, I'm not saying we move away from ingress-nginx, you know, if it works.
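For reference, a minimal sketch of the kind of path-based Ingress being discussed, with different request paths routed to different backend Services (and therefore different pod groups). The service names, paths, and ports here are illustrative, not the actual GitLab.com configuration:

```yaml
# Hypothetical path-based Ingress: each path goes to its own backend Service.
apiVersion: networking.k8s.io/v1beta1   # v1beta1 matches the 1.16-era clusters discussed here
kind: Ingress
metadata:
  name: gitlab-webservice
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: gitlab.example.com
      http:
        paths:
          - path: /api/
            backend:
              serviceName: webservice-api        # illustrative name
              servicePort: 8181
          - path: /-/
            backend:
              serviceName: webservice-internal   # illustrative name
              servicePort: 8181
          - path: /
            backend:
              serviceName: webservice-default    # illustrative name
              servicePort: 8181
```

Because this lives in the Ingress specification itself, any conforming ingress controller should honor it, which is the point being made above.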
A
Yeah, okay. So I think Jason was planning to work on this this week. I haven't heard any update from him, and I don't think there's been an update on the issue, so I doubt anything significant is going to be done this week, but we'll see, maybe next week.
A
Next is Prometheus metrics for GitLab Shell. I suggested that we remove this from the blocker list. Amy, can I just remove this?
A
Okay, that's good. Next: unified structured logging across GitLab.
A
So this was... maybe, I see, we don't... This is an epic, and we decided that it was going to be a blocker for moving the front end, but not a blocker yet for Git HTTPS and Git SSH.
C
I haven't, actually, but I can definitely do that. Do we need this whole epic, or just certain parts of it?
A
And this is the database load balancing on pod start. We made a configuration change last week that delayed our readiness check by 60 seconds, so that we don't actually start saying a pod is ready until 60 seconds after it boots, and this is saving us from this issue, because by the time we've waited those 60 seconds, load balancing is usually pretty close to being initialized.
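As a rough illustration of the change being described: delaying the readiness probe so a webservice pod is not marked Ready until well after boot. The endpoint, port, and thresholds below are illustrative, not the exact chart values:

```yaml
# Hypothetical readiness probe for a webservice container: the pod is not
# reported Ready until at least 60 seconds after the container starts,
# giving database load balancing time to initialize.
containers:
  - name: webservice
    readinessProbe:
      httpGet:
        path: /-/readiness
        port: 8080
      initialDelaySeconds: 60   # the 60-second delay discussed above
      periodSeconds: 10         # how often the check repeats afterwards
      failureThreshold: 3
```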
A
If we want to reduce that time, which I think we could, then maybe this will become an issue; I just don't know. It's kind of bad to be using the primary, but it seems like we would only be using the primary database for the 25% of pods that are surged during an upgrade, and then once those pods are fully booted and ready, they'll switch over to using the replicas.
A
I think we should just keep this on our radar. I don't know; I can probably remove it from the blocker list for now, but yeah.
A
Stan had a comment here that maybe we could lower the interval, but he's not sure what the effect of that would be.
B
So I'm interested in this, because I'm trying to push forward with getting auto-upgrades enabled in production for GKE, and obviously things like containers taking longer to boot basically blow out our upgrade times. And not only do I want to get auto-upgrades enabled: Kubernetes 1.16, which we're running, is no longer actually in the release channel; they've pushed everyone to 1.17. So we're continually playing catch-up, which is fine, that's what I kind of always expected. So I don't think this is a blocker, but I do agree this needs to be kept on the radar; anything we can do to make our pod shuffling quicker.
A
Yeah, I think what I would like to do is try what Stan suggested here, which is to increase the frequency of these checks when the replica starts. So maybe we can look into that, but I'm going to take this off the blocker list for now.
C
Yeah, okay, makes sense, cool. And then the Pages stuff is just continuing on, so that seems to be going fine. We're still hoping we may have some stuff in around December, but we'll see how it goes.
A
For the other labeled issues: this one, I think we can just remove, since we have the environment variable workaround, so I'm going to take the blocker label off.
A
And that's pretty much it. This one still needs investigation: proxy_request_buffering. We've turned it off by default, and this is the thing, Graeme, that I was thinking of; it's very nginx-specific. Exactly, yeah. So we've turned it off globally right now; we have to decide at some point whether we want to conditionally enable it per path. I think for self-managed it's more important, because they don't typically have a CDN in front like we do, right?
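For context, ingress-nginx exposes request buffering both as a global ConfigMap key (proxy-request-buffering) and as a per-Ingress annotation, so enabling it conditionally per path would roughly mean splitting those paths into their own Ingress objects. A minimal, illustrative sketch (names and paths are hypothetical):

```yaml
# Illustrative: disable request buffering only for the Ingress serving these
# paths, while the global default (set in the ingress-nginx ConfigMap) applies
# everywhere else.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gitlab-uploads                 # illustrative name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
spec:
  rules:
    - host: gitlab.example.com
      http:
        paths:
          - path: /api/v4/projects     # illustrative path
            backend:
              serviceName: webservice-api
              servicePort: 8181
```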
A
So yeah, cloud-native Pages support, that's also on the blocker list. So I think that's pretty much it; the list is pretty small right now. Once we have Pages off, it's going to be great.
A
Great, great. So, demo.
A
We've
ever
seen
before
on
virtual
machines,
that's
because
we
were
pretty
under
provisioned,
we're
also
like
at
100
to
99.9,
even
during
deploys,
which
is
much
better
than
where
we
were
before
before
we
had
when
we
had
this
nginx
balancing
issue.
So
I
think,
like
the
replicas
setting
them
setting
the
max
unavailable
to
zero,
has
really
shown
that
it
does
work
and
like
we're,
seeing
like
basically
no
interruption
at
all
during
deployments.
That's
so
that's
really
cool.
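A minimal sketch of the rolling-update setting being credited here (replica counts and surge value are illustrative). With maxUnavailable at zero, Kubernetes only removes old pods once the surged replacements report Ready:

```yaml
# Illustrative Deployment strategy: never drop below the desired replica count
# during a rollout; surge extra pods instead. The surged pods are the ones that
# briefly use the primary database, as discussed above.
spec:
  replicas: 12
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0     # no old pod is killed before a new one is Ready
      maxSurge: 25%         # extra pods created during the upgrade
```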
A
We
have
right
now
about
6000
requests
per
second
on
the
main
stage
on
canary.
I
adjusted
this
recently.
So
this
these
bands
haven't
caught
up
yet,
but
now
we're
at
about
200
requests
per
second.
This
was
actually
set
incorrectly
for
a
little
bit.
A
If
I
move
this
to
like
two
days,
you
can
see
like
we
were
way
way
above
what
I,
what
we
probably
should
have
been
for
canary
and
and
the
cool
thing
about
this-
was
that
the
regional
cluster
just
scaled
up
appropriately,
and
we
didn't
really
even
notice
that
we
were
running
like
a
lot
more
traffic
to
canary
than
we
should
have
been,
which
shows
to
me
that,
like
I
think,
if
we
did
have
to
take
out
a
zone
and
set
it
to
like
drain,
I
think
like
we
could.
A
Probably,
although
pods
take
a
long
time
to
start,
I
think
there'd
only
be
a
small
window
of
interruption
and
then
we
can
have
the
other
zones
take
over
right.
Very
true,
let's
take
take
a
look.
I
think
there
was
like
a
deploy
happening,
so
we
might
see
some
like
churn
right
now
on
the
pods,
but
we
can
take
a
look.
A
Yeah,
so
so
these
these
messages,
kind
of
it
gives
you,
like.
The
console
at
least
gives
you
like
these
nasty
looking
red
error
messages,
but
this
always
happens
whenever
we
have
scaling
events
or
like
when
things
are
in
flux,
so
canary
web
service
is
running
at
12
pods
right
now,
I
lowered
the
minimum
to
5.
So
I
think
this
is
about
about
right.
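The "minimum" being lowered here is the floor of a HorizontalPodAutoscaler; a minimal sketch of that shape follows (resource names, ceiling, and target utilization are illustrative, not the actual chart values):

```yaml
# Illustrative HPA for the canary web service: scales the Deployment between a
# floor of 5 and an illustrative ceiling based on average CPU utilization.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: gitlab-webservice-canary     # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: gitlab-webservice-canary
  minReplicas: 5        # the lowered floor mentioned above
  maxReplicas: 30       # illustrative ceiling
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```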
A
I'm setting the CPU request to about two. This also takes the Workhorse requests into account as well; that's why it's a fractional 2.3, and you can see it. It seems like it might be a little bit high, but I tried to lower this and I wasn't happy with the results. So what I think I may do is actually move the limit down closer to the requests and leave the request kind of high. Memory also looks kind of high, but I'm very nervous about making this lower, especially now that we're going to enable Action Cable. So, Graeme, what do you think? Do you think we should maybe just bring the limit down a bit? Would you say it's okay to have this buffer? I mean, it seems like it's safer, right?
B
Yeah, I would say it's probably okay to have the... what is it, two at the moment, right? Yeah. So you really wouldn't want to go much lower; you could probably drop it to 1.5, I reckon. By bringing the limit down you're going to make it easier to schedule more pods on the node where it's running, right? The limit is what determines how much the scheduler can fit on the node. So I...
A
I thought it was... oh, I thought it was the request that it took into account for that; like, it takes the average load across all of the pods and sees how it matches the requests, or...
B
Is
it?
Let
me
be
wrong,
I'm
I'm
pretty
sure
it's
limit,
but
I
could
be
wrong
and
that
would
be
something
to
check,
because
you
know
you
maybe
you're
right.
It's
it's
requests.
It
tries
it
schedules
what
it
can
request,
but
then,
when
things
go
up
to
their
limit,
that's
how
it
determines
who
to
evict.
Maybe
that's
maybe.
B
So I think lowering the limit is probably good anyway, just because there isn't any reason for it to use that much memory or CPU, unless it's a memory leak or something bad; and then, you know, obviously lowering the request as well. If you make the requests the same as the limits, it'll change the QoS class on the pods as well; I'm not sure.
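For reference, a sketch of the container resources block under discussion (the numbers are illustrative). In Kubernetes the scheduler places pods based on requests, while limits cap actual usage; and if requests equal limits for every container, the pod gets the Guaranteed QoS class instead of Burstable, which is the QoS point raised above:

```yaml
# Illustrative webservice container resources: requests are what the scheduler
# reserves on a node; limits are the hard cap. Making requests equal to limits
# would move the pod into the Guaranteed QoS class.
resources:
  requests:
    cpu: "2300m"      # the fractional 2.3 cores mentioned above
    memory: "5Gi"     # illustrative
  limits:
    cpu: "3"          # a lowered limit, closer to the request
    memory: "6Gi"     # illustrative
```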
A
Yeah, I mean, so far we haven't been seeing a lot of evictions, because the limits are so high, but maybe I'll start by bringing the CPU limit down to, like, three, and maybe bring the memory limit down a little bit as well. Yeah. So that's looking good. For cluster sizes...
A
So, Git HTTPS: this is for canary. We currently have five nodes; this may come down a little bit more, since I just lowered the floor for the number of pods, whereas before on canary we had about three. But then again, on canary before we didn't have as much traffic, so it's kind of hard to make a direct comparison. For the zonal clusters, currently, Git HTTPS...
A
Here we have nine and 12, so we're not very balanced when it comes to traffic. It's maybe a little bit surprising, but I don't know what to think about that.
A
So what, like 23... so about 30 or so nodes, where before we were running about 25. But then again, I think we were way under-provisioned for the traffic we were getting. Going back to workloads, let me just show you the zonal clusters.
A
So the floor for pods is at 30, and you can see we're above the floor, so that's good. I think we're either doing a deploy or we just finished a deploy, so this may come down a bit. Let's take a look at one of these.
A
So, similar to canary, this seems about right; like I said, maybe we can bring the limits down a bit.
A
Yeah, so, I mean, it seems about right for now. We're going to be doing an Action Cable experiment today, so we'll see what effect that has on it.
A
That's pretty much it.
A
The next thing we're going to be doing: the biggest problem we have right now, Graeme, is auto-deploy and configuration changes stomping on each other. So I need to figure out how we can limit, or put a blocker in for, auto-deploy, so that if there are pending configuration changes we spin, or maybe fail. Right now it seems like we're thinking we'll just parse the diff output, which sucks, but I don't know how else to do it.
A
Cool. Are you involved with the Tanka stuff that's being referred to there? Yeah? Okay.
B
No
honestly
I've,
I
I
wish
I'd
had
more
time
to
work
on
a
lot
of
the
kubernetes
stuff
at
the
moment,
but
I've
been
doing
this
trying
to
wrap
up
all
the
secrets
manager
stuff
just
so
I
can
not
have
to
think
about
secrets
ever
again
and
I've
just
been
looking
at
like
the
more
of
the
platform
side,
gk
upgrades
and
bits
and
pieces,
I'm
interested
in
I'm
definitely
interested
in
revisiting
talking
about
like
the
helm,
file,
changes
and
tanker
and
all
that
stuff.
B
I think it's the 12th when Helm 2 finally gets deprecated and all the Helm repos go away. I've located where some of them are; they're kind of getting split and fractured all over the internet, but we're going to have a lot of CI jobs break on that day unless we get ahead of it, which I'm trying to do now, just trying to find new places for things and change chart locations and stuff. It won't break GitLab.com, because we host our own chart, and that's...
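Changing chart locations in a helmfile is mostly a matter of updating the repository URLs the releases point at. A minimal sketch is below; the old stable repo URL and its charts.helm.sh replacement are real, but the release shown is only illustrative:

```yaml
# Illustrative helmfile snippet: point releases at the new chart location
# before the old stable repository is switched off.
repositories:
  # old, deprecated location:
  # - name: stable
  #   url: https://kubernetes-charts.storage.googleapis.com
  - name: stable
    url: https://charts.helm.sh/stable

releases:
  - name: prometheus          # illustrative release
    namespace: monitoring
    chart: stable/prometheus
    version: 11.16.9          # illustrative pinned version
```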
A
Yes, I know, yeah. I think we're using the official chart for that, but I'll have to double-check. Do you have an issue that we can track?
B
No, no... I do, do I?
A
Because I think what we probably should do is just look through gitlab-helmfiles, since that's the project that's going to be affected, and see which charts are impacted and make sure that we have new locations for them.
B
Anyway, I won't take up people's time now with me looking through emails, but yeah. I think, you know, now that we've got the multi-cluster setup, now that we're looking at trying to make auto-deploy block, and we've got Tanka, I'm doing a very quick re-review of everything in gitlab-helmfiles and, you know, making sure: is this the right location? Is this working for us? The other thing I've done, which I will put up for review tomorrow (you may have seen the comment, you may not), is the Terraform work on the bootstrapping issue, pulling all of that out. So now it should be: you add a new cluster to Terraform, you run the GitLab.com CI job, and it should do the whole bootstrapping: the storage class, the CI jobs, sorry, the k8s-workloads user, and all the rest.
A
Okay, yeah, I'm looking forward to seeing it, because when I did it, it was not simple.
A
Cool.
C
Do we need to do anything on our side? This is, I suppose, protecting against the repo going away, but do we also need to think about the upgrade to Helm 3?
B
So
that
is
the
other
thing
that
I
yeah.
That's.
The
other
kind
of
big
thing
that
I
would
like
to
kind
of
address
in
the
next
few
months
is
home.
Three
upgrades
I
I
know
we've
kind
of
circled
around
this
a
few
times.
I
know
starbucks
looked
at
it
a
few
times
and
I
think
we've
always
been
like.
B
One cluster... I had a quick look, and believe it or not, depending on how we set things up, we can actually run two versions of Helm side by side. I don't want to do that long-term, but we could run two versions side by side for some small intermediate term, which might make the changeover easier if we can bite it off piece by piece. But I think at the end of the day we've just got to do it.
A
Cool. With regard to your comment I saw about spinning up a new cluster in a different region for the registry...
B
Or something, yeah. It wasn't a definite, serious proposal; it was a hypothetical, because I was thinking about that outage, and I was like, in theory, with all of the tooling we have now and everything we've done with the multi-zonal setup, we should be at a point where we can create a new GKE cluster in Terraform in, say, us-west somewhere, yeah.
B
That comes up, we run gitlab-helmfiles or whatever we need, the gitlab-com chart, and that's just another helmfile environment that talks to a different cluster and deploys things, and we can say: please only deploy the registry here. So now we've got some registry pods running in us-west, and we have registry pods running in the other clusters. GCS should be replicating that; I'm pretty sure GCS replication is multi-regional.
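A rough sketch of what that extra helmfile environment could look like: a new environment pointing at the hypothetical us-west cluster where only the registry release is installed. All names, values, and the environment flag are illustrative, not the actual gitlab-helmfiles layout:

```yaml
# Illustrative helmfile layout: add an environment for a hypothetical us-west
# cluster and only enable the registry release there.
environments:
  production:
    values:
      - registry_only: false
  us-west-registry:                 # hypothetical new environment
    values:
      - registry_only: true

releases:
  - name: registry                  # illustrative release name
    namespace: gitlab
    chart: gitlab/gitlab            # umbrella chart with only the registry enabled via values
  - name: webservice                # illustrative release name
    namespace: gitlab
    chart: gitlab/gitlab
    installed: {{ not .Values.registry_only }}   # skipped on the registry-only cluster
```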
B
Then
in
theory
we
have
another
kubernetes
clustering,
another
end
point
which
we
could
even
stick
behind
h,
a
proxy
like
as
a
fallback
endpoint,
so
that
should
we
ever
have
an
issue
with
one
region
and
assuming
that
it's
a
gcs
issue,
that's
only
isolated
to
that
region
and
not
everywhere
that
we
could
actually
route
traffic
over
and
it's
kind
of
the
first
experiment
of.
Could
we
actually
do
something?
That's
slightly
geo-redundant
is.
Is
that
interesting
to
people?
Is
there
a
redundancy
we're
aiming
for?
A
To do something like that, I think the first step for us would probably be to get rid of HAProxy altogether for the registry, and then, after we have something else in front, maybe we switch to this load balancer first and then we start adding more.
A
So the dependencies there are: the registry service itself doesn't go out to the API, but the Docker daemon on your laptop, on your workstation, the Docker client, gets a redirect when it's not authenticated. When you do a docker pull, it gets this redirect to be authorized, so then it goes out to gitlab.com to get authorization. So if there was a regional outage, no one would be able to authorize their Docker connections.
A
Product... well, there is a push to get Geo on staging that's been coming from the higher levels, so that might be the story for geo-redundancy: just to have a cold standby, yeah.
A
I don't know. Yeah, thinking about it, to do Geo for what we have in Kubernetes now wouldn't be too bad, right? We would just create another cluster, which would be another environment in helmfiles, and then we would have, you know, configuration that says this is the Geo cluster. So yeah, it could possibly work.
A
The way we thought that would work is that you have a replica in the region, and then you do a database failover to the replica, and then your replicas are in different regions at that point, so that slows things down a bit. But yeah.
A
All right, well, I think we've done everything. Anything else, Amy?