From YouTube: 2021-03-18 GitLab.com k8s migration EMEA
B
Awesome, so we're all here. We don't have anything on the agenda, so what would be useful to spend this time on? I know we haven't had a lot of chance this week to progress on things, with registry and incidents, but we've still got a bunch of time together. So what would be a useful way to use this time?
A
I have an open question kind of directed at Andrew, but I don't know if that's a wise choice for using this time.
A
Well, so there's been an issue that's been sitting in my to-do list for the last nine years, related to metrics and trying to sort out the metrics specific to node pools. Yes... I don't have a really good method of validating saturation of node pools, making sure we have enough nodes available in a node pool, etc., and I would like to be able to figure that out, because at some point we're going to run into problems with this as we expand our Kubernetes work. And right now this is difficult for the on-call.
C
No, I think the service should be Kubernetes: node pools are in the Kubernetes service, and each node pool is what we call a shard, okay, and so we've got that shard label that we already aggregate things over. So we might have, like... yeah, I mean, we've got to think it through. The saturation, so saturation is pretty straightforward.
C
We can have things like, you know, how much of the node pool... or how many nodes the maximum that we can expand to is, and then, if we stack up against that, we can treat it as a normal saturation metric. There's a whole... you know, even CPU on node pools: if we find that there are nodes in the node pool that are pinned at 100% all the time, that's probably something that we treat in the same way.
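The idea above, treating a node pool (shard) as saturated when its node count approaches the configured maximum, can be sketched as follows. This is a minimal illustration; the function names and the 0.9 threshold are assumptions for the example, not GitLab's actual metrics catalog.

```python
def node_pool_saturation(current_nodes: int, max_nodes: int) -> float:
    """Fraction of a node pool's configured maximum that is in use."""
    if max_nodes <= 0:
        raise ValueError("max_nodes must be positive")
    return current_nodes / max_nodes


def saturated_pools(pools: dict, threshold: float = 0.9) -> list:
    """Return the shards (node pools) at or above the saturation threshold.

    pools maps shard name -> (current node count, configured maximum).
    """
    return [
        shard
        for shard, (current, maximum) in sorted(pools.items())
        if node_pool_saturation(current, maximum) >= threshold
    ]
```

In practice this would live as a recording rule aggregated over the existing shard label rather than in Python, but the ratio being computed is the same.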
C
Yeah, there's another part of it that I think is really important and came out in the incidents we've been having: we've got these nice graphs now that show us... in fact, several different types of incidents where we've seen the problems, with the CrashLoopBackOff and the other one with the OOM killer destroying gitlab-shell during that incident. And as far as I know, if we got alerts, they only went to, like, the #alerts channel, which is like not getting alerts.
C
So we need to have those as proper alerts. We've got them attributed to the service that they're part of now, so we should be sending out, like, PagerDuty pages if we have serious backoff loops: not just one happening here or there, but proper spikes. Then we should get notified about that properly, and we can direct people to those new pages that we've got, okay, but...
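The paging rule being described, page on proper spikes of crash loops rather than the odd one-off restart, can be sketched like this. The thresholds are illustrative assumptions, not the actual alert definitions.

```python
def should_page(backoff_counts, spike_threshold=5, sustained_intervals=3):
    """Decide whether a series of CrashLoopBackOff pod counts (one entry
    per scrape interval) is a proper spike worth paging on, rather than
    a single restart happening here or there.

    Pages only when the count stays at or above spike_threshold for
    sustained_intervals consecutive intervals.
    """
    streak = 0
    for count in backoff_counts:
        streak = streak + 1 if count >= spike_threshold else 0
        if streak >= sustained_intervals:
            return True
    return False
```

In a Prometheus setup the same shape would be a `for:` clause on an alert rule over a per-service crash-loop count, so one-off blips never reach PagerDuty.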
C
I think it's really important, and it's a really good thing for us to be spending our time on, because at the moment there's a lot of that stuff. There are several things: we actually have a lot of the old alerts that are just going to #alerts, and we need to tidy those up. And if we attributed them to, like, the service and all of that, then I think it'd be much tighter anyway. And we kind of need to do that, like, soon.
A
The one thing that I will continue to find missing is creating a saturation metric on emptyDir space, because it doesn't look like Kubernetes exposes disk usage on an emptyDir at all. That's unfortunate, because now we don't know if, say, the project exports are going to use up too much disk, or when we migrate the API, since currently we're writing data to disk and we'll probably need to create an emptyDir for that. We won't know.
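Since the kubelet doesn't export emptyDir usage, one possible workaround is measuring it from the node itself. This sketch assumes the kubelet's standard on-disk layout (`<pods_root>/<pod-uid>/volumes/kubernetes.io~empty-dir/<volume-name>/`); it is a hypothetical exporter helper, not anything GitLab runs today.

```python
import os


def emptydir_usage_bytes(pods_root):
    """Sum file sizes under each emptyDir volume beneath a kubelet pods
    directory (normally /var/lib/kubelet/pods).

    Returns {(pod_uid, volume_name): total_bytes}, suitable for feeding
    a custom saturation metric.
    """
    usage = {}
    for pod_uid in os.listdir(pods_root):
        vol_dir = os.path.join(
            pods_root, pod_uid, "volumes", "kubernetes.io~empty-dir"
        )
        if not os.path.isdir(vol_dir):
            continue
        for volume in os.listdir(vol_dir):
            total = 0
            for dirpath, _dirnames, filenames in os.walk(
                os.path.join(vol_dir, volume)
            ):
                for name in filenames:
                    total += os.path.getsize(os.path.join(dirpath, name))
            usage[(pod_uid, volume)] = total
    return usage
```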
C
Does that emptyDir get, like, its own mount on the node?
C
Yeah, but it is kind of tangential; it's not, like, at the core. You know, we're not changing the scheduling algorithm here, precisely, it's not the core of the system. And it does seem like a reasonable thing to do because, like you say, especially if we have, like, a thousand nodes and they're all the same, and it's like: oh, this volume is full. That won't be nice.
A
So they could run the node out of space, and Kubernetes is potentially not going to do anything about it. But actually, I would hope in that particular situation you can't schedule onto a node that would then run out of disk space, or where you're allocating too much disk space, but...
C
Is there some sort of overcommit with disk space, in the same way there is... I'm...
D
Hey guys, sorry to interrupt, but I think we're about to see a big incident here. Sidekiq is... I just noticed my MRs are not getting updated, and the Sidekiq apdex is, like, equally... yeah.
E
I'll do the same. Yeah, hold, hold, hold on, before everyone leaves: there are enough people over there. Don't forget that, so only when...
E
Pole, Marin, you want more incidents, Andrew? No, no. We can invite some external consultants to help us with certain parts. Skarbek, before you run away, before everyone runs away: I read that we just got the charts change merged that was blocking us from continuing with the API migration. I forgot what it was exactly. That's my...
A
...brain. There was a change for the container registry missing a configuration item that was populating our Rails.
B
Well, it's really... this week we have not made much progress. We know what the next tasks are, which is testing deployments and continuing the readiness review. We haven't progressed much this week because of incidents and registry, but assuming no more incidents, then next week Skarbek and Graham, at least, will have some capacity there. I'm guessing, Henry, you're still fighting with the database on pre, right? Like getting that connected?
G
The database isn't the fight; it's just that we need to solve some issues with the chart and registry as well, and then find a plan to... or, we need to figure out whether registry is working as expected now, with the API being enabled in pre, and that we have observability, and then we can work on a plan to switch over and do the migration as a test together with the registry team.
E
Okay, I just want to make sure that we don't have too much isolation between people on the work that is happening for registry and API. Like, I know that there needs to be some, right? You need to have focus to actually get the work done. But my main concern is that, Henry, you, for example, go too far off into registry, and, like, API becomes a speck; that sets us up for failure in the long term.
E
So I don't know how... so this is for you and me, Amy, to figure out how to actually not let that happen, but...
A
...blocking something in pre-prod, because the container registry and Rails is not working properly, so I'll try to get that updated today. I didn't see that it was merged until you mentioned it, so I'll be able to get the chart upgraded, which should unblock pre, and then that will also unblock staging. So we could then proceed to start taking traffic on staging as well, theoretically. The other thing...
A
...on my plate is validating our logging. I've got a merge request ready to go, and that will hopefully allow me to finish up the logging work that has historically been a concern for us. If that works out well, we could push that through to production, and then that's the only blocker I know of that's preventing us from going to canary.
A
Well, we still have other work that we need to accomplish, so I think we could go to staging and finish up any other validation we need to accomplish. There's a configuration issue I've got open, and then there's just testing to make sure that we're not going to have any awkwardness with deploys: 500 messages when we do a deploy, or pods getting ripped out from under us, that kind of thing. But we could go to canary in parallel with that work.
E
We could, yeah, we could certainly do that. What will traffic running in staging for a few days give us? Is it a data point, validation that it works? How are we validating that? Are we looking for some synthetic...
A
I don't think it's going to provide us any new and exciting information. If anything, it's just validation that the API service is working as it should be, interacting with our virtual machines.
A
I think we'll end up taking traffic in staging for at least one day while I complete the logging work, because I want to make sure I don't break logs in production, so I'm going to roll that out more carefully. That'll probably be a nice, solid day's worth of work and validation prior to doing that. So inherently, due to time constraints, we'll end up seeing traffic in staging in Kubernetes for a few days.
B
Oh no, that's gone. That's a shame, that was probably a good question. We also have a question that we'll need to resolve, maybe on Monday, around the registry. So the other thing about registry work: I think the work is reasonably stop-start, so right now we're kind of unblocking the package team, getting pre set up for them.
B
Then my guess is they'll be busy testing for some weeks, maybe, before the kind of next round of work. But we do need to also finish making decisions and validating our assumptions on where we deploy the secondary registry instance. So I'm just putting that stuff together, but we might actually have the right people here. One of the big assumptions we have... well, one of the big unanswered questions we have is around canary.
B
So right now registry does run in canary, and the question really, I suppose, is: do we want to run the setup that we need for the gradual migration, which is a second instance of registry, same version but configured differently? Do we want to run that configuration in canary as well as on pre and staging, and...
B
I don't think there was a conclusion; there are pros and cons both ways. It was definitely an open question. So I think, from my side, one of the things I think is going to be interesting is that registry is under active development.
A
So I'm going to say no to canary and registry version 2. The reason for this is that we proxy our traffic going to canary at HAProxy, and all that traffic is going to go to the first registry before that registry talks to the other registry for its new information.
B
So, just for those of you who haven't dug through this epic: this is the current registry setup we have. We run it on our zonal clusters, so each of our three zonal clusters is running registry, and each has its own S3 bucket.
B
We need to move to a setup like this, where the existing registry instance will have its S3 bucket, and then there's a secondary instance of registry, same version with a different config, that comes with the metadata database.
B
So we've got a couple of options we've been debating here. There are two that stand out, and I think the canary question determines it. One is: we could run registry 2 in the regional cluster, and the three zonal clusters would talk up to the regional cluster.
G
Is the canary registry using a different bucket than the gprd canary? I don't...
G
Yeah, because if you just want to do the migration, I don't see a reason why we should do a special thing for canary then. Because if it's the same bucket, it's one migration to be done, right, for gprd, and then that we have a second canary there is, I think, more about, like... I don't see... if we get into issues when we deploy a new registry version, right, it will not help us in confirming that the long-running migration will work any better, or test that, right? No.
B
I totally agree. I don't think this is about the migration; I think the migration will need to be validated on staging. This will be about whether GitLab stays online. So, are we potentially deploying... I think the most risky configuration is where we have staging set up with the two registry setups and production just has the single one, as we're kind of in the testing phase. Which means that, as we're doing deployments, we lose some validation on staging, right, because we're running a different configuration.
F
So I have a question, because I'm not really up to speed on this part of the registry migration. My question here is: this double-instance configuration, registry and registry 2... let me rephrase. So right now we're working on pre, then we move to staging and we run the migration on staging.
F
So are we planning to have a period of time where we are running this setup in canary or production, whatever, but without running the migration? So the question is, yeah...
F
Yeah, but my question is more about: is this one of those epic migrations where everything can fail and will fail on the day of the migration? Or are we doing something smart, like the system can shadow the online system and just work on a subset of data, so that we migrate a subset of data and we can validate on that, instead of just hoping that it works the first time?
G
That's how it's working: it gradually migrates things over to the new registry, the second registry, and all requests to these new repositories on the new registry will be proxied by the old one.
G
So if you would just move, I don't know, one repository over there, like the gitlab.com one, something like that, then it would just proxy all requests for this repository over to the new registry, and we could just stay like this for a few days and see how it works, right? And if it breaks, then we would just break this one repository, okay? So basically it's the migration, which will take months to complete.
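The proxying scheme just described can be sketched as a routing decision. This is an illustrative model of the behavior, assumed from the discussion, not the actual GitLab registry code: repositories that have been migrated are served by the new, database-backed registry, and everything else stays on the old one, so moving a single repository first limits the blast radius of a failure to that repository.

```python
def route_repository(repo, migrated):
    """Return which registry instance handles a repository: the old
    registry serves it directly unless it has been migrated, in which
    case the old registry proxies to the new one."""
    return "registry-2" if repo in migrated else "registry-1"


def migrate(repo, migrated):
    """Move one repository over (e.g. a single high-traffic one first),
    after which its requests are proxied to the new registry."""
    migrated.add(repo)
```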
F
Yeah, but will they be misaligned? I suppose yes, because if you add a new layer... and so this would be kind of no longer aligned. Okay, and how are we routing to the canary registry, still by paths?
A
I don't know how the registry is going to send traffic to registry 2, and if registry 2 is not going to see the path names, but instead they're going to say, hey, here's a blob, give me that blob, for example, then we're not going to be able to route traffic via canary unless we do, like, percentage-based routing, which I don't think is easily possible inside of HAProxy, yeah.
F
You may end up having blobs that are... so you may have blobs that are partially in the new state and partially in the old, precisely, because... unless they completely change it, and it's possible, the way registry stores data. Basically, registry is a database which is path-based. So you can ask: tell me the layers for this project, and it gives you a list of layers that are just blobs.
E
So, so...
F
Yeah, I mean, the old one should always write in its own bucket and have a mirrored copy, and if some... and also you should check if... but basically you need a completely duplicated system, because if for any reason registry 2 is down and registry 1 serves something, then you have to mark the migration for the project, or whatever, as stale, because it's no longer in line.
E
Okay, I understand. What was said, Amy, is that we are going to duplicate the data, right? Like, we're going to duplicate the buckets, basically, right? That's right.
E
So it's like... because saying that it's private, right, meaning users don't get to it directly, implies that it's okay, because registry 1 is still there; it's public, right, so users generally get access. But actually what this is saying is that registry 2 now becomes a database for registry 1. Like, as a whole, registry 2 becomes a whole new set of database, so it's not only the blob storage, right, the bucket.
B
I'm not totally sure; I think it might be able to. There is definitely something that says it's fine for it to go down. I can't recall the full description, but my understanding right now is: yes, it's fine for registry 2 to go down. There are some things that will break as a result of that, or some things that will fail, but apparently that's fine, and keeping registry 1 online is the main requirement.
F
In read-only mode, because I see this link between the two... in the image that you are sharing right now, there is this link between the two buckets. So maybe it is possible that, in case of a write on bucket 2, the write itself is replicated to bucket 1, and because the bucket itself is the only data source for registry 1, it means that it should be able to serve it, unless we change the backup format, which I don't think we are. So my assumption here is that the fail...
G
That's the good and bad thing about it: they have written out such detailed plans in several epics, and it's great to have all of this, but I guess that not too many people were reviewing it in total, because it was just too detailed for somebody not totally into it. So that's why I think we are still a little bit unsure of the details here.
B
So, just coming back to the canary thing: it sounds like there are some definite risks to us having canary. If we don't have registry running on canary, what do we lose? Like, how risky is this going to be, given that the registry version is going to be under active development and we'll be moving from staging direct into production?
A
We should, like... historically, we wanted to try to figure out a way to deploy our components using that precise method, similar to gitlab.com, but that got sidetracked, and now we've got this migration occurring. So at this point we could either put a pause on this and try to figure out how we want to deploy appropriately, so that we could leverage canary appropriately, which is going to take some time and effort and a lot of mucking with our CI pipelines to make it work as we desire, or we could keep the registry migration unblocked.
A
Yeah, we'd have to visit that in some other fashion if you want to do canary for us.
B
I was just going to say I don't think we should add extra complexity to this migration by changing the way canary is right now. I think, if we're not using canary right now for registry deployment, we shouldn't try to test this stuff on canary. But yeah, we should be well aware that that's a risk that we should cover on staging.
F
I want to say something, which is: your question about whether we're risking blowing up production with this is an actual, true risk, because here we are discussing two things, but indeed there are three things.
B
I guess there are a couple of things we could do for that. One option we haven't discussed is getting staging running with the two registries and then moving that through to production, and then getting staging running with the database and then moving that through to production: doing something much more incremental, rather than getting staging set up with two instances and a database and a migration before we go to production. We'd actually break this down further, and that could de-risk it, right? Because I don't think having two...
B
The other big question around this whole thing (I'm putting together an issue which covers this all off, and the in-depth issue which talks about how the proxying works will probably answer some of this) is that we've got an assumption that we're not going to massively increase network traffic. Now, we'll have these three registry instances running on the three zonal clusters and we'll have registry 2 running on the regional cluster, which is most likely the setup we'll go with, and everything talking between them.
G
Yeah, and the hope is that the majority of traffic will be clients going directly to the buckets, because this is where most traffic is happening, and it should stay like this. So registry 1 is proxying API requests to registry 2, but in the end it will return something back to the clients to download or upload images, right, and that will be a direct connection to the bucket; that should stay like this. I think we should confirm this, but this is, I think, the majority of traffic.
G
I think this is the migration thing, right? I assume that when we start the migration, registry 1 would slowly start moving stuff over to the other bucket, so it's a bucket-to-bucket transfer. I don't know how they implement this.
B
But these buckets have to sit in the... they'll sit in the clusters, won't they? So this bucket over here is on the zonal cluster, and this bucket over here is on the regional cluster.
F
Right, yeah. So think about uploading something, or downloading, it's the same: you have a big body that you're either pushing to the cloud or downloading from the cloud. So in a regular configuration of a registry, assuming a download: the user asks for something, registry searches for this in the bucket, then basically downloads from the bucket, and it streams the body to the user as it receives it. So you pay from your instance to the bucket, and then you have the egress cost to the user.
F
Now you have another system on the side which may be proxied, so long term everything will be proxied. This means that the body of the blob, basically, the stream of data, will also cross from public to private, and this really depends on how we configure it. Because if we have an internal load balancer in between, we pay for it and we pay for the traffic, and if we are moving from one cluster to another one, I think we pay as well, because we cross the border of the zone.
A
So we don't see the egress traffic on our networks when a person downloads the information. Uploads operate in kind of a different fashion, where it goes to the registry and the registry sends it up to object storage on behalf of the user, so uploads are where the network calls are concerning. But because uploads happen far, far less than downloads (downloads will happen, you know, 5x more than uploads will), we're not really concerned about that.
A
Joao already confirmed this in some issue somewhere, where that precise question was asked. Amy, going back to where the buckets live: the buckets are separate from clusters. They're a separate entity, a different configuration, a different thing entirely, and we use multi-region buckets everywhere.
B
Okay, okay, good stuff, cool. Okay, so it sounds like, from what we have so far (we can clarify a few extra points, but from what we have so far): putting registry 2 on the regional cluster is a low-risk setup and shouldn't add any costs, and we won't try to get canary working, so we don't have the extra complexity there.
A
That makes sense to me. The one thing I am curious about is: as registry 2 gets ramped up, let's say we've completed one third of the migrations, should we consider (this is a long-term question that we'd handle further down the road) migrating one of the zonal clusters onto registry 2 or not?
A
I would imagine, because registry 1 continues to be a proxy, that that's probably not possible, and that we'd have to wait until the migration is considered entirely complete at that point. But I don't think we've had a discussion as to what traffic at that point is going to start looking like.
F
Yeah, so the question is: there will be just one single... I'm not counting replicas and things like that, there will be a Patroni cluster, yeah, but it's still one... I mean, there's one data store, however it's replicated, whatever you have. In the end, every registry instance writes to the same data store, right? Yes. So this means that when we do the rollout, we should do it all together.
F
We can't do this partially, because think about this: you roll out just one part of the cluster, and that thing starts serving version two (let's call it version two, registry 2). This means that that data has metadata now, and then, if for some reason you reach another cluster that has no idea that there is a version two of it, how can it handle your request?
F
...only one zone, so that zone C has no companion registry 2, so it thinks: I'm the only source of truth for the registry.
G
And then I need to ask the other one. Yeah, that's exactly the thing: we have all the metadata in the buckets right now, and that's way too slow to be able to clean up data that's not needed anymore.
G
That's why we need to move it to a database, because you can't search in the bucket efficiently enough when we have millions of entries, and that's why we need to move the metadata out of the bucket. So the data will still be there, but the metadata needs to move out into a real database. That's the whole thing!
G
Okay! So it's not about moving blobs, it's about moving the metadata describing the blobs, moving it out of the bucket for performance reasons. And I think there was some idea (I don't have the details here) that there's some information in the blob telling you: okay, this is hosted here, or no, this has metadata on the other side, or it's not found here.
G
So I'd need to ask over there if it's there; that's how it's working. And so I think moving a different cluster over to be a registry 2 using the database should be possible by, I don't know, putting it into maintenance, switching the configuration to use the Postgres database, and then turning it on again. That's how I would envision it should work, but maybe we really should have someone from the package team here, because they know all the details; we are just guessing all the time, right? So...
F
Yeah, yeah, my point is that you can't start migrating data until every installation of registry you have has a registry 2 companion.
G
...need production to use maintenance mode, and there were also some other configurations (I forgot the name) about disabling some filesystem lookups, or something like that. So I think they have some mechanisms built in to do this transition, but I'm not enough into the details; you really should talk with the package team here. But as I mentioned...
F
In that case it makes sense, but still: as soon as you do the... the first one that you put in the paired configuration... the other one must be in maintenance mode, because you don't know where the request can go, right? So it would be better, if it's possible, to have both of them, but both in read-only mode... sorry, registry 2 just in read-only mode.
B
And also I'm assuming, as we do deployments, that it doesn't matter if these end up on different versions for a period of time, as they will, depending on how we deploy this out: one of the two will be upgraded first. Cool. Okay, great, that's helpful, thank you! I'm putting together an issue which has these things laid out, and the sort-of open questions, so I'll ping you all on that and also share the longer write-ups that hopefully answer more of these questions, so we can move that forward, but...
B
Yes, that's a good point. Yes, I'll do that. Super, thanks very much, everyone. Is there anything else anyone wants to talk about today?
B
Cool. Are we clear on kind of what next steps we've got for the API service? Cool, nice, good stuff. Thanks, everyone, thanks for the discussion here and thanks for joining us. Take care, bye.