A: Okay, hello everyone. We are here to sync on the deployment plan for the updated registry for gitlab.com in production. Looking at the agenda, I see that Amy left a couple of questions, so maybe we can start with those.
A: One of them asks whether the registry will be the same version, the same version with different configuration, or a different version. The answer is that it will be the exact same version. The version running right now in production for gitlab.com, and also for self-managed instances that are up to date, already has all of the database metadata functionality built in, but it is all disabled by default.
A: And then Amy also asks if the issue that we created several months ago in the delivery issue tracker is still accurate or needs to be reviewed. That issue was mostly to look for tips and ideas on how to approach the database migrations and the deployment process.
B: Sure. So I'm just curious as to what the failure scenario is going to look like, since we're going to have two registries in operation and they're going to be talking to each other to some extent. I want to make sure that when one fails, we still have some method of not blocking traffic. I don't want to have a situation like that, if possible. I don't know what the failure scenario looks like today, but say the new registry fails while the old one is still in operation. What happens?
A: So the basic idea behind having the two registries is that the current one will continue to be the main one and the single entry point, and it will serve requests for all repositories that existed before the date we deploy this side-by-side registry to production.
A: So requests for repositories that already exist will be served by the current registry; they won't go to the new one. But let's say that a user wants to push an image to a repository that didn't exist before.
A: Those who want to push or pull images for existing repositories will not be affected, but if the new registry fails, those trying to push images to new repositories will get a service unavailable response back. Something that we can do is try to proxy, and if that fails with a service unavailable response, we can serve the request from the current registry.
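As a minimal sketch of that proxy-and-fall-back idea, assuming a small Go shim sitting in front of both registries (the backend URLs, handler shape, and the 503-triggered retry are illustrative, not the actual GitLab.com routing layer):

```go
package main

import (
	"bytes"
	"io"
	"net/http"
)

// fallbackProxy tries the new registry first and, only when that attempt
// comes back 503 Service Unavailable, replays the request against the old
// registry. Backend URLs are hypothetical placeholders.
type fallbackProxy struct {
	newRegistry string // e.g. "http://registry-2.internal"
	oldRegistry string // e.g. "http://registry-1.internal"
	client      *http.Client
}

func (p *fallbackProxy) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	// Buffer the body so it can be replayed on fallback.
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}

	for _, backend := range []string{p.newRegistry, p.oldRegistry} {
		req, err := http.NewRequest(r.Method, backend+r.RequestURI, bytes.NewReader(body))
		if err != nil {
			continue
		}
		req.Header = r.Header.Clone()

		resp, err := p.client.Do(req)
		if err != nil {
			continue // network error: try the next backend
		}
		if resp.StatusCode == http.StatusServiceUnavailable && backend == p.newRegistry {
			resp.Body.Close()
			continue // fall back to the old registry
		}

		// Relay the chosen backend's response to the caller.
		for k, vals := range resp.Header {
			for _, v := range vals {
				w.Header().Add(k, v)
			}
		}
		w.WriteHeader(resp.StatusCode)
		io.Copy(w, resp.Body)
		resp.Body.Close()
		return
	}
	http.Error(w, "service unavailable", http.StatusServiceUnavailable)
}

func main() {
	p := &fallbackProxy{
		newRegistry: "http://localhost:5001",
		oldRegistry: "http://localhost:5000",
		client:      http.DefaultClient,
	}
	http.ListenAndServe(":8080", p)
}
```

Only a 503 from the new registry triggers the replay; any other response passes through unchanged.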
C: Yeah, I'm not a huge fan of the idea of falling back for new repositories. They should be served by registry 2; I'm not a big fan of falling back to registry 1, because I don't think we have a way to reconcile tags, which are going to change over time. As well as that, since most of the registry traffic is write traffic, it seems like we're going to give ourselves this data divergence headache for not a lot of gain.
C: And since most people are reading, we wouldn't get those reads back even if we fell back to registry 1 in this case.
B: I don't think I have anything else for that particular question, so I will move on to the next one.
B: Is there any need for us to consider having a canary deployment specific to the new registry? Currently we already have a canary deployment for production as is, where, and Jarvie can correct me, I think we serve five percent of the traffic to the container registry right now. Do we need to consider a canary for this new registry configuration? I'm leaning towards no, based on the fact that this is specifically for migration purposes and not necessarily for serving traffic, but clarification is always useful.
A: Yeah, I think that was one of the options that you mentioned in the issue. Right, I think that's a possibility; it's something that Char mentioned: we could use just one of the zones for the new registry.
A: From a functional perspective, I think that could be useful, because as we start moving more and more repositories from the existing registry to the new one, we will see the traffic on the new one increase, while the traffic flowing through the existing one stays the same, because everything flows through it. But the traffic that it actually serves, the content, will decrease, right? So we could perhaps adapt, and when there is a significant portion of the traffic being served by the new registry, we could, for example, give it one more zone. But then that would leave the existing one with just one zone, which might not be ideal, given that all traffic will continue to flow through that cluster even if it is not served there.
B: To be clear, when I talk about the zonal configuration, I'm just talking about the fact that we've got two stages: the main stage and the canary stage. The goal for canary is that it receives the newer version of the registry for some period of time before the main stage does.
D: Yeah, I think this could be problematic, right? Because, let's say we have... Registry... yeah, I mean, I'm not sure how this... So the way it would work is that we would have registry 2 deployed in the canary namespace, so that the five percent of traffic going to registry.gitlab.com gets routed to canary, which would then be proxied to the registry 2 installed in canary.
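As an aside, the percentage split described here amounts to a weighted choice at the routing layer. A minimal Go sketch, where the five percent figure and the stage names come from the discussion but the real split is done by the load-balancing fleet, not in application code:

```go
package main

import (
	"fmt"
	"math/rand"
)

const canaryPercent = 5 // the ~5% split mentioned for registry.gitlab.com

// pickStage decides which stage receives a request: "canary" (which would
// proxy on to the registry 2 deployment) for roughly 5% of traffic,
// "main" for the rest.
func pickStage() string {
	if rand.Intn(100) < canaryPercent {
		return "canary"
	}
	return "main"
}

func main() {
	counts := map[string]int{}
	for i := 0; i < 100000; i++ {
		counts[pickStage()]++
	}
	fmt.Println(counts) // roughly a 95/5 split between main and canary
}
```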
D: You know, with the configuration options enabled, we would deploy that to canary first, watch metrics, and kind of see how it's behaving before we graduate that change over to the zonal clusters. So I guess maybe this is more of an infrastructure question than an application question, because this is really just about improving the safety of deployments.
A: One thing is that we won't be able to do a gradual increment by percentage. When we proxy requests from the new registry to the old one, we can't do it on a percentage basis. Why is this? Because, for example, image uploads are concurrent: if you are pushing an image, there are multiple requests writing different blobs to the registry.
A: If we did percentage-based routing, we would probably end up with one blob being sent to the new registry and another one sent to the old registry, and that would be a major problem. So the way we thought about it, and the way we intend to do it moving forward with this plan, is based on the URI of the request, because the URI contains the path to the repository, so we can route using the path of the repository.
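A sketch of what that URI-based decision could look like, assuming a hypothetical allowlist of migrated repository path prefixes (the regular expression, prefix list, and function names are illustrative). The key property is that the decision is a pure function of the repository path, so all of the concurrent requests making up one image push land on the same registry:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// repoPathRe extracts the repository path from a registry API URI such as
// /v2/gitlab-org/build/cng/manifests/latest.
var repoPathRe = regexp.MustCompile(`^/v2/(.+?)/(?:manifests|blobs|tags|uploads)`)

// migratedPrefixes is a hypothetical allowlist of repository path prefixes
// that have already been moved to the new registry.
var migratedPrefixes = []string{
	"gitlab-org/",
}

// routeForURI decides which registry serves a request. Because the decision
// depends only on the repository path, routing stays consistent across the
// concurrent blob uploads of a single push, unlike percentage-based routing.
func routeForURI(uri string) string {
	m := repoPathRe.FindStringSubmatch(uri)
	if m == nil {
		return "old" // e.g. the /v2/ ping: keep it on the existing registry
	}
	for _, p := range migratedPrefixes {
		if strings.HasPrefix(m[1], p) {
			return "new"
		}
	}
	return "old"
}

func main() {
	fmt.Println(routeForURI("/v2/gitlab-org/build/cng/blobs/uploads/")) // new
	fmt.Println(routeForURI("/v2/some-user/app/manifests/latest"))      // old
}
```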
D: Okay, so that helps with the gradual, phased rollout. And I think what we'll probably want to do is use gitlab-org, or rather, maybe gitlab-org would be one of the first things we try for internal use, or maybe something even more restricted than that. But I assume that would only be for the initial rollout, right? You would start with that, and then very soon after that you would use registry 2 for all new projects, and then...
A: We would start small, as you said, with our own repositories, just one or two of them, and we would have to wait, say, a month to make sure everything is stable. Then we can, for example, start moving other repositories over and start serving new repositories outside of the gitlab-org group.
A: We also have a feature flag which will enable us to continue writing that metadata to the bucket as well, in parallel. The new registry won't use it, but if we need to sync data back, it means we can just sync what is in the bucket into the old one and have the old registry serve all requests for repositories that were created in the new registry.
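A rough sketch of the shape of that flag-controlled dual write, assuming a simple boolean flag and made-up store interfaces; the real registry reads its feature flags and storage configuration elsewhere:

```go
package registry

import "log"

// Manifest stands in for whatever metadata the registry persists.
type Manifest struct {
	Repository string
	Digest     string
	Payload    []byte
}

// MetadataStore is implemented by both the database and the bucket writer.
type MetadataStore interface {
	PutManifest(m Manifest) error
}

// DualWriter writes metadata to the database (the new registry's source of
// truth) and, while the write-to-bucket feature flag is enabled, mirrors it
// to object storage so the old registry could be re-synced from the bucket.
type DualWriter struct {
	DB            MetadataStore
	Bucket        MetadataStore
	WriteToBucket bool // hypothetical feature flag
}

func (w *DualWriter) PutManifest(m Manifest) error {
	if err := w.DB.PutManifest(m); err != nil {
		return err // the database write is authoritative
	}
	if w.WriteToBucket {
		if err := w.Bucket.PutManifest(m); err != nil {
			// The new registry never reads the bucket copy, so a failed
			// mirror write is logged rather than failing the request.
			log.Printf("bucket mirror write failed for %s: %v", m.Digest, err)
		}
	}
	return nil
}
```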
A: And, of course, it's easier to do that if we don't have that much data to move around.
D: Skybacker, is that all you have for five? Yes? Okay, on to six. The question here is about database metrics. Since we're going to be interacting with the database, will we have something in Prometheus we can use, or logs? Specifically, we're always interested in seeing DB durations for any interaction you have with the database. We get this out of Rails, and it'd be nice to see it from the registry as well.
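The kind of metric being asked for, DB query durations exported to Prometheus, could look roughly like this in Go; the metric name, label, and helper are illustrative rather than the registry's actual instrumentation:

```go
package metrics

import (
	"database/sql"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// queryDuration records how long each database operation takes, labelled by
// operation, so dashboards can graph DB durations the same way they are
// graphed for Rails today.
var queryDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "registry_database_query_duration_seconds", // illustrative name
		Help:    "Duration of container registry database queries.",
		Buckets: prometheus.DefBuckets,
	},
	[]string{"operation"},
)

func init() {
	prometheus.MustRegister(queryDuration)
}

// timedQuery wraps a query and observes its duration on the histogram.
func timedQuery(db *sql.DB, operation, query string, args ...interface{}) (*sql.Rows, error) {
	start := time.Now()
	rows, err := db.Query(query, args...)
	queryDuration.WithLabelValues(operation).Observe(time.Since(start).Seconds())
	return rows, err
}
```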
D: Number seven, a question about bandwidth cost: just wondering if there are any changes in the way that we use bandwidth with the new registry.
A: It should be the same. Right now, if we have the two side by side and there is a request to download a blob that exists in an old repository, the old registry will proxy that to the old bucket in GCS.
D: Sounds good. Number eight: I think the way we have things configured now will work best. So, the way deployments will work, if we put registry 2 in the regional cluster...
D: So my question here is whether we need to be concerned about ordering. We often will deploy to the regional cluster first, or we may do them all simultaneously, but it really depends. I don't think we can really be guaranteed about the order of things, and that sounds like it could be a problem, right? I think the worst case scenario would be that we deploy to the zonal clusters, which have the current registry, and it's configured to point to a registry 2 which doesn't yet exist. That would be a problem.
D: Okay, Henry, you have the next one.
F: Yeah, please tell me if my internet connection is breaking up, because I have some issues right now. The current state in pre-prod for the registry is that we prepared the database for it, and I also set up a console node so that we can give developers access to the database as soon as we add them to the right groups. The question would be: which developers should have access to the database there?
F: But today I just tried to test the registry in pre-prod, and I saw some issues there: I can push images, but I don't get an image list back, for instance. So maybe we should first make sure the registry in pre-prod is working at all before we switch it over to using a database, right? And so the question is how we can get on with that, and also when we should do the switch, because I think just talking to an empty database will just break the registry, right?
F: That seems to be maybe a different problem. So maybe one of your team could also have a look at what's going on there, because I didn't figure out what's not working.
A: I can have a look at it. I think it's to do with what the configuration for the notifications is, so it's probably trying to ping the GitLab Rails API for some reason, but we can turn that off for pre-production. I'll have a look at it. And then, about the other points:
A: I think, to start with, just myself, early on, having access to the database instance would be good. And yeah, it's not useful if it is empty. I raised, I don't know if you saw it already, an issue inside the pre-prod epic. Let me share my screen.
A: Okay, so in the pre-prod epic there is an issue to import the data from the existing container registry into the one in pre-production. This has all the required steps to do the first deployment. Unlike production, the pre-production registry is tiny, something like one gigabyte, I think. So we can just import it all in a one-off operation with the registry CLI.
A: So the only thing that we need to do is either put the registry in read-only mode or remove it from the load balancer, and then run the import command to move the data from the current bucket to the new one and also into the metadata database. Then we can configure it to use the metadata database and the new bucket. Before this, we will also have to create a new bucket for the registry; I don't know if this is something that delivery will do.
A: If it takes time to do the side-by-side deployment, we can also do the same one-off, offline migration in staging if we want to. But this is only meant for us to be able to start testing and creating the dashboards with all of the metrics, based on the pre-production deployment. So we don't need the side-by-side deployment to do that; we just need the new registry to be updated and to have the new bucket and the database in place and filled, yeah.
B: I think what you're doing in pre-prod will probably be a good thing to send over to Distribution. That way, when customers need to do this for themselves, they've got these same steps outlined for them.
A: So that's a concern, which means we won't be able to see much more than we are already able to simulate locally, which is us doing some manual tests. So I was wondering if we could somehow fork some of the traffic that goes, for example, to the dev registry into the staging one during that phase. I know we have a lot of pipelines building against the dev registry, so perhaps we could have a period of time where we would switch those to use the staging one.
B: We could probably try to figure something out. I think we'll have a few options, whether it be forking traffic, or maybe we could set up a synthetic CI project that just does random stuff against a registry. At some point, if we don't have one already, we should probably have an issue to address that concern.
B: I just have a clarification question, and I'm sure this is probably already documented in the issue, but the second registry that's being brought online is only taking traffic for new tags, it's not project specific, correct?

A: It's per project, yeah, it is per project.

B: Okay.
A: Yeah. So I think the other big thing that we need to do is figure out what can go wrong and how to act on that.
A: So I think we should probably also use staging to test those failure scenarios, and I'm thinking of things like the database failing, because given this is a new component, there are no runbooks or any instructions on how to troubleshoot, fix, or recover anything. So I think we'll...
A: Okay, I'll create an issue for that under the staging epic. That's probably something to do in staging rather than pre-prod, because there is no custom database clustering in pre-prod. Precisely, yeah.
A: Okay, so I'll open up a few issues and we can get the registry operating in pre-prod, and then we will continue discussing whatever is needed to do the side-by-side deployment in staging. But before we transition from pre-prod to staging, and it may be too early to ask this, do you have any kind of estimate on how long it will take us to get the side-by-side deployment working once we are ready, in staging of course?
B: I don't personally have an estimate, but you know this is a relatively decent priority for our team, so once we get pre-prod going, staging will be our next target. As soon as we get pre-prod running and working as desired, we'll probably just make sure that we've got all the loops closed, you know, metrics and testing procedures nailed down and issues written up. I think we should be able to start that work pretty much immediately after that.
A: Yeah, one of our biggest focuses in pre-prod will really be metrics, because we can do that work once and then it will just be reused for staging and production.