From YouTube: Discuss Staging timeline for Sharding team (2021-10-05)
Description
https://docs.google.com/document/d/1O-ykLHFybv-JapZRRNy0FNqr-WXwP58aWiAMgLiegq8/edit# (internal only)
A
So, item number one: please record — we're recording. We'll put this on the YouTube playlist for DB provisioning, and I'll include a link here when we're done. Number two is mine, which is just to discuss the timeline. Craig had the first comment here. He just wanted, I think, to state the things he thinks are important, and I pretty much agree here. The immediate need is to make sure that we have this Patroni cluster configured as soon as possible on staging. I don't know — and maybe, Fabian, since you're just coming back from vacation, maybe you're not fully caught up — I'm not sure what the urgency is. Maybe Dylan: do you know, like, how soon? To what extent are we blocking the sharding team right now, and what is the timeline for the sharding team for getting this cluster available on staging?
C
All right, I think I got my stuff sorted — yes, I was preparing for my statement. I mean, it's hard to say whether we're blocked or not, but we have a rollout plan where the sooner we get this stuff up there, the sooner we'll be able to start testing some of the application changes — basically, getting a cluster up and running.
C
So we're happy for this work to start today, or to be deployed to staging today, for example, if that helps. But if I were to say when you might be blocking our ability to actually test some of the application stuff we want to get out to staging, it would probably be on the few-weeks timescale.
A
Okay, and yeah — I understand this first step really has nothing to do with the sharding team or the application, because we're just deploying the cluster and expanding the replica pool. It's not like you're going to be connecting to a new database endpoint yet; in fact, we don't have the changes, I think, in Omnibus or the charts to support that anyway. So — so, I guess —
A
The priority here, right, I think, is to get the cluster deployed into staging, just to make sure it can be configured properly and monitored properly, with the idea that soon — like in the next couple of weeks, hopefully — we'll be able to give you guys a database endpoint that you — that the application — can use for the CI table, right?
C
Yeah, and even in addition to getting it running and monitored, there's setting up a standby Patroni cluster connection between this new cluster and the existing ones. So that's also something that needs to be tested: that it can be configured, and that it doesn't double the latency or something. It might be an interesting fact for us to learn that a standby cluster consistently has double the latency of the main cluster — or worse, for example.
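A standby Patroni cluster of the kind discussed here is typically bootstrapped by pointing the new cluster at the existing cluster's primary. A minimal sketch of the relevant configuration, assuming the standard `standby_cluster` bootstrap option and a hypothetical hostname (`main-patroni-primary.example.internal` is invented):

```yaml
# patroni.yml fragment (sketch) — standby-cluster bootstrap; hostnames are hypothetical
bootstrap:
  dcs:
    standby_cluster:
      # Stream from the existing main cluster's primary endpoint
      host: main-patroni-primary.example.internal
      port: 5432
      # Seed the standby leader with a base backup before streaming begins
      create_replica_methods:
        - basebackup
```

With this in place, the standby cluster's leader replicates from the main cluster instead of accepting writes, which is also where the replication-lag and latency questions raised above would be measured.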
B
So I just pasted this from the agenda from yesterday. I was not in that meeting, but I think these are sort of the high-level goals we're hoping to accomplish in October: provision the CI cluster, deploy to staging, get the replication going, and then test our application changes. So I think, in terms of timing — you know, if we want to account for a couple of weeks or so for us to actually figure out how to configure the application, and there will be issues with this —
B
You know, that means that this cluster needs to be provisioned within the next week-ish or so, right? So we're not talking a month, tomorrow.
C
Yeah, and phase three — we may be able to hack the application to do phase three even before; there's some small change we need to get in. But then, you know, we could probably deploy phase three on the application for a subset of the GitLab deployments in some way, to read from CI replicas. So all these things we can do to take smaller steps, but yeah.
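As a rough illustration only — not the team's actual configuration — reading from CI replicas for a subset of deployments could look something like a separate `ci:` database entry with load-balancing replica hosts. All hostnames and database names below are invented:

```yaml
# config/database.yml fragment (sketch) — hypothetical hosts and names
production:
  main:
    adapter: postgresql
    host: main-pgbouncer.example.internal
    database: gitlabhq_production
  ci:
    adapter: postgresql
    host: ci-pgbouncer.example.internal
    database: gitlabhq_production_ci
    load_balancing:
      # Reads for CI tables fan out across these replicas
      hosts:
        - ci-replica-1.example.internal
        - ci-replica-2.example.internal
```

Rolling this out to only a subset of deployments would then be a matter of shipping the `ci:` entry to those deployments first.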
C
I wish Camille was here to answer this question, because I was kind of under the impression that we had done the Omnibus stuff and that we're ready to go to configure Omnibus. Do you know, Fabian?
A
It's all we use — actually, we don't use Omnibus at all anymore for Rails, because all Rails deployments have been moved to Kubernetes. The only things we use Omnibus for are Pages, which doesn't connect to the database, and Gitaly, which doesn't connect directly to the database. So we need to make sure the charts are changed.
C
Yeah, tomorrow — maybe there's a misunderstanding we had on what we thought we needed for this. Sure, we can check if cloud native is actually done or is going to require more.
A
To me — because I think it's really, like, a simple — yeah, it'll be a simple thing to do. You know, so I don't think it'll be a blocker, but it's just something we need to make sure we have covered. Okay, so on to item 2b. I just wanted to cover, or kind of summarize, the work we have remaining for the repeatable database provisioning.
A
But it sounds to me, based on the conversation we just had, that we are going to do the Chef database cluster, and maybe do the repeatable database cluster provisioning in parallel — at least, I don't think the items in 2b are going to be able to be completed.
B
To interrupt — I think, from my perspective, and I hope that's understood: for us, the main concern from the sharding group is really about when these things can happen in a way that makes the infrastructure team comfortable. Overall, I don't think there's any requirement that we use Ansible or Chef, as long as the database is there.
A
Yeah, I guess the only issue I potentially see is that I don't know how long we're going to be in staging before we move to production, and my hope was that we could be using the repeatable database provisioning thing, you know, soon. And obviously, if we do it in parallel, then I guess we can switch at some point, when we feel comfortable with the Ansible step.
B
And my understanding was that the main concern raised by Jose was having two different systems in use at the same time. So I think maybe we have both in Chef, and then at some point, you know, if there's a desire to move to the repeatable database creation, we move — and that's what that would entail.
A
Yeah, I think that's possible. Okay, so then on to item C. So, Alejandro, I think you're probably the closest to this, right — to the registry DB work. Does it hurt you to kind of think about doing this again for a new Patroni cluster, and what is your best approximation of the number of days it would take to just do staging — you know, spin up another set of VMs?
A
I think what we'll likely do is just switch 100% onto this until it's done, and so maybe — like, I will definitely be able to help as well, you know, if it makes sense to have more than one person. For the registry stuff — did you do most of that work yourself, or did you...?
D
No — it will definitely go faster if there's more than one person. What I'm thinking of doing is going back to what I had to do for the registry on staging, and then just making a list of what needs to be done; then we can estimate the days, and maybe even divide the work at that point.
C
Is there — is there a summary you could give me on what's different about the registry Patroni cluster versus our main Patroni cluster? Because I'm familiar with the architecture we have, you know: a dedicated PgBouncer host that talks to the primary, and then the PgBouncers on the data nodes themselves — I'm familiar with that for the main database. But are there any differences with registry that would be important to highlight here?
D
It's the same as the main cluster: we have PgBouncers that connect to the main Patroni host, and each Patroni host has PgBouncers running on it as local processes. So, yeah.
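The layout described — a dedicated PgBouncer tier forwarding writes to the primary, plus local PgBouncer processes on each Patroni node — can be sketched as follows; hostnames, ports, and database names are all hypothetical:

```ini
; pgbouncer.ini on the dedicated PgBouncer host (sketch; names are invented)
[databases]
; Writes are forwarded to whichever node is currently the Patroni primary
registry = host=registry-patroni-primary.example.internal port=5432

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction
```

Each Patroni data node would additionally run local PgBouncer processes with a similar `[databases]` section pointing at `host=127.0.0.1 port=5432`, so that read traffic pools locally on the node itself.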
A
I can help on this as well, and what I would really love is to get this done this week — like, in the next couple of days, even. But I think that depends on the scope of the work. I'm — I'm really hoping it's just a copy-and-paste of what we already did, so it's going to be pretty simple.
A
Two? Okay, great. Okay — is there anything... is there something left on the agenda? Is there anything else anyone here wants to cover before we end the meeting?