From YouTube: 2022-05-04 Sharding Group sync EMEA/AMER
A
Hello, we are in the sharding group sync, Wednesday, May 4th.
B
Cool, I have the first one, sort of summarizing some bits from the morning. Dylan and Nicola worked on some disaster recovery bits and pieces, and I think they came to the conclusion that if we take downtime, setting up logical replication is not going to be worth it. I think there were benchmarking concerns and complexity concerns.
B
So if that holds, I think that's good. It eliminates an entirely new thing, a new class of errors, and I think we're coming to a place where a rather boring plan is forming for this, and Jose can let us know how insane or good it is. But essentially, what we will aim to do is adopt the database upgrade runbook.
B
That runbook already blocks all of the writes on production up to the point where the database upgrade would essentially start, and we insert our own failover mechanics at that point. Then, once that is done, let's say on the happy path, we execute the rest of the database upgrade runbook to return it to production. The advantage of this is that it's been done before; people have tested it, and I think we have some confidence in it.
B
Pretty boring then. Yes.
C
There's one point here: the platform changes. What I'm saying is that all the configuration of nodes, with Cloudflare, or what is in Kubernetes or not, can have changed since that upgrade. But I agree we have this automated, and it's something we are also looking at for the OS upgrade.
B
We can install triggers to block writes on the main database. To be fair, I'm not exactly sure what that would all entail, and we should write that down, but we should be very clear that this is what we can put in place temporarily to make sure this does not happen. I think that's good. Then an important consideration is what happens if we fail over, and by failover I mean essentially promoting the CI standby cluster to become writable.
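A minimal sketch of what such a write-blocking trigger could look like, assuming PostgreSQL with plpgsql and psycopg2; the table name ci_builds and the connection string are illustrative placeholders, not details from the meeting:

```python
# Sketch only: block writes to a CI table that must stay read-only on the main
# database during the maintenance window.
import psycopg2

BLOCK_WRITES_SQL = """
CREATE OR REPLACE FUNCTION block_ci_writes() RETURNS trigger AS $$
BEGIN
    RAISE EXCEPTION 'writes to % are blocked during the decomposition window',
        TG_TABLE_NAME;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER ci_builds_block_writes
    BEFORE INSERT OR UPDATE OR DELETE ON ci_builds  -- hypothetical CI table
    FOR EACH ROW EXECUTE FUNCTION block_ci_writes();
"""

def install_write_block(dsn: str) -> None:
    """Install the write-blocking trigger on the main database."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(BLOCK_WRITES_SQL)

if __name__ == "__main__":
    install_write_block("dbname=main host=primary.example.internal")  # placeholder DSN
```

In practice something like this would be generated for every CI table and dropped again afterwards; the exact shape of the block is the part the runbook would still need to spell out.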
B
If that somehow fails, which I think is very unlikely, but if it goes wrong, what we need is a good way to roll back quickly, and ideally not to, say, phase two, but to exactly the same state it was in before we executed the failover. That's in point C below; we can talk about the mechanics of that.
C
Yeah, I understand your question better now. The whole point is that at the moment we have read-only traffic on the CI cluster. If we promote, we cannot go back to this state. What we could eventually do is roll everything back to the main cluster and recreate the CI cluster based on GCS disk snapshots.
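A small illustration of the one-way nature of promotion described here: once pg_is_in_recovery() flips to false on the CI cluster, it is no longer a standby of the main cluster and can only be rebuilt, for example from snapshots, rather than re-attached. This is a sketch under assumptions; the connection string is a placeholder.

```python
# Sketch: check whether the CI cluster is still an unpromoted standby.
import psycopg2

def is_standby(dsn: str) -> bool:
    """True while the node is still replaying WAL, i.e. not yet promoted."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("SELECT pg_is_in_recovery();")
        return cur.fetchone()[0]

# Before the failover this should print True; after promotion it prints False,
# and from then on the only way back is rebuilding the cluster (e.g. from snapshots).
print(is_standby("dbname=ci host=ci-standby.example.internal"))  # placeholder DSN
```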
B
Another idea, which is costly, but may actually be something we could do, and I know it's not going to make people very happy: we could have a second standby cluster that is replicating and that we are not touching, and then, if we burn down our first standby cluster, we just re-point to that one, because that should be very fast, and that could also be tested in staging beforehand.
B
One thing is we would have to create another CI environment, but at that point it would give us a lot of confidence that we could essentially say: okay, this has catastrophically failed, we're pointing over here, and we are pretty much immediately back in the state we were in beforehand. So that may be, you know, a boring path forward.
C
Yes, and I think we could have this backup cluster with fewer nodes, because if we are only attempting to serve the read-only traffic that we have today, and from my understanding the boxes are pretty idle, perhaps with four or five nodes we're fine.
B
So maybe that's the path to go forward, because something Camille was concerned about is that if we're not confident we can quickly roll back, then we're a little bit in a pickle, because then we have to make trade-off decisions in the moment. Ideally we have this all pre-recorded and can say: if this step fails, we will do X. Then that's a lot easier. Yeah, cool, that sounds good.
B
The other thing is that we're very likely not going to bother with logical replication, and so the last scenario is that we complete this entire process, we have two databases now, and then a couple of hours later something breaks, or we discover a problem. I think the consensus from the team is that the ship has sailed.
B
At that point we can only roll forward, because going back would be so difficult that it is much less disruptive to say, for whatever reason, and I'm making things up here because it's really unknown, that some customers can't schedule CI pipelines anymore and we don't know why, but we will have to fix that and roll forward. I think that's the way to approach it, and I think this is sort of the plan.
A
I'm curious, because we did outline some potential scenarios, and it sounds to me like these are still scenarios we're worried about, but we think that if they happen, or we notice them a few hours or a day later, specifically the split-brain scenario, it may just not be worth it to try to roll back to some previous state, and maybe not worth necessarily trying to reproduce that, if I'm understanding correctly, but instead just roll forward and try to address those problems there.
B
I haven't spoken with Dylan, he was not there in the morning, but I think this is broadly correct. What we can say is, let's say the scenario is: we will have writes on the main database for CI. What can we actually do to prevent this? Maybe we can install triggers and get to the point where we are very, very certain that this is not the case. And then, because we have downtime...
B
We can also monitor this and ask: when we run QA, are we actually getting any writes here? And we can say no. So we can focus a lot on prevention and monitoring during the process, to confirm that these known scenarios are not happening. And then, if something really slips through that we don't know about...
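A rough sketch of the monitoring idea above, under assumptions: diff PostgreSQL's per-table write counters on the main database before and after the QA run, so we can state that no writes landed on the CI tables. The ci_% table filter and the connection string are illustrative, not from the meeting.

```python
# Sketch: confirm that no writes hit the legacy CI tables on the main database
# while QA runs, by diffing the per-table write counters.
import time
import psycopg2

COUNTER_SQL = """
SELECT relname, n_tup_ins + n_tup_upd + n_tup_del AS writes
FROM pg_stat_user_tables
WHERE relname LIKE 'ci_%'
"""

def snapshot(dsn: str) -> dict:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(COUNTER_SQL)
        return dict(cur.fetchall())

def writes_during(dsn: str, seconds: int = 300) -> dict:
    """Per-table write counts observed over the monitoring window (e.g. a QA run)."""
    before = snapshot(dsn)
    time.sleep(seconds)
    after = snapshot(dsn)
    return {t: n - before.get(t, 0) for t, n in after.items() if n != before.get(t, 0)}

if __name__ == "__main__":
    print(writes_during("dbname=main host=primary.example.internal"))  # expect {}
```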
D
I'm more worried not about the application, but about everything built around the application, outside of it, that may be consuming this data, or reading it, or maybe inserting data. The application is something we see and control, and we can reason about it, but we have built so much tooling around it outside of the application.
D
Because, let's say we have the PgBouncers for CI: we see the CI traffic go to the PgBouncer, the application analyzes all queries, and we know they are properly placed. But I know that in the past we connected to the primaries directly to do some operations. We are doing a lot of things outside of the main access pattern established for the application. So this is really my question: what else is there that we are not aware of?
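One way to start answering that question, sketched under assumptions: list which users, applications, and client addresses are connected to the primary, and flag anything that is not one of the expected PgBouncer hosts. The pooler address list and connection string are hypothetical placeholders.

```python
# Sketch: surface clients that connect to the primary directly instead of
# going through PgBouncer.
import psycopg2

EXPECTED_POOLERS = {"10.0.0.10", "10.0.0.11"}  # hypothetical PgBouncer hosts

ACTIVITY_SQL = """
SELECT usename, application_name, client_addr::text, count(*)
FROM pg_stat_activity
WHERE client_addr IS NOT NULL
GROUP BY 1, 2, 3
ORDER BY 4 DESC;
"""

def direct_connections(dsn: str):
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(ACTIVITY_SQL)
        return [r for r in cur.fetchall() if r[2] not in EXPECTED_POOLERS]

for user, app, addr, n in direct_connections("dbname=main host=primary.example.internal"):
    print(f"{n:4d} connections from {addr} as {user} ({app or 'no application_name'})")
```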
D
There is this data warehouse that, you said, is using GCS snapshots; there is Database Lab, which is also using snapshots. They will also need to be adapted.
D
So what else is there that is maybe consuming data, or maybe inserting data into the database? Because we're going to install triggers that will prevent inserting data into tables on the wrong database. This is something we actually tested and it seems to work well; it just raises an exception on the query. So this is our way to ensure that there is no split brain for writes.
B
I think that's a valid concern, and I'd hope we'd notice some of this in staging already. We may want to reach out to these teams proactively, but is it fair to say that these are mostly our internal tools for various purposes? This is not customer-facing impact, right? If, I don't know, our analytics scraping breaks, that's bad, but it's not going to have an impact on our GitLab.com customers.
B
You know, please reach out, because we need you to be aware of forthcoming changes in the next couple of months. We can do a little bit of outreach and comms and try to find these things, because I'm not sure we can easily find a list of those, which, by the way, we should create: a list of the internal applications and things that directly access the database. Maybe that already exists, I don't know.
A
Sounds like, in addition to just running this question by Infrastructure, maybe we also need to make sure the data team is aware, and maybe communicate it more widely and see what we get back.
B
Okay, but keeping all of this in mind, I tend to agree with Nick that a plan is solidifying, with people not raising many hands to say, hey, this is too risky or crazy. Of course there are still unknowns; we have to update things to make sure this all gets executed and whatnot, but that's maybe also a question for you, Jose.
C
Makes sense. I understand the concerns here with the external applications, for example the data warehousing as well; it would be great to talk with them too. I agree with your point that some of the tools we have here are a bit internal; a lot of it is monitoring, and we are already looking into that with the CI cluster and so on. So I believe if we have, for example, a problem with some metric regarding the database, we could fix it, or the impact would not be so high.
B
I also think, and this may be a general thing, that we don't need to have answers to all of those things before we, for example, move to actually trying this in staging. But in order to do that, we need a clear rollback scenario, as in: hey, we're very confident we can roll back. I think this is something where I would want to gather all of those things and then actually try to go to staging pretty...
B
...quickly, I mean. Sensibly, with testing and understanding, but these things are most relevant for production, and I think we can probably gain a lot more confidence if we hammer that out and move to staging.
D
Yes, I think it's super important for us to focus on staging and rollback. There is also the problem with staging that our process means data loss, so very likely the first staging attempts will simply be:
D
we execute our rollout plan up to the point where we would enable the user-facing traffic, and then we simply roll back. We would probably try this process two times before we fully roll out staging to use two databases, because the tricky part is that, since we don't have a way to roll staging back without data loss after it has been running for a longer period, we very likely need to start with the failure path instead of the happy path.
B
Yeah, I have to think about it in a bit more detail, but I tend to agree with you that testing the rollback is a really important thing that I'd like to maybe do first, and then we can say: yes, we can actually roll back in this fashion and we're happy with it.
D
I mean, when we're rolling out staging, my best understanding is that we only roll it out after the moment we run QA and the QA succeeded. So in our upgrade procedure there is a decision point: do we open traffic, or do we roll back? And basically that means we actually test the full happy path each time.
B
I think we should set a target date for when we want to do this on staging, and that needs to take 15.0 into account. But we should pick a date and say this is what we are aiming for, and then use that as a forcing function to say this is what we all need to have in place at that point. We always slip, but I think it would be useful to say: let's try to get it done in, I don't know, two weeks, three weeks, whatever.
D
I think, even looking at the migration work, we don't need the migration work if our purpose in rolling out staging is to perform a rollback. The regular application QA will work just fine without what is still in the development pipeline.
D
We need the migration work when we perform the final rollout that keeps staging decomposed. So, at least from my perspective, I don't see development blockers.
B
Cool, so let's find a date and target staging for then. I think two weeks is not a bad idea, because in two weeks is also when most of the 15.0 work should be done, and that's a good point in time; maybe after the 15.0 release has shipped, we can do this.
D
I mean, two weeks seems pretty challenging, because we'll actually be preparing these Ansible scripts until that moment. And you mentioned the database benchmark testing environment; we would likely have to execute that before.
B
I'm not saying you have to do all of this in two weeks, but maybe we should aim for two to two and a half weeks, and then actually review with the infrastructure team who would need to be able to help with this. I think the provisioning will take some time and it will be a challenge, but if we can say: look, we have a plan here.
B
These are the changes we still need to make, this is the work that needs to happen, and we want to move relatively quickly on this because it's going to be important for us. Then that puts everybody in a position where we can say, okay, maybe Raphael can help with this as well, maybe someone else. I think these discussions are happening, and then some of that work can maybe be parallelized. I'm not proposing you do all of it.
D
I mean, if we were able to test the first rollback on staging by the end of this month, it would, I think, be desirable for sticking with our timeline for when we want to deliver this. I agree, because if we slip past this month, I'm pretty sure that production is going to slip as well.
D
Yes, because there is simply not enough time for us to run staging long enough with the two databases. I'm still anticipating that for this major piece we probably need something like four weeks of full run time; for me personally, that would be a comfortable buffer.