From YouTube: 2022-02-23 Sharding Group Sync
B
Okay, so the quick summary from the morning: the first thing is, there were some concerns highlighted by Adam regarding a phase six issue with the performance of some queries, so for Pat and Douglas it may make sense to read up on the context if you're interested.
B
In essence, we had a bit of a discussion on how to approach this and came up with a course of action to address it. There are essentially two things that we can definitely do. We're facing performance issues with a query when there are many, many groups and many runners involved, and one of the big culprits there are QA groups, which is sort of self-inflicted.
B
So my question was: do we have to solve the problem of making this a very performant query, or is this actually a thing that doesn't happen as often for our real customers, so that we can apply limits and avoid doing this in the first place? Adam is looking into surfacing some of the data.
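As a rough illustration of the limiting idea being discussed, here is a minimal sketch assuming a Postgres `namespaces` table with a `parent_id` column and a psycopg2-style connection; the threshold and the helper itself are hypothetical, not actual GitLab behavior:

```python
# Hypothetical sketch of the "apply limits" idea: cap how many groups
# the hierarchy walk may return before we ever join against runners.
GROUP_LIMIT = 1_000  # assumed threshold, not a real GitLab setting


def group_ids_for_hierarchy(conn, root_group_id, limit=GROUP_LIMIT):
    """Walk the group hierarchy, bailing out when it is too large."""
    with conn.cursor() as cur:
        # Fetching limit + 1 rows lets us detect an oversized hierarchy
        # cheaply instead of materializing tens of thousands of rows.
        cur.execute(
            """
            WITH RECURSIVE hierarchy AS (
                SELECT id FROM namespaces WHERE id = %s
                UNION ALL
                SELECT n.id
                FROM namespaces n
                JOIN hierarchy h ON n.parent_id = h.id
            )
            SELECT id FROM hierarchy LIMIT %s
            """,
            (root_group_id, limit + 1),
        )
        ids = [row[0] for row in cur.fetchall()]
    if len(ids) > limit:
        raise RuntimeError("group hierarchy too large; skipping runner query")
    return ids
```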
B
I think he posted in general recently, and some of the very large hierarchies, thousands of groups under a maintainer, look a bit like spam to me, so it may actually be avoidable. The second thing is to reapproach the problem a little bit by splitting it into smaller, more composable problems: maybe we can fix the first issue, and then it may be more acceptable that specifically returning a list of runners for tens of thousands of groups times out. That is potentially very rare, and maybe actually acceptable, rather than spending a long time perfecting this.
B
So the team will update on it. That's it, I think; we'll know more, but there is a concern, because the current implementation has issues and we need to find ways to address it. From a product standpoint, for me the best way of solving this would be to not allow users to do some of those things in the first place, but we'll see.
B
We have a merge request to run a fully decomposed CI pipeline across two databases, which essentially tells us if we're ready for phase four, but it also tests the end state for phase six: the application running on two completely different databases. It's a really interesting pattern, because the changes are now four lines.
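As a loose sketch of what running one application against two databases can look like (GitLab's actual mechanism is Rails multi-database support; the database names and table list below are assumptions for illustration):

```python
# Illustrative sketch only: route CI tables to a second database and
# everything else to the main one. DSNs and table names are assumed.
from sqlalchemy import create_engine

engines = {
    "main": create_engine("postgresql://localhost/gitlabhq_main"),
    "ci": create_engine("postgresql://localhost/gitlabhq_ci"),
}

CI_TABLES = {"ci_pipelines", "ci_builds"}  # illustrative subset


def engine_for(table_name: str):
    """Pick the engine that owns a given table."""
    return engines["ci" if table_name in CI_TABLES else "main"]
```

The point of the "four lines" observation above is that once every cross-database access is funneled through one routing decision like this, moving to a second database becomes close to a configuration change.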
B
It looks like we didn't do any work, but the reality is that over quite a few months we've extracted work from this MR and reduced it further and further until this is what's left, and it's very exciting because it is now in a merge-ready state.
B
Yeah, it was really cool. So we'll hopefully get that merged. And then lastly, there was just a note: there's already a little bit of additional context around formulating a disaster recovery plan. I have not done work on that; I will do more work on it. I think the main thing for us to do is to clearly outline what we intend to do in production.
B
For that, Jose has, I think, more experience with some of these sorts of change requests, and ultimately I think it will take the form of a "this is what we're going to do", and somebody needs to actually approve it in the issue before we do it.
A
Yeah, I'm also bringing this topic up in stand-up shortly, in about one hour probably, and I think the two teams, the infrastructure team and the sharding group, need to work on this disaster recovery plan together, because it's not only for our phase six deployment but also for ongoing operations.
C
The outcomes in terms of data loss across the scenarios range from no data loss at all, but very development-heavy, to some data loss, but very fast to execute. So I think it's going to be an interesting challenge, because at some point we're going to pick a scenario, and we're going to have to run that scenario a few times on staging or some environment, to ensure that if we have to do it, we have actually validated that all of these things are covered.
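To make that spectrum concrete, one purely illustrative way to write the scenarios down is as a small decision table; the scenario names and ratings below are placeholders, not anything decided in this meeting:

```python
# Purely illustrative decision table for the trade-off space described
# above; scenario names and ratings are placeholders, not real plans.
from dataclasses import dataclass


@dataclass
class Scenario:
    name: str
    data_loss: str        # expected data loss (RPO)
    dev_effort: str       # effort to build and validate
    time_to_execute: str  # expected time to run it (RTO)


SCENARIOS = [
    Scenario("full rollback, no data loss", "none", "heavy", "slow"),
    Scenario("failover with partial write-back", "minimal", "medium", "medium"),
    Scenario("fix forward past the point of no return", "some", "light", "fast"),
]

for s in SCENARIOS:
    print(f"{s.name}: loss={s.data_loss}, effort={s.dev_effort}, "
          f"time={s.time_to_execute}")
```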
C
So I think this is probably going to be coming your way at some point: also weighing the complexities of validating this, but also of executing it in an actual live environment like production, if we have to do it.
C
So I think this is a note about what may be coming your way very soon in terms of understanding these implications, because we're going to be balancing between no data loss, but very long and very heavy development, and some data loss, but very fast to execute, and we need to figure out our threshold and our matrix.
C
And what is the time frame that we accept before we make a decision on executing disaster recovery? Because I think in some cases we may simply say we are past the point of no return, and we would rather go to fixing instead of reverting.
D
So then we have to have a discussion, which I want to follow up on with Fabian, because the problem with the benchmark environments is that we don't have the application itself running; we have just synthetic load. We will need an environment with load and the application; then we can test the failover itself, or the whole process, and see if we are having these kinds of errors. Sorry, we...
B
I mean, we have a 50k reference architecture that is running on a decomposed database; it's now at phase four, right? So I think that would be something where we have an application and we can generate some load. It's still not exactly gitlab.com, but it may be something where we actually have the application and can do something with it. Yeah, it's a great point.
B
This is what we think we should do, right, and these are the alternatives if, for business reasons, we can't do it, but here are the downsides, right? Because there will always be trade-offs. We will not be in a position where we'll have no data loss and perfect recovery within, like, 30 seconds or something; that's just not going to be the reality. So I think that's how I'm thinking about it, and I don't think we know exactly yet what all of these scenarios are.
C
If we discover something that breaks, what do we actually do then? That is the recovery we were thinking about. Like the second scenario: we successfully executed the failover, we started writing, but then we actually discovered a critical problem for whatever reason, and what this reason is, is still to be defined.
B
We can't know ahead of time what problems we will have, and I think this is, I mean, that's the discussion to have, and I need to think about it a little bit more, but I think this is the most tricky thing, you know, in my disaster recovery brain.
A
Also, if we decide to roll back to the original database, as Kamil said, we are not back at stage six; we're back at stage one, or stage four maybe.
A
Okay, my question: do we actually need to identify the DRI from each functional group? But I think within this group I know the people; I suggested those names to work on this DR plan: Fabian, of course, and Jose, and I'm debating who would be the best to partner with Fabian and Jose.
B
I also would like to highlight that these are actually really good discussions to have, right? We're now at the point where we think we're going to do this, and we need to figure out what happens if things go wrong, but that's a much better problem than not even knowing whether we're going to do it or not, right? So I'm happy about it.
C
So I hope we don't end up just tossing a coin on this, but rather that we can actually evaluate each option and the amount of stress associated with it, because it's going to be a pretty big moment when we click the failover button.
C
And I really liked, when you were doing the PG upgrade, how you were kind of saying: I'm going to do that. Yes. I'm going to do that. Yes. In the end it was so boring to do; it just worked. And I was like: I would really like to do whatever we can to make this as boring as that was.
B
I mean, I think that we shouldn't underestimate, like... well.