From YouTube: 2020 03 23 Database Sharding Working Group
Description
No description was provided for this meeting.
B
Let me explain, or give a bit of context here. We raised concerns over the duration of the maintenance window. That would be great, because we should do it in two hours, and if the database is shorter and smaller, or not by a lot, let's say, after this migration, all tasks will be fast. What I mean by that process is checking indexes and so on, like statistics and so on. We were checking this and, of course, as I mentioned as well in the last infrastructure call, we can execute it.
A
Stunned silence, so the quick summary is: this is a massive table, about 30 percent of the database at the moment. If we can push that data off to external storage, then it won't have to be part of the Postgres 11 upgrade, but there are concerns about whether that actual migration can be done in time to get the upgrade done on time as well. So you've got kind of a circular dependency there, I know.
B
Okay, I'm working on this. To give a more detailed explanation of the timing: the last time we did a migration, the Patroni migration, the maintenance was about an hour and a half, and the Patroni setup itself just had to be restarted, we just rebooted each one of the hosts. But taking down the platform, stopping all the traffic, and restoring the traffic takes a lot of time. That's it.
D
Perhaps it's worth noting that the migration doesn't have to be fully complete, right? We can stop this migration at any time, and the sooner we start doing it, the more of an impact it's going to have, right. Have you considered running it until we do the upgrade, and then living with what we have at that time?
B
There is another issue, the development one, which I will link here: the rollback process. The SREs mentioned they were not comfortable, because they do not have a rollback process in case something goes wrong, like if something changes things in the data in ways we don't understand.
A
All right, all right, I'm gonna jump around a little bit, so a couple of things that have been done. I think we're gonna spend the majority of our time on the incidents list and talk about that. I want to talk about what's been done and what's happening next, so we can spend the remainder of the time on the incidents list and get kind of a shared understanding. So, what's been done: on the tenancy model, we identified the top-level namespace as a target for a tenancy model to build partitioning and sharding on top of, and Andreas has finished with the structure work.
A
There are some follow-up items, so under what's happening next there's a follow-up item on the same issue: we're looking at moving forward on partitioning, doing some investigation on the planning-time increase, and setting up a Citus cluster. And I think I saw Anthony on here; we pinged you on an issue we were blocked on, so I'll follow up with them to make sure that we get some movement on that and on setting up the Citus cluster. And then Mek, you added something for what's up next.
E
Yes, so I wanted to close the loop on testing on staging. It seems like we split up the old issue, issue eight, into two issues; this one should now be only for staging, correct, as we're tracking staging differently. So this is staging. And then I want to close the loop on, thank you for fixing the typo, issue 9592, and we need to run the tests; which part of this step fits in issue nine?
E
Let's make it clear, and I'm typing my response into the issues right now. I think we need to make sure that the tests, the reproduction on staging, are passing, barring any known issues, and I'm proposing that we have engineers on call during that time. What they should be doing is just being one point of contact, acting as the eyes and ears on which tests are known failures, checking those off, which tests are passing, and whether any new tests are failing.
E
Engineers and SREs on call from this working group, like we talked about earlier here: let's work async and just clearly put it in the issue what the time is that we need to run the tests, and then we'll work async on our part to coordinate. I think we also need to coordinate with the delivery team to make sure that they're aware of what's happening with staging soon.
B
Here's just one point with the tests to re-enact: our intention here is to upgrade to the new version of Postgres and afterwards restore, because we want to spend more time on this upgrade and see at which points we want to improve, and so on. So for this we could do the upgrade, execute the tests required, and later on have the rollback step, yeah.
A
All right, so this is something that was brought up early on in this working group: Jose compiled the list of database incidents that happened within the last year. We added severity and title; here we were asking how sharding could have helped, and then just this last week we asked Alvaro and Jose to look at it through the lens of Citus, and I want to run through those real quick. And sorry, this sheet is super wide, so it's hard to see everything we want to talk about here.
F
The answer to how Citus would help is actually quite similar to the answer to how sharding would help, because Citus is just one implementation of sharding, right. So we should look a little bit separately at whether sharding would help here, where the answer is yes in most cases, and at Citus as a concrete solution, which is the subject of the other issues that have been linked.
F
It could have two potential problems: one, it could be a bottleneck, a point of saturation, and it could also add, or actually adds, some latency. So leaving latency aside for a moment, which I think is an issue to be analyzed anyway: it could be a bottleneck. How to solve this? Just scale out the coordinator tier. This is possible with Citus; it is actually a bit of a do-it-yourself job.
F
It's do-it-yourself, but you know, you basically set up a load balancer on top and then, below it, several coordinators, and then they can either use streaming replication or another form of replication. This is in the documentation; this part is up to you. Typically we deploy streaming replication, and then the coordinator will not be a bottleneck. So with this mechanism, which is also out there, if there's more than one coordinator, meaning more than one, sorry, then Citus would be a solution here.
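The multi-coordinator pattern described above, a load balancer in front of several coordinators, can be sketched from the client side. This is a minimal illustration with invented DSNs and a plain round-robin picker, not the group's actual setup:

```python
from itertools import cycle

# Hypothetical coordinator DSNs; in a real deployment a TCP load
# balancer would typically sit in front of these instead.
COORDINATORS = [
    "postgres://app@coord-1:5432/appdb",
    "postgres://app@coord-2:5432/appdb",
    "postgres://app@coord-3:5432/appdb",
]

def round_robin(dsns):
    """Yield coordinator DSNs in rotation, spreading query load."""
    return cycle(dsns)

picker = round_robin(COORDINATORS)
first_four = [next(picker) for _ in range(4)]
# The fourth pick wraps back around to the first coordinator.
```

The point of the sketch is only that reads can fan out across coordinators once streaming replication keeps their metadata in sync.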
F
So the docs actually suggest two options. One is to use streaming replication, which is the most natural one. The other one is that from time to time you copy, manually, only the metadata from one coordinator to the other ones. Just bear in mind that the coordinator is a low-bandwidth, small database, just metadata about the partitions, the shards themselves; so you actually know that the metadata you're changing is separate from the data.
F
Streaming replication is a much better solution for this anyway, but it is still do-it-yourself; it's not provided out of the box by Citus itself. And if it were Citus with a single coordinator, we don't know for sure whether most of these problems would have been prevented or not, because we haven't measured the performance of this coordinator, how much work it can handle.
F
Going deeper, there's the issue, yeah, issue 20783, which deals with the Citus test, and I raised some comments and some questions there. One of them is that there are actually a lot of do-it-yourself points, like high availability. There are actually two HA solutions for Citus. One is provided by Citus itself, it's called internal replication, but it's only recommended for an append-mostly or append-only pattern, which is far from our case.
F
The other alternative is do-it-yourself with streaming replication, which is the way we would have to go, and it really is do-it-yourself: you have to make sure that it all works, that the shards work and the whole setup stays healthy. The coordinator needs a stable IP or DNS name, and that's something we can provide with Patroni, similarly to what we do right now with the main database itself, but it's something that is a bit of a mess.
F
In general, what I would say is that this is a nice summary. I would encourage those who want to, to go to this issue, 20783, and just check the comment I left there, one more specifically about the topics being discussed so far as part of this issue; this is the first one. And the second one is about more general or fundamental questions, and we can definitely follow up there, or I can answer any questions right now.
F
For example, one of the concerns that I raised on the Citus issue is that their documentation states the number of tenants Citus is designed for, and I don't know exactly the number of tenants we have right now, if we consider the tenant to be the namespace, which sounds like a reasonable first approach; I don't know whether we're talking about less than that, or orders of magnitude more than that, millions.
F
First, to understand what we're looking at: go table by table and check whether the table will need to be sharded, or will need to be a replicated table; replicated means that there's no tenant, it's not related to any one tenant but potentially to all of them. And then, apart from this, compile the distribution of tenants, and by distribution I mean size: how many big tenants we have, how many small tenants we have.
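The table-by-table exercise described here can be sketched as a tiny classifier. The table names and the `tenant_column` field below are made-up examples of the idea, not the actual schema under discussion:

```python
# Each table either carries a tenant (namespace) column and would be
# distributed (sharded) by it, or it belongs to no single tenant and
# would become a replicated "reference" table present on every node.
TABLES = {
    "issues": {"tenant_column": "namespace_id"},
    "merge_requests": {"tenant_column": "namespace_id"},
    "licenses": {"tenant_column": None},  # global data, no tenant
}

def classify(tables):
    """Map each table name to 'distributed' or 'reference'."""
    return {
        name: "distributed" if meta["tenant_column"] else "reference"
        for name, meta in tables.items()
    }

plan = classify(TABLES)
```

In a real audit the input would come from the schema itself; the output is the per-table sharding decision the speaker describes.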
F
We can bin them and say: okay, tenants between this amount and this amount, let's say 10, 100, a thousand, ten thousand and so forth, and bin them to understand what the distribution is. So with these numbers we can actually understand the landscape better; we can understand how many shards we might need or want to create. Bear in mind that the number of shards is not the number of nodes.
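The binning idea can be sketched directly. The tenant sizes below are invented numbers purely to show the mechanics; the bin edges follow the 10 / 100 / 1k / 10k sequence mentioned in the discussion:

```python
from bisect import bisect_right

# Bucket edges: <10, 10-99, 100-999, 1000-9999, >=10000
EDGES = [10, 100, 1_000, 10_000]

def bin_tenants(sizes, edges=EDGES):
    """Count how many tenants fall into each size bucket."""
    counts = [0] * (len(edges) + 1)
    for size in sizes:
        counts[bisect_right(edges, size)] += 1
    return counts

# Invented example: mostly small tenants, a few huge ones.
sizes = [3, 7, 50, 250, 4_000, 120_000]
distribution = bin_tenants(sizes)
```

A histogram like this is what lets you reason about how many shards to create and whether a few giant tenants need special placement.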
F
So it's using logical replication, which in turn is subject to the limitations of Postgres logical replication, which are not very significant or different from the ones that already apply to our HA solution; basically it requires a primary key per table, and there are some other minor considerations about data types and sequences, but those are also general to sharding, so it's nothing really special here. Now, about large-volume data tables: a table that is replicated may have some lag, and that may create some inconsistency.
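The primary-key requirement mentioned here is easy to audit up front. This sketch works over a hand-written description of tables (in practice one would query the Postgres catalogs instead), with made-up table names:

```python
# Logical replication needs a replica identity, normally the primary
# key, on every replicated table; flag the tables that lack one.
TABLES = [
    {"name": "projects", "has_primary_key": True},
    {"name": "events_archive", "has_primary_key": False},
    {"name": "notes", "has_primary_key": True},
]

def missing_primary_keys(tables):
    """Return names of tables that cannot be logically replicated as-is."""
    return [t["name"] for t in tables if not t["has_primary_key"]]

offenders = missing_primary_keys(TABLES)
```

Any table reported here would need a primary key (or an explicit replica identity) added before the migration strategy above could include it.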