From YouTube: 2020 04 20 Database Team Weekly Meeting
A
That's pretty interesting. I think where it makes the most sense is at the epic view, where you can see the work that you're doing on the epic and how much is at risk. It's actually pretty interesting to see that they're starting to use it on the Postgres 11 upgrade epic. It's a cool visualization; we should start using it where it makes sense. It didn't affect this team so much, but last month a couple of teams had a difference between MR counts based on team members and MR counts based on labels.
A
So just make sure, when we're writing MRs for our team, that we have the group database label on them, so we have consistency across the different metrics. I usually review the labels on Mondays; I'll take a look at them later today. Next, on-call: we're doing the on-call rotation, and we still need some slots. Pat, you're not quite there yet; you don't have to sign up for on-call until you've been at the company for three months. So you still have... oh, actually, you are there. Yes.
A
You are eligible next month to start doing on-call, and you can read up on it. Okay. Then I just wanted to call out, and I think everybody on this call knows this, that there's an upgrade delay for Postgres 11; we're trying to figure out when we can ship and what the trade-offs are. Do you both know what's going on there? Do I need to repeat any of the details from behind the scenes?
A
The infrastructure team is looking into alternatives: whether we can narrow the outage window and stay with the current deployment path, or whether we need to move to a different upgrade path where we do a switchover, building up the same infrastructure with Postgres 11 and then just switching over, so we have a minimal outage window. They're still working on the details; I'm not sure what the outcome is yet. They'll talk about it at the Tuesday performance and availability meeting, so it's a weekly topic.
A
That's what they're guessing at this point. So if they had to go with the outage window, we need to give customers more time to let them know that we're going to be down for however long the window is going to be. And if we have to go to the switchover methodology, the estimates are that it's going to take two months to implement and test that plan before we can roll it out. So that's why it seems like a lot of things are coming down to the 13.2 transition.
C
There's currently a discussion going on about CI, because we stopped running 9.6 on CI, and I'm kind of concerned; we should be running those tests for the database version that we're actually running. There's a quick discussion happening right now about getting this back, and whether the nightly or the four-hourly builds ought to catch that.
A
Okay, but yeah, it's a topic. It's one of the first topics in the sharding working group session: to make sure that we bring back that testing and assign a DRI, so that someone follows up on it and makes sure that it happens. I volunteered Mac on that one, since he's the quality representative in that meeting. Mm-hm.
B
So if the upgrade gets pushed to 13.2, if that has to happen, is that going to have... So there was also a sort of discussion floated of whether, at a certain point, it makes sense to leapfrog eleven and go straight to twelve. Is that something... I mean, it seems like we're pretty far along in the process, but if we're talking about pushing it out two milestones, is that something that we should really consider?
A
No. The reason is, we had a discussion about it a couple of months ago about why we're not going straight to 12, and there were some reasons. For self-managed, there were customers that couldn't upgrade to 12 because they were on AWS, and AWS doesn't support Postgres 12 yet; that's one thing. And there are some other customers that just didn't want to jump that many versions. At the time that we talked about it, it was only twelve point oh, and they didn't want to move to a point-oh release, right?
A
They wanted a much more stable one, which I think twelve is now; it's on twelve dot two or twelve dot three at this point in time. But regardless, at the time we talked about 11 versus 12, those were the reasons not to go to 12. And since then, I don't know how much of our infrastructure we've built up to start testing against 12, so there are probably still some confidence issues with moving to 12. It's probably too late to move to 12. But, yes... sorry, no!
A
(What's going on with the birds over there?) Yeah, there are folks who want to move to 12, and we'll try to get a better cadence going forward. I mean, this is a big leap, and something that I think we probably should have done better on, but for now we need to stick with 11, because that's the tested path, where we have confidence.
C
I was just trying to find the timeline, because we also had a step in that timeline saying that we make Postgres 12 an opt-in; it was basically "start using Postgres 12 from that point in time". I can't find our timeline anymore; maybe I can find it later, but I recall it was in the third release, so 13.3 was the one.
C
So there are two perspectives on that. One is from GitLab.com: as soon as GitLab itself supports Postgres 12, I don't see why we wouldn't be using it. On the other hand, from a product perspective, I think we wouldn't want to skip 11 entirely, because that's too fast for most, or at least many, of our customers, I guess. So from an application perspective, you would still say you have to have Postgres 11 as the minimal version, and that doesn't help us with partitioning, because we can't be leveraging 12 features.
B
It's still, I mean, it's still a pretty decent amount of work, at least if we want to do any kind of... I mean, if we just want to do a side-by-side of a couple of queries and call it a day, maybe that's not more than a few days. Even then, I don't know how much use that's really going to be at that point, and any kind of real testing, I think, is going to be a lot more.
A
What I'm trying to get at is: we've spent a good amount of time with Citus, setting up the environment and starting to work on the data migration. Can we get some early indicators of what sharding will get us? Because that could still be an issue in the future: hey, even if we go the service extraction route, do we have some indication of what sharding will get us? Can we get any kind of meaningful measurements based on the work that we've done so far?
B
I think most of the things that we were concerned about with Citus were, more so, issues that wouldn't be issues if we went the route of service extraction: the replication of data, having cross-shard data, cross-shard queries. That's one of the things that we would help alleviate by breaking up the database first. So I think if you're just looking at "oh, can we run this query, and it runs on one node, and it's roughly the same speed as running it"...
A
Okay, so what I'm hearing is: we shouldn't move forward with performance testing on this? We can just pull the plug on Citus altogether and switch over to the maintenance work that we have lined up for 13.0. And the reasons are: it's going to be pretty time-consuming to get a comprehensive suite of tests; the data will be somewhat inconclusive because it's a fairly small set of data; and we expect that we're going to get some query performance back because it's sharded data and we know where the bottlenecks are going to be. Is that about right?
A
So jumping over to thirteen dot oh: one of the things we expect to deliver is the repository pull mirror scheduling. When Josh was reviewing the issues for what we should do for thirteen point oh, he thought that could have a big impact on performance, if we're moving away from sharding as our focus for now.
C
Yeah, that makes sense. I think it needs a bit of work to figure out what we want to do about it, because what the issue outlines is basically the design problem, and it makes a proposal on how to fix it, but the implementation details are unclear, and whether that fits the bigger picture is also pretty unclear at this point. The proposal itself talks about extracting the scheduling into a standalone brokered process, and that's nothing that we've been doing before.
A
You know, 5x overhead was the example that he used. And is that a hard limit, to the point where, once we hit it, once we've consumed all that growth, do we then have to shard? I think the answer is no; we'd probably look at further decomposing services at that point in time. But it's a topic he wants to discuss: what does a service extraction get us, as we're thinking about the repository pull mirror extraction?
C
So what we discussed last week: it was sort of acknowledged that it's problematic to partition, or to redesign the database with partitioning, for existing features. I think that was sort of the gist of what we talked about last week, and that's why we want to do the service extraction, because that maybe gives us more greenfield-type work where we can introduce it early.
C
That sort of goes back to issues as well, because we were planning, and have those tasks ahead of us, for partitioning issues, and we wanted to use that as an example for partitioning in general. I wonder if we still want to do that, or if we perhaps take Praefect and the container registry, which are rather greenfield work, or kind of, and review those, and start with the partitioning implementations and the tooling there, and get that kind of experience from those projects.
A
So my initial thoughts on that are a few, and they're disorganized. On Praefect and the container registry: if we're already talking about partitioning there, my first thought is that it seems like a premature optimization, and my second thought is that it doesn't buy us any real performance gains right now. If we're looking at those for the future, however, it might be safer, because we're not actually running it yet, so we can work out some of the kinks of partitioning, since we haven't actually done that in production.
B
What I'm worried about, or a question with this (and I don't really know either one of those codebases), is that it would mean the tooling for creating partitioned tables would have to live outside the main GitLab app, right? The tooling that I'm building right now sits in kind of the app's migration helpers, but we would really need to keep that... I guess the general question is whether we're moving towards a service approach.
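As a rough illustration of the kind of tooling being discussed, a minimal sketch of a helper that only builds the DDL strings for a hash-partitioned table, using the declarative partitioning syntax available since Postgres 11. The function, table, and column names here are hypothetical, not the actual GitLab migration helpers:

```python
# Hypothetical sketch (NOT the actual GitLab migration helpers): build the
# DDL statements for a hash-partitioned table with Postgres 11+ syntax.

def hash_partitioned_table_ddl(table: str, key: str, partitions: int) -> list[str]:
    """Return DDL statements creating `table` hash-partitioned on `key`."""
    ddl = [
        f"CREATE TABLE {table} (id bigint NOT NULL, {key} bigint NOT NULL) "
        f"PARTITION BY HASH ({key});"
    ]
    for remainder in range(partitions):
        # One child table per remainder of hash(key) modulo `partitions`.
        ddl.append(
            f"CREATE TABLE {table}_{remainder} PARTITION OF {table} "
            f"FOR VALUES WITH (MODULUS {partitions}, REMAINDER {remainder});"
        )
    return ddl

for stmt in hash_partitioned_table_ddl("audit_events", "project_id", 4):
    print(stmt)
```

Duplicating a small helper like this in a second codebase is cheap at first, which is exactly the trade-off raised above when the tooling can't live in the main app.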
C
Yeah, I mean, for starters we could probably duplicate that for a while, until we realize that it's working, or that it's worth the effort to combine them. That could also work for Praefect. And also, like Praefect for example, they're also written in Go, so that might also need its very own implementation of it.
C
I mean, why don't we... we don't have to make a decision yet regarding what we take as the example. Why don't we start with getting more familiar with Praefect and the container registry by doing those database-related reviews? Perhaps that gets us into a better spot for making the decision of what to tackle first.
A
It needs to run through this team, to make sure that we've reviewed it, that it has a good structure, and that it follows all the right patterns. I'm not sure how to codify that yet; it kind of falls under the database strategy MR that's been sitting for a couple of weeks now, as we figure out whether we're going to do sharding or service extraction. But any new database, I think, should run through this team to make sure it gets a thumbs up.
A
All
right,
Sondra's
create
a
couple
issues,
we'll
talk
about
the
pulled,
mirror
scheduling
tomorrow
during
office
hours
and
then,
if
we
need
to,
if
they
have
time-
and
we
need
to
talk
more
about
the
database
reviews,
we
can
talk
about
them
them
or
just
talk
about
them.
Asynchronously
issues
are
covered.