From YouTube: 2020 10 06 Database Team Weekly
A
All right, welcome to this edition of the database team meeting, starting from the top of the agenda. So there is an MR out there about decentralized ownership, and it discusses the database; it uses the database as one example. It's on pause right now; they're going to talk about it in the engineering staff meeting. It's an interesting discussion, and I would recommend reading the merge request. Ultimately, I think the intention will still remain the same: the database team doesn't own all of the merge requests. It's not possible, it's not scalable, and it's not something we should own. That would be like the old days of throwing QA over the wall, right? Developers write their code and then somebody else tests it. So ownership of individual migrations still remains with the teams, and I'm not sure what the further discussion of this MR will be, so just keep an eye on it.
B
I can probably... I mean, I'm working on the schema validation, but I think that's an ongoing issue; there are going to be a lot of MRs coming for that. So they're both a priority, but finishing and wrapping up the partitioning, I think, is the priority. The schema validation can maybe, you know, wait for a week or so until the partitioning is ready, yeah.
A
All right then, Postgres.ai. So we're having weekly syncs with Nick; the first one was on Friday, and I've actually asked for a different date so that we can continue to support the no-meetings Fridays as much as possible. I suggested either Monday or Wednesday at about 8:30 Pacific time. Once we have a set schedule, I will invite everybody else to that as well; to be honest, only I came to the first one.
A
Our trial with Postgres.ai ends at the end of November, and I asked him, you know, if we haven't come to an agreement yet, are you going to cut us off? And he said no. So we're still going through all the features we'd like to see, what functionality we'd like to see, what kind of pricing model we're going to have, and what kind of timeline that's going to take, so more to come there. Mostly it was brainstorming last Friday, and there were no major follow-up items from that.
A
He was going to think about a few things, and the agenda is there if anybody wants to read it. But again, once we get a settled schedule, I will invite members of this team so they can join and give their ideas. And number four is probably just from the agenda: testing migrations with Postgres.ai. He's going to think about different ways we can do that, and, you know, whether there needs to be an interim solution of actually having maintainers manually test them, which is a solution.
A
It may not be the greatest one, but it's something that we can talk about in the next meeting. Any questions on Postgres.ai?
A
Let's see, database apdex. I know Giannis had made some progress on it, creating the dashboard in Periscope last week, I think at the end of the week, or actually over the weekend, right? And Josh asked about the target. Giannis, do you want to verbalize your answer?
D
So I was thinking that, in the apdex, we already have the target of 100 milliseconds and the tolerable threshold of 250 milliseconds. In order to generate the apdex, we count, you know, how many queries are below 100 milliseconds and how many are below 250 milliseconds, and then we compute the apdex. So we already have the tolerable part in the apdex.
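[A minimal sketch of the apdex calculation described above, assuming the 100 ms "satisfied" and 250 ms "tolerable" thresholds from the discussion; the function and sample data are illustrative, not GitLab's actual implementation.]

```python
def apdex(query_durations_ms):
    """Apdex = (satisfied + tolerating / 2) / total samples."""
    total = len(query_durations_ms)
    if total == 0:
        return None
    satisfied = sum(1 for d in query_durations_ms if d <= 100)
    tolerating = sum(1 for d in query_durations_ms if 100 < d <= 250)
    return (satisfied + tolerating / 2) / total

# Example: two satisfied, one tolerating, one frustrated query.
print(apdex([20, 80, 150, 900]))  # (2 + 0.5) / 4 = 0.625
```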
D
So from one point of view, 99 percent seems reasonable. On the other hand, I don't know, and I wanted to ask here: what are we doing, you know, with the self-hosted instances? If this were only for GitLab.com, I would say, yeah, that's super reasonable.
D
I
I'm
not
sure
about
how
we
would
treat
the
self-hosted
instances
from
that
respect,
and
I
want
to
discuss
with
with
you
in
this
call
about
this.
D
And
we
don't
have
enough
data,
so
we
have
data
only
from
13
instances.
12
instances
out
of
13
are
above
99.
There
is
one
instance
that
is
below
99,
for
example,
but
I
don't
know
how
this
will
work.
D
So
one
thing
would
be
to
let's
wait
for
two
three
weeks
and
see
the
data
yeah,
but
I'm
not
sure
how
many
outliers
we
have
so,
for
example,
my
example.
There
is
what
happens
so,
for
example,
somebody
may
set
up
and
gitlab
with
256
megabytes
for
postgres.
A
That's interesting. Is there a way to know that the results you're getting back are from an instance that is under-provisioned on memory, and can you tie those two things together?
D
We have the data from the, er, from the topology data, so...
D
Then there is the thing that I'm not sure about: whether we have both sets of data for the same instances. Most probably, if we have our data...
D
We are gathering that data from inside GitLab. So if an instance has Prometheus running, we will have the query apdex; I'm not sure if the same instances will also have, you know, memory, then disk, etc., and whatever else we gather. That's something to check, and we need data to be able to... The moment I have the data, we'll be able to answer all those questions.
A
Yeah, I was just wondering if there was some kind of identifier in the import, so we could tie the two together between what they collect and what we collect. But based on an old import I have... oh, there is an identifier. Never mind: that looks like a self-incrementing identifier, so I don't think there is one we can use.
D
No, don't count anything.
D
Okay, okay. But we will start with... I think that starting with a simple metric like we have done is nice. I'm not 100 percent sure if 99 is the way to go, but yeah, okay, we could think about it; maybe it is 98, or maybe we can keep it at 99 percent and then refine it by, you know, throwing away the outliers, because that's more appropriate.
A
All right, so a database ERD has been requested a couple of times in the last couple of weeks, and he had an example here, something that Andreas and I talked about before we even had a database team: whether there's, like, an extra step we can add into CI somewhere to just kick it out on a regular basis, so that it's regularly available. Something to think about; I will create an issue for this. It's not super urgent, because we have a dated ERD out there.
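[A minimal sketch of what such a scheduled CI step could run, assuming the third-party Python package ERAlchemy and an illustrative connection string; the team's actual tooling for this was not decided in the meeting.]

```python
from eralchemy import render_er

# Introspect the live schema and render an up-to-date ERD; a scheduled
# CI job could run this and publish the artifact on a regular basis.
render_er("postgresql://gitlab@localhost/gitlabhq_production", "erd.pdf")
```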
C
Hello, Craig and team. So I'm showing here the design document blueprint that I created explaining logical replication: basically, why, or what are the limitations that we have with streaming replication, and when we could use logical replication at the end of the day. Well, initially we thought to use logical replication to enable checksums, but unfortunately that path is not so easy; it would require a lot of time and add a lot of load on the primary.
C
So you can have a look at that in more detail, but what we found out is that the best use case for logical replication is the following process. The goal would be to make a major version upgrade with no downtime, or with near-zero downtime. Summarizing the whole process: we will create a second cluster running with streaming replication, break this replication during a low-peak time (during the weekend, for example), execute the upgrade on this new cluster, and...
C
...execute logical replication just to sync the data from this delta of time, from when we broke the streaming replication until the upgrade. We tested the whole process in staging, and it works. The two points here that we would like to understand were the limitations of logical replication and the environment.
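[A minimal sketch of the delta-sync step described above, assuming psycopg2 and illustrative host, publication, and subscription names; CREATE PUBLICATION and CREATE SUBSCRIPTION are standard PostgreSQL 10+ logical replication commands, but this is not the team's actual runbook.]

```python
import psycopg2

# On the old primary: publish all tables, so writes that land after
# streaming replication is broken can later be replayed on the new cluster.
src = psycopg2.connect("host=old-primary dbname=gitlabhq_production")
src.autocommit = True  # CREATE SUBSCRIPTION may not run inside a transaction
src.cursor().execute("CREATE PUBLICATION upgrade_pub FOR ALL TABLES;")

# On the already-upgraded cluster: subscribe with copy_data = false, since
# the base data is already there from streaming replication; logical
# replication then streams only the delta accumulated since the split.
dst = psycopg2.connect("host=new-cluster dbname=gitlabhq_production")
dst.autocommit = True
dst.cursor().execute("""
    CREATE SUBSCRIPTION upgrade_sub
    CONNECTION 'host=old-primary dbname=gitlabhq_production'
    PUBLICATION upgrade_pub
    WITH (copy_data = false);
""")
src.close()
dst.close()
```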
D
Three MRs are in; one MR was reverted, so that's an interesting one. It did not cause any incident; we caught it the moment we sent it to master. We are not running tests with PG 12 in the pipelines inside the MRs; we run the Postgres 12 tests after we send to master. So I had an update where we were querying the pg_constraint table, and there is a column there for getting the constraint definition, which has been there forever.
D
As far as I know, it's been there since Postgres 7, and it was removed in Postgres 12. So the update was working perfectly fine in my environment and was passing all pipelines in the MR; the moment we merged with master, the pipelines failed on master. That's the thing we are not running, so we could not catch it in the MR. I'm not sure if we want to also run the PG 12 pipeline tests inside the MRs.
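[For context: the removed column is almost certainly pg_constraint.consrc, which PostgreSQL dropped in version 12; the documented, version-independent replacement is the pg_get_constraintdef() function. A minimal sketch, with an illustrative table name:]

```python
import psycopg2

with psycopg2.connect("dbname=gitlabhq_production") as conn:
    with conn.cursor() as cur:
        # pg_get_constraintdef(oid) works on Postgres versions both before
        # and after 12, unlike the consrc column it replaces.
        cur.execute("""
            SELECT conname, pg_get_constraintdef(oid)
            FROM pg_constraint
            WHERE contype = 'c'               -- CHECK constraints only
              AND conrelid = 'users'::regclass;
        """)
        for name, definition in cur.fetchall():
            print(name, definition)
```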
D
We recognized a second strange one, for example: we changed the existing helpers to also copy constraints, and that broke some migration tests. We also could not see it, because those tests did not run on the MR pipelines. There is an issue for that, and we are going to fix it: going forward, we are going to kick off the migration pipelines whenever we have any changes in the database library, under the normal lib or the Enterprise Edition lib. So...
D
It was just merging something into master and, the moment it was merged, breaking the pipeline on master. So we will move this one step earlier, so that we can see it in the MR, okay.
D
So this set me back about a day; it is already fixed, and I have to resend this fix, this same MR. Yeah, that's it. And so I did not have the time to move forward with the...
D
What else did I want to do? With the partitioning... going forward, I will start with the events partitioning.
A
So, sorry, what's the headline for the MR that you're working on this week to fix? Is it, sorry, testing PG 12 earlier in the pipeline?
D
Oh, no, no, it's not important. It's a redo of the check constraints helpers, but it's... it's just... I will resubmit it using something that is compatible with Postgres 12 and 13. It's not... it's not important, okay.
B
Yeah, the docs are done; they were merged in. The other thing that I had been working on, really, for the last week was finishing up identifying all the schema differences between structure.sql and production, which is done; there's that spreadsheet now that lays it all out. So for this week, I guess there are a couple of MRs that I started last week to start addressing some of the schema issues.
B
Yeah, and the MRs... at least one of the MRs is ready, more or less. We're going to do, like, a trial run of the migration first, because there are probably, I don't know, 50 columns, maybe, that need to be fixed, and we're just going to do a single migration to make sure that the approach works in production. So at least do that, and then we can be pretty confident that it's going to work, right?
A
All right, that's it for the...
D
On behalf of Andreas: most of the reindexing MRs are in. There is only one smaller MR, as far as I can see, that is pending, but the core MRs have already been merged.
A
Coming soon: there were some exercises I learned from the manager challenge; they should be interesting. I've added them to the database team issues, and we'll probably do them later on this month. The manager challenge was interesting; I know they're going to post all the videos and materials out there, and they'll do a couple more rounds of it later on for other managers, or maybe aspiring managers.
A
So it's good training. I found it interesting, learned some new things, and it was a refresher on a bunch of things that I've done before, but it's always good to have that.
D
Yeah, if you can share the best of it, or the things that you found interesting, with us, that would be great.