From YouTube: 2020 02 10 Database Team Meeting
A: A few quick updates. During the last company retro, and I don't know if you all watched it or not, there was a goal in there for reducing review-to-merge time by a day. Not from when the MR was committed to merge time, but the time between review and merge. It was mentioned that Sid would like to see that reduced by a day. I don't know if there's an issue related to it, but it's just something to keep in mind: they're trying to get the overall merge time down.
A: So that's a goal; I would expect to see an issue out there at some point. Other things to read up on: there's a new priority/severity scheme. The numbers don't change, it's still P and S one through four, but there's some pairing that's going to be enforced, so you can't have a P1 and call it an S4. So just read up on the new rules that are out there.
A: In the last engineering VP office hours, Eric once again encouraged everyone to nominate for discretionary bonuses. If you're working with someone who you think is doing great work and is deserving of a discretionary bonus, anybody can nominate someone for it, so read up. He mentioned that we do have some graphs on how much is being spent on discretionary bonuses, and it's kind of peaks and valleys: every time you remind folks it spikes back up, and then it kind of drops off until they're reminded again.
A: We have an overall company goal to spend 10 on everybody. So if you see someone doing great work, feel free to nominate them for a discretionary bonus. Any questions on that? All right. There was a request for contributions to the development boot camp video series, so if there's anything you want to contribute to or volunteer for, there's the issue.
A: There's a new quad planning process, and this is to get quality involved earlier in the process. I think it's going to affect database a little bit less, because we're not as involved in spinning up new features, and that's typically where the initial gap was identified: quality wasn't getting involved in the planning process there. But it's something to keep in mind as we're breaking down issues and planning for milestones: make sure that we try to get them involved.
A: I think I actually need to talk to Tanya to see who our counterpart from quality is here. Tanya is the enablement quality manager, so she would ideally assign folks to our team, and I will find out who that is.
A: Let's see, the database sharding working group is going to start today. That one is optional if you don't want to attend, but it should be pretty interesting, and we'll probably need your feedback in there from time to time. So if you can join, please do. Today is going to be the first one, so there's some level-setting of expectations that we need to figure out, because I know some of our stakeholders have been asking for sharding for a while, but we need to find out what they expect from it.
A: What they're looking for, and whether they even understand what sharding or even partitioning is. So I think today will be a lot of discovery, just understanding what everybody is expecting out of this working group; it should be entertaining. And then security training. Pat, you have to take this as part of your onboarding issue; you're probably getting through it now. Andreas, have you already taken that one? It was a live session that was offered.
A: It was spun up since I've been here, so it's been in the last two months, and I think everybody from engineering was invited. The only requirement is that you've read through the materials and watched the videos; actually, it's two days' worth of videos. If you didn't attend the live session, you might have been in infrastructure at the time, so maybe you didn't take it.
A: So I would read down to this comment here, thanks. And you said you attended some of it; if you've already covered these sections, just let me know and we'll mark them off. Basically, what I need to do is fill out this top row with my name to say who's taking it. I think we're at about half capacity so far of folks that report to me.
B: I'm currently mostly working on exploring the partitioning options. It sort of comes out of the first issue that is in there, and for the next milestone I've created another one, but I think I'm already working on it, so I may push that back into the current milestone.
B: So that's about exploring issue search in the context of partitioning. I've upgraded a production instance; sorry, my Database Lab instance.
B: I'm playing around with that, and the goal is to figure out what the next steps are, create issues for those, and see what we can learn from that.
B: Sure, no problem. That's the community contribution that included Marginalia. That's a gem that you can use to annotate SQL queries with comments, and those comments can include details, even tracing IDs, once it's built into the code base.
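What Marginalia does mechanically is append a key:value comment to each query the application issues. Here is a minimal Python sketch of the idea only; the real gem hooks into ActiveRecord in Ruby, and the key names below are illustrative, not GitLab's actual configuration:

```python
def annotate(sql: str, **context: str) -> str:
    """Append a Marginalia-style comment to a SQL query.

    The database ignores the comment, but it shows up in places like
    pg_stat_activity and the slow-query log, so a slow query can be
    traced back to the application code or request that issued it.
    """
    comment = ",".join(f"{key}:{value}" for key, value in sorted(context.items()))
    return f"{sql} /*{comment}*/"

annotated = annotate(
    "SELECT * FROM issues WHERE project_id = 1",
    application="web",          # illustrative keys; the real gem lets you
    correlation_id="abc123",    # configure which components are included
)
# annotated now ends with /*application:web,correlation_id:abc123*/
```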
B: That's right, that was contributed, I think, two months back. I think it's really helpful for us to have that on GitLab.com. Since it was a community contribution, it slipped a bit off our radar to enable it; I think I forgot about it. I wanted to do that and only got a reminder right now, but that's what I'm doing there.
B: I'm enabling it on staging, and I ran into an issue where it's not fully working, so I still need to look into that before I can proceed. And then I worked on corruption.
C: Yeah, so for the first one there, I think I'm good; the code is done. I'm probably going to put that up for review today. The second one was actually merged already.
A: Once it's validated on production is typically when we close it, so there's a workflow link I will send to you.
C: And then search. Yeah, the search: we kind of discussed this last week. I had a fix that I thought would at least alleviate some of the issues, but from some more testing last week, I think it maybe speeds things up in some instances but makes them slower in others, so I'm not really sure that's a fix that's worthwhile. Basically, once the number of issues you're searching gets above a certain threshold, doing the optimization just really makes it slower.
A: I probably just need to read your comments on there, but do you know what the threshold was? You said it was after a certain number of issues as you're paging through it; is that what it was?
C: So, yeah, the fix that was suggested was to use a common table expression to build the set of issues and then search through those, because the query was using the wrong index and falling back to scanning the whole table looking for the text match. But once the number of issues reaches somewhere around maybe 10,000 or 20,000, it takes more time to materialize that set and scan through it than it does to just read the table using the index.
C
So
I
don't
have
an
exact
number,
but
I
just
saw
that,
depending
on
the
exact
project
you're
looking
at
you're,
seeing
different
results,
One
Way
versus
the
other,
so
I
don't
know
that
there's
a
clear
cut
I
mean
short
of
having
some
heuristic
in
the
code
that
says
like
if
it's
less
than
this
do
this.
If
it's
more
than
this,
do
this
there's
really
a
clear-cut
answer?
I,
don't
think.
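The heuristic being dismissed here would amount to something like this; the threshold is an assumed number, not something measured, which is exactly why it is hard to get right:

```python
# Illustrative only: in practice the row estimate would come from table
# statistics, and 10_000 is an assumed cutoff, not a measured one.
CTE_THRESHOLD = 10_000

def pick_search_strategy(estimated_rows: int) -> str:
    """Choose between the CTE rewrite and the plain indexed query."""
    if estimated_rows < CTE_THRESHOLD:
        # Small scope: materializing the filtered set first is cheap and
        # sidesteps the planner picking the wrong index.
        return "cte"
    # Large scope: materializing the set costs more than an index read.
    return "direct"
```

The objection in the discussion is that the right cutoff varies from project to project, so any fixed threshold like this will be wrong for some of them.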
C: I mean, it was taking long enough, at least in Database Lab, that it would be timing out in production. So I think it's at a point where it wouldn't really help.
C: I mean, there's certainly quite a bit that seemed to be timing out, yeah. Primarily when you're searching issues, either when it's a global search or when it's specific to a group, a group that has a large enough number of issues. Those are the two most common scenarios I saw where it's timing out, but also sometimes even in the context of a project.
C: It's picking the wrong index, and then that's slowing it down too, but there's not necessarily a way to speed that up, because there's no way to really force it to use the right index, short of hacking something, like wrapping it in a function call or something so it will not use that index, which is pretty hacky.
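The "wrapping it in a function call" hack refers to a common PostgreSQL workaround: the planner matches indexes against expressions syntactically, so wrapping a column in a no-op expression hides an unwanted index from it. A sketch with invented table and column names, not the actual query:

```python
# PostgreSQL can use an index involving "description" for this predicate,
# and in the case discussed that was the wrong choice.
original = """
SELECT id FROM issues
WHERE project_id = 10
  AND description LIKE '%timeout%'
"""

# (description || '') is semantically identical, but it no longer matches
# the indexed expression, so the planner cannot pick that index. It works,
# but it is exactly the kind of hack being described.
workaround = """
SELECT id FROM issues
WHERE project_id = 10
  AND (description || '') LIKE '%timeout%'
"""
```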
B: I don't know. I think we could be a bit smarter here, like suggesting maybe a particular execution plan and forcing it, but then, on the other hand, it's really hard to get that right for all those cases. Ultimately, I think we're going to benefit a lot from partitioning to make the problem more manageable, and then we don't have to have this cleverness in the application to decide one path or the other.
B
It's
just
that.
The
data
that
we're
going
to
scan
for
a
group
search,
for
example,
is,
is
going
to
be
much
less
than
it
is
today.
So
we're
not
we're
not
sitting
all
the
data
anymore,
but
we're
narrowing
down
narrowing
it
on
before
we
actually
start
planning
that
create
and
usage
patterns.
B
I
think
this
is
very
complicated
to
to
to
get
right
or
to
Target
a
particular
usage
pattern.
A
Yeah
for
sure,
but
I'm,
just
wondering
I
mean
partitioning
still
we're
still
months
away
from
being
able
to
roll
that
out
on.com.
So
if
we
can't
come
up
with
some
kind
of
relief
through
usage
patterns,
is
there
I
don't
know,
maybe
there's
something
else.
We
can
improve
elsewhere,
I'm
just
trying
to
figure
out
where
we
can
get
some
relief
on.com
for
some
of
these
timeouts
or
some
of
these
at
least
perceived
performance
degradations
from
the
user's
perspective
on
search
in
particular,.
A: Yeah, was it always just the number of rows being returned? Patrick, you mentioned around 10,000; I don't imagine anything other than bots would be going that many pages deep. That's why I was asking about usage patterns.
C: Well, the 10,000 is more about once you apply this optimization to filter the issues: then it can't really rely on the index to pull just the first page, essentially. It has to build the entire set and then, you know, sort it and pull the first 100 or whatever. So if there are more than, say, 10,000 issues in the particular group they're searching under, by the time it materializes all that and finds the ones it's looking for, that takes enough time that it would still be timing out.
A: I think I understand. All right, so should we just shelve this for now and find something else to work on, Andreas? Is that what you're recommending?
A: So this tiebreaker sort direction: is that going to be any kind of performance improvement, or does it just fix some known issues?
B: Yeah, it removes the nondeterminism that we had. It's a behavioral change, but you have to look very closely to detect it. I don't think it's a real boost.
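For context, a tiebreaker sort adds a unique column (typically id) to the ordering so that rows with equal sort keys always come back in the same order. A small illustration with assumed column names:

```python
# Two rows share the same updated_at; without a tiebreaker their relative
# order is unspecified and can change between executions or pages.
rows = [
    {"id": 3, "updated_at": "2020-02-10"},
    {"id": 1, "updated_at": "2020-02-10"},
    {"id": 2, "updated_at": "2020-02-09"},
]

# Equivalent of ORDER BY updated_at DESC, id DESC: the unique id breaks
# the tie, so the result order is fully deterministic. Anything that
# depended on the old, unspecified order may notice, which is why this
# is a subtle behavioral change rather than a performance change.
stable = sorted(rows, key=lambda r: (r["updated_at"], r["id"]), reverse=True)
# [r["id"] for r in stable] == [3, 1, 2]
```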
A: Yes, but then there's also a section for performance improvements where we just list out either the issues or the merge requests, and we literally just link to them. So you should see some performance improvements; the memory team, for example, will probably link out some of the import performance improvements that we've done.