From YouTube: Database Team Weekly
B
So we're looking into step two, the cleanup webhook logs backfill, and why it failed this morning in production. We're about halfway through the conversation because, as I said, we didn't start recording. So anyway, honestly, here's what we were saying.
E
We look at the created_at timestamp for those records, just roughly where that is. I think the retention policy would expire anything older than three months, if I'm not mistaken. Oh no...
C
No, yeah, they're close to the retention cutoff, but they're also far away: they're about 70 million above the minimum id.
C
So the minimum id is 534 million, and the last jobs that I can see here, the last ones that we processed, were all over the place around 500 million. Yeah, somewhere close to the minimum, somewhere above the minimum by 50, 60, 70 million.
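As a side note, the comparison being made here, a stranded record's age against the retention window versus its id's distance from the table's minimum, can be sketched as below. The helper name is hypothetical, and the numbers are only illustrative, taken from the figures mentioned in the discussion (534 million minimum id, roughly three-month retention), not from the actual codebase.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # roughly the three-month policy discussed

def classify_record(record_id, created_at, min_id, now=None):
    """Report a record's distance from the retention cutoff and the minimum id."""
    now = now or datetime.now(timezone.utc)
    age = now - created_at
    return {
        "past_retention": age > RETENTION,
        "days_to_expiry": (RETENTION - age).days,
        "id_offset_from_min": record_id - min_id,
    }

# Illustrative numbers close to those mentioned in the meeting:
now = datetime(2021, 1, 5, tzinfo=timezone.utc)
info = classify_record(
    record_id=604_000_000,                # ~70M above the minimum id
    created_at=now - timedelta(days=85),  # close to, but inside, retention
    min_id=534_000_000,
    now=now,
)
```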
E
Can you share that data somewhere, if you have it? Like the...
C
The ranges for the jobs? Yeah, I'll just do it. Okay, I will update the issue so that we can discuss it there. But I agree: maybe it's like that, maybe it's not. To tell you the truth, it's strange that we had failures twice on those ones.
C
Webhook logs: if the prune worker was processing those ids more than a week ago, and it's processing them again this week, today, that's pretty weird, isn't it?
C
There was another issue, so I bet that there was something else besides this one, and hopefully, hopefully on our side. Maybe it was what Andrea says, or maybe not, but we can also clearly see from the Kibana graph how the time was increasing during peak times.
C
Checking... yeah, so the earliest records on webhook logs are from the start of August, August the 3rd. And, for example, those are the ones that Patrick has talked about, and the one that you were discussing. When was it?
C
Yeah, thank you, yeah. It's still weird: even if we are at the peak and we have issues, those issues affecting the webhook logs, copying from the webhook logs and causing them to take 15 seconds with the exception, that would be if we are completely constrained. So we were, yeah.
E
Sorry, but on the approach, are we all in agreement about this? Like, if we don't see any suspicious I/O saturation, CPU saturation or anything like that?
C
If it is the webhook log prune service, however we call it, I think that we should turn it off for self-hosted. So we should push an update to turn it off, so that we don't have the same issue on self-hosted instances. If you want my opinion.
C
For sure. I don't think that... so there's another discussion there: whether we should update our helpers for partitioning, and other helpers as well, to receive a different sub-batch size, so that we run those with a sub-batch of 500 instead of 2,500. I...
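For illustration, the sub-batch knob being proposed could look something like the sketch below: a pruning loop that walks ids in batches and issues one delete per sub-batch, so each statement stays small. The function and its delete callback are hypothetical stand-ins, not the real partitioning helpers; only the 500-versus-2,500 sub-batch figure comes from the discussion.

```python
def prune_in_batches(ids, delete_fn, batch_size=10_000, sub_batch_size=500):
    """Delete `ids` in batches, issuing one `delete_fn` call per sub-batch.

    A smaller sub_batch_size (500 proposed, down from 2,500) keeps each
    DELETE statement short, so locks are held for less time.
    """
    deleted = 0
    for start in range(0, len(ids), batch_size):
        batch = ids[start:start + batch_size]
        for sub_start in range(0, len(batch), sub_batch_size):
            sub_batch = batch[sub_start:sub_start + sub_batch_size]
            delete_fn(sub_batch)  # stand-in for the per-sub-batch DELETE
            deleted += len(sub_batch)
    return deleted

# Example: 1,200 ids pruned as sub-batches of 500, 500 and 200.
calls = []
total = prune_in_batches(list(range(1_200)), calls.append,
                         batch_size=1_000, sub_batch_size=500)
```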
E
I think before we jump to those solutions, let's make sure we understand why this happened, right? I mean, I think it's likely that it's a locking situation somewhere, if we don't see any saturation, but let's first figure that out, right?
F
So, kind of, this could be nothing, maybe, but the fact that you were seeing records from August in the table... Looking back, I don't see any recent timeouts for the prune webhooks worker, but there's a bunch from right around the August timeframe. So it's possible that the prune worker was running then, was timing out, and wasn't cleaning up those records, so those records from that August timeframe just sort of got left there ever since, somehow. Maybe, I don't know, I mean, that's sort of conjecture, but it's interesting that those last timeouts are all from the same time as the data we're seeing in the database, other than the ones that we're seeing from late December, which would be cleaned up normally, because they hit the 90-day threshold.
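The conjecture above, that the stranded August rows line up with the prune worker's old timeouts, could be checked mechanically with something like the sketch below. The function name, the inputs and the seven-day window are assumptions made for illustration; in practice the timestamps would come from the timeout logs and the table itself.

```python
from datetime import datetime, timedelta

def stranded_near_timeouts(stranded_created_ats, timeout_times,
                           window=timedelta(days=7)):
    """Return the stranded rows whose created_at falls within `window`
    of any recorded prune-worker timeout.

    If most stranded rows come back, that supports the idea that they
    were left behind while the worker was timing out.
    """
    return [
        ts for ts in stranded_created_ats
        if any(abs(ts - t) <= window for t in timeout_times)
    ]

# Illustrative data: timeouts around August 3rd, plus one late-December row.
aug3 = datetime(2020, 8, 3)
rows = [aug3, aug3 + timedelta(days=2), datetime(2020, 12, 20)]
matched = stranded_near_timeouts(rows, [aug3])
```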
B
All right, Jose, is there anything else you need from us at the moment? I didn't see anything on the infradev issues that was not assigned.
D
Awesome, Craig. The main thing I'm working on now is the upgrade. We are getting data there and trying to make it happen. It was postponed: it was going to be this Sunday, and they postponed it to the 17th, so we have two more weeks. I'm still working on some slow queries, providing some inputs when I can. Okay.
B
Yeah, so we'll need some on-call coverage during the upgrade. They've asked for people to be available while the upgrade is happening, so probably, following this, we'll have Andreas around and then Pat or Giannis, however we need to coordinate it. It's the weekend of the 17th and 18th, and as I get more details on when they want people available, I'll let you all know. I haven't heard specifics yet; they just want to know that someone's available in case something goes wrong.
B
All right, retro is due this week; I think some people have already started contributing to it. Thanks. I talked to Krause last night and it's official, so he's going to be working with us through 13.11 at 50%. He's got some work that he needs to finish up with his team, and then after 13.11 he'll be with us full time.
B
I've asked him to start getting up to speed on the primary key migration efforts. He was concerned when I put on there, when we put on there, that he'll also be working... anybody that helps with this headcount reset will be working on operational issues, and he said, you know, he was concerned. He said, "I don't have permissions on those servers or anything," and I clarified that operational was more about inbound requests.
B
I gave him some examples that Pat's worked on in the past, where a customer tried to upgrade from one version to another and there were some issues that he had to investigate, because they dropped a column or they renamed an index or something, I can't remember. So I gave him some operational examples that we've encountered in the past, and he was relieved on that one.
B
So maybe we need to change the wording so it doesn't say "operational", because that means different things to different people. But he's going to catch up on the primary key work that Pat's doing, so reach out to Pat at some point. Pat and Krause, you two probably have about a two-hour overlap, in Krause's morning and at the very end of Pat's day, so you should be able to sync up over the next few days.
B
And if any issues come in, feel free to start assigning them to him where we need some help from him.
E
Yes, I'm getting a bit more feedback, and there is still a follow-up issue with the container registry that's not cleaning up automatically; that's linked somewhere, and it's a dependency for this to roll out wider, basically. Okay, next one: yeah, this is something where I pushed another change which hopefully fixes the MR, so we can drop the model and then, in a second change, drop the table, doing that on the side, basically.
C
Those were not on the staging servers; those issues were on production. My move moved their webhook logs to production as well, and we will have to make sure that everything worked okay.
E
Yeah, that's the dependency I was talking about; that's still not working.
C
I sent it to him. It's... we have a proposal there, and I was working on it a lot, me and Pat. So this is just wrapping up the discussion, and he...
C
Yeah, I have gone through that, and we are pretty fine. One question I have there: do we leave those issues in database triage on purpose, or should we remove them?
C
We have some query performance investigations, so I removed as many as possible, but there are still some. Should we remove them now that they are assigned to people? Yep.
A
I had one question. Given that I'm back from vacation and still catching up, is there anything that I can currently do that would support you in your ongoing efforts, to take some of the load off your shoulders?
B
All right, thanks, everybody. One thing I'd say is I'm getting pinged on the incident, so just keep updating the issue, because I've just pointed everybody there.
G
One more thing on that: I was having a look around in Kibana while the meeting was ongoing, and it looks to me like the pruning job is failing. Yeah. Is that known?