From YouTube: 2020 06 23 Database Team Weekly
B
So there is a rake task doing the backup; basically it just talks to pg_dump, and in certain cases it is explicit about the schemas that are being dumped. Obviously, when we create a new schema, we also need to be explicit about that. That got reverted last week, and I'm just sending a fix for it.
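The backup task described above can be sketched roughly as follows. This is a minimal illustration, not the actual rake task: the schema names, constant, and method name are all hypothetical. The point it shows is that pg_dump only includes the schemas you pass explicitly with `-n`, so a newly created schema must be added to the list or it is silently skipped.

```ruby
# Illustrative schema list -- any newly created schema must be added here,
# or pg_dump will not include it in the backup.
SCHEMAS = %w[public gitlab_partitions_dynamic].freeze

# Build the pg_dump argument vector, with one explicit -n flag per schema.
def pg_dump_args(database, schemas: SCHEMAS)
  args = ["pg_dump", "--clean", "--if-exists"]
  schemas.each { |schema| args += ["-n", schema] }
  args << database
end
```

For example, `pg_dump_args("gitlabhq_production")` yields an argument list that names every schema to dump, which is exactly the spot a fix like the one mentioned would touch.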
B
Perhaps I forgot. If any of you have time, it would be great if you could take a look at that.
A
One more thing: I just pushed it and I'm waiting for the pipeline; perhaps I'll ping you later on that. And then I believe the last thing we have to do is implement the automatic partition creation for the time partitions, and then we can actually ship the migration to create the partitioned table. So, would you agree, Patrick: partitioning first, or something else? Yeah.
B
There's a note with an exclamation mark attached, saying that if we need to (and we most likely will), we will drop all the data that's in there at some point. So that's it; it's an experimental feature. That's sort of the reason why we think that shipping a very minimal database design should be enough, and I'll get that going.
B
That's where it's implemented; it's basically receiving all those events for analytics, and there are quite a few open questions around that. It's a separate process: how does it connect to the database, where would that server be located, and would we even allow a separate process to connect to the database? That's still in discussion, and there is a meeting on Thursday.
B
It will be interesting to see what comes out of that discussion. I'm sort of tasked with the database side, and I'm thinking we could actually use it to ship some minimal hash partitioning. It should be quite simple, simpler than the time partitioning we've been working on, because it is static: you just create whatever number of partitions you want, and that's never going to change. So perhaps we can even learn something in that area from it.
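The "static" property mentioned here is why hash partitioning is simpler: the partition count is fixed at creation time, so no ongoing partition management is needed. A minimal sketch of the DDL involved, with hypothetical table and column names:

```ruby
# Generate DDL for a hash-partitioned table with a fixed partition count.
# Because the modulus is chosen once, the set of partitions is static --
# nothing needs to be created on an ongoing basis, unlike time partitioning.
# Table and column names are illustrative only.
def hash_partition_ddl(table, key, partitions:)
  ddl = ["CREATE TABLE #{table} (id bigint, #{key} bigint, payload jsonb) " \
         "PARTITION BY HASH (#{key});"]
  partitions.times do |i|
    ddl << "CREATE TABLE #{table}_#{i} PARTITION OF #{table} " \
           "FOR VALUES WITH (MODULUS #{partitions}, REMAINDER #{i});"
  end
  ddl
end
```

Calling `hash_partition_ddl("analytics_events", "project_id", partitions: 4)` yields the parent table plus four child partitions; rows are routed by a hash of `project_id`.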
B
Not to go into detail, but there are two dimensions. You always access data by project ID; the analytics queries always go by project ID, so that's sort of the primary dimension I mentioned for partitioning. And then typically you would draw some graphs over the last 30 days, or monthly, or whatever, and that sort of leads to the time dimension.
B
B
You
know
I
mean
if
we
like
I
would
wait
without
until
we
have
one
example
that
uses
time
partitioning
and
I
would
expect
that,
once
we
have
that,
it's
quite
quite
simple,
to
mix
that
in
with
the
same,
it's
the
same,
mechanics
that
you
have
when
you
partition
one
table
by
time.
It's
just
a
sign.
This
table
is
actually
a
petition
of
another
team.
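The "same mechanics" point above can be sketched as composite partitioning: hash by project ID at the top level, with each hash partition itself declared `PARTITION BY RANGE` on a timestamp. Every sub-table is simply declared as a partition of its parent, exactly as in single-level time partitioning. All names and the two-way modulus here are illustrative assumptions:

```ruby
# Sketch of composite (sub-)partitioning: hash on project_id outermost,
# then monthly range partitions inside one hash partition. `months` is a
# list of month-boundary date strings; names are hypothetical.
def composite_partition_ddl(table, months:)
  ddl = [
    "CREATE TABLE #{table} (project_id bigint, created_at timestamptz) " \
      "PARTITION BY HASH (project_id);",
    # A hash partition that is itself partitioned by time:
    "CREATE TABLE #{table}_0 PARTITION OF #{table} " \
      "FOR VALUES WITH (MODULUS 2, REMAINDER 0) PARTITION BY RANGE (created_at);"
  ]
  months.each_cons(2) do |from, to|
    ddl << "CREATE TABLE #{table}_0_#{from.delete('-')} PARTITION OF #{table}_0 " \
           "FOR VALUES FROM ('#{from}') TO ('#{to}');"
  end
  ddl
end
```

Each generated statement uses the same `PARTITION OF` declaration regardless of level, which is the observation being made in the discussion.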
B
That's something we're not so sure about. I mean, we're sort of trying to set the direction for that feature; at the same time, we don't want to invest much at this point. And we already know that if this feature is successful, we're going to see so much data that it probably has to go into a separate database, maybe even a separate database cluster.
B
Just thinking about the ingestion side: if you have a high rate of inserts, you don't want to do one-by-one inserts; rather, you want to put them on a queue. I've worked with Kafka before; in that scenario, you put the events on a Kafka queue and then insert them in certain batches into your data warehouse. That's entirely outside of the stack that we're in; we don't have anything like that currently.
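The batching idea can be sketched independently of Kafka. The sink below is just a callable standing in for a bulk INSERT/COPY into the warehouse, and the whole class is a hypothetical illustration of "buffer events, write in batches" rather than any real ingestion code:

```ruby
# Minimal event buffer: accumulate events and flush them to a sink in
# batches instead of issuing one insert per event. In a real pipeline the
# buffer would be a Kafka topic and the sink a bulk COPY/INSERT; both are
# stand-ins here.
class EventBuffer
  def initialize(batch_size:, &sink)
    @batch_size = batch_size
    @sink = sink
    @buffer = []
  end

  def push(event)
    @buffer << event
    flush if @buffer.size >= @batch_size
  end

  def flush
    return if @buffer.empty?
    @sink.call(@buffer)  # one bulk write per batch
    @buffer = []
  end
end
```

The trade-off mentioned in the discussion shows up directly: larger batches mean fewer writes but more delay before an event is queryable.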
B
There's a certain delay to that. Talking it through, we sort of went in the direction of thinking that perhaps there should be a data warehouse or analytics solution that owns all the raw data and provides access to aggregated views of it, the ones that you need for building those graphs in the product. The Rails application would then basically talk to that service and only get maybe the graphs, or only the aggregated data.
A
Sounds like a fun project. I find it interesting, though, how they'd form a team around this if it takes off. If it's successful, I'm wondering how that's going to work. I've seen a couple of different team formations since I've been here; I don't know if they'll take from existing teams or if they'll hand it off to a newly formed team. It'll just be interesting to see how that works when it takes off, I think.
A
I'm much more a fan of taking work to existing teams, because it's hard to form a team and then re-form a team later on, you know: take individuals from a bunch of different teams and throw them together to work on a project. So I'd rather see the work go to an existing team. It seems like you're right: Telemetry is probably the closest to it, or the best fit. But I don't know; I think they have a lot going on.
A
All right, so you're all caught up; you know what's going on. I think Yonyx was in the meeting. I have a lingering MR; I think I finished it up last night, which officially closes that one. I've already opened the template for the other one. I need to add a lot more content and detail in there, where it's no longer database-focused, which I think we're all very happy about.
A
Now we just need to deliver on partitioning, and I need to create that lexicon page that was asked for, since the terminology is still quite confusing for people that are involved. Every meeting had at least one clarification on what either partitioning or sharding or foreign data wrappers were. So we'll get that out sometime today.
B
So far, if we shipped right now, we don't have the automatic partition creation. If we don't manage to get that in for the current release, we would have shipped a release that, if you don't upgrade it later, breaks because it runs out of created partitions. That's something I want to avoid.

A
Oh yeah, that'd be bad.

B
So I would wait for the automatic partition creation.
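The failure mode described (an install that is never upgraded eventually runs out of pre-created partitions) is exactly what automatic partition creation avoids: a recurring job keeps creating time partitions a few months ahead of now. A rough sketch under assumed names; the table, the monthly granularity, and the three-month horizon are all illustrative:

```ruby
require "date"

# Given the partitions that already exist, return DDL for any monthly
# partitions needed to cover `months_ahead` months from `today`. Run
# periodically, this keeps a time-partitioned table from ever running
# out of future partitions.
def missing_partition_ddl(table, existing, today: Date.today, months_ahead: 3)
  (0..months_ahead).map do |offset|
    from = Date.new(today.year, today.month, 1) >> offset  # month start
    name = format("%s_%04d%02d", table, from.year, from.month)
    next if existing.include?(name)                        # already there
    to = from >> 1                                         # next month start
    "CREATE TABLE #{name} PARTITION OF #{table} " \
      "FOR VALUES FROM ('#{from}') TO ('#{to}');"
  end.compact
end
```

Because the job only emits DDL for partitions that do not exist yet, re-running it is idempotent, which is what makes it safe as a recurring task.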
C
We had talked a little bit, I think, about kicking off a migration on the lab setup that you have for benchmarking. So maybe that's something we can use once the background migration gets merged. I can look at running that, maybe this week. It'll take a while, though; we can speed up the interval in the jobs, I guess, but it will still take a little bit.
A
Yeah, whenever you see an issue get assigned to me, it's because they want to make sure it gets prioritized. That one came up in the formerly-known-as infrastructure and availability grooming meeting. People were concerned at the frequency of that query, but to Andreas's point and comments in there, it's not really known how much of a performance impact this has. We should certainly look into reducing the number of calls, but we don't really know how much this will directly impact the production database, though.
B
When I last looked at the SELECT 1 issue, it reminded me of a principle; I keep forgetting the name of it: you should only remove the fence once you know its purpose. There's a rule for that, a law or something like it. Anyway, I think this SELECT 1 comes from Rails directly, as part of the connection pooling, and there were a couple of suggestions, like caching it, or entirely removing it and rather retrying on failure.
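The "remove it and retry on failure" suggestion can be sketched as: instead of verifying a connection with SELECT 1 before every use, just run the query and, if the connection turns out to be dead, reconnect and retry once. Everything here is a hypothetical stand-in; `reconnect!`, the error class, and the helper are not the actual Rails pool API:

```ruby
# Stand-in for the error a dead connection would raise.
class StaleConnection < StandardError; end

# Run the block; if the connection proves stale, reconnect and retry once
# instead of pre-checking liveness with SELECT 1 before every query.
def with_retry(conn, retries: 1)
  attempts = 0
  begin
    yield conn
  rescue StaleConnection
    attempts += 1
    raise if attempts > retries
    conn.reconnect!  # replace the dead connection, then re-run the query
    retry
  end
end
```

The design trade-off matches the discussion: the liveness round-trip disappears from the common path, at the cost of retry logic and the requirement that the retried query be safe to re-run.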
A
Yeah, there were some interesting suggestions in there that could have some broad impacts; it will be interesting to see what the final solution is. And Pat did a good job of calling out where we clear all the connections. Is that once an hour? Yes. Then we'd have to reach into this: we end up caching all these connections.
A
We'd have to reach into that and invalidate all those cached connections, so it could get quite complicated quite quickly. I mean, if it's just annoying that we call it that many times, but it's really not having an impact, then we just note that: yeah, this is annoying, but it's really not a performance hit, and it could be more problematic to try and implement something to fix this annoying problem.
B
I was wondering myself whether PgBouncer would be able to catch that for you. When you rely on SELECT 1, you want to detect if the connection is alive; if you could catch that on PgBouncer and not hit the database with it, that would already be an improvement, I guess.