From YouTube: 2022-10-27 #1 Code Review Performance Round Table
A: And we're live. All right, welcome to our first Performance Roundtable of the Code Review group. The idea of a roundtable is that we're all equals, so anybody can bring up topics. I'm just taking the initiative of doing the first introduction call, but apart from that it's our thing, so anybody can lead the call. So welcome.
A: We're looking to have an ongoing conversation about what we can do to improve the performance of the Code Review area, and to discuss potentially crazy ideas and not-so-crazy ideas. Everything related to performance is welcome here, and so is everybody. Next week will be at a different time that's more welcoming for APAC time zones, so try to make it there as well. I'll try to be there too. And there's an issue each week.
A: The idea is to start conversations throughout the week. You can ask questions and discuss things asynchronously on the issue, and then here we have an opportunity to discuss things a little more synchronously, but the issues are there for discussion as well, so we'll see how that goes. I'll be opening a new issue after this call, and that will capture the discussions for the next week. So I'll get straight to business.
A: The first topic that was shared was actually mine. But just before I go to the topic: any questions? Anything that's not clear about the call that should be made clearer?
A: Great. So, the topic. In the issue I shared some links for the proposal and the vision that Stanislav has presented, and there's also a recording of his talk.
A: We'll be considering and investigating the feasibility of that, which is basically changing the way we're dealing with rendering front-end components. We have our own things to sort out on the front end: the components need to be written in a certain way, and we need to handle the server-side rendering inside a worker so we don't have to run a Node server.
A: All of that is front-end scope, but then the question became: what about the back end? How does this different approach affect the back end? Namely, we're getting information from Gitaly, we're sometimes storing it in databases, and then we're delivering it to the page in batches.
A: We use batches to deliver the actual lines that were changed. Now we're looking to move to a model where we're no longer paginating things; we're just constantly getting a stream of information from the back end and rendering as we get it. And, for the benefit of those watching the recording, this is in the discussion here as well.
A: When we're talking about client rendering, this is what we're talking about, which is similar to what we have with the batched diffs: we get the information from the back end and then we render it. We do have a small head start now because we make batched requests, so we start rendering in batches, but it still takes a while because we still have to compute the data.
A: How does this work when we have to get information from Gitaly and there's time involved in those requests? How do we see that changing the way we calculate diffs? I'll leave it at that for now, and then we can have a circular discussion. Does anybody have any thoughts on how we can improve that for this constant streaming of information to the front end?
B: I know we talked earlier about how I'm not up to date on what Stanislav has been doing. Do you know if that particular demo is using WebSockets or some other streaming mechanism?
A: I don't know off the top of my head. The one thing I was going to do was actually ask that question of the front end, which is coming up: what's the vehicle? How are we delivering it? Is it GraphQL? Is it GraphQL subscriptions with WebSockets? That's the question I still have to clarify a little bit in my mind, so I don't know the straight answer. I'm basically inspecting the page right now.
B: Yeah, it's just curiosity. I'm not sure the protocol will affect the data format; I'm just interested in how it's being done right now.
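Whatever the transport turns out to be, the consumer side of "render as we get it" could look roughly like this. A minimal sketch, assuming a hypothetical endpoint that streams diff files as newline-delimited JSON; the endpoint path and payload shape are made up for illustration, not the actual GitLab API:

```typescript
// Consume a hypothetical NDJSON stream of diff files and render each one as
// soon as it arrives, instead of waiting for a full paginated batch.
type DiffFile = { path: string; hunks: string[] };

async function streamDiffs(mrUrl: string, render: (f: DiffFile) => void): Promise<void> {
  const response = await fetch(`${mrUrl}/diffs_stream.ndjson`); // hypothetical endpoint
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffered = '';

  for (;;) {
    const { done, value } = await reader.read();
    if (done || !value) break;
    buffered += decoder.decode(value, { stream: true });
    // Each complete line is one JSON-encoded diff file.
    const lines = buffered.split('\n');
    buffered = lines.pop() ?? ''; // keep any partial trailing line
    for (const line of lines) {
      if (line.trim()) render(JSON.parse(line) as DiffFile);
    }
  }
}
```

The same consumer shape would work over a WebSocket or a GraphQL subscription; the open question above is which transport the demo actually uses.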
A: Sure, so I'll add a more concrete question then. In the past you, Kerry, suggested pre-computation of the things we need to render the merge request. I don't know where that discussion led. The way I see it, right now we live in a time where, when we need to render a merge request, we get information from Gitaly, and correct me if I'm wrong.
A: Sometimes we capture that information in the database and then we serve it to the front end. Is that accurate? Somewhat accurate? Okay, right. So then the question becomes: how can we do this in a way where we don't always spend the same kind of effort pulling things from Gitaly and putting them in the database? I'm guessing that the database acts as sort of a caching layer?
A: So we don't actually have to go to Gitaly all the time. But what that means to me is that Gitaly doesn't stream data for us, right? We would have to do RPC calls to Gitaly to get the information, and that's going to be either paginated at best or in bulk, where we get all the information for the merge request. Do we get everything at once from Gitaly, like the whole diff?
C: We get it in one. It's been a while, but I believe what we're doing is we make a request, we get all that information back, and then we do the pagination ourselves.
A: Okay, yeah, that was my idea. So the question then becomes... the way I see it, and this is a topic for many other things, there's a concept that Tim is proposing on the back end, called One App; I don't know how visible this is to all of you. I'll put it in the document as well.
A: I'll share it in Slack just for convenience: One App. In that epic we're describing a slightly different architecture, which will involve a lot more work on the front end, like pre-caching the Vue apps and even potentially using IndexedDB for client-side caching.
A: Thomas has done some investigation into that in the past, and the idea there would be to make the UI a lot more snappy. But we still have the problem that we're going to need the data, the actual meat of the page, and it's always going to be a problem to get that to the front end fast enough so we can start rendering as soon as we get it.
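As a rough illustration of the IndexedDB idea mentioned above, a minimal client-side cache sketch; the database and store names here are made up for the example, not anything GitLab uses:

```typescript
// Tiny promise wrapper around IndexedDB for caching MR payloads client-side.
function openCache(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('mr-cache', 1); // hypothetical database name
    req.onupgradeneeded = () => req.result.createObjectStore('payloads');
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function cachePut(key: string, value: unknown): Promise<void> {
  const db = await openCache();
  return new Promise((resolve, reject) => {
    const tx = db.transaction('payloads', 'readwrite');
    tx.objectStore('payloads').put(value, key);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

async function cacheGet<T>(key: string): Promise<T | undefined> {
  const db = await openCache();
  return new Promise((resolve, reject) => {
    const req = db.transaction('payloads').objectStore('payloads').get(key);
    req.onsuccess = () => resolve(req.result as T | undefined);
    req.onerror = () => reject(req.error);
  });
}
```

Even with this in place, the first visit still has to wait for the real data, which is exactly the problem being discussed.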
A: This might be related to your second topic for today, which is that the database of the diffs is growing and growing and growing, so I'm guessing that's a problem. But is there any other way we've talked about in the past to deliver all of this information faster, in a more continuous, on-demand fashion? To have it ready, even if that needs some pre-computation somehow?
C: Yeah, the only conversations I've had are around that idea: what do we spend time doing? What's the problem? We want to be faster, but where are we spending time? What's making us slow? And the slowness comes from the accumulation of everything, right? It's like: oh, I don't have the thing, I have to go to Gitaly.
C: Oh, now I've got to store it in the database again; now I'm going to format it and munge it. Oh, I've got to get some more data to glom onto this response so we can ship it up. That was the idea behind what I proposed a year or so ago: we know, or we assume, that somebody's going to be coming in and looking at the page, and what we have to generate isn't changing, so we don't need up-to-the-minute computation.
C: We just need up-to-the-commit computation, right? So just generate that ahead of time, and then it's just a matter of fetching it from wherever we're storing it and shoving it up the pipe. That cuts out all of that computational step on the back end, like having to go to Gitaly if we have to. It cuts that variability down, and ideally it cuts out some part of the computational step.
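A sketch of that ahead-of-time idea under stated assumptions: a push event triggers computation of the diff payload, keyed by commit SHA, so later page loads are plain fetches. Every name here (the store interface, the Gitaly-facing helper, the event shape) is hypothetical:

```typescript
// Hypothetical precomputation: on each push, build the diff payload once and
// store it under a key derived from the head commit, so rendering a merge
// request later never has to wait on diff computation.
interface PayloadStore {
  get(key: string): Promise<string | null>;
  put(key: string, json: string): Promise<void>;
}

declare function computeDiffPayload(projectId: number, sha: string): Promise<object>; // would call Gitaly (assumed)

async function onPush(store: PayloadStore, projectId: number, headSha: string): Promise<void> {
  const key = `mr-diff:${projectId}:${headSha}`;
  if (await store.get(key)) return; // already computed for this commit
  const payload = await computeDiffPayload(projectId, headSha);
  await store.put(key, JSON.stringify(payload));
}

// Serving path: a plain fetch, falling back to computing on a cache miss.
async function getDiffPayload(store: PayloadStore, projectId: number, headSha: string): Promise<object> {
  const key = `mr-diff:${projectId}:${headSha}`;
  const cached = await store.get(key);
  if (cached) return JSON.parse(cached);
  const payload = await computeDiffPayload(projectId, headSha);
  await store.put(key, JSON.stringify(payload));
  return payload;
}
```

Because the payload is immutable per commit, invalidation is not a correctness question here, only a storage-cost one, which is where the TTL discussion below comes in.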
C: So yeah, that's about as far as that particular thread ever got. I think we've also talked about some other things that are more around what the front end requests, and whether we could break up the requests so that we're spreading the work around and not computing as much in a single request. But that's as far as I've ever been involved in conversations.
A: Yes, small payloads would definitely be useful. You said something very interesting there. There's a bunch of data we need to generate a merge request: the diffs data and the diff files data, which changes for each version, which changes on each commit, or rather on each push. Until there's another push, it could be minutes, it could be years, right? So that timeline is definitely long from a performance perspective. Depending on how long it would take to calculate this blob of data, which is mostly stationary, I'm guessing we could prepare a little package of JSON and keep it in static storage somewhere. The question is: what's the TTL? Should we expire it? That sort of thing.
C: This is not an up-to-date number, but I was told that somewhere on the order of 30 to 35 percent of Redis is just our stuff alone, because we're already putting a lot of diff data in there so we can avoid the database. We're already putting that into Redis. But one of the slow things is that, okay...
C: Well, it's all in Redis, so we're cutting out the database, but now we're basically pushing the burden off of the database into Redis. We still have to get all of that out of Redis, compile it into a response, and push it back in one big chunk.
A: Do you know what we're storing in Redis? Is it a data structure, or is it a JSON string that we spit out?
A: I'm going to start listing loose questions: what data do we store in Redis, and is it structured or just a plain JSON string? This is for us to investigate later; I'll post it in the issue so we can spend some minutes on it later. So no worries if you don't know off the top of your heads. I need to think about the second one. Does anybody have any questions or comments on that?
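For the structured-versus-string question, the trade-off looks roughly like this. A sketch using the node-redis client with made-up keys, assuming nothing about how GitLab actually lays this out:

```typescript
import { createClient } from 'redis';

// Option 1: one opaque JSON string per MR. Cheap to write and to ship to the
// client wholesale, but any read pays for deserializing the entire blob.
// Option 2: a hash with one field per file. Individual files can be read or
// updated without touching the rest, at the cost of more bookkeeping.
async function demo() {
  const redis = createClient();
  await redis.connect();

  const diffs = { 'app.ts': '+1 line', 'README.md': '-2 lines' }; // toy payload

  // Option 1: plain JSON string, with a TTL (here: one hour).
  await redis.set('mr:123:diffs', JSON.stringify(diffs), { EX: 3600 });

  // Option 2: hash keyed by file path; a single file can be fetched cheaply.
  for (const [path, body] of Object.entries(diffs)) {
    await redis.hSet('mr:123:diffs:by-file', path, body);
  }
  const oneFile = await redis.hGet('mr:123:diffs:by-file', 'app.ts');
  console.log(oneFile);

  await redis.quit();
}
```

Which shape is better depends on whether the serving path ever needs less than the whole payload, which ties back to the smaller-payloads point above.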
B: For a while I've been trying to, well, not so much trying to understand as trying to promote, the idea of deltas in merge requests: going from whole document bodies to just the diffs, just what's changed since the last time you loaded it, so you can apply those differences. Listening to you, Kerry, it almost sounds like that wouldn't really help us get faster, because it doesn't alleviate the computational load at all, which is the problem. But I wonder, in your opinion, could we leverage the concept of a progressive timeline of deltas to improve how much we store in Redis, how quickly it can be computed, how quickly we can get it down the pipe, and maybe improve the speed of our rendering on the front end at some point? Would that be useful, or would that just be transferring the computational load to a different job?
C: Yeah, it transfers it, but that's not necessarily bad, right? We have to do a certain amount of work, and part of performance, as I'm sure you know, is the magician's trick in some ways: where are you going to do the work, and when and where is it perceived by the user?
C: I don't think you and I have talked about this delta idea, but that's interesting. I'd have to think about that more and then talk about it a bit.
B: Sorry, I'm just going to leave one final comment. In thinking about this over the past few months: deltas are kind of antithetical, I think, to Git. Git stores a snapshot of all of the stuff, so Gitaly says, "here's how things are right now", I assume, and it seems like it's extra work to ask: okay, what's the difference from the last version of this file, or from the last time you took a snapshot? But I don't really know enough about Git to say that's for sure the case. I'm just concerned that deltas would be more work than just using Git directly. But it could be useful.
A: Yeah, I remember discussing this in the past, and we were envisioning a client-side, very verbose cache. Think of the If-Modified-Since concept, where you come to the page and say: hey, here's the timestamp of the data I have in my client-side storage, do you have anything for me? And if you do, give me the stuff that changed; don't give me everything all over again, right?
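That If-Modified-Since flow, sketched minimally; the URL is hypothetical, but the 304 semantics are standard HTTP:

```typescript
// Ask the server for data only if it changed since what we have cached.
// `lastModified` would come from the previous response's Last-Modified header.
async function fetchIfChanged(url: string, lastModified: string | null): Promise<Response | null> {
  const headers: Record<string, string> = {};
  if (lastModified) headers['If-Modified-Since'] = lastModified;

  const response = await fetch(url, { headers });
  if (response.status === 304) return null; // nothing changed; reuse the local cache
  return response; // fresh data; caller re-renders and stores the new Last-Modified
}
```

The catch raised just below is that a new push can change anything from one line to the whole tree, so "what changed" can be arbitrarily large.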
A: It's the whole eternal five-gigabyte download of game updates, right? I don't want the whole game, I just want the button that changed. But the thing with Git is that a new commit can add one line, or it can remove all the files and add new ones, right? There could be drastic changes between those two moments. So yeah, I don't know how these things will come into play, but I wanted to share something with you all, because this was discussed yesterday on the weekly call.
A: Some of you might not have seen this. This is the effort UX is doing to restructure the MR, and again, this is very early, so it might change completely. The idea is to go from the current concept of the MR to one where the overview has the activities, the system notes. I don't think it's showing the entire thing, but it's the system notes in one tab, then you have the comments, and the comments open up in sort of a little sidebar.
A: If you clicked on Kushal's name, you would filter by the comments from Kushal, and all that stuff. So the way I was thinking about it: if you have all this information cached on the client side, we could then ask the back end, hey, what new comments do you have since this moment? We'd download them to the local cache, render them in the UI, and mark them as new or something.
A: Showing the new comments since you came back could be very useful for the user experience. So even though this might not be useful for the diff-lines part of the data, it could be useful for something like this, if we end up going with client-side caching of the comments a little more aggressively.
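A sketch of that comments flow, on top of a local cache like the IndexedDB one earlier. The `updated_after` query parameter and the note shape are assumptions for illustration:

```typescript
// Pull only the comments created since our last visit and merge them into the
// local cache, flagging them so the UI can mark them as new.
type Note = { id: number; body: string; createdAt: string; isNew?: boolean };

async function syncComments(mrUrl: string, cached: Note[]): Promise<Note[]> {
  // Latest timestamp we already have; epoch if the cache is empty.
  const since = cached.reduce(
    (max, n) => (n.createdAt > max ? n.createdAt : max),
    '1970-01-01T00:00:00Z',
  );

  const res = await fetch(`${mrUrl}/notes?updated_after=${encodeURIComponent(since)}`); // hypothetical endpoint
  const fresh: Note[] = await res.json();

  const known = new Set(cached.map((n) => n.id));
  return [
    ...cached,
    ...fresh.filter((n) => !known.has(n.id)).map((n) => ({ ...n, isNew: true })),
  ]; // caller persists this back to the local cache and re-renders
}
```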
A: Cool. A while ago we had a crazy idea, I don't know if you remember this: we talked about what if we just grab the actual git output of the changes. We do a git diff and we get the whole output from git; what if we dumped it on the front end and the front end dealt with it? We mused on that for a while. It would be fast delivery, but then we'd...
C: ...think about how much we send back and forth. I was thinking about an issue we had recently that was like: what's this old line, new line? What does it mean when the new line is nil? All that kind of stuff. And it all comes down to: how do we do the formatting on the back end for the front end?
C: "Oh, you deal with it over here," you know. Would it be faster if you could access git directly, without us in the way? Yeah, but then I'm like, well, wait a minute, then what would we do?
B: Okay, in one regard, to me personally, the front end should be all display, and the back end should be all persistence and authorization and all that stuff. So if we all just agree that we're going to use the raw git output, then we all agree on what lines are what: line one is line one.
B: We all understand that, and then the front end can decide how to display it: whether it stays raw because it's too big to stylize or format, or whether every time we scroll we start formatting the next git diff in line. Then, as people leave comments, we just tell the back end: hey, on line seven of the diff they left a comment, and it turns out that line was a removed line, not an added line, or whatever.
B: We'd all just agree on that. I don't know, it could work. I think the issue is that formatting or styling a bunch of diffs on the front end could actually be pretty expensive, and we would have to be careful about how much we do that. But that would be my comment.
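If both sides agreed on the raw git output as the shared coordinate system, as proposed above, the front end would need to map raw diff lines to old/new line numbers itself. A minimal sketch of that bookkeeping, as an illustration rather than how GitLab's diff rendering actually works:

```typescript
// Walk a unified diff and assign old/new line numbers to each line, so a
// comment anchored at "diff line 7" can be resolved to added/removed/context.
type DiffLine = {
  text: string;
  kind: 'add' | 'remove' | 'context';
  oldLine: number | null;
  newLine: number | null;
};

function annotate(rawDiff: string): DiffLine[] {
  const out: DiffLine[] = [];
  let oldN = 0;
  let newN = 0;

  for (const text of rawDiff.split('\n')) {
    const hunk = /^@@ -(\d+)(?:,\d+)? \+(\d+)(?:,\d+)? @@/.exec(text);
    if (hunk) {
      // A hunk header resets both line counters.
      oldN = parseInt(hunk[1], 10);
      newN = parseInt(hunk[2], 10);
      continue;
    }
    // Skip file-level headers so '---'/'+++' aren't mistaken for changes.
    if (/^(diff |index |--- |\+\+\+ )/.test(text)) continue;

    if (text.startsWith('+')) {
      out.push({ text, kind: 'add', oldLine: null, newLine: newN++ });
    } else if (text.startsWith('-')) {
      out.push({ text, kind: 'remove', oldLine: oldN++, newLine: null });
    } else {
      out.push({ text, kind: 'context', oldLine: oldN++, newLine: newN++ });
    }
  }
  return out;
}
```

Here a removed line naturally has `newLine: null`, which makes the old-line/new-line/nil question mentioned earlier explicit.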
A: Yeah, it could work. Though on the front end I think we're trying to do less, so that we can do more, faster. There is a possibility of moving all of those calculations to a worker that we could use as a sort of in-browser back end to compute all of that. But then we wouldn't be able to provide that as an API.
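A rough sketch of that in-browser back end idea: the expensive formatting runs in a Web Worker so the main thread stays responsive. File names are illustrative, `renderDiff` is assumed app code, and `annotate` is the parser sketched above:

```typescript
// diff-worker.ts: runs off the main thread and does the expensive formatting.
// Assumes the `annotate` function from the earlier sketch is bundled in here.
self.onmessage = (event: MessageEvent<{ id: number; rawDiff: string }>) => {
  const { id, rawDiff } = event.data;
  const lines = annotate(rawDiff); // the expensive part, off the main thread
  postMessage({ id, lines });
};

// main.ts: hand raw diffs to the worker and render results as they come back.
declare function renderDiff(id: number, lines: unknown): void; // app rendering code (assumed)

const worker = new Worker(new URL('./diff-worker.ts', import.meta.url), { type: 'module' });
worker.onmessage = (event) => renderDiff(event.data.id, event.data.lines);
worker.postMessage({ id: 1, rawDiff: '@@ -1,2 +1,2 @@\n-old line\n+new line\n context' });
```

The trade-off stands as noted: anything computed only inside the browser can't be offered to other consumers as a server-side API.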
A: That would be the first immediate downside, and the front end would still be doing a lot more than it needs to; we need to be much leaner on the front end. All right, I think we've reached the point where we need to investigate this a bit more, so I'm going to table it for a little while. We have four minutes to talk about the second topic, which Kerry brought up on the issue: the database.
A: We can still carry the first topic on in the new issue; I'm going to add a new entry there, and we can continue the discussion and investigate what data structures we have in Redis and all that. So it's not dead or closed; we're going to continue it. Okay, do you want to talk a little bit about that issue and how it would help performance? What would be helpful, what metric would it impact, what would be faster?
C: Sure. Well, this isn't performance measured from the user's perspective, necessarily, but from a system perspective: the never-ending growth of these tables, specifically merge_request_diffs and merge_request_diff_files. Refresh my memory here... yes, diff commits and diff files. Between them, on SaaS, they're around 15 gigabytes, and they grow on the order of two to three gigabytes.
C: Excuse me, 15 billion rows, so we're talking terabytes, over three terabytes of data. We're huge consumers of the SaaS database, and we persist these records for eternity when, to my mind, there's really no need to in a lot of ways. I have not measured this, I don't have metrics on it, but my gut instinct is: after a couple of months, who looks at a merged MR?
C: Right, it's merged, time has moved on. Some individual might come back and look at it for research purposes, or to get some context, or to find out who touched it last, or something like that. But in general, why do we keep 15 billion of these records when we probably only need a few million, I would say a billion at most?
C: From a database performance standpoint, it also makes these tables just massive. If you want to do any kind of work with these tables, it takes multiple releases of effort building background workers and background migration processes.
C: Those don't even complete within a month, which makes it very complicated. And because these are records we can easily regenerate, by making a Gitaly call to say, hey Gitaly, give me that data again, I want to rebuild these rows, I wanted to explore finding a way to just throw that data away, since it's regenerable, and let Gitaly handle the long-term storage thereof.
C: I guess the issue then is: well, if someone blows away their repository, then Gitaly doesn't necessarily have that information anymore. But no one really does that, per se.
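A sketch of that throw-away-and-regenerate idea under the stated assumptions: prune diff rows for long-merged MRs in small batches, and rebuild from Gitaly on demand if anyone comes back. The SQL below is a simplified assumption (for instance, a `merged_at` column directly on `merge_requests`), not GitLab's real schema, and `runSql` is a made-up helper:

```typescript
// Hypothetical scheduled cleanup: delete regenerable diff-file rows for MRs
// that merged more than `retentionDays` ago, batched to keep DB load flat.
declare function runSql(query: string, params: unknown[]): Promise<number>; // returns affected-row count (assumed helper)

async function pruneOldDiffRows(retentionDays: number, batchSize = 1000): Promise<void> {
  for (;;) {
    const deleted = await runSql(
      `DELETE FROM merge_request_diff_files
        WHERE merge_request_diff_id IN (
              SELECT d.id
                FROM merge_request_diffs d
                JOIN merge_requests mr ON mr.id = d.merge_request_id
               WHERE mr.state = 'merged'
                 AND mr.merged_at < now() - make_interval(days => $1)
                 AND EXISTS (SELECT 1 FROM merge_request_diff_files f
                              WHERE f.merge_request_diff_id = d.id)
               LIMIT $2)`,
      [retentionDays, batchSize],
    );
    if (deleted === 0) break; // nothing old enough remains; wait for next run
  }
}
```

On a cache miss after pruning, the serving path would fall back to a Gitaly regeneration like the fetch-or-compute sketch earlier; the open questions raised below, compliance and other consumers of these rows, are exactly why this stays a sketch.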
C: What I believe is that all the changes you make in a Git repository stick around forever in the Git log, even if you get rid of those commits and branches and whatnot; there are still fingerprints of that in the log files. But I don't know how much Gitaly is set up to dig through that to do a recreation. Specifically, what I'm thinking is: we have a merge request, it gets merged, that branch gets deleted.
C: All those commits are gone. Can we regenerate those diff files? Can we regenerate all of the information that went into building that? It seems like we can; from what I've experimented with locally, we're able to do that.
C: But we need to have some discussions with Gitaly, to understand what we might be asking of them. And there's some metric gathering that needs to happen here first, to understand the actual usage behavior of these older MRs: where's the cliff? This gets into, like, accidental user research. What is the interaction behavior with a merge request? What's its life cycle really like?
B: I've heard a very interesting thing just now in this conversation. Are you saying that these 15 billion rows... do we use those on a moderately regular basis, to restore deleted things or as a kind of backup? Do we do that, or is that just a potential use case for that data?
B: Well, I'm only asking because it sounds like you're implying that if we deleted all these records and someone deleted their actual Git store, then we wouldn't have the ability to restore their Git history, because it's gone from both places. That implies we're using it now to restore things that Git itself doesn't remember, or something. But maybe I'm mishearing or misunderstanding that.
C: No, no, but that is interesting. The records we're saving in the database, these merge_request_diff_commits and merge_request_diff_files, are just our massaging of the Git data to attach metadata and other representation-specific things to it. You kind of could recreate things from it, because it is sort of a record of the changes as they've come into the application.
C: So you could use it like that, but we don't. It's not meant to do any sort of recovery.
B: Okay, so then I still need to clarify my understanding of what we're talking about here. What is the utility of these 15 billion rows in today's world, and why would it be a problem to just wipe them out?
C: Time, right. It's as slow as the first time you access it. What it does, though, is it allows us to not pummel Gitaly every single time somebody wants to look at, you know, commit XYZ, because we've already got that data. It's in Postgres, and Postgres is fine serving a bajillion records per second, whereas we don't want to hit Gitaly a hundred thousand times per second for the same exact request; they would get cranky at us. So we just get it from Postgres.
A: Yeah, and just because we're not doing much with it doesn't mean there aren't other parts of GitLab using these records. I'm not talking about running CI jobs on merged merge requests, but for compliance reasons there might be some bits. I think it's important to understand exactly what is lost if we delete this, so we need to have full trust, full confidence, in that answer.
A: I feel like there's definitely a spike to be made there, if not for the user research then at least for that particular understanding. I think it could be important for us to drill deeper there before we go blasting records into oblivion. But thanks for bringing it up. Do you feel like we're ready to start investigating this? And one question: I'm guessing that the impact of this work would be faster lookups, or...
C: Yes, although we do really, really well with indexing on these tables, so you don't even notice the delays. We might see a small performance improvement, because, again, you're looking over 10 billion records, so I would assume it would be a little bit faster. That said, we've really honed our performance around these tables, so we are able to touch that data quickly. Really, it's development efficiency as well as overall application health, in that this has been raised as a concern.
C: Hey, these tables grow at a ridiculous rate. And this 10-billion-records figure was as of May, so in the last five months I'm sure it has increased by some amount. It's been flagged to us as a long-term issue around application health that we would like to look into.
A: Yeah, got it. I'm calling that stability/availability improvements, because that's application health, yeah. All right, we'll flag this as potentially something for Matt to look into. Sure, let's do that. Thomas: truncate everything! We're over time; sorry for going a little bit over. Do we have any last-minute comments, thoughts, or questions? All right, then I'll just ask you: was this call valuable to you at all? Drop your answer in the agenda, please.
A: That would be useful for us to assess. I'll create an issue for the new week; if you can be there, great, and if not, discuss asynchronously on the issue and we'll carry on from there. Okay, thank you for coming. It's been great having you, and I had fun, so let's keep having these; they're there exactly so we keep talking about this as we go. Thanks for making the time.