From YouTube: 2020 07 27 Memory Team Weekly
A
All right, welcome to the weekly Memory team meeting. Top of the list, I just have follow-up items from last week. I noticed, Josh, you removed the release post item for blob controller, thank you. And then the follow-up items there, let me remember this one.
A
So,
with
a
request
from
kathleen's
and
look
at
it,
we
have
a
separate
issue
for
blob
controller.
The
issue
is
reopen
the
art,
the
import
project.
Some
reason
I
could
never
transfer
it.
I
don't
know
what
it
was.
I
created
a
bug
for
it.
Ultimately,
the
import
team
ended
up
exporting
and
importing
and
they
have
a
working.
A
I
archive
the
project
and
then
I
reached
out
to
our
internal
data
team
about
size,
sense,
licensing
and
yes,
it's
a
perceived
license
so
they're
trying
to
limit
as
much
as
they
can
so
we'll
stick
with
me
and
shinyu
for
editing
those
for
now.
If
it
becomes
a
bottleneck,
we
can
expand
access
to.
Since
we
need
to.
A
Great. The retro is due this week. The last couple of retros, we've waited right up until the last minute to get items in, and we can't do that.
A
There's a new format: Jerome is going to take over the format this week. What he's going to do is record a quick video summary of the highlights, which I think will be a five-minute overview of the overall retro, and then he is going to select two topics for discussion. So instead of it being a long readout, which it has been for the last several retros, it's going to be more of a question-and-answer discussion session.
A
The timeline is: we need to get all our retro feedback in by Friday, sorry, by Thursday would be ideal. Then on Friday I can summarize it and add it to the retro doc, then Jerome will format it, and we'll have the new-format discussion on the following Tuesday, I believe. So take a look; the timeline and new format are described on that issue.
A
This is so that upper management can go through each department and see what OKRs are enabled by, or created by, each sub-department. Within the tracking issue for each objective we have three categories: IACV, product, and team. We create an epic that describes what our sub-department's OKRs are, so the Memory group's OKRs, and then for each epic we create an issue to track those OKRs.
A
So
it's
all
manual
process.
I
have
it
all
stubbed
out
at
least
here
for
this
one.
I
ended
up
just
linking
to
the
issues
that
I
created
within
the
memory
group
project,
which
should
be
fine.
Camille
will
eventually
create
his
for
q3
and
the
reason
why
they
do
this
so
for
each
of
these
epics
that
I
have
created
chun
has
a
similar
category.
Epic
for
product
iacb
and
team
and
christopher
has
one
and
eric
has
one
and
sid
has
one
so
at
the
very
top.
A
As for the content, please take a look if you have any questions, any feedback, or any ideas on other OKRs we should track this quarter. This should be an interactive process; it shouldn't be me just handing it down. The say/do ratio might be relatively new to folks: it's the Deliverable label that we've been using, and they're actually starting to track that now, so they'd like to see it. The overall goal is to get us above 70% as a team.
A
I've
been
mostly
managing
that,
but
it's
something
we
should
probably
do
as
a
team
process
so
at
or
near
the
kickoff,
for
each
milestone
run
through
the
issues
that
we
think
we
can
deliver
within
the
milestone
and
mark
those
accordingly
and
then
again
as
a
team,
we
can
figure
out.
What
do
we
want
to
do
with
the
issues
that
we're
bringing
in
during
the
milestone,
whether
we
want
to
mark
them,
or
if
it's
a
late,
breaking
issue
late
in
the
milestone?
A
Really loud neighbors going on there. Okay, so, image resizing. I've been chatting with Camille about image resizing and looking through the issues, trying to figure out what goal we are trying to accomplish for this milestone. Matthias, do you want to verbalize your feedback here?
B
Yeah, I mean, it's a fair question, of course, because we're still pretty early on in this, and it still feels very experimental right now. I would say it's still pretty early in the current milestone, and maybe that's just wishful thinking, but we have one and a half PoCs already, so we have identified solutions, right. I would even say we have identified too many.
B
Maybe
so
maybe
one
goal
could
be
to
cut
this
down,
to
like
three
approaches
that
we
really
seriously
want
to
look
into
and
consider
create
the
pocs
for
them
and
yeah.
Take
one
of
them
get
at
least
like
yeah,
make
a
decision
and
at
least
get
one
of
them
close
to
being
production
already
so,
and
the
problem
with
this
is
yeah.
B
We went with the simplest one we could think of, one that would fulfill the MVC.
B
So
so
that
said,
if
we,
if
we
say
if
we,
if
we're
gonna,
pass
on
that,
so
the
next
one
will
take
more
time
to
to
do
both
as
apoc
and
also
if
we
decide
to
do
that,
to
make
it
production
right.
So
I
guess
it
will
get
increasingly
more
difficult.
B
The
sooner
we
discard
the
simple
stuff
which
is
yeah,
it
sounds
like
there's
always
been
a
bunch
of
concerns
being
floated
about
what
we
have
done
so
far.
I
don't
I'm
not
sure
I
agree
with
all
of
them.
That's
also
why
I
wanted
to
talk
to
camille
again,
because
I
still
think
it's
a
valid
first
iteration
and
we
made
it
simple,
very
conscious
decision
to
make
it
simple
but
yeah.
So
so
I
think
so
to
answer
a
question.
B
I would love to see two to three PoCs implemented, of course being nowhere near production-ready, but with enough data in place to compare them, so that we get a rough idea of whether it even makes sense to build that. You know, is it definitely too slow, or unsuitable for this or that reason, does it not fit our existing architecture, stuff like this. And for this we just need to build more stuff, and it's been slow because, yeah...
B
...that gave us a nice initial kickstart, but it's been mostly Alexei and I, and he's off right now. I've been splitting my time between this and telemetry, and we're both still learning Go and working with Workhorse. So it's not been fast.
B
That came up in the open office hours session that we had, where we had a guest from the static site team. Yes, exactly, so basically this third point there. He said what we could do is look into optimizing images as they are being uploaded, because there are tools that can compress or optimize images. It would be a lossy compression; it's not resizing, right.
B
The
size
remains
the
same
the
same,
but
it
is
a
form
of
shrinking
in
terms
of
the
image
size
which
is
lost
with
imperceptible
kind
of
like
what
mp3
does
to
audio
right.
So
so,
if
you
look
at,
if
you
look
at
these
images,
it's
like
a
normal
human
being,
you
would
very
likely
not
be
able
to
tell
them
apart,
but
they
can
have
pretty
dramatic
positive
impact
on
size.
B
So
I
thought
that
was
a
pretty
good
lead
and
the
nice
thing
is
that
would
be
totally
paralyzable
to
the
ongoing
work,
because
that
touches
on
the
totally
different
end
of
that
project,
because
we
are
like
alex-
and
I
have
been
very
focused
on
the
serving
side-
and
this
would
probably
happen
as
you
upload
images.
So
we
would
just
do
that.
So
the
user
gives
us
a
five
megabyte
image
and
we
turn
it
into
a
one
megabyte
image
without
them,
probably
ever
known
that's
kind
of
the
idea
behind
it.
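A minimal sketch of that upload-time idea in Go (the language Workhorse is written in): re-encode the uploaded JPEG at a lower quality, leaving the pixel dimensions untouched. The function name, quality value, and file name are illustrative assumptions, not anything from the actual codebase; re-encoding with Go's encoder also drops EXIF metadata as a side effect.

```go
package main

import (
	"bytes"
	"image/jpeg"
	"io"
	"log"
	"os"
)

// optimizeJPEG re-encodes a JPEG at the given lossy quality (1-100).
// Dimensions are unchanged; only the file size shrinks, and any
// metadata (EXIF etc.) is dropped by the re-encode.
func optimizeJPEG(r io.Reader, quality int) ([]byte, error) {
	img, err := jpeg.Decode(r) // decode the uploaded image
	if err != nil {
		return nil, err
	}
	var buf bytes.Buffer
	if err := jpeg.Encode(&buf, img, &jpeg.Options{Quality: quality}); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	f, err := os.Open("upload.jpg") // hypothetical uploaded file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	out, err := optimizeJPEG(f, 75)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("optimized size: %d bytes", len(out))
}
```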
B
So that's the one thing I can think of. I would love to hear more ideas about other things we could do.
B
Size-wise, some of them were pretty dramatic, but I think their showcases were for very large images, stuff like images that come out of a digital camera, where there's a bunch of extra metadata attached that we might not even need. So some of the gains came just from stripping out metadata. For the MVC, I don't know how much that would yield, because we're focusing only on avatars, and they're already very small; the maximum size of an avatar is 200k.
C
I'm kind of thinking right now that this upload optimization could be a nice thing to have in the future, on its own. But there is still a valid point here: technically we're going to do this with the dynamic image resizing anyway, it's just going to be slightly less performant. Compression is pretty much constant, so we'd be slightly less space-efficient, but with the dynamic image resizing we actually optimize these images.
B
Yeah
also
we
we
actually
do.
We
talked
about
this
already
camille
right,
workhorse
already
does
already
strips
out
exif
metadata
when
you
upload
an
image,
so
we
have
a
small
form
of
optimization
already
so
yeah.
A
Okay, sounds like we need to track this in a separate issue then.
B
Yeah
yeah,
definitely
it's
not
even
related
to
resizing
it.
Just
came
up
during
that
discussion.
Yeah,
let's
break
this
out.
I
also
think
it
yeah,
it's
probably
more.
Like
a
future
thing,
I
I
would
expect
that
so
far
our
idea
was
we
focus
on
the
mbc,
which
is
avatars.
So
this
is
why
this
didn't
look
very
pressing
or
super
useful.
Yet,
but
in
the
future
we
would
evolve
whatever
we
would
build.
That
would
work
for
avatars
to
also
work
for
content
images,
which
can
be
quite
large.
B
But what could be helpful at this point, if we want to speed up the process, is to build more PoCs in parallel, especially for the static approach; we haven't really looked at this much at all. If we still want to do that, that's something we could do totally in parallel with the dynamic PoCs. It's a bit tricky to do in parallel, because you work on the same code base, and the Workhorse code base is much, much smaller than GitLab's.
B
It's very easy to step on each other's toes a little bit while doing that, so that's something we would have to figure out. But yeah, if anyone else wants to join that, or if there's capacity, I guess that would help as well to move it forward.
A
Yeah, Nicola has expressed interest in joining. I'm not sure where he is on the SQL caching work that he's doing; he'd have to find a balance there, because that's pretty important work too. So we can always follow up.
B
The third approach was the out-of-process approach: instead of doing it directly on the request/response path, we would spawn a command that executes the resize in a separate process but then returns the data to Workhorse so it can be served. I'm now, like, 80 percent done with that, maybe.
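A sketch of what that out-of-process shape could look like in Go, assuming a hypothetical external resizer CLI (`resize-image`) that reads the original image on stdin and writes the scaled result to stdout; this is illustrative, not Workhorse's actual implementation.

```go
package main

import (
	"context"
	"log"
	"os"
	"os/exec"
	"time"
)

// resizeOutOfProcess pipes the image through a separate process, so a
// crash or memory spike in the scaler cannot take down the server; the
// context bounds how long the child process may run.
func resizeOutOfProcess(ctx context.Context, width string) error {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()
	// "resize-image" is a stand-in name for whatever scaler binary is used.
	cmd := exec.CommandContext(ctx, "resize-image", "-width", width)
	cmd.Stdin = os.Stdin   // stream the original bytes in
	cmd.Stdout = os.Stdout // stream the resized bytes back out
	return cmd.Run()
}

func main() {
	if err := resizeOutOfProcess(context.Background(), "64"); err != nil {
		log.Fatal(err)
	}
}
```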
B
As for the next one to pick up, I think the main one I personally struggle with is whether we should even look at an external service like imgproxy. I'm not a huge fan of that, for a number of reasons, but if we want to, that seems to be the one that is most dramatically different from all the other things.
B
So
maybe
there
could
be
a
good
next
one,
because
the
only
other
approach
for
dynamic
scaling
that
we
had
identified
was
very
similar
to
what
I
do
now,
just
that
we
wouldn't
fork
in
your
process.
Every
time
you
need
to
resize
an
image,
but
rather
have
like
a
sidecar.
It
just
lives
perpetually
like
it
lives
and
dies
with
workhorse,
more
or
less,
and
that
we
would
send
requests
to
and
then
a
process
wakes
up
does
something.
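A sketch of the sidecar idea in Go: a long-lived resizer process listening on a local Unix socket, with the server-side code talking to it over HTTP. The socket path, URL, and query parameters are assumptions for illustration only.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

// sidecarClient returns an HTTP client that routes every request to the
// sidecar's Unix socket instead of a TCP address, so the sidecar never
// has to be exposed on the network.
func sidecarClient(socketPath string) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socketPath)
			},
		},
	}
}

func main() {
	client := sidecarClient("/var/run/resizer.sock")
	// The host is ignored by our dialer; only the path and query matter.
	resp, err := client.Get("http://sidecar/resize?width=64&src=avatar.png")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("sidecar responded:", resp.Status)
}
```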
C
Okay, because then we would have these concerns about the memory usage and CPU usage solved, but we would also not have to integrate an additional service, because this would live well within Workhorse and within the current architecture. So yeah, imgproxy, or whatever image processor, is a full-fledged service; I don't know if we really want to ship that in that form.
B
There's also some really questionable overlap with Workhorse, because Workhorse is already our reverse proxy, and from an architectural point of view imgproxy does not really fit into what we have. It would feel a bit like a fifth wheel, you know, because there would be some actually significant overlap with both our Rails app and Workhorse, because it also does things like...
B
Oh, it can connect to a cloud storage backend, but we don't need that, because we already send these requests to our Rails app, which resolves and authenticates these URLs for us, which we need to do anyway. So we couldn't even use all this additional functionality that imgproxy would buy us. But yeah, that's a discussion, I guess.
C
I think, out of the PoCs to test, as for the complexity, we are really left with checking out the static one and how it would differ. The static one would be interesting also in the context of image optimization, let's say even pre-upload optimization or something like that, pretty much.
B
Yeah,
I
I
agree
because
also
because
it's
so
wildly
different
from
anything
else,
we're
doing
right
now.
Oh
and
it
also
it's
nice,
because
we
can
it's
perfectly
paralyzable
to
the
work
that
alex-
and
I
have
done
so
far
because
it
again
similar
to
these
other
optimizations
we
talked
about
it-
would
come
from
the
other
end.
It
would
come
from
the
upload
and
batch
side
of
it,
which
would
probably
just
be
a
psychic
job
or
something
along
those
lines,
so
yeah.
So
maybe
I
can
yeah.
C
I mean, we already have this design management stuff that does some kind of image resizing on upload, but it stores only a single image.
C
So it seems that CarrierWave already supports image resizing internally, and there is this resize-to-fit process. So I guess the real challenge with the static one is untangling CarrierWave's mess of storing many images, and I have no clue how easy that is; it feels to me that this is pretty complex.
B
It
is,
and-
and
also
it's
not
just
this-
it's
also
that
retroactively-
we
would
have
to
go
back
and
for
all
of
gitlab's
image
library,
because
we
wouldn't
have
a
lazy
ad
hoc
approach.
It
would
have
to
generate
the
respective
sizes
right
for
the
data
we
already
have,
because
anything
you
do
with
carry
away
or
whatever
that
would
be
on
the
upload
path,
would
only
work
for
new
images.
B
So
so
it's
almost
like
two
things.
We
need
to
look
into
that
like
how
do
we
do
this
one-off
thing,
which
I
don't
know
how
long
that
would
take.
I
mean
I
think
we
said
what
with
like,
80
gigabytes
of
avatars,
which
actually
doesn't
sound
that
bad,
but
like
we
would
have
to
still
build
something
that
converts
all
these
images.
C
So
so
I'm
thinking
like
from
the
design
perspective,
you
should
do
it
in
the
way
that
does
that
lazy,
like
maybe
it
pre-computes
this
image
just
on
upload,
but
follow
all
the
other.
It
does
this
slightly,
but
it
kind
of
I
think
for
me,
it
poses
another
change.
It's
not
only
about
resizing
image
nicely
I
mean
the
sizing
image
on
the
upload
is
probably
the
easiest,
but
there
is
like
resizing
image,
lazy
and
the
third
item
that
I
I
saw
that
github
struggles
in
like
in
many
places.
C
So
I
I
think
like
for
me
the
biggest
concern
in
the
static
resizing.
It's
like
basically
like
two
one
how
these
data
are
being
stored.
I
mean
in
terms
of
the
structure
in
terms
of
the
database
to
know
exactly
which
size
is
already
generated
and
which
node
does
it
really
like?
Maybe
it's
carrier
wave
checks
that
dynamic
key?
C
I
know
like
kicks
against,
like
the
object
storage
to
check
if
the
file
exists,
but
then,
like
the
another
challenge,
is
like
how
it
actually
removes
these
files.
Does
it
actually
checks
each
individual
size
that
we
put
there,
because
I
would
assume
that
they're
gonna
be
some
kind
of
like
static
list
of
different
sizes
and
if
we
have
six
different
sizes
on
each
removal,
we
fire
seven
requests
instead
of
one.
B
I also think it was Dimitri or someone who mentioned they know the people behind this; they have worked with them in the past. They said, you know...
B
I could also imagine an approach where you request an image of a particular size, but we find we don't have that size generated yet, and then we have a Sidekiq job which does it upon request. That might take a while, right, so there might be some latency. If we find that's too long, maybe we do something where we still serve the original image until we have the resized one.
B
So
in
the
next
request
that
follows
assuming
the
sidekick
job
is
done,
we
would
then
have
that
image
available
kind
of
so
that
it
kind
of
converges
to
the
desired
state
over
time.
So
I
don't
know,
there's
a
bunch
of
ways
you
can
go
about
this,
but
yeah
it
would
be
yeah.
There's
like
a
spectrum
of
things
like
in
terms
of
how
dynamic
or
aesthetic
they
are,
you
know.
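A sketch of that converge-over-time control flow in Go. Every helper here (variantExists, enqueueResizeJob, serveFromStorage) is a hypothetical stand-in, not an actual GitLab function; the point is only the shape of serving the original until a background job has produced the requested size.

```go
package main

import (
	"fmt"
	"net/http"
)

// Hypothetical helpers, stubbed out for illustration.
func variantExists(key string) bool                      { return false }
func enqueueResizeJob(key, width string)                 {}
func serveFromStorage(w http.ResponseWriter, key string) { fmt.Fprintf(w, "bytes of %s", key) }

func avatarHandler(w http.ResponseWriter, r *http.Request) {
	key := r.URL.Query().Get("key")
	width := r.URL.Query().Get("width")
	variant := fmt.Sprintf("%s@%s", key, width)

	if variantExists(variant) {
		// A previous request already triggered the resize; serve it.
		serveFromStorage(w, variant)
		return
	}
	// Variant missing: kick off the background resize and fall back to
	// the original, so responses converge to the desired size over time.
	enqueueResizeJob(key, width)
	serveFromStorage(w, key)
}

func main() {
	http.HandleFunc("/avatar", avatarHandler)
	http.ListenAndServe(":8080", nil)
}
```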
D
Yeah, it just seems like, if we're going down the dynamic route, that seems like the better way to go. So I'm not sure we need to go explore the static up-front scaling one. What are the benefits of doing that, aside from doing a PoC, if we already have a working implementation for the dynamic one?
B
Well,
I
think
one
benefit
I
can
think
of.
Is
that
it
will
it
it's.
Basically,
it
would
be
completely
oblivious
of
the
size
of
the
images
I
would
say
so
what
we're
building
right
now,
it's
quite
sensitive
to
the
size
of
the
image
you're
scaling,
because
if
you
do
a
look
at
you
know
images
that
are
five
to
ten
megabytes
in
size
which
eventually
we
will
probably
have
to
look
at
for
like
images,
you're
posting
comments
or
issues,
then
we
can
lose
significant
time.
B
...in Workhorse doing this on the fly, you know. But that can also be alleviated with other things; maybe we only spend that compute once and then we cache that image. Right now we only look at avatars, and for avatars it's so fast we don't even need to talk about caching, I think, because it takes like 10 milliseconds to scale down an avatar. So that other approach would not have this problem, because for a Sidekiq job, whether you do it ahead of time or even lazily, it probably doesn't matter if it runs for a second or for 200 milliseconds. So it would be more like a one-size-fits-all thing, almost, I would say.
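For contrast, a minimal sketch of the in-process, on-the-fly scaling being discussed, using Go's golang.org/x/image/draw package (fetch with `go get golang.org/x/image`); the file names and the 64x64 target are illustrative, and this is not Workhorse's actual code.

```go
package main

import (
	"image"
	"image/png"
	"log"
	"os"

	"golang.org/x/image/draw"
)

func main() {
	f, err := os.Open("avatar.png")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	src, _, err := image.Decode(f) // PNG decoder registered by the import above
	if err != nil {
		log.Fatal(err)
	}

	// Scale to a 64x64 thumbnail. ApproxBiLinear trades a little quality
	// for speed, which is fine for small avatars.
	dst := image.NewRGBA(image.Rect(0, 0, 64, 64))
	draw.ApproxBiLinear.Scale(dst, dst.Bounds(), src, src.Bounds(), draw.Over, nil)

	out, err := os.Create("avatar_64.png")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := png.Encode(out, dst); err != nil {
		log.Fatal(err)
	}
}
```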
D
Yeah, it'd be interesting to look at, once we have caching in place for dynamic resizing, which I think we should still do. We'll take a look at the cost-benefit analysis of that, the compute versus the storage. My sense is that for popular images, like all the avatars that you consistently load on gitlab.com for GitLab employees, it's probably not worth regenerating them on every single request. Although I guess the browser will cache them to some degree too.
B
Yeah, and you're right, because with dynamic it also seems more obvious where caching would happen, which would probably be something like NGINX, or the browser cache, or at the edge. Whereas with static it's a bit trickier, because if you don't have it all generated up front, how do you determine whether you need to do that work? You need to do a database lookup or something. Then you also need to store that new size somewhere; you would probably have to push it back to S3 or something from the app, which also costs time. I don't know; I don't think it's the simpler approach either.
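A sketch of what letting the browser or edge do that caching could look like, as plain HTTP cache headers on the resized response; the header values and the ETag are illustrative assumptions, not GitLab's actual configuration.

```go
package main

import (
	"fmt"
	"net/http"
)

func resizedAvatar(w http.ResponseWriter, r *http.Request) {
	etag := `"avatar-64-v1"` // hypothetical content hash for this variant
	if r.Header.Get("If-None-Match") == etag {
		w.WriteHeader(http.StatusNotModified) // skip the resize entirely
		return
	}
	w.Header().Set("ETag", etag)
	// Allow caching for a day anywhere along the path: browser, NGINX, CDN.
	w.Header().Set("Cache-Control", "public, max-age=86400")
	fmt.Fprint(w, "...resized image bytes...")
}

func main() {
	http.HandleFunc("/avatar", resizedAvatar)
	http.ListenAndServe(":8080", nil)
}
```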
D
Yeah, and one thing we should probably consider, because it will come up eventually, I don't think it's needed immediately, but it's top of mind here: someone asked the other day about PII they pushed up into a Git repo, and they're trying to figure out how to get rid of it all. So they're asking about the gitlab.com Elasticsearch index, like, how do we get it out of the Elasticsearch index?
D
And so you can imagine if someone pushed up PII in the form of an image and we got a takedown request, like a forget-me request from someone on gitlab.com: how do we service that? For caching, if it came up, we could just purge the cache, but it would still be on the CDN to some degree, and I'm not sure how we'd do that. Anyway, I think there are multiple problems there that I'm not sure are handled today. I can take this up; I'll open an issue, maybe in the epic. It might be worth thinking about to some degree as a follow-on iteration, because I'm sure it'll come up eventually.
A
There's a couple of follow-up items. Oh, if it's not closed, I will go through and close that other issue within the epic. And then, Josh, it sounds like you were going to create some follow-up issues on clearing out static caches, right?
A
Okay,
telemetry,
it
looks
like
chin.
You
updated
some
of
these
issues
and
mathias
you're
still
spending
about
20
of
your
time.
On
that
one.
B
Yeah, I mean, I need to sync with Ginyu again as well. Most of the time we spend is just looking at data that trickles in. It's only pre-release data so far, but there are a couple of interesting things. One thing I did today was look a bit into why we spend like 60 seconds collecting telemetry data for, I think it was, half a percent of our users. So far that's only 7,000 reports, but that will grow over time.
B
So
I
think
that
was
concerning
so
that
I
noticed
there
was
a
problem
with
we
do
not
specify
http
timeouts
at
all
in
our
http
library
that
we
use
nowhere
by
the
way,
not
just
usage
ping.
So
I
look
into
this
a
bit
today
send
an
mr
to
fix
that.
I
think
we
actually
should
fix
that
across
gitlab.
That's
not
good
that
we,
I
have
a
default
timeout
of
one
minute
when
we
talk
to
an
http
api
yeah.
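The MR mentioned is against GitLab's Ruby HTTP wrapper, but the idea is general; here is the same fix sketched in Go, where the default client likewise applies no timeout at all. The URL and the five-second value are illustrative.

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	// http.DefaultClient has no timeout: a stalled endpoint can hold a
	// request open indefinitely. An explicit Timeout bounds the whole
	// exchange: connect, headers, and body.
	client := &http.Client{Timeout: 5 * time.Second}

	resp, err := client.Get("https://example.com/usage_ping") // illustrative URL
	if err != nil {
		log.Fatal(err) // a slow endpoint now fails fast instead of hanging
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```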
B
Okay, fair enough. Oh, actually, it would be great if we could get your help with this, because Jinyou and I had a chat, and we each have a different understanding of what the NSM means and what components are in it, and we couldn't really agree on what it means. It would be good if we could clarify, because if what Jinyou points out is true, then I might have closed the NSM issue prematurely, and we might have to collect more data that we're not already collecting.
A
And then, sorry, there was a delay in my feedback on the team lessons-learned question: the conversation between Matthias and Camille about the high-level overview that Matthias wrote, which is fantastic, and Camille's concern about having some more specific drill-downs. I kind of straddled the fence in my answer.
A
I like both, so I think it makes sense to submit the MR largely as it is; there are a couple of open questions in there and links requested. Then we can drill down into specific use cases for our handbook entries, because you're right, Matthias, I don't think there's any way we can abstract everything we've learned into a generic pattern.
A
I think there are just going to be some specific use cases that will mean more, or resonate, with folks that are reading it. If the overview doesn't mean anything to them, then they won't drill in; but if it does, then they can drill into the specific use cases and see the tools and methodologies we use, which may be helpful to them. So I'd like to see the MR merged largely as it is, and then we can pick some projects to drill down into over the coming quarter for people to learn from.
C
I just really want to finish all these items this week before my time off; it's going to mark a major milestone for me as far as this story and how to deliver it. When I get back, I will probably start radiating this knowledge to other people working with the feature flags, and I'm going to be that close to closing out the feature flags, in the sense that I've been concentrating my involvement there. Nice.
A
All right, we're coming up on time. Grant, did you have anything you want to cover this week?
E
No. I did see that potential Sidekiq memory leak issue; I think you've marked it as quad-planning, so I will look at it this week. But that's interesting, considering that I raised an import issue a little while ago where, if people remember, imports of the old import version type, not NDJSON but the legacy JSON, notably increased their memory in 13.1, and last I checked with the developer looking at it...
E
He
couldn't
actually
find
out
exactly
why
he
just
says
it's
happened,
but
andy
jason
is
fine,
so
we're
just
we're
gonna
leave
it,
but
I
wonder
if
it's
related
or
not,
but
yeah,
it's
just
interesting.
I
was
just
intrigued
to
see
that
so
I'll
look
at
this
week
and
add
my
thoughts
on
how
we
should
make
sure
we
keep
testing
in
the
future.
A
And I have that created for 13.4 for us to take a look at; it's in the planning issue too.
C
I have one question: how is Ruby 2.7 coming along? You probably know.
C
Why
I'm
asking?
Because
I
think,
as
soon
as
we
are
close
to
ruby
2.7,
there
is
a
lot
of
for
us
to
like
investigate
with
that
with
2.7
like
compacting,
their
best
collection,
how
it
behaves,
how
it
affects
system,
and
it's
gonna
make
all
these
like.
Sideki
puma
memory
concerns
to
be
also,
I
guess,
handled
differently
as
well.
C
I think so. I mean, I think Ruby 2.7 is going to bring pretty nice memory improvements. It would be helpful for us to kind of speed up this process, if possible.
D
Yeah
and
I
think
anyways
it's
going
to
just
take
someone
doing
it
so
stan
to
go
back
at
it
and
you
know,
and
not
all
tech
that
should
fall
to
stan.
You
know
it's
like
a
model,
so
I
I
think,
since
we
have
an
interest
in
dropping
this
forward
and
it's
sort
of
common
foundational
work
anyways
that
yeah
I
I
think
it
makes
a
lot
of
sense
for
us
to
pick
it
up.
D
I
think
there's
one
right
and
I
wanted
to
think
about
continued
four
two,
but
we'll
get
it
on
the
play.
Let's.
D
What was the other 13.4 one? The real user metrics, yeah? I think that one's also really interesting. Okay, we'll get this one on there too.
E
Yes, we created a performance test against 2.7 at some point, if I'm right, or someone did; maybe it wasn't 2.7.
E
I
bet
I
was
getting
asked
the
better,
and
I
said
I
just
need
normally
lost
package,
and
then
I
can
do
it
but
yeah.
These
are
blockers.
Just
give
me
a
share.
When
we
have
on
this
packages
worklift
2x7
built
it
will
require
obviously
a
brand
new
environment
yeah.
That's
that
that
that's
not
fine.
It
only
took
me
quite
a
day
to
actually
do
the
testing
this
out.
E
Yeah, it does need to be a package; I just need the package, because that's what our environments are built with.
E
It doesn't need to be officially released or put into the nightly repo or anything; it can just be the literal dev build. Performance testing is always a bit weird; it's also somewhat functional testing by proxy, so by doing performance tests we'd probably find issues if there were any. But along the same lines, I guess performance testing should only happen once actual functional testing has been done against it as well, to make sure there's nothing...
E
Obviously
broken
performance
testing
is
weird
and
that's
always
at
the
end
of
the
chain.
So
but
yeah.
If
you
guys,
want
to
know
what
that
performance
was
just
give
me
a
give
me
a
show
happy
to
take
a.
C
Look
cool,
so
I
guess
it
could
be
interesting
like
track
for
us,
which
is
like
will
be
2.7
and
everything
related
to
my
real
search
and
the
impact
on
the
puma
and
psyche,
and
it
could
be
kind
of
connected
with
this
issue
that
is
assigned
investigate
memory.
If
it's
actually
something
that
what
is
the
reasons
for
that,
and
is
it
something
that
it's
getting
better
or
worse?.
D
The
one
other
thing
with
ruby
sorry
go
back
through
which
f7
real
fast
is.
It
looks
like
the
current
plan
is
to
drop
support
for
centos
on
november
22nd.
D
You know, apparently there's a problem with updating gRPC on CentOS.