From YouTube: 2021 03 01 Memory Team Meeting
A
All right — and then Nikola: measuring the performance impact of the proposed Rails engine.
C
Well, on Friday I added a lot of new measurements related to the times for GC start and a full GC cycle, and some additional comparisons between running Sidekiq with the Rails engine and without — just to see: is there a big difference between running Puma and Sidekiq with the engine versus without it? So I think — and Kamil agreed — that we have everything we need to update the blueprint and calculate the potential impact on GitLab.com and other installations.
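The kind of before/after comparison described here — boot time and heap state with and without the engine split — can be sketched in plain Ruby. This is a hypothetical illustration of the measurement approach, not the actual script used in the meeting:

```ruby
require "benchmark"

# Hypothetical sketch: time a boot-like step and sample GC counters,
# the way one might compare Sidekiq with and without the engine split.
boot_time = Benchmark.realtime do
  # stand-in for require-ing the application code
  10_000.times { |i| i.to_s }
end

# GC.stat exposes heap counters that are useful for before/after comparisons.
stats = GC.stat
puts format("boot took %.3fs", boot_time)
puts "GC runs so far: #{stats[:count]}"
puts "heap live slots: #{stats[:heap_live_slots]}"
```

Running the same probe in both configurations and diffing the numbers is enough for a first-order comparison.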
D
We fulfilled the goal of measuring the impact of the Rails engine, right? Thanks. Probably the biggest surprise to me was that, if we had this Rails engine, Sidekiq boot-up time would swing from 45 seconds down to 20 seconds. Yes.
D
I didn't expect that much — that's something. This is something that I noticed then: with this Rails engine having the controllers, Grape, and GraphQL extracted, we would be looking at around a 30 percent lower memory footprint on idle, and also, I think, when running some of the requests.
D
I think these are the key highlights of those measurements.
E
But 30 percent reduced memory usage for Sidekiq — isn't Sidekiq one of the main components that actually uses memory? That sounds like a lot.
D
To be honest, this is only part of the story. We didn't investigate it further, but the same could apply for Puma: not loading the Sidekiq aspects. So, in mirror image, you could look at having thirty percent here and thirty percent on Puma of reduced memory usage. And from what I saw, given the amount of Sidekiq that we run in production, even without optimizing further, just having those, we would be looking at saving over 500 gigabytes of memory in total on GitLab.com, on Sidekiq alone.
D
It's a multi-step process of re-architecting how we build the software — this is related to the composable codebase — but I think, now that we have the data, it's very reasonable to say that this is a big return on investment.
E
And when you say multi-step process — and again, excuse my ignorance here — is this something that will be laid out in this blueprint already, or is that something that we can do sort of at once?
E
Something that is more defined, that says: this is a potential path forward, in iterations. And depending on how much needs to be handed out to other teams, maybe that is something we can actually think about coordinating with others, right? Because if you have such big gains, and you can say, okay, these are the 20 steps that would need to happen, and if we do this, we believe we'll save 500 gigabytes of RAM on .com — these are the savings — then...
E
Maybe we have an opportunity to actually push that through the different stages as well. But I guess they would need, you know, a more tangible "this is what's actually required".
D
So Nikola's work presents the proof of concept — something that is functional and gives you a very good sense of what needs to be changed.
D
And I think, at least from Matthias, there was an alternative suggestion of Packwerk, which we didn't investigate there, but this is something that we would probably have to look at.
F
I think we actually noticed that Packwerk is already deprecated. They released it six months ago, and in the latest blog post — I happened to just look at the page again — it now says it's already in maintenance mode. There was no explanation as to why, just a side note.
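For context on the alternative being discussed: Packwerk enforces component boundaries inside a single Rails application through per-directory `package.yml` files, rather than splitting code into separate engines. A minimal package declaration looks roughly like the following — the paths and the `enforce_privacy` flag are illustrative of Packwerk's configuration at the time, not taken from any GitLab code:

```yaml
# package.yml — declares this directory as a Packwerk package
enforce_dependencies: true
enforce_privacy: true
dependencies:
  - packages/shared
```

Violations of the declared boundaries are then reported by running Packwerk's check command in CI.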
D
We may just help moving some of that today. But, Fabian, I think — in any case, it's going to need involvement from others, because we can only execute that to some extent. And I think the biggest challenge for me with that is that we see the benefits of it.
D
We have a proof of concept that Nikola spent — I think it was probably two iterations that you spent on it, yeah? So it kind of already shows that it's not easy; but the proof of concept also shows that it's quite easy to get there, because these changes are fairly straightforward if you order them in the correct sequence. So actually, okay, I think now we need to finish the blueprint.
D
I'm kind of thinking that we need to push that forward right now, because the GraphQL thing is something where we may be affecting GraphQL — and it was our actual first pick, to move GraphQL out as a completely new element.
D
It's still kind of disconnected, but this reality is going to change in upcoming milestones, because GraphQL wants to implement a subscription model, and if we want to change this architecture, we would have to work with them before they actually move GraphQL to the subscription model. And it would imply that they — I mean, it really depends on how they approach the GraphQL subscription model.
D
But one of the approaches is that you push the GraphQL payload over Redis to an Action Cable WebSocket connection that is living in another process. So they would require GraphQL to be present in Sidekiq, and that kind of breaks our contract, because our first step was GraphQL. But we can also reorient on something different. We just saw in the bug how much the difference was on GraphQL alone — Nikola, it was, I think, 16 megabytes, right? Around that.
D
So we would be looking at what percentage of that was — like 70 megabytes on 700, it was like 10 percent.
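The subscription flow sketched above — a worker process serializing a GraphQL payload and handing it over a Redis channel to a separate WebSocket process — can be mimicked in plain Ruby. In this hypothetical sketch a `Queue` stands in for the Redis pub/sub channel, and the payload shape is made up:

```ruby
require "json"

# A Queue stands in for the Redis channel between the two processes.
channel = Queue.new

# "Sidekiq side": render the subscription payload and publish it as JSON.
payload = { "data" => { "issueUpdated" => { "id" => "gid://gitlab/Issue/1" } } }
channel << JSON.generate(payload)

# "Action Cable side": receive and deserialize before broadcasting
# to the WebSocket client.
received = JSON.parse(channel.pop)
puts received.dig("data", "issueUpdated", "id")
```

The point of contention in the meeting is exactly the first half: rendering the payload requires GraphQL to be loaded in the Sidekiq process.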
E
And we don't need to come to a solution now, but I agree with this. And I think, from some recent experiences with the Geo self-service framework, people are generally quite willing to do the right thing if they understand what that requires from them — as in, if there are clear steps, and there's enough information, and we can make it very explicit what doing this thing actually means for them. And I think then they understand the benefit of it.
E
You know, I think there's a good shot that folks would actually be quite open to doing it. But the more hand-wavy it is — "this is a great idea, but we don't quite know how to do it, and it can take a long time" — the less likely people are to get on board. So the more work we can do, based on Nikola's PoC and the blueprint, to make it easy for folks to actually do the right thing for us, the more likely it is.
D
My perception is that if we follow the Rails engine route, we know exactly what to do. We need to figure out the iterations and steps, but I think Nikola has a very good understanding of what needs to be done, in what order, and how it should look — because we actually got GraphQL to being green, using the tests as well. So we took GraphQL pretty much to a PoC that could likely be merged, if you really aimed for that.
E
Sure. So, as I said, I think what we can do here is, for example: if we know what the steps are for GraphQL, we can...
E
...say: here is how you can accomplish the same thing for another component. I'm not sure of the details, but I think what I'm looking for is a way for folks in other teams to know what they need to do — so that when we work with them, and we actually want them to do this, they have documentation and a lot of information on how to do it ahead of time, because then they can understand it.
E
I think that may be a suggestion. Yes.
D
If we follow that — because I think we need to understand the complexity versus the benefit versus the impact over time. And, as with everything, we don't want to be spending one year on something; you'd rather have something in two or three iterations at most, at least for the preliminary work. So maybe introducing some kind of skeleton framework, but not moving the data there yet.
D
So I think, if we figure out the iterations and steps, then it should come with estimates: how long it would take us to do it, and what the benefit is. We know what the benefit is — but how long it would take us, and basically how many working hours we roughly expect to spend on that.
A
So what are the immediate next steps? In a general sense, it seems like it's: pull all of the knowledge out of Nikola's head and make sure that it's represented in issues and the blueprint. Is that right?
D
I would expect that — just because GraphQL would be the prominent example, and really the first one on the list to solve — we would hear from the GraphQL experts at that point what they think about the proposal, and what the potential downsides or limitations are that we would impose on them, and we'll figure out exactly how to solve these.
F
I was just wondering: do we already know of any other potential candidates that we could extract? I remember Grape was being talked about. I'm just wondering if those are the only two I know of, because that might also be useful.
C
I'm not sure that we can ship it at the moment. I think the only one that is shippable is GraphQL, but we need to talk about their future needs, especially for those subscriptions. And again, the controllers and the Grape API also have some drawbacks that need to be solved — there are some circular dependencies related to the URLs that are used in Sidekiq.
A
So we want to document the implementation, if it's already there; share the proposal with the GraphQL team, or teams, to gather feedback and make sure we haven't missed anything; and then update the epic with the next implementation steps. And which epic would we use? This is kind of a generic parent one, but are we going to break it down by areas — so, have a GraphQL one, a Grape one?
E
That makes intuitive sense to me, but we can maybe cross that bridge when we have to, on how to split it out specifically. But yeah, that's exciting. I think that's a really good outcome.
A
All right, okay. So, Nikola, are you clear on what you need?
A
Honestly, there's still some discussion going on, so I would continue down this path for now. There's still some debate on whether we should pick specific endpoints from the rapid action, or focus on moving all the Sidekiq jobs to replicas. So while you have momentum on this, I would focus on this.
E
Yup, yeah. And Nikola, if you want some help, or a sync session to talk about this and how to structure it, feel free to reach out and we can schedule something — happy to help with that.
A
All right, back to the board. Matthias, you're up.
F
Yeah. So the GC story — that was the one where we reduced the scope to just changing the initial heap size. I'm working with John Jarvis basically every day to roll this out slowly. We're still trying to figure out the specific final rollout plan, but we agreed on the first few steps, so those are already in progress. So today we merged the change.
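The "initial heap size" tuning being rolled out here is commonly done in CRuby via the `RUBY_GC_HEAP_INIT_SLOTS` environment variable; whether that is the exact knob used in this rollout is an assumption on my part. A hypothetical sketch of how the effect can be verified after boot:

```ruby
# CRuby reads its GC tuning from environment variables at boot.
# RUBY_GC_HEAP_INIT_SLOTS enlarges the initial heap so fewer GC cycles
# run while the application loads. The value below is illustrative only:
#
#   RUBY_GC_HEAP_INIT_SLOTS=600000 bundle exec puma
#
# GC.stat lets you compare the effect between configurations:
puts "GC runs during boot: #{GC.stat(:count)}"
puts "heap slots allocated: #{GC.stat(:heap_available_slots)}"
```

Comparing `GC.stat(:count)` at the end of boot, with and without the variable set, shows whether the larger initial heap actually avoided GC cycles.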
F
We'll put it on staging. There are two open MRs as well; we're not merging them right now, because that would enable it for everyone. So we first want to slowly roll it out on SaaS and see how it works. So that's kind of—
F
—you know, a little bit every day, I guess. It'll probably drag on for another week or so, yeah. But it is on staging, it will be on canary soon, and then next we're going to do it, probably by workload. So we're probably going to put it on one Kubernetes cluster in the git and websockets fleets, and if that works out, deploy it across those entirely, and then we will start slowly putting it on web and api as well.
F
That's a different kind of work, because those are still running on VMs, not on Kubernetes, so that makes it more complicated — but yeah, we'll figure that out. So it's still ongoing. And the next one — what was the last state? It looked promising. This has been running on one cluster on the websockets fleet. John said he wanted to look into rolling this out more widely, but I might have to ping him again on this, so I'm not really sure what the state is there — but it's not fully rolled out.
A
Do you want to talk about the SQL N+1 in the pipelines controller?
F
Yeah. So I think I was able to fix one N+1.
F
I'm not totally sure how best to verify that this was effective. It is on canary, and I looked at a dashboard that I thought would show the impact, and I don't see any impact — so I'm not sure how impactful this actually was. It's just really hard to say, because I don't have access to the initial request that caused this to really get out of hand. I know that Kamil is also looking; we ended up splitting this up into two separate fixes.
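The general shape of an N+1 fix like the one discussed — replacing a per-record lookup with one batched query — can be illustrated with a plain-Ruby stand-in. The data and the "queries" below are simulated; in Rails the batched form typically corresponds to `includes`/`preload` on the relation:

```ruby
# Simulated tables: pipelines referencing projects by id.
PIPELINES = [{ id: 1, project_id: 10 }, { id: 2, project_id: 10 }, { id: 3, project_id: 20 }]
PROJECTS  = { 10 => "gitlab", 20 => "runner" }

query_count = 0

# N+1 shape: one lookup per pipeline.
find_project = lambda { |id| query_count += 1; PROJECTS[id] }
naive = PIPELINES.map { |p| find_project.call(p[:project_id]) }
naive_queries = query_count

# Batched shape: one lookup for all distinct ids, then an in-memory join.
query_count = 0
batch = lambda { |ids| query_count += 1; PROJECTS.slice(*ids) }
projects = batch.call(PIPELINES.map { |p| p[:project_id] }.uniq)
batched = PIPELINES.map { |p| projects[p[:project_id]] }

puts "naive queries: #{naive_queries}, batched queries: #{query_count}"
```

As the dashboard discussion shows, proving the fix mattered in production is a separate question from proving the query count dropped.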
F
So I know Kamil is also looking at a different aspect of this, which had to do with the number of environments a build can run against — so maybe that will be more impactful. It's just super hard to say, by running a unit test or something, which one was actually the big factor in production that escalated this. So I don't know if Kamil had anything to add, because I'm not looking at it anymore right now.
D
I'm not looking at this anymore either — this environments one. I just started focusing on some other stuff.
F
I'm also not sure if it's worth it, because with a few of these issues that we have, we really need to be careful with how we phrase them: they seem to suggest that the database is the problem, and I'm not convinced it is. This was one example, and in the same controller, the show endpoint is another example where, yes, it's slow, but it's not slow because it spends so much time in the database. It actually spends 95 percent of the time on CPU, not in the database. So either—
F
—for some reason whatever Active Record is doing is super expensive, which I guess is then kind of related to the database after all, or it's something else — it could be JSON rendering, because we know there has been a problem with that in the past. And then I got a little confused as to what we are trying to solve with this rapid action, because it was all phrased around database pressure, and I just can't verify, from where I am, that this is a problem in terms of database pressure.
F
So I started to look at things that are more obviously database-related issues — where we actually spend a lot of time running queries. Yeah.
A
There's
a
couple
aspects
on
database
pressure,
along
with,
like
the
amount
of
data,
is
being
pulled
back.
It's
the
frequency
too.
It's
just
the
sheer
volume
of
calls
whether
the
you
know
whether
the
single
call
is
heavy
or
not
we're
just
seeing
far
too
many
calls
right
now,
and
maybe
this
is
one
of
those
where
they
just
want
to
reduce
the
number
of
calls.
A
But
I
agree
with
your
previous
statement
that
the
rapid
action
is
probably
not
correctly
labeled
in
that
it's
focusing
the
title
implies
that
it's
focusing
entirely
on
the
database
where
it
could
be
just
the
volume
of
calls
needs
to
be
reduced
and
that's
something
next
time
we
meet
for
the
rapid
action.
I
will
ask
for
a
re-title
on
that,
because
it
is
misleading.
F
Yeah. I wonder if we're actually all on the same page about this, because the way Andrew phrased it, we are focused on the database-related things, and very specifically on moving traffic away from the primary nodes — and I have a super hard time even assessing what traffic goes to the primaries. So it's a bit tricky right now to prioritize the work that's there, or to pick out individual workers or endpoints to focus on, I mean.
D
It's even more tricky, because we appear to have even more reasons for this rapid action — like Craig mentioned, it's not only the database, and maybe way beyond the database. And I would actually go along with Andrew's suggestion today: we should focus only on the queries hitting the primary, because this is the biggest, most immediate capacity issue that we have.
A
From the last meeting, we had a very similar discussion about the point and the focus of the rapid action, and the feedback was: let's get Andrew's and infrastructure's feedback on what problem we need to solve immediately. And Andrew is coming back with the Sidekiq work — and the Sidekiq work that he is suggesting actually plays into a lot of what we're seeing on the database side, as far as reducing the pressure on the primary.
A
So
I
agree
with
andrew's
assessment
that
that's
probably
the
most
important
thing
we
can
do
right
now
and
there
are
probably
some
spot
fixes
that
I
can't
even
remember
which
team
we
got
pulled
into
for
this
runner
team
or
or
verify
group
that
we're
helping
out
for
this
rapid
action.
F
Yeah — no, it totally does. And I think anything we see where Sidekiq is involved is pretty safe, I think, to look into for optimizations there. Kamil posted a couple of really interesting charts, or distributions, or breakdowns of how much time particular workers spend in the database and so forth, and there were definitely a couple that looked pretty off. I just don't think this is totally up to date.
F
Could you actually refresh this board? Because I had unassigned myself from that last issue there, because I picked up a different one related to Sidekiq. Oh — why is it not—
F
Weird, okay. Well, I thought I had unassigned myself, but that last one is the one I ended up picking, because it doesn't run super frequently — the store security reports worker — but when it runs, it is quite impactful. There were a couple of instances where it ran and — wait, it's been three minutes in the database — you know, stuff like that. It's quite inefficient.
F
It's
a
bit
difficult
to
dive
into
the
security
code,
stuff
it's
fairly
complex
but
yeah.
It
looks
like
it's
related
to
fetching
a
lot
of
items
from
the
database
and
then
it
does
like
kind
of
a
service
execution
for
each
of
those
and
then
that
execution
does
a
ton
of
other
things
again.
So
you
have
this
kind
of
explosive
growth.
A
Okay, we'll wait for Andrew's feedback on this one. I see that you asked him for some data a little while ago, and he's just gotten back today, so he's probably still catching up with his backlog of requests. So I'll wait for feedback on that. There are three unassigned items in this column — and Fabian, correct me if I'm wrong, but with this workflow that we're using, they probably shouldn't be in dev if they're unassigned.
D
So, yes, I'm actually working on that. I'm almost done, and I sent these two for review. It's a very simple doc showing the difference between these counters — like the example that I posted in the backend channel during a workload just now, because it was super interesting: the difference between two ways of converting to a string. And we usually use the less efficient one — we always use this less efficient one.
A
The first one, all right. What about the second one?
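The exact pair of string conversions compared in the backend channel is not recoverable from the transcript, but the kind of comparison described is easy to reproduce with the stdlib `Benchmark` module. This hypothetical sketch uses interpolation versus concatenation purely as an illustration:

```ruby
require "benchmark"

# Hypothetical sketch: timing two ways of producing the same string.
# The specific methods compared in the meeting are an unknown; this
# pair is only an example of the measurement technique.
n  = 200_000
id = 42

Benchmark.bm(15) do |x|
  x.report("interpolation:") { n.times { "gid://gitlab/Issue/#{id}" } }
  x.report("concatenation:") { n.times { "gid://gitlab/Issue/" + id.to_s } }
end
```

Posting the timing table alongside the allocation counters is what makes the doc convincing to other teams.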
D
I did not yet start working on this, but I think, when I finish the first one, I will just push a few graphs to that one, and then maybe Craig, or you, Fabian, could figure out how we could use these graphs to raise awareness of the memory usage. Because I think at that point, when I post these things to the graphs, my job there would basically be kind of done. Then it's more about radiating this knowledge and figuring out how we could engage other teams.
A
There are far too many Craigs here, by the way — I'm not used to that; I'm usually the only one. All right, and then we have one last one here: optimize database call count.
F
I should probably not have had it active there. I spent most of the day just looking into a few things, trying to identify which ones would be impactful to work on, but with this one I came to the same conclusion as with the index one: it's actually not the database.
F
It
doesn't
look
like
the
database
is
the
big
problem
here,
but
rather
that
we
spent
so
many
cpu
cycles
on
something
and
then
because
I
wasn't
sure
if
that
is
really
relevant
to
the
rapid
action
I
then
moved
on.
So
maybe
you
can
just
drop
the
active
label.
I
can
do
it.
It
probably
shouldn't
be
on
the
board.
B
Actually,
I
moved
it
on
our
board.
Just
during
our
meeting
I
applied
both
labels
to
both
nurse
requesting
issue,
because
it
makes
more
sense
since
we
are
working
on
it
and
will
come
towards
our
team.
So
it's
in
montana
review.
Currently,
I'm
not
actually
working
on
it
right
now.
E
Just before we move off the board: I think there was a little bit of discussion on the documentation for our memory-constrained environments thing. I'd really love for us to get the draft of that out relatively soon, so we can finish this. I'm happy to help with it if there is confusion on how to do it — maybe we can do that tomorrow morning in the office hour.
F
Yeah, if we could just go over some of the basics, then we can do it tomorrow. Because I think the main thing I was confused by was what the whole format of this is supposed to be. Are we looking to inject small nuggets of information about where to bring memory down into different areas of the product docs, or do we want some kind of comprehensive single guide, which might make its own self-contained page with, like, ten different things, or whatever?
F
Yeah, that's just the — that's this change management issue I created now, because—
F
—I want to get into the habit of doing this. It's just this one environment variable that we set, but it's kind of good to, you know, have this issue to talk about how we want to roll this out. But we talked about it earlier; it's the same thing.
E
Yes. So the first one — maybe you can highlight it. I created a draft memory direction page; the MR has been sitting around for a while. I got some feedback, but I think it's essentially good enough to merge as-is. You know, right now we don't have anything; once we merge this, we have something. And I created a follow-up for some discussions, but if there is no objection, I would just go ahead and merge it — or maybe Craig can merge it now, actually.
E
Then there's this one, which is kind of related to the rapid action.
E
It's mainly something that I think you can read if you have some time. I think Craig is in there as well; I'm in there. So there is a notion that the planning teams are doing across the organization does not factor in infrastructure — like infradev issues — or security enough. And so the question is: how are we going to change that? But also, a little bit: why is this happening now? And there's some discussion around that. I think there are three main things that surfaced.
E
I
think,
there's
potentially
a
sort
of
lack
of
understanding
why
these
things
are
important
right,
and
so
maybe
not.
Everybody
is
aware
why
they
should
actually
do
something
right.
Then
I
think
there
is
a
potential
for
a
sort
of
negatively
compounding
technical
debt
right
over
time.
You
know
if
you
always
like
iterate
you
ship
new
stuff,
but
now
you
have
a
lot
of
stuff
right
and
I
think
we
deal
with
that
quite
a
bit.
It
becomes
more
difficult
right
and
then
rapid
action
is
maybe
the
kind
of
result
where
things
start
to
hit.
E
Why is this happening? My personal opinion on this — and it's really just my personal opinion — is that addressing infradev issues, or technical debt essentially, is the same process as any other feature: you should break it down, you should understand what you're trying to do, you should make minimal changes, and it should be planned in just the same way. But if you don't do that, then it will always fall behind.
E
It's
not
going
to
be
a
priority,
and
then
you
know
you
you
end
up
with
an
issue
there,
so
it's
worth
reading
go
check
it
out.
I
think,
ultimately,
unless
product
and
engineering
are
aligned
with
regards
to
this,
it's
not
actually
going
to
go
anywhere
right
because
your
product
manager
is
going.
It's
going
to
tell
you
ship
these
new
things
or
do
this
or
do
that
and
then
you
know,
like
that's
going
to
be
the
priority.
A
Yeah
to
that
point,
I
think
we
are
less
impacted
by
this,
because
we
are
mostly
focused
on
infrastructure
and
tech,
dead
and
back-end
work.
Anyway,
we
have
less
feature
driven
initiatives,
so
this
may
not
hit
home
as
much
for
this
team,
but
maybe
there's
some
good
ideas
coming
from
this
team.
If
you
have
any
feedback
read
through
the
thread,
if
you
have
time.