From YouTube: 2019 06 13 Memory Team Weekly
Description
Topics
- Team formation ideas
- How to optimize for Puma
- Follow up items
A
Okay, while we're waiting for Andrew, we can jump into some of the team-forming questions that I've posted in the agenda. So, one thing that came up, especially from a couple folks in Europe, is setting up an alternating meeting time every other week so that, Gene, you can join, because with how distributed our team is, there's probably never gonna be a time that we're all on a meeting.
A
Unless one of us wants to get up in the middle of the night, which I wouldn't ask any of you to do because, honestly, I don't want to do it. So I'm all for it, especially getting the developers together on a weekly cadence, or however often you all want; I would recommend more often, especially now as the team's forming. The one thing I would ask is that somebody volunteer, so that they can put their Zoom invite in there.
A
Alright, I got distracted by Camille's typing. The other option, of having regular 5-10 minute real-time stand-ups, again for overlapping time zones, came up too. Is anybody in favor of, or opposed to, this? And again, I would ask that maybe someone in that time zone schedule those, so y'all can get together and do a quick sync for now.
C
I wonder if there is a reason for synchronous calls, because for me it seems that asynchronous stand-ups work pretty well, so I wonder if there is a reason to change it at first. If not, I would strongly prefer to keep using asynchronous calls, because it keeps all the flexibility of this. So, yep, no.
A
And if it comes back up again, we can talk about it later. Yeah, and then something else that's worked well for distributed teams that I've worked with in the past, and again this is an option, if there's no desire for it we wouldn't even do it: just having open channels on Zoom where people can jump in and jump out. So just schedule an hour or two at certain times of the week, even a couple times a week, daily, whatever, so people can join and just talk about what they're working on, and it's all entirely opt-in.
A
You just know that it's open, and it's almost like sitting next to the person you're working with. It's worked really well at a couple other companies I've worked at, where we've had folks across the planet: as they're working on something, and they know people are on the channel, they can just ask instead of using Slack. But again, if there's no desire for it, we don't need to set it up.
B
Myself, I would like to try it. It sounds pretty interesting just to have a team-dedicated, like, open Zoom channel, and for everyone to be able to join the room and sit there and maybe meet someone else. In my previous team we even tried sometimes to move our one-to-one discussions about the code to this public room, so everyone could join and maybe contribute to the discussion. It made, like, all the communication feel more connected. I don't know, I feel it could be a good idea to try at least.
D
For the test automation working group, we have a weekly call and it's separated into two windows, one for APAC and one for the U.S. and others, and we share the same agenda. It is a weekly call, and I try to make it not related to work, because you can go to issues and Slack and whatnot just to follow up on the latest status; people should be keeping that up to date as we go along. But those are, like, the time to get to know your team: what have you been doing for the past week?
C
Yeah, I agree. As Mike mentioned, I think that this is a slightly different type of call. We introduced these social calls in the Plan team recently too, and I think it's a good idea; I think it works quite well, so I would suggest using them for Memory too. As for the second type of event, I think the recommendation is just joining a call for a few hours and staying there.
C
It sort of serves like working in one room. I think that there was an EMEA call every day for this, and basically the sales department used to do this. I joined it a few times when I was a new hire, but it was a slightly different department than mine, so it was quite unrelated to me, but I think it's a good idea. So perhaps we can try it out. I would likely take part in both calls just to try it, but it depends on some other elements.
E
I guess I can reply to it, right? So, just a quick note: I did find in our handbook that we have something like a random room, which is part of our random channel. There is a link to a hangout that you can join, and maybe there are other people from the company joining this call. There is a link in our meeting doc; please take a look, Mike, there is.
F
Hi everyone. By the way, I will take this chance to introduce myself; it's my first meeting. I was scheduled to join last week, but I was out of office. So, yeah, I've been working on the Artillery load tests, which would be used by this team, and I believe I'm also here temporarily, until Grant joins and wraps up. So until then I'll keep working on the performance work. That's it, yeah, pleased to meet you.
E
I started this discussion point, and, like, I don't feel that I'm gonna be able to prepare myself for this week. So actually I'm trying, we're actually trying, to solve some of the problems; it's part of our issue tracker or something like that. It would focus on, like, measuring performance and memory consumption in some of the key parts, and, like, showcasing, you know, sort of the different tools that we already have in place: like the Grafana metrics, like the bots that we have, Sentry.
E
What are the things that we have in the code base that make it easier? Like the performance bar, the profiling of requests, things like that. So, basically, very hands-on. It's not a one-on-one thing; it's very, like, engineering-oriented, but hopefully, like, when we do that, we can all maybe have some general understanding.
E
What may be useful in all the other stuff that you may be working on. So I'm gonna propose some time and, like, send an invite to everyone, so everyone could feel free to, like, say that this is not a suitable time for me. And if it's gonna be like, okay, everyone agrees that it's gonna be recorded, then we don't need to try to repeat it; maybe the recording is gonna be enough. But, like, I'm also gonna...
E
Look for something like how we could improve that later, because maybe this should become a deep-dive session from the Memory team, and maybe we could organize it together. That would be not only for us, but also, like, for the company, right? And we could then, like, instruct all the new team members to take a look at that and help them get the same amount of knowledge that we have.
A
You know, we can add this, if it proves useful, to our onboarding template that we're building out right now. All right, that's everything on our agenda that doesn't include the document from Andrew. Should we jump over to that, or just assume we're gonna reschedule once he's able to join?
H
I'm sorry that I missed your meeting, so I didn't really get a chance to listen to what you guys were talking about. But do you want to kind of just move across to the second doc, the second agenda that I put together? Yes, so I guess I kind of just jotted down a couple of random thoughts that I'd been thinking about, and the first one is really about the goals. You know, you guys want Puma and I want Puma, but the reasons that we actually want it differ.
H
For example, the defaults that we'd want to use on GitLab.com: our workers on GitLab.com have got, I think, a hundred and twenty gigs of memory, or something crazy like that. So we certainly don't have a memory problem on GitLab.com, and in order to make Puma work fast in that environment...
H
We want to use as much memory as we can, whereas you guys are using it to kind of constrain memory and use less memory. And so I think that's just something that we need to be aware of, and, you know, in order for us to work better together, I think that's something we need to be aware of. I don't know, I haven't actually read through people's notes here.
H
Do we choose the kind of memory-starved version for Omnibus, or do we find some sort of in-between, or do we go with the kind of beefier instances? And I'm guessing it wouldn't be the beefier instances; we'd probably want to sort of focus on a smaller GitLab instance for Puma tuning as the default, but certainly what works on there is not going to be the same as what works on GitLab.com. I think it's just a matter of being aware of that. So...
E
It's only a way to make it a little more efficient. If we're gonna solve the memory problems, it's not gonna make a substantial memory difference, but it's gonna open us up to, like, more efficient operation, and it's gonna open us up to, like, usage of different technologies, like WebSockets, that may result in further improvements of our application stack and, basically, as an outcome, be easier on GitLab.com.
E
I mean, the handling cost of a single request: because, like, the request throughput is not gonna increase with Puma, it's like we are bound by other things that are not related to the memory. It may result in some small reduction, like on the smaller instances, because on the smaller instances maybe we don't need to run two workers, but we may run just one clustered Puma with, like, multiple threads. So it's gonna increase the throughput and gonna slightly decrease the memory usage.
E
There's, like, this kind of fine-tuning that Andrew was talking about, right? We don't know what the balance is, because we don't know, like, all the factors of how Puma is behaving, but we're gonna learn exactly, right, how to optimize that. Like, Puma, just, as I'm trying to tell everyone, is not gonna, like, solve the memory issue of GitLab; it's just...
E
That's it; this is how it works. Puma doesn't make it better on its own: we're not gonna have a higher throughput, even though on the smaller instances we might have higher throughput, because we're gonna maybe have better vertical elasticity and utilization of resources, because it's much cheaper to run multiple threads than multiple forks.
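The workers-versus-threads trade-off being discussed maps directly onto Puma's configuration DSL; a minimal sketch with illustrative values, not GitLab's actual shipped defaults:

```ruby
# config/puma.rb -- illustrative values only, not GitLab's defaults.

# Each worker is a forked process with its own heap; each worker
# serves requests from a pool of threads. A memory-constrained small
# instance might run a single clustered worker with more threads,
# while a large deployment runs many workers.
workers 2
threads 1, 4   # min and max threads per worker

# Load the application once in the master process before forking,
# so workers share memory pages via copy-on-write.
preload_app!
```

Threads are cheaper than forks memory-wise because they share one copy of the application, which is the "vertical elasticity" point above.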
H
Yeah, and, like, one of the reasons why I'm really excited about it from the infrastructure point of view is, like, literally the whole day today I've been on an incident call, and the incident that we're dealing with is: we are running out of Unicorn workers, so something's slowed down in the application.
H
Obviously, ideally we want to go to something where we're actually auto-scaling, like horizontal auto-scaling, but we're not there yet. But having Puma will give us sort of a bit more vertical sort of elasticity in our cluster, and so that's what I'm really excited about. But also I've seen that a lot of the time, kind of leading up to handing over a request, going from Workhorse to Puma is much faster than from Workhorse to Unicorn.
H
Why that is, I don't really know yet, but it definitely seems to be the case, and one of the things that I find super helpful is actually using distributed tracing with this and seeing the steps. So I don't know if you guys are all aware of the distributed tracing that we've got in GDK, and if you've tried it, but I definitely recommend taking a look at it.
H
So in the performance bar there's a little label, it says "trace", and if you click on that, it will bring up Jaeger and it will kind of pre-fill it with a search for the current correlation ID, and you can go and see stuff stepping through. And, you know, one of the things that's really shocking when you do that is when you go from Workhorse to a Unicorn.
H
There'll often be, you know, 100-200 milliseconds, which is crazy: like, what's happening in that space of time? And that's just on your local machine, which doesn't even have any load yet. So definitely, you know, take a look at that. I guess we've only got a few minutes, so I'm going to move on quite quickly to the correctness. Like, Camille, do you have a sort of feel for, you know, should we just kind of blindly go ahead with this, or, you know?
H
If you look at the linting, there's a lot of stuff that the linters say is not thread-safe, and if you look at a lot of it, clearly it isn't thread-safe. On the one hand, this all works in Sidekiq; on the other hand, it's a little bit scary. So I don't know if you've got an opinion on this.
E
I am a little scared of that. I trust Sidekiq that it works, but also I think that it requires a little validation, because the code loading in Rails is not thread-safe. So we may be hiding a lot of these problems because we preload the application ahead of time. But what happens if we stop preloading on some occasions in order to conserve the memory? So I think that it requires at least noting that this is the behavior that we have to be aware of.
E
So this is actually on my to-do: to go through all of these offenses and try to understand if this is something that's gonna cause us trouble. And yes, in most circumstances it works in Sidekiq, but I guess it works because we just preload the application ahead of time, and this is basically why it's safe.
E
Now we do the same for Puma, but I am not certain that we're gonna continue preloading the whole application each time; I'm assuming that, right? We might, as part of the memory effort, at some point say that, like, we are preloading the crucial parts of the application, but not everything, because there are various features and we may just load them on demand to conserve the memory. So in such a case, I guess it's gonna be a problem with thread safety.
H
I think one of the things is that we should probably start, like, giving developers the tools so that they can do things the right way easily. And, you know, at the moment the only way to do things is to kind of stick stuff on a class instance variable or a class variable. And, you know, like, if we give people the tools to do safe lazy initialization, for example, or maybe just tell people that lazy initialization is a bad idea altogether. That's...
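A minimal sketch of the kind of tool being suggested here; the class names are illustrative, not GitLab code. The first pattern is the check-then-set memoization the linters flag; the second wraps it in a Mutex so concurrent Puma threads can't race:

```ruby
# Illustrative only -- these classes are not part of GitLab.

class UnsafeCache
  # `@store ||= {}` is a check-then-set: under threaded Puma workers,
  # two threads can both observe nil and build two separate hashes,
  # silently losing whatever one of them wrote.
  def self.store
    @store ||= {}
  end
end

class SafeCache
  LOCK = Mutex.new

  # The mutex makes the check-then-set atomic, so every thread
  # observes the same memoized object.
  def self.store
    LOCK.synchronize { @store ||= {} }
  end
end
```

Under MRI the race window is narrow, so the unsafe form usually appears to work (much as it does in Sidekiq), which is exactly why it is hard to catch by testing alone.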
E
I was weighing, like, some of the ideas for how we could make this thread-safe, for these offenses to be really treated right. But it's, like, very tricky, because it's not very structured: when it's loaded, and how it's ordered, and, like, how it behaves. And, like, also, with the logic, if we load it in a subsequence, it's like: are we locking or not locking, or are we actually creating the local copy?
E
So I think that this problem is, like, multi-dimensional, because, like, probably we can say that we have three types of initialization: we have, like, the class initialization, which provides you, like, a proper context; and, like, the other class methods that are initialized; but also you have, like, the class methods that perform some mutation of the data over time. I am most worried about this mutation of the data, because the class initialization is mostly safe, but, like, having people use this class in a stateful way, I don't know about that.
E
And the tricky part of that is that these problems that we have with thread safety are not easy to pattern-match with RuboCop's static parsing; that's the problem, I guess, because we cannot reliably confirm this way in what context something is being executed. But then we get into the tricky territory of, like, runtime validation, and then we need to have the confidence that all these runtime offenses are gonna get caught as part of our testing suite. So...
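One hedged sketch of the runtime-validation idea mentioned above (illustrative names, not an actual GitLab mechanism): freeze shared class-level state at boot, so that any mutation raises FrozenError at runtime and the test suite surfaces the offense instead of it passing silently:

```ruby
# Illustrative only. Freezing shared class-level state turns a silent
# thread-safety hazard into a loud runtime error.
class AppDefaults
  SETTINGS = { signup_enabled: true }.freeze
end

# A hypothetical offender: mutates shared state at request time.
def toggle_signup!
  # Raises FrozenError, so any spec exercising this path fails,
  # which is how the runtime offense gets caught by the test suite.
  AppDefaults::SETTINGS[:signup_enabled] = false
end
```

This only catches mutations that some test actually exercises, which is the confidence caveat raised in the last turn.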