From YouTube: Engineering Fellow Group Conversation
A
The first topic is the customer outages, and the second is memory usage. We'll have to cut that short given time constraints, and I'll be happy to take questions.
A
A brief history about me: I joined GitLab.com as a user in October 2014. I want to give everyone a heads up: I do have a twin — an evil twin brother — so people have asked about cloning me, but I am not the same person as Roger, whom you can link to. He sent me a merge request a few months ago.
A
To my surprise, sometimes I merge his code. I joined GitLab full-time in 2015 and bounced around different roles; I helped grow the team to about a hundred and fifty before handing off the reins to Eric, and now I focus most of my time on the hard technical challenges we have.
A
So what exactly do I do here? That's a very good question. There's a great job description link about what the engineering fellow role is about, but I can talk more specifically about what I've been doing this past year. The first thing I want to talk about is these customer outages Sid mentioned earlier in the month.
A
Customer outages are our outages as well. I got pulled into this about two weeks into the problem, so we were a little bit late on our side getting into action, but essentially the customer was running into a really bad issue where access to GitLab was really slow. You know, pages were taking 30 to 60 seconds to load, or not even coming up at all. This was just a core problem for their day-to-day business. They had upgraded in mid-January from GitLab 10.8 to 11.5, and they experienced
A
okay performance for about a week, and then maybe two weeks into the deployment it was almost unusable. You can see in this graph they shared with us that the load average — basically a measurement of how many processes are waiting to do something — jumped from usually under five on any given day to close to 20 at peak times. This is a symptom of part of the problem. So what changed between 10.8 and 11.5?
A
Well, as most of you have heard, it's the move away from NFS: fundamentally, we changed the way we access Git data. In 10.8, on the left side, when a request came in, we went through the Unicorn process. Unicorn actually talked to the NFS server directly via the Rugged API — you'll hear that term a lot; it's basically a Ruby library that accesses Git data and provides an interface to it — and so we were doing a lot of direct access on the NFS server.
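(For reference, Rugged is the Ruby binding to libgit2, and the kind of direct repository access described here looks roughly like this — a minimal sketch; the repository path is illustrative:)

```ruby
require 'rugged'

# Open a bare repository directly on the (NFS-mounted) filesystem
# and resolve a branch to the commit SHA it points at.
repo = Rugged::Repository.new('/var/opt/gitlab/git-data/repositories/group/project.git')
puts repo.branches['master'].target_id  # => e.g. "1a2b3c4d..."
```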
A
Some of the requests were going through Gitaly, but for the most part, most of it was going directly from Unicorn to the NFS server. When we upgraded to 11.5, we moved away from that model: we abstracted most of the Git data calls — almost all of them, in fact — into the Gitaly service.
A
So every request that came from GitLab — anything that touched Git data — had to go through Gitaly, and that's what you see on the right side. Gitaly, in turn, either talks directly to the NFS server or, in GitLab.com's case, directly to the disk. So there's a fundamental difference, and I'll talk more about where this is going later. When we logged into the customer's instance, we saw many of these git processes running, and many of them stuck in this D state.
A
In the fourth column you see this D everywhere, and what that means is these processes were essentially just waiting to hear back from the network file server, and that's what caused the high load.
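(As an aside: a process's state is the third field of /proc/&lt;pid&gt;/stat, and "D" means uninterruptible sleep, almost always a process blocked on I/O. A minimal Linux-only sketch for spotting the condition described here:)

```ruby
# Count processes stuck in uninterruptible sleep ("D"), the state that
# drives load average up while CPUs sit idle waiting on the file server.
stuck = Dir.glob('/proc/[0-9]*/stat').count do |stat|
  begin
    # Field 3 is the state (this sketch assumes no spaces in the comm field).
    File.read(stat).split[2] == 'D'
  rescue Errno::ENOENT
    false # the process exited between the glob and the read
  end
end
puts "#{stuck} processes waiting on I/O"
```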
So if we look under the hood: what exactly happened, and what happens when you actually make a request? I'll take an example.
A
Gitaly then turns that around and spawns a new git process — a git cat-file process in the second step — and that involves a bunch of little I/O operations to read configuration files and so forth, about 40 different operations in total. Then, once it gets the answer, it comes back to the application to say: yes, this branch is commit 1234, whatever the SHA ID is. That takes less than 10 milliseconds at the 99th percentile on any given day.
A
For our customer, this request was actually taking close to 500 milliseconds — 475 milliseconds. That's an eternity if you think about this model. This is a very common graph of latency numbers, and the most important things to look at are on the right side, because that's where we get to disk access. You can see, on the right side, that a packet round trip between California and the Netherlands takes about 150 milliseconds.
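(A rough back-of-the-envelope check — the ~10 ms round trip per NFS operation is an assumed figure for illustration, not a measurement from the outage:)

```ruby
io_ops     = 40    # separate I/O operations for one ref lookup, as above
nfs_rtt_ms = 10.0  # assumed per-operation round trip over a loaded NFS mount
puts io_ops * nfs_rtt_ms  # => 400.0 ms — the observed ~475 ms order of magnitude
```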
A
This needs to be a top priority for the company, because this change that we made was great for GitLab.com, but it was not great for customers using NFS. And this is, well — you know, this is the reference architecture we've been giving to customers for a long time, and we haven't switched them off NFS because there isn't an alternative. I'll talk about the alternative later, but I want to underscore:
A
This is more than just a Gitaly team responsibility, because it's not so much Gitaly's fault — it's how the backend is using Gitaly, right? For example, we should only have to issue one request to do certain things; we shouldn't need a hundred of them, and that's fundamentally the problem here. So how do we fix this? Well, the first thing we need to do is bring back the original functionality — the functionality that used the Rugged API — under a feature flag, so customers using NFS can enable this mode. And that's what we've done.
A
That's what I basically focused on last month, getting this in, because it's a priority for this customer and for other customers as well. There is now a feature flag in 11.9; you can look at the documentation on how to enable a bunch of these feature flags for customers using NFS. But the way we really solve this is to optimize these queries.
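(For illustration, enabling one of these flags happens from a GitLab Rails console; `rugged_find_commit` is one of the Rugged flags, but treat the exact name as an example rather than gospel:)

```ruby
# gitlab-rails console: re-enable direct Rugged access for commit
# lookups on an NFS-backed install.
Feature.enable(:rugged_find_commit)
Feature.enabled?(:rugged_find_commit)  # => true
```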
A
It used to be that GitLab.com was a representative environment — that if it worked on GitLab.com, it would work for customers. That is no longer true, and there are a number of reasons why. For example, we're using solid state disks; most of our customers are still using spinning disks, with NFS on top of that, and the way we're accessing the data is very different too, right? We've gotten rid of NFS for the Git repository data — customers have not — and we're directly accessing the storage on the system, so you get much better performance.
A
So if you look at this graph here, we're actually talking about this blue section, where any access on solid state is probably in the microsecond time frame. We're also using a much more powerful cluster — we have way more CPU, way more RAM — whereas a lot of customers are just using a single instance that has all the services running at the same time. And you'll hear a lot about Gitaly high availability.
A
That's really the main point I wanted to highlight before we shift gears. I don't want to spend too much time on memory usage, because I've already taken eight minutes, but basically I did some investigation over the weekend to figure out how much memory we use. When we talk about memory usage in GitLab, we have to talk about two different things: baseline memory and runtime memory.
A
Baseline is sort of like a hot-air balloon that's sitting in your garage, uninflated: it's how much space it occupies just by doing nothing. This is the graph over time of how much baseline memory has been used, and what was surprising to me is that it increased from 8.4 to 9.9 when we integrated CI. Since then it's increased, but it hasn't increased month over month, which is, I suppose, a good thing.
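(For reference, a minimal way to sample the kind of baseline — resident — memory being graphed here, Linux-only:)

```ruby
# Read this process's resident set size from /proc.
rss_kb = File.read('/proc/self/status')[/VmRSS:\s*(\d+)/, 1].to_i
puts format('baseline RSS: %.1f MB', rss_kb / 1024.0)
```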
A
I don't know exactly why yet, but it is interesting to me that, while it's increased, it hasn't been the linear progression I expected. There are a bunch of different graphs I can talk about specifically — they're pretty self-explanatory, and there's a linked issue with more details about these things — but I sort of want to open up the floor to questions about memory usage, customer outages, and so forth. So I'll turn it over to the floor here, and I think the first question I have is from Rachel.
A
What is the next step in deciding if Puma will help? That's a good question. Really, we need to get metrics integrated with Puma, because we're shipping Puma right now in experimental mode in Omnibus — you can enable it today — but that doesn't do any good unless you can actually measure how many workers you need and how many threads you need. That will really determine how much memory we use: if we need as many processes as we do today with Unicorn, that's not gonna save us any memory.
A
If we need a lot more threads than we think, then, you know, it could add up to more memory. So we really have to experiment with the right values there, get the metrics in, and do a lot of testing to see what works for us and how much memory it takes, and that's going to take a while.
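(This is the knob being discussed: a Puma config trades forked workers for threads. The numbers below are placeholders, not a recommendation — finding the right ones is exactly what the metrics work is for:)

```ruby
# config/puma.rb — fewer forked processes, more threads per process
# than an equivalent Unicorn setup.
workers 4     # each forked worker carries the full baseline memory
threads 1, 4  # min/max threads per worker; threads share one copy of the app
```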
A very niche question: do we know why the delete diff files worker's max allocation is so high on slide 16? That's a good question. I haven't drilled into that specifically, yeah.
B
A
It's possible — I mean, the way we're measuring it, yeah, yeah. So we could look at the issue. I think it's possible the measurement is off, because there may be other things going on that throw off that measurement. But sometimes you'd be surprised: if you actually profile it, you might actually see it do something really silly. That's the sort of thing — that's the nature of memory optimization. Okay.
A
We are seeing a lot of bugs related to NFS — do we still have an environment using NFS? We still use NFS to a limited extent, but not for Git repository data, and I think Jarv mentioned there's an issue there to bring up a test bed using NFS specifically to test this problem that we saw with customers.
A
Next: if GitLab.com isn't representative of customer environments anymore, can we be prescriptive with customers about how they should deploy? Yeah, that's a good question, and this sort of brings us back to the point of spinning up an environment that's more representative. I think that's the issue that Jarv was talking about — 61940 — yeah, the test beds for on-premise. Essentially, either we have to dogfood this configuration, or we have to have an environment that actually simulates customers' environments.
A
There are links to issues there — thanks, Brendan, for chiming in — and that goes to our testing. I think the 61940 issue that Jarv linked talks a lot about that: specifically, what do we need to test, and how do we need to test it to give confidence. And then I think the goal is to gather as much data as we can before we go to this customer and say: hey, we've tested this X, Y, and Z, and we can help you test it on your staging environment as well.
A
If we bring back direct access using Rugged, do we need to fix a bunch of the bugs and timeouts that we saw? That's a good question; we may need to do that. I'm hoping that the feature flags that we have can just be enabled — it worked before for our customers. I don't think we had the circuit breaker in place before, and I don't think we've actually ever enabled it for customers, so I don't see that as high priority, but it is possible.
A
We may need to do some minor fixes if we do see issues. Should we move GitLab.com back onto spinning disks, since it would save costs? That's a good question. It may harm performance, obviously, but it will be cheaper. So I think — Jarv?
D
I mean, if I can, I'd add something to that as well. Yeah, we could totally do that, and certainly with all the cloud spend issues that we're seeing at the moment, it's definitely something that we need to consider. But the big problem is that the classic outage we used to see in Azure was basically all of our Unicorn workers — and this could be Puma as well; it's not Unicorn, it's just the frontend —
D
the entire fleet gets bogged down with requests to that one server, and then everything falls over and you start seeing 502s everywhere. So one of the things that we could do to bring down cloud spending is switch some people over to magnetic disks and keep other people on SSDs — like premium clients and non-premium clients — but the thing that we would definitely need before we did that is a really good circuit breaker, maybe built on top of Envoy.
D
You know, instead of building it ourselves, actually just use something that's out there, ready. And so when we have a slow Gitaly server that's got spinning disks, we can just cut it off and just serve 500 errors for that particular Gitaly server without bringing down the entire cluster. I don't know if that makes any sense, but that's how it sort of works in my head, yeah.
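(A minimal hand-rolled sketch of the circuit-breaker idea, purely for illustration — the suggestion above is to reuse something like Envoy rather than build this:)

```ruby
class CircuitOpenError < StandardError; end

# Trip after N consecutive failures; fail fast while open, then allow
# a retry after a cool-down instead of letting slow calls pile up.
class CircuitBreaker
  def initialize(threshold: 5, cool_down: 30)
    @threshold = threshold
    @cool_down = cool_down
    @failures  = 0
    @opened_at = nil
  end

  def call
    raise CircuitOpenError if open?
    result = yield
    @failures = 0 # a success closes the circuit again
    result
  rescue CircuitOpenError
    raise
  rescue StandardError
    @failures += 1
    @opened_at = Time.now if @failures >= @threshold
    raise
  end

  def open?
    @opened_at && (Time.now - @opened_at) < @cool_down
  end
end
```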
A
That's absolutely right. I think that's the reason we were hesitant to do that — I think that's why we were considering the circuit breaker functionality, right? But there was an issue, yeah: it would basically tie up all of the disks for a long time, and you can harm the entire fleet.
C
I'll just chime in a bit, because I put the question there. The costs of GitLab.com are way higher than they should be, and we moved to SSD just because availability is number one. But we've got availability fixed now, and putting every single repo on SSD is never gonna make GitLab.com sustainable.
D
Yeah — we were talking about this just this morning, and we were trying to work out what percentage of the GitLab fleet's git repositories have been used in the last six months. You'll probably find it's about 10%, and so if we could take the rest of those and sort of cold-storage them — put them in object storage — then, you know, providing we can load them back up in a minute or two, I think that would be a huge saving, I think.
C
That would be amazing, and I think that would totally work. So maybe magnetic disk is not the answer; maybe that is the answer: just get rid of the 90% that hasn't been used for the last half-year. If someone accesses one, we show: hey, sorry, we're reactivating your repo, because it hasn't been used in six months. I think that's totally legitimate.
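(A hypothetical sketch of the policy being floated — every name here, Repo, archive!, restore!, last_activity_at, is invented for illustration; nothing like this exists:)

```ruby
SIX_MONTHS = 6 * 30 * 24 * 3600 # seconds

# Nightly sweep: move repos untouched for six months into object storage.
Repo.all.each do |repo|
  repo.archive! if Time.now - repo.last_activity_at > SIX_MONTHS
end

# On access: transparently restore, telling the user what is happening.
def fetch(repo)
  repo.restore!('Reactivating your repo; it has not been used in six months') if repo.archived?
  repo
end
```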
A
Okay, thanks — thanks for that, CJ, Brendan. How do you consume so many issues so efficiently? It seems sometimes that you are everywhere; you sometimes get to verify issues before my cell phone even notifies me. Yeah, it's a good question. I subscribe to a lot of issues, and if I see something where I could be helpful very quickly, I will chime in. There's no other secret sauce than just, you know, being able to check my email quickly and respond quickly. It does help that I type pretty quickly.
A
Memory usage is probably the top-of-mind thing here, right? Because memory usage is something that doesn't get as much attention, especially runtime memory, because it's easy to write something that just uses sixteen gigabytes of RAM, as I showed in one of the slides here. This thing was using seven gigabytes for a long time and nobody noticed, and I came in, spent a week, and finally figured out it was something we didn't need to do.
A
So we need to do more things like that, where we're constantly evaluating and benchmarking what we're doing. Because one of the things that's happening constantly — I see this a lot on GitLab.com and in our customers' environments — is that Unicorn and Sidekiq workers are constantly being restarted because of memory, and that shouldn't be acceptable to us: no process in the enterprise world should have to restart constantly to recover from memory leaks.
A
That's just something I've really tried to fix, but it's a really hard problem in general, so we may need to think about more dramatic solutions, like rewriting certain parts of GitLab in a language that doesn't use as much memory or is just way more performant. But that has to be addressed on a case-by-case basis.
A
When we recommend an HA architecture, we strongly encourage NFS — should we instead try to encourage the community not to use NFS, especially since we published this blog post about Gitaly? It's a good question. I mean, we've been strongly encouraging NFS to customers — to enterprise customers, right — and a lot of enterprise customers have a solution like an NFS appliance that is fault-tolerant, right? So if a single hard drive goes down, they can recover from that. If we recommend Gitaly as the only way, it may raise some questions about —
A
well, how fault-tolerant is this? And I'm not sure that's the default we should recommend. Obviously, for people who are not concerned about availability and care more about performance, that may be one small trade-off, but I think it'd be hard for us to say to all of our major customers: don't use NFS — until we have a Gitaly high-availability solution. That's my personal opinion; I don't know if other members have an opinion there.
E
Yeah, I was gonna agree. I come in from the world of services with our customers, and that's exactly it: they, you know, don't have a team that's going to take care of availability like our team does on GitLab.com, right? We can accept a shard going down, but we have to, you know, take care of it, feed and water it, and get it back online, whereas for customers that's not acceptable, and so they want to have, you know, a device take care of that for them.
A
Yep. Sid: slides 15 to 16 show much less memory usage from GitLab 11.5 to 11.6 — does this reduce the recommended memory requirement for GitLab from 8 gigabytes? I would hesitate to say that, because, you know, if Sidekiq takes 2 gigabytes of RAM, there are a lot of other things that take RAM too, right? The thing that often takes RAM is actually the git process itself.
A
Don't forget, if you do a git clone of the Linux repo — which a lot of people like to do — that can take 2 to 4 gigabytes of RAM just by itself. So when you start to think about all the different things that need to run: yes, this helps, but there are other things that also need to run. And then, as I said, in the next case, you know, this is GitLab.com.
A
Obviously we're doing things — hitting things — that other customers may not be doing as frequently, so some things are higher today. For example, the export worker, you can see, is using anywhere from 4 to 5 gigabytes, and similarly the import worker. So even though customers may not be using this functionality as much as we are on GitLab.com, they may still hit these barriers when they do something, and so we have to have enough headroom for them to function.
A
Now, if they're using really small repos and not doing a whole lot, yeah, I think 4 gigabytes might work, but once you start doing more interesting things, it's very easy to hit these memory limits. And the baseline is obviously — if your Sidekiq is using 2 gigabytes, you're already hitting 4 gigabytes, leaving room for very little.
F
[inaudible question]
A
It would certainly help in some cases, right? If you disable export/import, that will at least potentially knock the top two things off this graph right there. You know, some of our memory usage is just due to code size and our dependencies: we have close to a thousand gem dependencies on top of our own code, so all that code gets loaded into memory and also contributes. This is sort of the picture of that here, and there will be a chance to talk more about it.
A
But a lot of that — that red corner there — about 30% of it is just instruction sequences from Ruby, right? That's just code, and that's gonna grow over time unless we do something to optimize the interpreter or our own code. So disabling features will help, but, I don't know, you can only do so much. If you could disable code — it would be nice if we could compile it out; that would help — if you could disable it outright so that you don't hit this memory usage.
A
That would help, but it's very tricky to optimize it completely, right? Our mantra is to have an application that just works out of the box with reasonable settings, so I think it's hard to just start tuning different things to get under the memory limits. Now, we know this is a problem with certain things, but for the most part, you know — for example, the git garbage collection worker takes a good two gigabytes of RAM.
A
We don't actually control a lot of that, because that's the git process doing its thing on large repos, so it's hard to just, you know, put a memory box around it. But yeah, I think it does kind of go against our application just working out of the box: it should just work efficiently without us having to tune so many feature flags.
A
I think, if I were to just guess, I spend probably 90% of my time looking at GitLab code and maybe 10% on upstream things. But sometime in January of last year, I had to look at the kernel really closely for NFS issues, because our customer was running on NFS, so that month I was spending, you know, 50% of my time looking at kernel code.