From YouTube: GitLab Architecture 101 for Support Engineers
A
Right, hi, welcome to my session on GitLab architecture. This is going to be a fairly introductory session. I think I will cover things like the components that make up GitLab, that make up a GitLab environment, and how they kind of work together. I won't be covering all the components, just the core components, so I probably will not be talking about things like Pages or the registry, or some of the other ancillary services that we do have. So before I really get into it, I just wanted to say that my explanations are actually based on my experience as a developer of Rails applications and deploying Rails applications.
A
I have never taken a computer science course of any sort. So in many cases my explanations will be based on just the way I kind of see things. They may not actually be technically accurate explanations, so my suggestion is: do your own reading, and if anyone does want to chime in with a more technically accurate explanation, please feel free to do so. This, hopefully, will be a bit more interactive.
A
My hope is that this recording doesn't become the resource for understanding GitLab architecture, but you know, it might be a good starting point. So with that, let me get started. I'll share my screen.
A
Okay, but before I go on, does anyone have any questions or any general thoughts they might have? Which is a bit of a strange question to ask, I guess, because it's a bit vague, but yeah.
A
It is basically just a framework written in Ruby that allows you to serve web requests. So we'll start with Rails, and I'll just draw a heart here as well, just to say that this is the core application.
A
Right, so Rails itself, that's the framework. It is a framework that takes on web requests and gives a response, but on its own it's actually not the web server. The web server itself is actually Puma in current GitLab environments; we used to use Unicorn, but now we use Puma.
A
So one thing to know about Puma is that Puma is actually what they call a Rack server. In Ruby there is this web request framework called Rack as well.
A
Rack. What Rack does is provide a standardized way to take on a request, transform it in some way, and then pass it on to the next item in the chain. So Rails itself...
A
Yes, Rack::Timeout is one of them, and then Rack Attack is the other one. So how it actually works is that there is a Rack app; let's use Rack::Timeout, since you mentioned it.
A
So what happens is that essentially a web request comes in, Puma serves it to basically the first app in the Rack stack, it goes down the stack and finally gets to Rails, and then you get a response. It probably isn't the best explanation, but that's how this works.
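A minimal sketch of the Rack contract being described, assuming a standalone config.ru rather than GitLab's actual code:

    # config.ru -- a minimal Rack application (illustrative, not GitLab's code).
    # A Rack app is any object that responds to #call(env) and returns a
    # three-element array: [status, headers, body].
    app = lambda do |env|
      [200, { 'Content-Type' => 'text/plain' }, ["You requested #{env['PATH_INFO']}\n"]]
    end

    run app  # a Rack server such as Puma serves this, passing in each request's env hash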
This is also why, if Rack::Timeout or Rack Attack, which we use for rate throttling, actually reject a...
C
Sorry, I have one question, Wai Ming. You said that Rack is this Ruby app that's like a middleware between Puma and Rails, and if there's some sort of rejection of the request prior to getting to Rails, that'll happen outside of Rails. But if Rack is a Ruby app, is it using the same Ruby runtime, or is it different? Is it separate?
A
I don't know the answer to that, actually. So all Rack apps are middleware, and in fact Rails itself is a Rack app as well. It's just that Rails happens to sit at the very bottom of that stack, if that makes sense, and Rack::Timeout sits on top of that. Another good example of a Rack app, which I think you might be familiar with but might not have realized actually is a Rack app...
A
It's actually OmniAuth. We use OmniAuth to provide authentication processing for different providers such as GitHub, I think Google as well, and SAML, and a bunch of other stuff. How it actually works in this case is that it receives the authentication callback from these external providers. It then takes the information that we need, which is generally the authentication token and the username, I think, and adds it to the request environment.
A
So this is how Rails knows, in a standardized form, how to interact with those things. Sorry, I think that was a bit of a digression, but does that help you understand how it works? (Yes, that was awesome, thank you.) Okay, but yes, I don't actually know if it's the same Ruby environment. I think it is, because I think...
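A sketch of the convention being described: OmniAuth, as Rack middleware, places the normalized result into the Rack env before Rails sees the request (the controller and helper names here are hypothetical, not GitLab's actual code):

    class OmniauthCallbacksController < ApplicationController
      def callback
        # OmniAuth has already processed the provider callback and stored the
        # standardized result under this well-known env key.
        auth = request.env['omniauth.auth']
        # e.g. auth.provider => "github", auth.uid, auth.credentials.token
        sign_in_from_oauth(auth)  # hypothetical helper
      end
    end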
A
But yeah, in general, any logging there would have to be done intentionally by the GitLab developers, and I don't think we actually have it. This is why those requests don't show up in the Rails logs, but the requests do show up elsewhere.
A
So, for example, we'll talk about this a bit later on, but they definitely show up in NGINX, and in Workhorse as well, because those are just reverse proxies and all requests will go through them. They will try to forward the request to Puma, and then if it kind of disappears and doesn't appear in the Rails application log, that probably means it was rejected somewhere between the reverse proxies and Rails itself.
A
So there will be traces of those requests that might be rejected by a Rack app that's in the chain before Rails. But yes, it will not actually appear in production.log or production_json.log. Okay.
A
I actually don't know the technical details of that, but my understanding is that Rack takes a raw incoming HTTP request and then transforms it into sort of a standardized format, so that anyone who wants to write a Rack middleware or Rack app has a consistent interface they can work with. So they don't have to worry about...
A
Like, hey, if you're sending in this request, what are the headers that you're going to send us? It takes all of that and transforms the request into something that is kind of like a standard interface, so that anyone who wants to write a Rack app can do so without having to worry about...
A
...all the other ways you might actually end up receiving an HTTP request. It's kind of like XML, for people who have worked with XML: there is a standard for XML, usually a documented standard in terms of what keys and attributes to use, but of course no one follows that, everyone has their own XML standard, and that is a pain.
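A minimal sketch of a Rack middleware, showing the consistent interface being described; Rack::Timeout and Rack Attack have this same shape (the class itself is hypothetical):

    class RequestLogger
      def initialize(app)
        @app = app  # the next app in the Rack stack; Rails sits at the bottom
      end

      def call(env)
        # env is the standardized form of the request: method, path, headers, etc.
        puts "#{env['REQUEST_METHOD']} #{env['PATH_INFO']}"
        # A middleware may reject here, so the request never reaches Rails...
        return [429, { 'Content-Type' => 'text/plain' }, ['Throttled']] if env['PATH_INFO'] == '/blocked'
        # ...or pass it down the chain and relay the response back up.
        @app.call(env)
      end
    end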
A
Yeah. So there are a number of other things I want to talk about specific to Puma.
A
If you have spent some time looking at GitLab logs, you might have noticed that Puma workers tend to be killed after a while, and it's not like every few minutes; they actually get killed something like every 20 to 30 seconds.
A
It turns out that my understanding was wrong. It turns out that Ruby actually does have a pretty good garbage collection process. It's just that for web applications like Rails, memory allocation happens when clients hit endpoints, and the frequency of endpoints being hit in a web application is high enough that Ruby actually ends up allocating more memory than it frees up. So essentially, garbage collection happens slower than requests come in, and memory allocation increases over time.
A
I guess within a known range, because the amount of memory that a Puma worker takes up does affect the total number of workers you can have; you can't exceed the total amount of RAM on your system, for example. So that's one reason why you see that all the time, and it's not a sign of Puma itself being unhealthy if it happens something like every 20 to 30 seconds, or even every 40 to 50 seconds.
A
That's actually fine. But if you see it happen too often, like every five or six seconds, that's probably a sign that too little memory is allocated to each Puma worker. So yeah, any questions about that?
B
You mentioned that it's actually scheduled to be restarted on, I think, a daily basis, the worker killer. Did you mention that?
A
No. What I mentioned was that the Puma worker killer basically detects when each Puma worker is consuming memory beyond the limits. It then kills that worker, and the worker will basically restart on its own.
B
That's what I was about to add; that's one function of it. Having a Puma worker killed sounds bad, but it's not necessarily bad from the stability standpoint, because the whole point of having this is to actually keep the memory in check. And I think there was another thing about this: the worker killer is actually configured to...
B
We call it recycling; the workers get recycled on a daily basis, and that's just part of the default configuration. Basically it's the same thing, just to maintain the memory usage of Rails. Anyway, the point is that if you see a worker got restarted, it doesn't necessarily mean something is bad; I think it's the frequency that causes issues.
A
One other thing I should mention is that when I say a Puma worker gets killed, it always gets restarted, because if we define in the gitlab.rb file that we always want to have six workers, for example, there will always be six workers. Even if we kill one, the Puma master process will essentially try to restart it such that there are six processes.
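In Omnibus terms that looks roughly like this in /etc/gitlab/gitlab.rb (a sketch from memory; verify the key names against the docs):

    # /etc/gitlab/gitlab.rb
    puma['worker_processes'] = 6             # the Puma master keeps six workers alive
    puma['per_worker_max_memory_mb'] = 1024  # worker killer threshold per worker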
A
No, there isn't, so it is very possible that a Puma worker can get killed in the middle of a request. I've seen it happen before, not in GitLab actually, but in some of the other Rails applications that I've developed.
C
I've seen that. I've had a couple of tickets where Puma worker killer was killing the Puma workers in the middle of requests and the users were getting 502s. Sometimes, depending on the request, they may not receive that, but they might see the request fail or some other functionality not working as expected. Usually in those cases we would increase the per-worker memory to avoid that from happening.
C
But yeah, you were saying, Wai Ming, that it being killed frequently isn't necessarily a bad thing, but if that's happening in the middle of requests, that can cause instability. Is that right?
A
Yes, that's right. So when I say killing too frequently, I mentioned 20 to 30 seconds being okay; that is just a general guideline.
A
The real test of whether your workers are being killed too frequently is whether too many requests are being cut off in the middle of something happening. This is basically a statistical game: if your workers are restarting too frequently, more requests will get cut off mid-flight, and that creates a poor user experience. So 20 to 30 seconds...
A
...is probably a good guideline as to what's okay. But if you do hear customers reporting mysterious 500s or 502s all the time, that probably means there's too little memory allocated to each Puma worker and they are restarting too frequently, causing too many opportunities for connections to get cut off mid-flight. So yes, that is the better way to look at it.
B
I'm trying to follow up on Kenneth's question earlier. You were asking about whether there is graceful termination of connections; was there any particular reason you're wondering about that? I'm just curious.
B
(Are you asking me?) Yeah, I'm just curious about why you asked that question, because I have some thoughts on it, but I'm curious.
B
Yeah, so this is the thing: my understanding is that part of this is also because, if the code itself actually has some runaway process that is, I don't know, not garbage collected properly, this timeout actually guards against that. That's my understanding of why it doesn't have a graceful termination.
A
I'm not fully clear about it either, but I think I did read somewhere that it is extremely difficult for Ruby applications to know introspectively how much memory they are actually consuming. So the way we kill Puma workers is actually a separate process outside of Rails, and it doesn't actually know what Rails is doing. It just sees that the Puma worker is beyond the range of acceptable memory usage, and it kills it regardless of what's going on with that worker process at that point in time.
A
So again, take this with a pinch of salt; this is an area I'm not super familiar with. I just know that there is no graceful shutdown of a Puma worker when it exceeds its memory limit: when the worker killer is involved, it just kills it, rather impolitely actually.
A
Cool, okay, thank you. I just want to touch on some of the scaling triggers that you might see for Puma before we move on to the rest of the GitLab components. The scaling triggers for Puma are basically CPU: you need CPU to run the workers. It's actually less than Unicorn; if I recall, Unicorn was basically one Unicorn worker per CPU, whereas I think Puma is threaded, so there's a bit more play there. But CPU is still something you might want to think about.
A
The other scaling trigger that we have is RAM. If we have four workers taking up 1.2 gigs of RAM each and you only have four gigs of RAM, that's not going to fly. So the other scaling trigger is RAM: if you are running out of memory to run more workers, then yes, you need more RAM.
A
Yes, you could do that; if you want to maintain the amount of RAM per machine, then you need more machines, but I think that's beyond the scope of this session. There's also one other scaling trigger to think about, although it's more of a performance problem than a real scaling trigger: in very extreme cases you can actually run out of file descriptors for Rails applications. The reason behind this is that Ruby, and Rails in particular, is dynamically loaded.
A
So essentially, whenever you run a Rails application, it opens tons and tons of files. In very rare and extreme cases, and we have seen this a few times before, customers report that the underlying OS says it can no longer open new files, and it just refuses to do anything. So that's something to think about as well: if you encounter mysterious errors but don't really know why, it could be worth checking there.
C
You can also configure, apart from the number of workers and the amount of RAM per worker, the threads per worker. A lot of the time you should keep that at the default, but Harsh and I had an interesting problem where a customer's Puma wasn't starting up.
C
It was just taking too much time and it timed out. It turned out that they had set the thread count too high, and it was taking way too long to stop and start. I think it was like 16 or something, and we tested all the different numbers from 1 to 16; once it got beyond 14 or 15, the time it took for the workers and the threads to be destroyed and created was too long.
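The thread setting Mike mentions sits alongside the worker settings; a sketch, again with key names as I recall them from Omnibus:

    # /etc/gitlab/gitlab.rb
    puma['min_threads'] = 4  # threads per worker; the defaults are usually fine,
    puma['max_threads'] = 4  # and very high counts slow worker stop/start down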
A
Thank you for that, Mike. I will move along now into the more interesting part of this, and we'll talk about the one key reason why we have more than just Puma serving up GitLab.
B
That's my understanding anyway; that's my understanding of why you want to cut it off at 60 seconds, because otherwise the requests stack up, all of them trying to process something that's hung. So all the workers and all the threads are processing things that hung, and hence it fails.
A
That's exactly the reason why we want to have a 60-second timeout per request. And you actually don't need that many long-running requests at all; we will talk about this when we talk about Gitaly, which is interesting. But if you have requests that take, let's say, two minutes each, and you have four of those, and you only have four workers serving requests...
A
So yes, this is why, I believe, in our documentation in the past we used to put in things like: you can actually increase the timeout, but be careful. It didn't actually tell you why you have to be careful. This is the reason: you have to be aware of what happens if too many long-running requests come in.
A
That makes sense. I don't know if that's the case, but if it is, then yes, we would have to increase the timeout across the entire chain.
A
Yeah, there are also other timeouts with other components, but we'll get into that.
A
But yes, any other questions about the timeout that we have on Puma requests?
A
Okay, so I specifically want to talk about the timeout, because this is actually the key reason why we have so many other components. I guess the next question to ask is: if we have a 60-second timeout on Puma requests, then how do we actually run processes or transformations that take more than 60 seconds?
A
Sidekiq, yeah, exactly. We run it in Sidekiq; this is exactly why we have Sidekiq. The whole purpose of Sidekiq is to run jobs in the background.
A
So yes, from a web development perspective, there are actually a few reasons you want to use Sidekiq beyond just jobs that might take longer than 60 seconds. A good example is when you need to run something async because it doesn't make sense to have it block a synchronous transaction. Say you create a comment on a support team meta issue, and on that support team meta issue there are, I think, 30 other support engineers who have also commented.
A
Can you imagine if you posted a comment and then had to wait for the SMTP server to send 30 emails to notify everyone? It would take forever. So what happens is that we actually save the comment first, and then we send a background job to Sidekiq, or 30 background jobs, to actually send those emails.
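A minimal sketch of that pattern (the worker and mailer names are hypothetical, not GitLab's actual notification code):

    class CommentNotificationWorker
      include Sidekiq::Worker

      def perform(comment_id, recipient_id)
        # Runs later on the Sidekiq Rails, outside the 60-second web window.
        CommentMailer.notification(comment_id, recipient_id).deliver_now
      end
    end

    # In the web request: save the comment, enqueue 30 small jobs, respond immediately.
    recipient_ids.each do |user_id|
      CommentNotificationWorker.perform_async(comment.id, user_id)
    end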
A
Something else you might have seen with background jobs is payment processing. You might have encountered before that your order is confirmed, and then sometime later you get a notification saying that your payment method has failed. That's because some e-commerce platforms essentially want your business, so they confirm your order first and then process your payment in the background, because sometimes payment processing can take up to two minutes.
A
If I'm not wrong, to be clear, some e-commerce websites do that: they actually process your payment in the background and tell you it has succeeded when it actually has not. So that's another thing.
A
Cool. There are timeouts with Sidekiq as well, but they are significantly longer than Puma's. I think the Sidekiq timeout by default is three hours or something like that; I'm not too sure.
A
That's actually pretty much it for Sidekiq; there's not much to talk about here. However, we will talk about how Puma, or how Rails itself, actually sends jobs to Sidekiq. So how do jobs get sent from Puma, or rather the web Rails, to the Sidekiq Rails? Does anyone know the answer to that?
B
I feel like the hint is your next section: Redis.
A
Yeah, if you read the document, you can actually tell the answers to my questions. But yes, it's Redis.
A
We basically store the queue in Redis, and I think, Mike, you're very perceptive; you actually said it. There is a worker class, and then you have arguments, so in Redis... well, my handwriting is terrible, let me fix that.
A
And again, I don't know if I'm using the correct flowchart symbols for this; forgive me if I'm mangling it. But yes, the background queue for Sidekiq is actually stored in Redis. A quick introduction to Redis: Redis is a key-value store that is very, very fast, and the reason it's so fast is that the entirety of its working memory is stored in RAM. So the scaling trigger for Redis is just the amount of RAM you have available to Redis.
A
I would be very surprised if you ran out of RAM with Redis, unless you are provisioning some very small amount, like one gig or less, to Redis; it scales to as much RAM as it has available. So how the web Rails actually sends jobs to the Sidekiq Rails is that it does this:
A
It essentially sends a request to Redis saying: I want to store this data, the worker class with the arguments, into one of the lists. One more thing: Redis stores values in lists, and each list essentially represents a queue in Sidekiq.
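Roughly what that looks like at the Redis level, sketched with the redis gem (simplified; Sidekiq's real payload carries more fields, such as the job ID and retry options):

    require 'json'
    require 'redis'

    redis = Redis.new

    # The web Rails pushes a serialized job onto a list that represents a queue.
    job = { 'class' => 'CommentNotificationWorker', 'args' => [42, 7] }
    redis.lpush('queue:default', job.to_json)

    # A Sidekiq process does the mirror image: block until a job is available.
    _queue, payload = redis.brpop('queue:default')
    puts JSON.parse(payload)['class']  # => "CommentNotificationWorker"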
A
Okay, cool. So with Redis we have a very, very fast store of information. It's all in memory, things happen very quickly: you send a request and it just gives you back the value immediately, because it's all in memory. What else is that good for?
A
Yes, exactly. Also remember the 60-second timeout per request; this is a key constraint we always need to plan against. We want web requests to be as fast as we can make them, so that we can serve more requests per worker. So exactly: Redis is used as a cache.
A
This is basically taking a result and storing it in the cache, so that the next time you access it you no longer have to run the calculations; you just get the result directly. Some common things that we store in the cache are not just things that require a lot of processing, but also things that are very commonly retrieved, where it's much faster to retrieve them from Redis than from the database. One common example of this is actually application settings in GitLab.
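In Rails code, that read-through caching pattern looks like this (a sketch; GitLab's actual settings caching is more involved):

    # Rails.cache is backed by Redis in a GitLab environment.
    settings = Rails.cache.fetch('application_settings', expires_in: 1.minute) do
      # This block only runs on a cache miss; otherwise the Redis copy
      # is returned directly, skipping the database query entirely.
      ApplicationSetting.last
    end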
B
I just recently found out, and this is probably a bit more of an aside, but not really either: I recently found out that we can separate Redis cache and Redis persistence per system. So I'm assuming the cache is, right, the state, and the persistent one is the lists for Sidekiq?
A
I was going to say, what I know about Redis is that Redis stores everything in memory, but it still needs a way to persist this between events where you restart your server or lose power to your server. So as far as I know, in the context of Redis, persistence means that...
A
I can't remember how often it is, but every so often it does actually persist all of its in-memory values to disk, so that the next time Redis reboots it can repopulate what's in memory. So I'm actually not sure what you mean by separating the Redis persistence from the values in memory.
B
I literally just found out, like half an hour before this call, that we can actually separate Redis cache and Redis persistence, especially in the 10k reference architectures and above. That's why I was like, wait, this is a new thing for me, so I've started to think about it in a different way, because I always thought Redis is a very...
B
I mean, it stores things in memory, right, so I always assumed that all the content in it was very ephemeral, like it could just disappear. But obviously, well, not obviously I guess, with the way it interacts with Sidekiq, it's like...
A
No worries, that actually gives me something to look more into as well, because there are many different ways to run Redis. It stores things in lists, so in theory you could have multiple Redis instances, each with its own different set of lists, and this is kind of what we do with Sidekiq as well: if you want different workers to process different queues, we have them in separate lists. But yeah, I won't go over that.
C
Just for my understanding: for the asynchronous jobs that Rails initiates, does Redis store which jobs need to be run, like the queues, but Sidekiq actually does the execution of that in its own little Rails environment? Is that correct?
A
Yes, that's right. In this flow here, what happens is that the web Rails will say: okay, based off this request, I want to run this worker class with these arguments. It essentially stores those arguments into Redis, and then later on, when a Sidekiq worker is free, it will poll Redis for the next job in the queue, and it sees the worker class and the arguments. It has the same copy of Rails that the web Rails does.
A
No worries. The next thing I'll talk about is Postgres. All web applications do need a persistent store; in our case we use Postgres as our database, a relational database.
A
Yep, so for Postgres there isn't really much to talk about in terms of the service on its own, but we want to talk about how the connection between Postgres and Rails works, and we'll talk a little bit about high availability as well. Briefly, how Postgres works is that it will have a master process, and it actually creates a fork of itself whenever it needs to serve a new connection.
A
All right, so Rails has this thing called a built-in connection pooler, and connection pooling is actually not unique to Rails; it's a basic technique for databases. You have a pool.
A
Right, so the idea behind this pool is that, keeping in mind that in order to serve new requests Postgres will always fork itself, starting up a new Postgres process is very expensive. The reason why we have a pool with multiple connections in it, hence the connection pool, is that once one database request is done, it doesn't actually close that connection.
A
It keeps the connection in the pool, so that if there's a second request you can actually reuse an existing connection, and Postgres doesn't have to keep forking itself in order to serve new database requests. This is basically a performance optimization technique, and it's why you might hear about things like database pooling or connection pooling. It's a useful concept to understand: Rails actually does it natively, so Sidekiq does it, and Puma, or rather Rails, does it.
A
How this normally works is that you define that you want your connection pool to be eight, say, or whatever it is, and then it will create new connections to the database until you reach the maximum number of connections, and then it will stop creating new database connections. On the Postgres end, it doesn't actually know any of this; it just knows: I have incoming requests.
A
It just happens that, because on your application side you set your connection pool to eight, for example, Postgres sees that it only ever gets eight incoming connections, so it only forks itself eight times.
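On the application side the pool is just configuration, and you can watch it at work; a sketch:

    # The pool size caps how many connections Rails will open, and therefore
    # how many server processes Postgres ends up forking for this application.
    ActiveRecord::Base.establish_connection(
      adapter:  'postgresql',
      database: 'gitlabhq_production',
      pool:     8
    )

    ActiveRecord::Base.connection_pool.with_connection do |conn|
      conn.execute('SELECT 1')  # reuses an idle connection instead of opening a new one
    end

    p ActiveRecord::Base.connection_pool.stat  # e.g. { size: 8, connections: 1, busy: 0, ... }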
A
PgBouncer is also a connection pooler, and I'll go into that very shortly. I'll talk about that in just a bit.
A
But yeah, was it you, Mike, that asked that question about the connection pools? Did that kind of answer your question? I mean, I answered it in a very non-technical way, I think, but yeah.
A
Yeah, so the Postgres server doesn't care about the connection pooling. It just knows: I have an incoming connection, I will fork myself to meet that connection if I don't currently have enough processes to deal with it. So yeah, connection pooling is always on the application side.
A
Cool. So, Mike, about PgBouncer, that's actually a good one to talk about next. When do we use PgBouncer? Imagine if you have multiple Rails applications; this would be a scaled scenario.
A
So you end up, instead of eight connections, with 24 connections to Postgres, and things start to look bad in terms of performance.
A
This gets even worse as you scale, Michael. So what do we do here? We have PgBouncer, essentially; PgBouncer gets rid of all this.
A
Since you're on the topic: PgBouncer is not just a connection pooler, of course. With PgBouncer, it can actually maintain a list of Postgres servers to connect to, so essentially, if the top one on the list is no longer available, you can go to the next one, and that's the beginnings of a high-availability setup for Postgres.
A
Okay, if not, then I do want to cover a few more things about Postgres, not specific to the service itself, but more about how we use it. Each Postgres server can actually have multiple databases. The one that we use for GitLab is called, if I'm not wrong... sorry, that's not a legible letter; the G is for GitLab.
A
Yes, the database we use for GitLab is gitlabhq_production. Something interesting to note here: the reason it's called gitlabhq_production is, I'm guessing, that at the very beginning of GitLab, when we created the GitLab Rails application, someone ran the command "rails new gitlabhq". This is actually a Rails convention for the database name: this part is the app name, and then this part is the environment that the database is meant to be in.
A
If you actually created a Rails application and ran it in your local environment, a development environment, this would be gitlabhq_development.
A
One thing to note here is that, if I remember correctly, the GitLab Rake backup task actually only backs up this database; it doesn't back up the Mattermost database, if I'm not wrong, but I could be wrong as well. That's always something to look out for. The second thing to note about this is...
A
Yes, I'll cover this as well. What happens in Postgres kind of stays in Postgres, and what happens in Rails stays in Rails. Whenever we update a Rails application, we need a way to tell the database to update its schema, its data structure, to meet the needs of the software itself, and that's why we have database migrations. A database migration is essentially a way for Rails to define:
A
to support this software change, these are the changes I need made to the database. This is why we have database migrations: basically a way to keep the database in sync with the changes we make in the application. So yeah, any questions about that?
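A minimal migration sketch (a hypothetical column, not a real GitLab migration); running rails db:migrate, or gitlab-rake db:migrate on Omnibus, applies any pending ones:

    class AddThemePreferenceToUsers < ActiveRecord::Migration[6.1]
      def change
        # Declares the schema change the new application code needs;
        # Rails translates this into the SQL that alters Postgres.
        add_column :users, :theme_preference, :string, default: 'light'
      end
    end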
A
Okay, so the last data store I haven't covered yet is basically the most important one for GitLab, and that is Gitaly, because what would we be without repository storage? It wouldn't be called GitLab; it would be called Jira, maybe.
A
Yes, exactly. Essentially, all Gitaly does is take commands from Rails in the form of remote procedure calls and actually run the raw Git commands on disk against the stored repositories. So yes, exactly right.
A
So yes, that is exactly what it does. One thing to note here is that Gitaly can't actually run arbitrary Git commands. It has to run commands based on remote procedure calls, or gRPCs, which have to be predefined by developers. So Gitaly can't simply run anything it wishes; it can only run what our developers have defined.
A
Excuse me. So essentially, here is how it works: the web Rails or the Sidekiq Rails, I'll just use the web Rails for example, sends a gRPC.
A
And then Gitaly sends a response. In cases where Gitaly takes too long to get back to Rails, you see that in errors like, I think it's "14: deadline exceeded".
A
Yeah, "14: deadline exceeded"; this is actually an application error. There are actually three Gitaly timeouts defined for this: a fast timeout, a medium timeout, and a slow timeout, and then there's an overall timeout. The overall timeout, if I'm not wrong, is set to 57 seconds, which is 95% of the Puma timeout, and the others are set at some values which I can't remember. So again, this...
A
This is all designed to prevent a situation where you have a long-running operation that holds up the Puma server from serving more requests.
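With the Ruby grpc gem the deadline is passed per call, which is where a figure like 57 seconds would be applied; a sketch with service and method names that are assumptions, not Gitaly's actual client code:

    require 'grpc'

    stub = Gitaly::RepositoryService::Stub.new('gitaly.example.internal:8075',
                                               :this_channel_is_insecure)
    request = Gitaly::RepositoryExistsRequest.new  # plus the repository details

    begin
      stub.repository_exists(request, deadline: Time.now + 57)  # the "overall" timeout
    rescue GRPC::DeadlineExceeded
      # Rails gives up here, but the Git process on the Gitaly server may keep running.
    end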
A
And someone correct me if I'm wrong here, but if I remember correctly, the timeout actually happens on the Puma side. So when you see deadline exceeded, it actually happens on Puma, but Gitaly in the background still runs the command. It doesn't care that there's a timeout; it doesn't do a kill of the process. It just continues running that Git command until it finishes. You occasionally see this in tickets as well.
A
People ask: hey, this timeout is happening, but the Gitaly server still seems busy and it's still churning on something. That's exactly why: there's some Git operation that is continuing to run, even though Puma says, I'm done, I don't want to continue with this Gitaly operation anymore.
C
Sometimes I see in the gRPC requests that it has a start time, and then it has a deadline start and a deadline end time. So is it that Gitaly is already aware of the deadline that's been set, and does it proactively fail if the duration is above that? Or does it, as you say, just keep going, even though the deadline has already been exceeded and already been enforced by NGINX or something?
A
I'm actually not sure about this one. To be very transparent, I have not touched tickets on a day-to-day basis for over a year and a half, or even two years now, so back then that was the case.
C
Yeah, I'm not sure. It's just something that I observed, and sometimes I'd see correlations between the deadline and the end time of the operation. I don't know for certain if that does happen, but from observing it, it looked like that was the case. It'd be good to know if that is the case or not.
A
Something else I want to cover with Gitaly: let's say you have four Puma workers available to serve web requests, and say we want to make a request to view a repository. With four workers, how many concurrent repository requests do you think we can serve through Rails?
A
The intuitive answer is four, but the actual answer is two. The reason behind this is that every time a gRPC is made, Gitaly actually sends a request back to Rails, to an internal API.
A
An internal API: it does this to check that the person or user making that request actually has the permissions to view that repository or do that repository action.
A
Yeah, two threads is, I think, more accurate for Puma; two workers would be the answer for Unicorn. But yes, each Gitaly request can take up more threads or workers than you might think it does, because it actually makes an internal request before completing the original request.
A
So that's one gotcha to know about, and it's really handy to know as well, because sometimes customers who have a relatively small single box running Omnibus can run into things like this: hey, I'm trying to access a merge request that my colleague is too, and now suddenly we can't do anything. Although it's been less common now, I think, as more of our customers are on bigger machines, you never know when this is useful to see.
A
There are two common RPCs that we see often in logs, and I'll write them out: post upload-pack, or the SSH variant, depending on whether it's Git over HTTP or Git over SSH, and post receive-pack.
A
A mistake I think people make when they first look at Gitaly logs is that they don't realize the Gitaly logs are actually written from the perspective of the server itself. So upload-pack: if you look at it, you might think that this is a git push, but it's actually a git pull, because it's the GitLab server uploading data back to the client. The same logic applies to receive-pack, where the server is receiving data, so that's the push.
A
Okay, the last thing to know about Gitaly is that the main performance driver is always IOPS. Git has many operations that involve many small files. So if you have IOPS that are really low, like 700 to 800, spinning disks essentially, it probably will work, but it will probably fall over really quickly as well. If you have heavy Git operations, you probably want something more in the range of 7,000 to 8,000, which is more like an SSD. That's something to take note of as well.
A
That covers the internals of the GitLab environment. Now we'll talk about how this actually interacts with the outside world, so I'm just going to move this over.
A
So both of these are reverse proxies. Reverse proxies are essentially web servers that take a request and then send it elsewhere, depending on routing rules; usually this would be hostname and port. GitLab Shell takes on all SSH connections. Generally we just send these, I think, right to Gitaly, if I'm not wrong.
A
I don't know the exact port numbers for many of these, but yes, these are some examples of where NGINX can send requests to that are not actually the core GitLab components. If it does want to send a request to GitLab itself, sorry, the core Rails components, it sends it first to GitLab Workhorse. So this is Workhorse.
A
Workhorse itself is actually another reverse proxy. In most cases Workhorse will basically send a request over to Rails, but the purpose of Workhorse here, and remember again the 60-second timeout, is basically to intercept any long-running requests which we don't necessarily need Rails to handle.
A
Some examples of this would be handling Git operations over HTTP; we don't need to go through Rails for that. If it's just a push or whatever it is, I think it just goes to Gitaly itself, so we avoid having to touch the Rails application, which is relatively slow.
A
This is also the reason why debugging artifact uploads is notoriously difficult, because all you have to work with are the NGINX logs, the Workhorse logs, and the runner job logs. You don't get a lot of information there.
A
So yes, the whole point of Workhorse here is to basically intercept any requests which will probably run longer than 60 seconds, or which do not actually require the Rails application at all. This is also an optimization technique, essentially: don't send requests to the web server if they don't need to be served by the web server.
A
Another common example of a reverse proxy doing this would be using the reverse proxy to serve static files directly, instead of going through the web server and having the web server say, please serve this static file. I don't think we do that with GitLab, or we might, but this is also a common optimization technique that web developers use in other Rails applications.
A
Okay, yeah, I think that basically covers all the topics I want to talk about with the architecture 101. I've covered the internals, some of the reasons why we have them that way, how the components talk to each other, and why they talk to each other that way. I've also covered a bit about the front doors through which all requests are served, which are the GitLab Shell and NGINX reverse proxies. And purely from a developer's perspective...
A
...it's actually very interesting to see how so many of these components are there just to avoid overburdening the Rails application itself. If you read around, people have consistently trashed Rails for not being very performant. It's true, which is why we have all these.
A
So that's it, I guess. Anyone have any questions or any remarks? We do have about another 10 minutes, I think, for just general discussion.
B
I have a comment, I think. It's only on SaaS right now that GitLab Shell runs as an sshd daemon; on self-managed it still uses OpenSSH and the authorized_keys file. So when you log in with SSH using your SSH key, it runs a forced command which then invokes gitlab-shell, and that does all the stuff. On self-managed you still log in over OpenSSH first, and then it hits gitlab-shell.
B
I think they're trying, yes. On SaaS right now, on gitlab.com, they are working on replacing OpenSSH; there's a feature in the new version of gitlab-shell where it can actually just run as an sshd.
B
Yeah, I think that was what Ken brought up last week, that there's a replacement. I think it's replacing it on gitlab.com; it's not yet enabled on self-managed, right?
C
...a critical part of the infrastructure, because it satisfied a need to be in front of Rails and Puma and take a lot of the load. I think that's why they called it Workhorse. But yeah, it's interesting that it just started from, I guess, one person's idea, just hacking away on the weekend, and now it's where it is now.
A
One of our other key features, GitLab CI, actually started out much the same way. I think back then there was a lot of debate as to whether we should have CI as a separate thing or bake it right into GitLab. Kamil, I think, was the one that said, you know what, I'm just going to do this, and it happened the way it did and has become quite core to our product.
A
Yeah, I think in many cases scaling is actually pretty okay. Well, maybe it's not, but I think it's quite intuitive: if you want to have multiple application servers because you have too many incoming web requests, you need two of these, right? And then once you have two of these, you run into the problem of having multiple applications hitting your Postgres, and then how do they actually talk to Redis, and stuff like that. So it kind of grows from there.
A
Anyone else have any comments or questions they want to bring up? Otherwise, let's stop this here and we can all go back to the rest of our days.
C
I do have a question. It's not related to architecture, but the program you're using, is that Microsoft OneNote or a different one? Because I've never seen this freehand drawing functionality.
A
It is Microsoft OneNote. I'm actually using a drawing tablet; I'm sure you can see it.
A
Man, the focus is not working, but yes, I'm actually using a drawing tablet. It's a Wacom, so you can actually draw. I think you can even draw using the mouse if you want; I'm using the mouse now, so yeah, it does work.
B
We might have to end it there, Wai Ming; you've been summoned by the CEOC for some help. All right.
A
All right then, we are at time anyway. So thanks everyone for coming. I hope this actually helped you get a better understanding of how the different GitLab components work together. And again, my hope is that you will take this understanding, build on it, and also build out our training materials and our documentation, so that this doesn't become the one resource that you use for years. I sincerely hope that doesn't happen.