From YouTube: GitLab Pages codebase walkthrough
Description
#gitlab_pages
I'll be continuing my videos on GitLab Pages development. I just finished the first one, which showed how to create a development environment. In this video I'll try to go over how Pages works internally and how everything is tied together: we have Rails, we have the GitLab Pages daemon, and we have storage mechanisms which allow the two to share files. There's also an API used between them. So I'll try to show all of this, show code, and explain how it all works together. I'm not sure how good a job I'll do, because I've never done such a video before and I don't have any slides, so I'll just show the project, open a Rails console, and show how it's all structured inside and how you can debug these things.

So I'll share my screen now... this one.
Let's say we have a project and we want to understand where we can find its files. This is actually what GitLab does when the GitLab Pages daemon asks for the most recent deployment. I just opened the Rails console here, and I can just find the last project, because I created it last. Every project has a sub-entity called pages metadata, and as you see it has a few things: it has an artifacts_archive_id.
That one is a remnant of our transitional architecture, so just ignore it, but it also has a pages_deployment_id. And a pages deployment is... let me show it to you. A pages deployment, yeah, it's just a simple entity which has a zip file of artifacts. That file is copied from the artifacts archive and uploaded to a separate location. We actually could have used the artifacts archive directly, and we did while we were transitioning to the new architecture.
Yeah, it will take some time. So basically, every time a job finishes, not just any job, but a job named `pages` whose artifacts contain a `public` directory, it doesn't upload the archive itself; we have a special worker which copies this artifacts archive to a separate location.
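For reference, this is the shape of the CI job that triggers the whole flow: the job has to be named `pages` and has to produce a `public` directory as an artifact (the script line is just a placeholder for whatever builds your site):

```yaml
pages:
  stage: deploy
  script:
    - mkdir -p public
    - cp index.html public/   # whatever builds your site into public/
  artifacts:
    paths:
      - public
```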
So you can just search for the pages worker, and as you see it just has a single action, deploy, and it calls this service. So we can now go find it: the update pages service.
It's just an object which receives the project and the build id; the build id is right here. Nothing crazy happening here: there's a bunch of validation, and then we just create a pages deployment right here.
Sorry, I just opened this directory for the first time, because I just created this GDK, and my editor is trying to index a lot of things, so it's kind of slow. So what it does: it calculates a bunch of things, like taking the SHA256 of the file.
That will be the cache key in some cases, but ignore it for now. Then we basically just create the deployment, do some more validations, and then we call this update pages deployment method. What it does, as you saw above: we have this pages metadata, and it basically just updates these two fields, setting the pages_deployment_id, so the next time someone needs to know which pages deployment is the latest for this project, it's right there.
This is how it all works when you're deploying stuff: nothing special happening here, we're just copying one file from one location to another.
It's possible to set up these pages deployments to be stored in object storage, but from this code's perspective it's all transparent, regardless of where they're stored. Just to mention it: you can also store them locally on the file system. Also, in the legacy architecture, and we had this on GitLab.com, there was an NFS layer: we had this kind of distributed way of storing the files on NFS, and the Pages daemon could read them from there. But now that's a deprecated use case, so if you want to run multiple Pages servers, multiple GitLab web servers, or just separate GitLab and GitLab Pages servers, it's highly recommended to use object storage.
Oh, actually, it's easy! You see that we have object storage configured for pages; I mean, we have a bucket configured.
I was absolutely sure you could configure it from here, but I guess you can't easily configure object storage inside the GDK right now, or maybe it's configured by default if you have this. But whatever, we'll see that in a moment when I try to show you how it's all stored. Actually, yeah, let's go back here.
Just to be sure, let me explain the path here. This @hashed directory is added because in the past we stored pages by the namespace, which can be a group name or your personal namespace. The @ sign isn't allowed in namespaces, so @hashed just means there is no possibility of a name collision with an actual namespace name. The rest is inferred from the project id alone, just a hashed name, and then it's the standard structure of how we define files in CarrierWave, but there will always be just a single file in there. So you'll always have this pages deployments segment...
...then the id of the pages deployment, and the single file in there. And in this directory... actually, there is a possibility of having multiple deployments at the same time: we don't delete them immediately. Actually, if we go here, you see that we start the destroy pages deployments worker with a delay; I guess it was 30 minutes... yeah, it's a 30-minute delay.
That's just in case the Pages daemon is still serving from the old file: we don't want to delete it and risk running into 500 errors for users. Within half an hour the Pages daemon should see the new URL for the new file, and then we can safely remove the old one. So in theory you can have multiple pages deployments per project, but they will be cleaned up after some time. So now: how does the Pages daemon actually know what to do? I mean, which deployment to take and things like that?
We'll see the recent logs, and then we go and actually visit the page, and you see all of this was printed by just two requests, but this is verbose logging. So, just some events, we get some connection, what else... it's kind of hard to read, but actually you can see the very first thing it does: it connects to the API. Here's the API URL: it's /api/v4/internal/pages. You can simply find this file inside Rails.
A
Api
internal
pages
yeah.
Here
it
is
it
just-
has
a
single
action,
get
request
with
metal
path.
And what it does: it tries to find the pages... what it receives as input is the hostname. So in our case it will be root.127.0.0.1.nip.io, and this is basically a group domain.
I mean, in this case it's a user domain, but regardless, we care only about the namespace, and groups and users are kind of the same on the namespace level. Then we have this pages-project part in the URL, but we don't send it to the API; we only send the hostname as a parameter to the API.
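For a feel of what that call looks like from the daemon's side, here's a rough sketch (hypothetical function name; the real client also authenticates to this internal endpoint, which I'm omitting, and decodes the response into proper structs rather than a map):

```go
package gitlabclient

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// getVirtualDomain asks the internal API which lookup paths serve
// the given host. Sketch only: authentication is omitted.
func getVirtualDomain(apiBase, host string) (map[string]any, error) {
	u := fmt.Sprintf("%s/api/v4/internal/pages?host=%s", apiBase, url.QueryEscape(host))
	resp, err := http.Get(u)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("lookup failed: %s", resp.Status)
	}

	var payload map[string]any // decoded field-by-field in the real client
	if err := json.NewDecoder(resp.Body).Decode(&payload); err != nil {
		return nil, err
	}
	return payload, nil
}
```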
Then what happens is that we try to find the namespace for this host. If we can't find it, we search for a custom pages domain: you can actually add custom domains for your websites, and from here on there is no distinction. After running this, we get some virtual entity which is called a virtual domain.
It's just a way to present things to the API. What it does inside is just going into... actually, I can probably show this. Yeah, if we go to the namespace... this virtual domain, I just saw it. So what it has: we have an SSL key and certificate, which are only valid for custom domains; we don't provide those for namespaces. And then we have a thing called a lookup path.
Inside the URL we now only have this pages-project part. What this pages-project is: it's just the project name. It's possible to have multiple namespace levels; for example, in the case of GitLab itself it would be like gitlab-org/, I don't know, some subgroup, and then some project inside it. So here this would be the subgroup I just mentioned and then the project name. So it's possible to have multiple sections of the path referring to different namespaces, and this lookup path actually contains these prefixes.
A
So
in
our
case
it
will
be
how
to
call
how
called
pages
project
yeah
it
will
be
a
pages
project
and
then
it
will
just
say
that
yeah
there
is
this
pages
project,
and
for
this
pages
project
there
is
a
yeah.
There
is
a
deployment
somewhere
and
for
this
deployment
we
just
provide
the
url
for
the
yeah,
for
the
actual
zip
file
and
pages
will
then
serve
from
this
zip
file.
One thing I actually forgot to mention: every single namespace, as you see here with root, and you probably already guessed it, can have multiple pages projects. It can be pages-project-1, pages-project-2; I believe I had something like test-pages-project... no, whatever, I just had another one before running this test, but I forgot what it was called. So you can have multiple projects under the same pages domain, the namespace domain, sorry, and the same goes for custom domains.
If you remember from the previous console, I mean, if you noticed in the previous console, the very first thing, when I just asked for the pages metadata: there was a flag called `deployed`. This flag is here because at some point we didn't have these other fields, and this `deployed` boolean was the only way for us to determine that a pages site had ever been deployed for a project, and we can check that very quickly in the database with a very efficient query; we have a special index for it. Now it's kind of redundant, because obviously, if you have a pages_deployment_id, then `deployed` will be true. But yeah, anyway, we collect these projects into a single JSON and send it back to Pages.
So let's go back to our pages logs; there's probably more printed here since I've accessed a few more things.
What else... this is just the API call, so there were no errors, and here are our lookup paths. This is what I showed you in the code, what's it called, just a few minutes ago. So we have these lookup paths, multiple of them potentially; in this case we only have one. So yeah, here is the prefix for this project.
Here is the project id, kind of redundant at this point actually, but we still have it. Then we have a source: right now we only support the zip source. In the middle of rearchitecting we supported other sources, but right now we only support the zip source, so this field is always "zip". And then we have this file.
It actually shows you where to find the zip archive, and different cases are possible here. You see that we have this file:// prefix, which basically says that you can find the file locally, on your local storage, at this very path. It's also possible to have http or https: if you use object storage to store files, you'll see the object storage URL here, and it will be signed by the GitLab instance.
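Putting the pieces we just saw together, the response the daemon works with can be modeled roughly like this (a sketch; field names are approximations of what's in the logs, not copied from the gitlab-pages source):

```go
package gitlabclient

// VirtualDomain is roughly what /api/v4/internal/pages returns for a host.
type VirtualDomain struct {
	Certificate string       `json:"certificate,omitempty"` // custom domains only
	Key         string       `json:"certificate_key,omitempty"`
	LookupPaths []LookupPath `json:"lookup_paths"`
}

// LookupPath maps a URL prefix (e.g. "/pages-project/") to a deployment.
type LookupPath struct {
	ProjectID int    `json:"project_id"` // kind of redundant, but still there
	Prefix    string `json:"prefix"`
	Source    Source `json:"source"`
}

// Source says where the zip archive lives: a file:// path on local
// storage, or a signed (expiring) object-storage URL.
type Source struct {
	Type   string `json:"type"` // always "zip" these days
	Path   string `json:"path"`
	SHA256 string `json:"sha256"` // used by the daemon as a cache key
}
```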
Yeah, I wonder where that is; it's probably in the lookup path or something. Actually, let's look for the lookup path... yeah, so this model is responsible for actually presenting the deployment to the Pages daemon, and, in the case of the... let me try to find it... prefix, deployment... yeah, there is a source file method. This is what we see, I believe, somewhere here.
And yeah, as you see, there is this expiry: what it does is sign the URL for the file so it's valid for one day from now. The Pages daemon has a cache for these files which is invalidated much more frequently than once a day, so we can safely kind of expose these files to the Pages daemon this way; the Pages daemon doesn't have any other credentials to the object storage itself.
So it can't read anything but this single file whose URL it got from the API. Then we have the SHA256; it's used by the Pages daemon as a cache key.
So if the user doesn't actually change anything, I mean, if the daemon's cache, for example, gets invalidated and then we fetch this lookup again, the SHA256 will be the same and the cached copy can keep being used.
I wonder why we actually have it twice here, and which SHA256 is the valid one. Oh, we actually have different ones... yeah, we have two projects; that's what I mentioned. I had another project; it was called pages-project-test, not test-pages-project, whatever.
So yeah, this is what we get from the API. I should probably go to the Pages codebase now and show how it all works inside the Pages daemon. It's kind of tricky to get the first time, but the Pages codebase isn't big, so even though the code right now maybe isn't in the cleanest state possible, it shouldn't be that hard to find what you need, especially if you use a smart IDE which shows you where everything is called from and things like that.
So this is why we're using proxy v2. It also supports an HTTP proxy listener, which is a bit different from normal HTTP, because when you use the HTTP proxy listener you can supply proxy headers which, again, show things like what the real user's IP was and what domain was requested. The normal HTTP and HTTPS listeners just ignore these. But whatever; in the end...
...there is this pipeline of handlers which processes every single request that comes to Pages. There is a bunch of stuff in there; I'll briefly go over some of them, as sketched below.
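As an aside, here's a minimal sketch (not the actual gitlab-pages code) of how such a middleware pipeline is typically composed in Go, with each handler wrapping the next; the reject-long-methods handler is a toy stand-in for the first middleware described below:

```go
package main

import (
	"log"
	"net/http"
)

// middleware wraps one http.Handler in another.
type middleware func(http.Handler) http.Handler

// chain applies middlewares so the first one listed runs first.
func chain(h http.Handler, mws ...middleware) http.Handler {
	for i := len(mws) - 1; i >= 0; i-- {
		h = mws[i](h)
	}
	return h
}

// rejectLongMethods drops requests with absurdly long method names.
func rejectLongMethods(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if len(r.Method) > 15 {
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		next.ServeHTTP(w, r)
	})
}

// logRequests stands in for the access-logging middleware.
func logRequests(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("%s %s", r.Method, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func main() {
	serve := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello world\n"))
	})
	http.ListenAndServe(":8080", chain(serve, rejectLongMethods, logRequests))
}
```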
So, reject methods: it's just a security fix against people shipping requests with long HTTP method names, like GET followed by a lot of extra letters.
Then we have the URL limiter, for the same reason: some people trying to abuse us. The correlation ID is injected into logs and injected into API calls, and you can then use it to find how every single request was processed and all the logs which were related to a single user request. This one is just handling some panics. What else...
Also some more correlation ID stuff. Custom headers is just a small feature which allows people to set custom headers on a server-wide basis: if you want to add some header to every single response from GitLab Pages, you can do that via the config file.
The health check middleware is just used to reply that we are okay and able to receive requests. I guess right now it's mainly used by the Kubernetes installation, just to make sure that this server is actually up and we can send requests to this instance.
The rate limiter is what we recently introduced to get rid of some bogus requests, basically some denial-of-service attacks and things like that. Then there is an interesting middleware: the routing middleware.
If we go inside it, this is where that API call we saw actually happens. What the routing middleware tries to understand is which project is responsible for serving this particular URL, so we're just getting the domain by host.
There is a source object; for now, you can just think of it as an abstraction over the API layer which takes the hostname and responds with some domain object. This domain object will then contain all the lookup paths we saw, so it basically contains that whole API response, and it will then be used to process the requests.
So if we go inside, probably into the interface... we actually need this GitLab source. The GitLab source is what actually gets the real answer from the API; this parent source is just an interface, and there's also the one we use to run tests. So this GitLab source is what we need. The GitLab client is just an abstraction over the GitLab API, and then we get this lookup object; it contains, as you see, the certificate, key, and some other stuff.
The logic is written in a bit of a complicated way, but you can see that we have a cache in this GitLab source. It's an object that contains a cache of all the responses we get, and the way it works is that every time we have a new request, not even for a new resource necessarily, just any new request, we go to this cache and find something in it.
It's a resolvable object: it may not be resolved yet, but we just get it from the cache, so if there are multiple clients requesting the same domain at the same time, they all get the same object.
This object will then call the API, and the result will be kept in the cache. The cache actually has two configurations, basically two timeouts: one timeout is for refreshing the cache and one for actually evicting entries. So, for example, we can have a refresh timeout of every five minutes or even every 30 seconds, and I guess by default it's closer to the latter (it's defined in seconds, not minutes), while the eviction timeout is much longer.
It can be like 10 minutes, 20 minutes, one hour, whatever. The way it works is that if a new request comes to us and we have some response in the cache, even an old one, we just respond using it, so even if something got updated we won't know about it yet. But if the refresh timeout has passed, we asynchronously schedule a new request to the API.
So if 10 seconds have passed since the last request and the cache is kind of old, we still use it to serve the latest information we had, but we'll refresh it in the background, and a later request, say the third one, will actually see the new data if a new deployment happened or the pages project got deleted or something.
This is why we have two timeouts: it allows us, on one side, to serve traffic without any delays, but also to have up-to-date information if something has changed on the API side.
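Here's a minimal sketch of that short-refresh/long-expire idea (my own simplification, not the actual gitlab-pages cache, which among other things also deduplicates concurrent lookups for the same domain):

```go
package domaincache

import (
	"sync"
	"time"
)

// entry is one cached API response plus the time it was fetched.
type entry struct {
	value     any
	fetchedAt time.Time
}

// Cache serves stale values after `refresh` while re-fetching in the
// background; after `expire` the entry is unusable and the caller waits.
type Cache struct {
	mu      sync.Mutex
	items   map[string]*entry
	refresh time.Duration
	expire  time.Duration
	fetch   func(key string) any
}

func New(refresh, expire time.Duration, fetch func(string) any) *Cache {
	return &Cache{
		items:   map[string]*entry{},
		refresh: refresh,
		expire:  expire,
		fetch:   fetch,
	}
}

func (c *Cache) Get(key string) any {
	c.mu.Lock()
	e, ok := c.items[key]
	c.mu.Unlock()

	switch {
	case !ok || time.Since(e.fetchedAt) > c.expire:
		// Nothing usable: fetch synchronously, this caller waits.
		v := c.fetch(key)
		c.set(key, v)
		return v
	case time.Since(e.fetchedAt) > c.refresh:
		// Stale but serveable: answer immediately, refresh in background.
		go func() { c.set(key, c.fetch(key)) }()
		return e.value
	default:
		return e.value
	}
}

func (c *Cache) set(key string, v any) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = &entry{value: v, fetchedAt: time.Now()}
}
```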
So I won't go into the cache logic here, but this is where we get the information from the GitLab API, and ultimately this is what the routing middleware does. Actually, it now does a bit more... it gets the domain... oh no, actually, yeah, it just gets a domain object. Then, as you see, we have this request...
Yeah, somewhere there's a "request with host and domain": we just utilize the context feature of Go's HTTP requests. We put the host and the domain object we got from the API into the context of the HTTP request, and then every middleware down the line can just use this domain object.
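That context trick looks roughly like this (a sketch with hypothetical key and type names, not the daemon's actual ones):

```go
package domainctx

import (
	"context"
	"net/http"
)

// Domain stands in for the resolved domain object; hypothetical type.
type Domain struct {
	Host string
	// ... lookup paths, certificate, etc.
}

type ctxKey struct{} // unexported key type avoids collisions

// WithDomain stores the resolved domain in the request's context.
func WithDomain(r *http.Request, d *Domain) *http.Request {
	return r.WithContext(context.WithValue(r.Context(), ctxKey{}, d))
}

// FromRequest lets any middleware down the chain fetch it back.
func FromRequest(r *http.Request) *Domain {
	d, _ := r.Context().Value(ctxKey{}).(*Domain)
	return d
}
```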
Yeah, so the metrics middleware just reports some metrics; the access logger, I guess, speaks for itself. The ACME middleware you can just ignore: it's something related to automatically generating SSL certificates via Let's Encrypt for our users, and it basically just redirects to the GitLab instance in some cases; it's a very simple middleware.
Then there are the authentication middleware, the authorization middleware, and the auxiliary headers middleware; they all kind of work with authentication. We have a feature called access control inside GitLab Pages. What happens there is that we basically implement the OAuth 2.0 protocol with GitLab, and we authorize access to every single project.
I mean, if it is protected: by default, public projects aren't protected by access control, and private projects are protected. But by default, I believe, access control is still disabled at the instance level, so you need to actually enable it and run some stuff, and on the dev instance it's not that easy; you won't have it out of the box. If you need it for something, just ask someone who has already worked on Pages, and we'll help you set it up.
So finally we arrive at this function, which is called serve file or not found. What it does is actually serve the file from the zip archive we got. I remind you that we can get the file from the disk, from local storage, or from object storage by URL, but from here that's all abstracted away. So let's dive into this... we check some things here... yeah, actually, give me a second... not found...
Oh yeah, this is what actually serves the file; this is what generates this beautiful hello-world page.
So if we dive into this serve-file HTTP function: at this point we've already terminated SSL and done everything else, so we kind of don't care about that.
Let's dive into this... so, domain resolve. What resolve does: remember that for every domain in the API response we can get multiple projects, right? It's kind of hard to show again, but, oh yeah, it's here: for every lookup name we have multiple lookup paths. So this is what we're trying to do here: we're basically iterating over those lookup paths. Let's dive into the implementation; there is a resolver, and it's kind of...
We have multiple layers of abstraction here; they're just remnants of the old architecture. Previously we didn't have this API, we had everything stored on disk, and when we were migrating from one architecture to the other we created all these abstractions. Some of them are redundant at the moment, and you can feel the urge to remove some of them.
Unfortunately, that's not so easy at the moment, but hopefully one day we'll do it. Anyway, let's dive into this, and you see we have multiple resolvers, but the one we need, I believe... yeah, this is the abstract interface. This is what we actually need, and as you see, we just go over the lookup paths in the domain object and compare the prefixes, and in the end we just return the serving request object.
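The prefix matching itself is conceptually as simple as this sketch (hypothetical names; the real resolver also has to deal with trailing slashes and the root path):

```go
package resolver

import "strings"

type LookupPath struct {
	Prefix string // e.g. "/pages-project/"
	// ... source information for the deployment
}

// byPrefix returns the lookup path whose prefix matches the
// beginning of the requested URL path, or nil if none match.
func byPrefix(paths []LookupPath, urlPath string) *LookupPath {
	for i := range paths {
		if strings.HasPrefix(urlPath, paths[i].Prefix) {
			return &paths[i]
		}
	}
	return nil
}
```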
The serving request points at the zip archive with all the files for this GitLab Pages website; so this is what the resolver magic does. Let's go back into... yeah, I guess this domain resolve. I kind of got lost a bit here, so let's dive into it once again.
Yeah, we resolve the domain. So from this point we already know the domain of this Pages request and we know the path, so then we just need to serve the actual file. The request here is this serving request object, and serving the file is probably the single thing it knows how to do.
So let's dive into this. It constructs another abstraction, which is called a handler, and then calls another thing, but whatever, we just dive in again, and there are actually two... oh, actually there is a single serving; again, ignore the "disk" names. There is a single serving, and we always kind of pretend that we're serving from some virtual file system on disk. So what we do then is actually "try files". What try files means is that...
...we're trying to find the file responsible for this particular HTTP request. So in our case it will be index.html. Actually, there is special logic for handling cases when index.html isn't present; it's almost doing internal redirects, but whatever, let's just dive into this one more time.
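Conceptually, the try-files step is something like this sketch (simplified and with hypothetical ordering; the real logic also handles custom 404 pages and redirects):

```go
package serving

import "path"

// candidates lists the files that might satisfy a request path, in
// the order to try them.
func candidates(urlPath string) []string {
	if urlPath == "" || urlPath[len(urlPath)-1] == '/' {
		// Directory request: look for its index page.
		return []string{path.Join(urlPath, "index.html")}
	}
	return []string{
		urlPath,                          // exact file, e.g. /about.html
		path.Join(urlPath, "index.html"), // treat it as a directory
	}
}
```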
So we've finally arrived at the logic which actually serves the files.
So what happens here: again, we try to get the full path of the file on the... yeah, actually, let me talk about the virtual file systems first. We have this reader object, which is constructed from everything we had above, and it has a root method.
So when we construct the reader... the reader is just an abstraction over the artifacts zip archive. It has a URL pointing to the archive, and it tries to present an interface saying, kind of, "this is a file system and you can read from it". It's kind of complex. What we do is that the first time you create this reader, we go and read through the whole zip archive, and then we create an index of the files in memory, an in-memory cache.
You can find all of this in this file, I believe... sorry, I'll stop scrolling here. So we create this index of files in memory, and then, when you try to get a file by name from this root object, we just go into this index of files and try to find it there, and if the file is there, we return the reader, writer...
...sorry, a Reader or ReadSeeker. It depends on whether the zip archive is compressed or not, but it will return either a Reader or a ReadSeeker interface for the file, and then you can just use io.Copy and send the file back to the user.
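A stripped-down version of that idea with the standard library (my sketch, not the gitlab-pages implementation, which reads the archive over HTTP range requests rather than from a local file):

```go
package ziputil

import (
	"archive/zip"
	"errors"
	"io"
)

var errNotFound = errors.New("file not found in archive")

// buildIndex scans the archive once and maps file names to entries,
// so later lookups are O(1).
func buildIndex(path string) (map[string]*zip.File, *zip.ReadCloser, error) {
	rc, err := zip.OpenReader(path)
	if err != nil {
		return nil, nil, err
	}
	files := make(map[string]*zip.File, len(rc.File))
	for _, f := range rc.File {
		files[f.Name] = f
	}
	return files, rc, nil
}

// open returns a reader for one file. For stored (uncompressed)
// entries, an implementation can instead seek straight to
// f.DataOffset() and hand out an io.ReadSeeker.
func open(files map[string]*zip.File, name string) (io.ReadCloser, error) {
	f, ok := files[name]
	if !ok {
		return nil, errNotFound
	}
	return f.Open()
}
```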
So we got the root, then resolve the path; this is what happens.
Let's actually jump into this. So yeah, we try to evaluate symlinks inside this directory, do a bunch of things; we actually support symlinks inside the zip archives. Actually, let me try to find where we create the index of all the files; it should be somewhere here.
...it isn't here, but whatever, it's somewhere in this root and reader logic. So, sorry, back to resolving the file: we have this try file once again, so we're resolving the file, and then, if we've successfully found it, we just serve it, with all these parameters indicating how to serve it. This one is just responsible for access control; this one is, again, just a cache key.
This is the full path from the request. It's probably kind of trimmed: it's not the full path of the request, because it won't include this pages-project prefix; it will be just index.html in our case, I believe. So if we dive into this...
Yeah, so there is some content-encoding negotiation.
But whatever: we support serving gzip and some other file encodings, and you can put those files directly into the zip archive; serving from them will be much, much faster. So then we actually...
This is what I was talking about with the virtual file system: we kind of implemented the standard file system methods on this object. So we just call lstat on the file and see if we have it. If we don't have it... we actually should just return false from this function, but whatever. So we try to get the file information.
We do some content-type magic, increase the metrics, whatever, and then right here is where the actual serving of the file happens. So if you remember, I mentioned that we can return a ReadSeeker or...
...just a Reader. So if someone wants to get a specific range of the file, for example if it's a video or audio file and you want it to be seekable inside your website, then you just need to upload an uncompressed zip archive; there is a special runner environment variable which will do that for you. But yeah, in the end we just have this http.ServeContent, we pass the ReadSeeker into it, and from there it will just work.
I mean, this is the standard HTTP serving from the Go standard library: it will handle range requests for us, and it will handle cache headers for us. By the way, something that was recently introduced: somewhere here, yeah, we added the cache headers, so we now support ETag caching. This http.ServeContent will handle that interaction for us and do everything. And just because http.ServeContent requires a seekable interface, we can't use it in cases when we don't have one.
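In sketch form (standard library only, not the actual Pages code), that split looks like this:

```go
package serving

import (
	"io"
	"net/http"
	"time"
)

// serveFile picks a strategy based on what the zip reader gave us:
// seekable content gets range and conditional-request handling from
// the standard library; plain readers are simply streamed out.
func serveFile(w http.ResponseWriter, r *http.Request, name string, modtime time.Time, content io.Reader) {
	if rs, ok := content.(io.ReadSeeker); ok {
		// Handles Range, If-Modified-Since, and friends for us.
		http.ServeContent(w, r, name, modtime, rs)
		return
	}
	// Compressed entry: no seeking possible, so just copy the bytes.
	io.Copy(w, content)
}
```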
So there's basically a copy-paste of the ServeContent function with some modifications, just ignoring everything related to range requests, but it again supports all the cache headers and other information. So yeah, this is how it all works together. I guess I can show you one more thing about the internals.
It's the virtual file system interface. We have this VFS layer; this root object we got comes from this virtual file system. So if we go here...
Yeah, the root object, sorry, which supports three methods. This... I know you actually don't care about this; what you care about is the root object and its roots. Let's dive into this implementation. Oh yeah, it's actually right here: the root object is just representing the virtual file system.
Actually, so it supports stat on a file, it supports reading links (because we need symlink traversal), and it supports opening a file; it's basically just responsible for opening the files inside the zip archive. And then we have "local" and "serving".
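As an interface sketch (names approximated from what's on screen, not copied from the source):

```go
package vfs

import (
	"context"
	"io"
	"os"
)

// Root is the view of a single pages deployment: a file tree we can
// stat, resolve symlinks in, and open files from.
type Root interface {
	Lstat(ctx context.Context, name string) (os.FileInfo, error)
	Readlink(ctx context.Context, name string) (string, error)
	Open(ctx context.Context, name string) (io.ReadCloser, error)
}

// VFS produces a Root from a location, whether that's a local path
// or a signed object-storage URL for a zip archive.
type VFS interface {
	Root(ctx context.Context, path string) (Root, error)
}
```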
So, actually, I'm not sure if the local FS is still... oh yeah, so the local VFS works with files stored directly on the disk storage where the Pages server is running. In our case we're actually using this local virtual file system, because the file is stored locally. And then we have this kind of "serving" VFS.
Yeah, it's kind of a strange name, to be honest. It's basically just responsible for object storage, but we don't actually care whether it's object storage or not: what this virtual file system gets is a URL.
The URL can be valid for whatever time, but we treat it as just a URL where we can get the zip archive. It should itself allow range requests, because when we serve particular files from the zip archive, we'll use range requests. So, as I mentioned before, we create this whole index of files... maybe it's somewhere here.
No, it doesn't look like it's here, but whatever: we just create this index of files in memory, and for every file we have an offset in the zip archive where we can read it from. This is why we need this range-request support from the object storage, but every object storage supports this, I think.
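Reading one entry straight out of a remote archive then boils down to a ranged GET, something like this sketch (the real code reuses connections and caches aggressively):

```go
package ziphttp

import (
	"fmt"
	"io"
	"net/http"
)

// readRange fetches length bytes starting at offset from url using
// an HTTP Range request, which object storages generally support.
func readRange(url string, offset, length int64) (io.ReadCloser, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", offset, offset+length-1))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode != http.StatusPartialContent {
		resp.Body.Close()
		return nil, fmt.Errorf("expected 206 Partial Content, got %s", resp.Status)
	}
	return resp.Body, nil
}
```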
If you want to test object storage locally, you can get it running with MinIO; it's included in the GDK installation. I'm not sure if it's enabled by default, but if you remember from the previous video, there is probably a documentation page for it: inside the GitLab Development Kit there is a folder called doc, and there is probably an "object storage with MinIO" file in the how-to directory which guides you through setting it up. So yeah, I'm kind of tired.
I hope you understood something from this; I'm not sure if I did a good job explaining it. Obviously, if you want to dive into this, it's best to just pick up a very small issue or something and try to implement it, and you'll understand the particular pieces of the Pages codebase, I believe, quite easily, especially if, as you saw, you use a lot of these jump-into helpers which my IDE gives me.
But still, you can navigate through them easily. Maybe it's not that easy to modify them, but I didn't have a huge problem working with them and understanding them. And yeah, ask us any questions in the #gitlab_pages channel on Slack if you're an internal developer; for community contributors, I believe we have some dedicated channels for working with community contributors, or you can just create merge requests and ping the Pages developers inside your merge requests.