From YouTube: GitLab Pages object storage discussion
D
So, I think this is the main idea that Camille tested in this experiment. Because we already have the artifacts uploaded, maybe we should make sure that there's no expiry policy in place for those artifacts, otherwise we will no longer have them. But yes, I think this is the main point.
D
So I can try to explain this. Basically, what happens with zip archives is that the format is kind of lazy. You have a table of contents that tells you where the things are inside the archive. The problem here, if I remember correctly, is that if you need to update an archive that already exists, then you, and by you I mean any client, can append something to the end of the file and append a new table.
D
So this doesn't happen in our case, because we know how we built the zip archive: it is built by GitLab itself. There are no magic tricks around, there is just one table at the end. But the point is that if you can cache that information, you have pointers, and when you have pointers you can basically fetch, with range requests against object storage, exactly the file that you need, so that you don't have to download the zip archive locally. I think this was the point.
E
So the problem is that, basically, the table of contents in a zip archive is at the end of the file, or basically, you need to read the file until you encounter some marker that says the table of contents starts here. Then you can read the table of contents, and after reading it you basically know exactly at which byte each compressed file starts and at which byte it ends, and so you can basically open just that file.
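A rough sketch in Go of the approach described above, reading a single file out of a zip artifact with HTTP Range requests: the URL, the archive size and the file path below are hypothetical, and this is only an illustration, not the actual GitLab Pages code.

```go
package main

import (
	"archive/zip"
	"fmt"
	"io"
	"net/http"
	"strconv"
)

// httpReaderAt is an io.ReaderAt backed by HTTP Range requests, so archive/zip
// can read the central directory (the table of contents at the end of the file)
// and a single entry's bytes straight from object storage, without downloading
// the whole archive.
type httpReaderAt struct {
	url  string
	size int64
}

func (r *httpReaderAt) ReadAt(p []byte, off int64) (int, error) {
	req, err := http.NewRequest(http.MethodGet, r.url, nil)
	if err != nil {
		return 0, err
	}
	// Ask object storage for exactly the byte range we need.
	end := off + int64(len(p)) - 1
	req.Header.Set("Range", "bytes="+strconv.FormatInt(off, 10)+"-"+strconv.FormatInt(end, 10))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return io.ReadFull(resp.Body, p)
}

func main() {
	// Hypothetical pre-signed artifact URL and known archive size.
	ra := &httpReaderAt{url: "https://object-storage.example/artifacts/archive.zip", size: 1 << 20}

	zr, err := zip.NewReader(ra, ra.size) // reads only the table of contents at the end
	if err != nil {
		panic(err)
	}
	for _, f := range zr.File {
		if f.Name == "public/index.html" {
			rc, err := f.Open() // fetches and decompresses only this entry's bytes
			if err != nil {
				panic(err)
			}
			io.Copy(io.Discard, rc)
			rc.Close()
			fmt.Println("served", f.Name, "without downloading the whole archive")
		}
	}
}
```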
E
You can just read from the object storage from the byte where the file starts until the byte where that file ends; with that, you can uncompress it when you receive it and serve the file. We built the build artifacts browser almost five years ago, and basically right now you can go to the GitLab URL and, if you have artifacts, you can browse them. You can download single files from within an archive, and this already works, right? It works quite well. We never download and extract an entire archive.
E
We never extract the entire archive somewhere just to read one file. If the archive is like one gigabyte and anyone wants to access a file that is like 10 bytes large, we just read those 10 bytes from the object storage, uncompress them and serve the file. In order to do that, we had to design this caching method, and currently, whenever GitLab Rails receives the artifact, we try to find the table of contents and we generate a binary file that represents a cached version of the table of contents. We persist it separately as a second build artifact, this time we call it the build artifact metadata, and it's already in object storage as well. So we should have the cached version, that metadata artifact, for every zip artifact also in object storage, and I think it should work.
E
I think that the biggest concern I have regarding serving files directly from object storage is just speed and efficiency. These days, you know, people want to deploy their code to the edge, in the manner of a really complex CDN, so that it's blazing fast. So architecting the system in a way that it's actually much slower, because it won't ever be as fast as, you know, other solutions, that's my biggest concern.
D
I think that Camille's idea on this was that, yeah, he was aware that we already have the metadata, but I think he meant that we should store the content of the metadata in memory so that we remove one request. Because artifact browsing is simple: you don't do it very often, and you kind of expect it to be slow, also because it happens in the context of our regular GitLab.com requests, so you have everything else around it.
D
You have database queries and all that. But if you think of a static site generator, you really want to remove many of those things. So I think the idea here was that we download the metadata from object storage and we cache it in memory, or in Redis, I don't know what the idea was, so that the Pages daemon has this data in memory and all the requests just go to the same cached information. So it was kind of another cache, yeah.
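A minimal sketch, with made-up names and a made-up TTL, of what keeping the parsed table of contents in the Pages daemon's memory could look like:

```go
package cache

import (
	"sync"
	"time"
)

// Entry describes one file inside a zip artifact: where it starts in the
// archive and how many compressed bytes it occupies. Field names are
// illustrative, not the real GitLab metadata format.
type Entry struct {
	Offset         int64
	CompressedSize int64
}

type cachedTOC struct {
	entries   map[string]Entry // path -> entry
	expiresAt time.Time
}

// TOCCache keeps parsed tables of contents in memory so that a Pages node
// does not have to re-download and re-parse the metadata artifact on every
// request for the same site.
type TOCCache struct {
	mu   sync.RWMutex
	ttl  time.Duration
	tocs map[string]cachedTOC // cache key (e.g. domain or artifact ID) -> TOC
}

func NewTOCCache(ttl time.Duration) *TOCCache {
	return &TOCCache{ttl: ttl, tocs: make(map[string]cachedTOC)}
}

func (c *TOCCache) Get(key string) (map[string]Entry, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	toc, ok := c.tocs[key]
	if !ok || time.Now().After(toc.expiresAt) {
		return nil, false
	}
	return toc.entries, true
}

func (c *TOCCache) Put(key string, entries map[string]Entry) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.tocs[key] = cachedTOC{entries: entries, expiresAt: time.Now().Add(c.ttl)}
}
```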
E
So I think it's actually a good idea to cache the table of contents metadata in memory, because it's quite small, it's also gzipped, and it's in a binary format, so it's very minimal. It's not large. Of course the size depends on the number of entries you have in a zip file, so having like one million entries in the zip file might result in metadata that is probably a few megabytes large. But a few megabytes is still probably not that bad, but still, for every request.
D
One of the first discussions around this, when we were still deciding how to move forward, one of the proposals was that the artifact in object storage would be just a kind of seeding mechanism. So the first time a Pages daemon receives a request, it will serve it from the object storage directly with this mechanism, and it will start the dump-on-disk procedure in parallel, so that it will download the complete site and dump it on disk.
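A sketch of that seeding idea, assuming hypothetical helpers for proxying from object storage and for dumping the whole site to disk: the first request is answered straight from object storage while a background goroutine warms the local copy once per site.

```go
package pages

import (
	"net/http"
	"sync"
)

// siteCache tracks which sites have already started warming to local disk.
type siteCache struct {
	mu      sync.Mutex
	warming map[string]bool
}

// serveFromObjectStorage and dumpSiteToDisk are hypothetical helpers standing
// in for "proxy the requested file via range requests" and "download and
// extract the whole archive to local disk".
func serveFromObjectStorage(w http.ResponseWriter, r *http.Request, domain string) {}
func dumpSiteToDisk(domain string)                                                 {}

func (c *siteCache) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	domain := r.Host

	// Always answer the current request directly from object storage so the
	// visitor is not blocked on the full-archive download.
	serveFromObjectStorage(w, r, domain)

	// Kick off the dump-on-disk procedure in parallel, but only once per site.
	c.mu.Lock()
	alreadyWarming := c.warming[domain]
	if !alreadyWarming {
		c.warming[domain] = true
	}
	c.mu.Unlock()
	if !alreadyWarming {
		go dumpSiteToDisk(domain)
	}
}
```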
E
My suggestion about using a separate daemon also had another aim: I wanted to use a separate daemon to also solve another problem. Currently we don't really have a way to load balance requests in a way that requests for a domain, say mydomain.com, always hit the same node. So it means that we would basically need to keep the same cache on every node, and the cache would grow to the size of all the domains and pages being used at the moment on GitLab.com, right?
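One common way to get "requests for mydomain.com always hit the same node" is consistent hashing on the domain; this is a small illustrative sketch, not a description of how GitLab's load balancing works today.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Ring maps each node to several points on a hash ring; a domain is served by
// the first node clockwise from the domain's hash, so adding or removing a
// node only moves a fraction of the domains to a different node.
type Ring struct {
	points []uint32
	owner  map[uint32]string
}

func hash32(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func NewRing(nodes []string, replicas int) *Ring {
	r := &Ring{owner: make(map[uint32]string)}
	for _, n := range nodes {
		for i := 0; i < replicas; i++ {
			p := hash32(fmt.Sprintf("%s#%d", n, i))
			r.points = append(r.points, p)
			r.owner[p] = n
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// NodeFor returns the node that should serve the given domain.
func (r *Ring) NodeFor(domain string) string {
	h := hash32(domain)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0 // wrap around the ring
	}
	return r.owner[r.points[i]]
}

func main() {
	ring := NewRing([]string{"pages-01", "pages-02", "pages-03"}, 100)
	fmt.Println(ring.NodeFor("mydomain.com")) // the same domain always maps to the same node
}
```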
E
You know, the average size of all the pages that we need to serve in a given moment, for like an hour, or depending on the cache expiry. So what I wanted, what I was proposing, is basically making it possible for Pages to be load balanced with some sane strategy for sharding that stuff, but it might not be really necessary. So the question that I think we need to have an answer for is: what's the size of pages?
B
A couple of points. The first is that a week ago, a couple of weeks ago, I noticed that one node of Pages was serving significantly more pages than all the other nodes, like five times more, and I discovered that this huge increase was because of docs.gitlab.com. So basically only one node serves docs.gitlab.com, and as I understand it, that's kind of evidence that we already have this sticky-by-domain load balancing for Pages.
B
The second point is, I guess we shouldn't just store all pages; we can limit it to like a few gigabytes and use some LRU cache or whatever. I also thought, I don't know, I kind of like the idea of storing and caching files separately, not unzipping the whole archive, like a warming up: I guess when people open their website for the first time, most of the resources will be loaded, and then only single pages will be loaded on request, and all the CSS will already be there.
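A sketch of the "limit it to a few gigabytes and use some LRU cache" idea, caching individual extracted files by byte size rather than whole archives; the types, key format and eviction details are illustrative, and locking is omitted for brevity.

```go
package cache

import "container/list"

// FileLRU caches individual file contents keyed by "<domain>/<path>" and
// evicts the least-recently-used entries once the total cached bytes exceed
// maxBytes (for example, a few gigabytes per node).
type FileLRU struct {
	maxBytes int64
	curBytes int64
	order    *list.List               // front = most recently used
	items    map[string]*list.Element // key -> element whose Value is *entry
}

type entry struct {
	key  string
	data []byte
}

func NewFileLRU(maxBytes int64) *FileLRU {
	return &FileLRU{maxBytes: maxBytes, order: list.New(), items: make(map[string]*list.Element)}
}

func (c *FileLRU) Get(key string) ([]byte, bool) {
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el)
	return el.Value.(*entry).data, true
}

func (c *FileLRU) Put(key string, data []byte) {
	if el, ok := c.items[key]; ok {
		c.curBytes += int64(len(data)) - int64(len(el.Value.(*entry).data))
		el.Value.(*entry).data = data
		c.order.MoveToFront(el)
	} else {
		c.items[key] = c.order.PushFront(&entry{key: key, data: data})
		c.curBytes += int64(len(data))
	}
	// Evict the oldest entries until we are back under the byte budget.
	for c.curBytes > c.maxBytes && c.order.Len() > 0 {
		oldest := c.order.Back()
		e := oldest.Value.(*entry)
		c.order.Remove(oldest)
		delete(c.items, e.key)
		c.curBytes -= int64(len(e.data))
	}
}
```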
D
What do you mean? Just for the caching stuff, do you mean that instead of dumping the full archive you cache just the content that you read? Because you imagine that the index page would be served most of the time, but maybe you have some big image or just some big artifacts in some pages.
A
Yeah, but my understanding was different: like the update pages service that is now unzipping the artifacts and saving them to NFS, I somehow understood that it would do the same for this, but instead of saving to NFS it would put them into object storage, S3 or whatever, and then we would just proxy directly to those files. But yeah, I see now what you all had in mind; it seems so obvious now.
E
A good idea might be to actually add additional code to Rails so that whenever we receive Pages artifacts, we extract them and put the files into object storage. This way we won't need, you know, the table of contents caching stuff, and we'll save some milliseconds on extracting files as well. But it might actually result in a huge cost, or not.
D
It's just something that we have to keep in mind. I would also like to add that it adds some complexity in terms of dealing with deployments, right? Because right now you have this single zip archive where you have the artifacts, so you can kind of link the last deployment's artifact and read from that.
D
But if you extract it, then you have to create structures for when you have version one of the site, version two, version three, so deployment one, deployment two, deployment three, whatever. And I do agree that you don't want this to expire, but I would add that I would really love to see old versions of the site expire after a period of time if we have a more recent version, because there's no value for us in paying the cost of keeping thousands of versions of a site; think about our documentation site.
E
POCs are usually a good first step. I think that, for getting answers on the size of pages we would need to keep in cache right now, the question is: how can we actually understand what the size of the pages we are serving right now is? Perhaps we could add some metrics to the files that we are serving right now from the disk storage.
E
So when we are serving something, we could increment a counter with the size of the file that is being served, somewhere in Prometheus, so that we can get cumulative metrics of it. Or perhaps we can get this information from Splunk, from the load balancers or somewhere; I don't know whether we have the content length in the access logs. Perhaps that, or we can just take a look at the bandwidth somewhere.
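The "increment a counter with the size of the file being served" idea is only a few lines with the Prometheus Go client; the metric name below is made up, not an existing GitLab Pages metric.

```go
package metrics

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// servedBytes accumulates the size of every file served from disk, so the
// cumulative rate tells us the order of magnitude of content we would need
// to cache. The metric name is illustrative only.
var servedBytes = promauto.NewCounter(prometheus.CounterOpts{
	Name: "pages_served_bytes_total",
	Help: "Total number of bytes served by GitLab Pages.",
})

// RecordServed should be called after a file has been written to the client.
func RecordServed(sizeBytes int64) {
	servedBytes.Add(float64(sizeBytes))
}

// Expose the endpoint that Prometheus scrapes.
func init() {
	http.Handle("/metrics", promhttp.Handler())
}
```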
D
But then we'd have duplicated requests, yeah, the same file being downloaded thousands of times. I mean, the thing is, what we can do is, we already do this scanning, right? We have that part of Pages that scans the disk and creates the groups and the deployments. Maybe we can add something as part of this: it would fetch the sizes of the sites and we could publish that.
E
But the bandwidth is always going to be higher, much higher. Okay, sure, higher; so checking the cumulative bandwidth would allow us to understand the order of magnitude of the content we would need to cache. The cache will always need to be smaller than the totality of what we serve, right? So if we can understand whether it's in the range of tens of terabytes, or perhaps just a gigabyte an hour or something like that, we don't know, so perhaps this would be an interesting exercise.
D
We may end up needing node affinity and special nodes for Pages, which may be something that we can afford, but maybe it's not a problem for our average customer. We also have to keep in mind that this may be our scale: as is well documented, our scale can differ from the average customer's. So it's something worth considering, but yeah. If you need a data point I can give you, it's that right now we are running the Pages export Sidekiq queue in Kubernetes, and this queue requires storage.
D
We do this by just testing it with a gradual rollout, the same way we did for the API lookup. I mean, we already have multiple sources, so we can build the object storage source, whatever, and start a gradual rollout, and then we can compare metrics and see: yeah, it's slower, or it's more or less the same speed. That would give us more realistic numbers than just guessing.
A
Some types of artifacts you can now view online, like HTML, and not through the artifacts browser; we can serve them directly, and in order to not serve them under the same domain as GitLab, we added this as a proxy through Pages, even though it doesn't live there. So it's basically the same, right? We have the artifact and someone wants to read a file from it; this is just an extension of that. It could be way slower, but at least we don't have to do it like...
C
I've got a couple of MRs; they're in the agenda. They're just documentation for developing under Pages, and I'd like to get them merged, but I just wanted to check: I guess there's some question about whether we should be developing inside of the GDK, or is anyone developing with either the GDK folder structure itself or externally? Because the way the first one is written is that, you know, we're doing it externally in a separate folder.
E
Okay, you don't really need the GDK to do anything, like 100% of tests, or perhaps 99%. But I think if people already do have Pages cloned under their GDK folder, they can use it; it does not change much. So I would say it does not really matter where people have Pages downloaded and what folder they are using, yeah.
D
I mean, I added the Pages support in the GDK, so I'm a bit biased here, but I think the common experience should be: you download the GDK, everything is there, and you can do everything, so you can test all the integrations. But I mean, I usually also develop externally, so it really depends on what you're doing. But I think that the GDK setup should work, because it's the main entry point for onboarding new developers.
C
Okay, so I'll take a look at the first merge request and see whether it's still worth keeping, and the second one, which is about how to use, you know, GitLab as an OAuth provider, I think that's still useful, so I'll go ahead and do that. Okay, okay. I just wanted to kind of check what the, you know, official position was. So the official position is that we prefer to go inside of the GDK, yeah.