From YouTube: Git LFS Internals
A
Okay, so I guess this is a presentation about Git LFS, or Git Large File Storage. I'll preface this by saying that most of my understanding comes from a much longer talk about Git LFS, which I guess serves as an introduction at a high level.
A
So what it actually is: instead of storing the actual files in the repo as normal, they're stored on a separate server, and what's actually stored in the repo is a text pointer to that file on the separate server.
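For reference, the pointer itself is just a tiny text file in the Git LFS pointer format: a spec version line plus the SHA-256 and size of the real content. The oid and size below are made-up examples:

```
version https://git-lfs.github.com/spec/v1
oid sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
size 52428800
```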
A
I guess what I actually did recently was explore how GitLab handles LFS objects during repository forking. I think most of the actual Git LFS things are covered in this video that I'll share in the chat, so I guess what I'll talk about is the things I discovered that were different from the video, and some of the forking-specific things compared to the same stuff.
A
So in our case it looks something like this: first it gets authorization, and then it actually does the upload to Workhorse, and then Workhorse actually saves the file where it's supposed to go, after getting authorization again from Rails. So this step, this part here, in the code it looks like it's doing, I think the method is called file upload finalize or something like that, and it's not verifying, it's finalizing. Let's find it.
A
Yeah, upload finalize. It looks like it's actually dealing with a file, but, as the video mentions, Workhorse has actually already saved it. This was kind of glossed over in the video, but I poked around to see how it actually works: we have this multipart middleware where, if you send the right things, there's an UploadedFile object that ends up in the params of the request that you can use, and that's part of this bigger thing where you can have direct uploads.
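To make that authorization step concrete, here's a hedged sketch of the client side of the Git LFS Batch API, the documented protocol that kicks off an upload; the URL is a placeholder and authentication is omitted. On GitLab, the pre-authorized upload that follows this call is what goes through Workhorse:

```python
import requests

# Hypothetical repo URL; real requests also need authentication headers.
LFS_BATCH_URL = "https://gitlab.example.com/group/project.git/info/lfs/objects/batch"

def request_upload(oid: str, size: int) -> dict:
    """Ask the LFS server where to upload one object (Git LFS Batch API)."""
    resp = requests.post(
        LFS_BATCH_URL,
        json={
            "operation": "upload",
            "transfers": ["basic"],
            "objects": [{"oid": oid, "size": size}],
        },
        headers={
            "Accept": "application/vnd.git-lfs+json",
            "Content-Type": "application/vnd.git-lfs+json",
        },
    )
    resp.raise_for_status()
    # The response echoes each object with an "actions" dict whose "upload"
    # entry carries a pre-authorized URL; the subsequent PUT of the file
    # body to that URL is the request that hits Workhorse.
    return resp.json()["objects"][0]
```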
A
So, I guess, when it comes to forking, it starts over here. So this is when you click the Create fork button: it's going to call this service, which ends up, well, anyway, it ends up enqueuing a worker that ends up calling this thing, which is just going to link all the LFS objects from the source project to the new fork it has created, I guess, to explain that a little bit on the backend.
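As a toy model of that link step (illustrative names, not GitLab's actual schema), forking copies a set of references into a shared pool of objects rather than copying the bytes:

```python
# Toy model of link-don't-copy forking; names are illustrative.
lfs_object_sizes: dict[str, int] = {"abc123": 50 * 2**20}  # oid -> size in bytes
project_links: dict[str, set[str]] = {"source": {"abc123"}}

def link_fork_lfs_objects(source_id: str, fork_id: str) -> None:
    """Forking copies the links, never the object bytes."""
    project_links[fork_id] = set(project_links.get(source_id, set()))

link_fork_lfs_objects("source", "fork")
assert project_links["fork"] == {"abc123"}  # same oid, no new bytes stored
```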
A
I guess that's kind of all I had, but I went through it kind of quickly, so I imagine there might be some questions.
B
Actually, we discussed this quite recently: whether the LFS objects get duplicated when we create the fork, yeah, or whether we use the same amount of storage for the fork. So I guess your message is that they are deduplicated and we just create a link: if you create the fork, we create a link to the existing LFS object, yeah. We don't re-upload it once again for the fork.
C
Yeah, so I actually have two questions, if that's okay. The first question is just following on what Eagle just said. So do we know, and maybe this needs some more thought, but do we know: once we've forked a repo that includes LFS objects, and then a new object gets added to the fork?
C
Where is that going? Is it being added to the primary repository's LFS storage?
C
Okay, right, so then, in terms of, you know, because the original question was who's paying for it, so that would then count against the fork, okay. And then, when the fork was, or I guess whenever a merge request was merged, do we know if it then goes to the primary, or would it be in both places? Yeah, actually.
A
Okay, okay. I guess just to explain: when we fork, so we're talking about how we count storage, it's like we sum all of the sizes of the LFS objects associated with the project. So for the fork: when we create the fork, we link all of the LFS objects from the upstream project, so the fork starts off with LFS storage that's kind of overstated; we're not storing extra for the fork, but we are counting all of these things. Then, when you add something to the fork, that counts for the fork itself. But when you create the merge request, the LFS object that's in the diff of the merge request gets linked to the upstream project. So not even when you merge it; creating the merge request will, I guess, use storage of the upstream project. Okay.
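A small self-contained sketch of that accounting, again with made-up names: per-project usage sums the sizes of linked objects, and creating a merge request links the diff's new LFS objects to the upstream project:

```python
# Toy accounting model; not GitLab's actual schema.
object_size = {"abc123": 50 * 2**20}                  # oid -> size in bytes
links = {"upstream": {"abc123"}, "fork": {"abc123"}}  # project -> linked oids

def usage(project: str) -> int:
    """Usage = sum of linked object sizes, so shared bytes count per project."""
    return sum(object_size[oid] for oid in links.get(project, set()))

def link_mr_objects(mr_oids: set[str], target_project: str) -> None:
    """Creating an MR links its new LFS objects to the upstream project,
    so upstream's usage grows before the MR is even merged."""
    links.setdefault(target_project, set()).update(mr_oids)

object_size["def456"] = 10 * 2**20
links["fork"].add("def456")              # new object pushed to the fork
link_mr_objects({"def456"}, "upstream")  # MR created against upstream
assert usage("upstream") == 60 * 2**20
```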
C
Okay, so that poses an interesting question, or quandary, because in theory a public project could be forked, and then, you know, someone could just upload arbitrary files, and that would actually count against the primary project; the primary project's size outside of the repo, I should say.
A
Yeah, oh yeah, that actually brings up another thing that I forgot to mention: trying to remove LFS objects is actually kind of mysterious. In our instructions it's just kind of like, this is how you do it: you go through these steps and then you run our repository cleanup. I didn't really figure out where this goes; it kind of disappears into Gitaly somewhere, yeah. But it is kind of mysterious how it works, if someone were to do that.
D
Yeah, I did that. So what it does behind the scenes: we create a special file, from this filter wrapper, and there are a couple of ways to do that. So we create a special file that creates a kind of mapping between the old commits and the new commits, which allows us to remove some of the commits from the repo, and we upload that to one of our endpoints, which provides it to Gitaly. And on Gitaly's side:
D
We basically execute this git garbage collection command with specific flags; I might be missing some parts, but in general it's that, or something like that. Gitaly has the direct access to the Git repository files to execute it.
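For what it's worth, that mapping file sounds like git filter-repo's commit-map, whose lines pair an old commit SHA with a new one; my understanding is that commits removed entirely map to the all-zeros SHA. A minimal parser sketch under that assumption:

```python
# Sketch of parsing a git-filter-repo commit-map file; the all-zeros
# convention for deleted commits is my understanding, not verified here.
ZERO_SHA = "0" * 40

def parse_commit_map(path: str) -> dict[str, str | None]:
    """Each line is '<old-sha> <new-sha>'; a new-sha of all zeros means
    the old commit was removed entirely, which we map to None."""
    mapping: dict[str, str | None] = {}
    with open(path) as f:
        for line in f:
            old, new = line.split()
            mapping[old] = None if new == ZERO_SHA else new
    return mapping
```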
B
Yeah, I mean, Jerry is already showing it on the screen, so I wasn't talking about this field, yeah: this form allowed us to upload the generated files, and then Gitaly performs Git commands based on those files. And since I started talking, one quick note regarding the deduplication; I haven't touched that code for a while, but I quickly checked it now.
B
We deduplicate the objects globally. Like, we have the SHA-256, yeah, and that allows us to state that the file is unique; the SHA is unique per file, yeah. So we can store them globally, and then, if someone uploads a file that already exists in the system, we don't duplicate this file; we just use the existing one. And, of course, there's a security concern that someone could just upload the SHA and receive a file that is already uploaded somewhere in another project.
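A minimal sketch of that global content-addressed idea (not GitLab's actual storage code): the SHA-256 of the bytes is the identity, so a second upload of identical content collapses onto the first:

```python
import hashlib

# Minimal content-addressed store sketch; not GitLab's actual storage code.
store: dict[str, bytes] = {}

def put(content: bytes) -> str:
    """Store bytes under their SHA-256; identical content collapses to one entry."""
    oid = hashlib.sha256(content).hexdigest()
    store.setdefault(oid, content)  # re-uploading the same bytes changes nothing
    return oid

a = put(b"big binary asset")
b = put(b"big binary asset")  # uploaded again, e.g. from a fork
assert a == b and len(store) == 1
```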
B
But if someone pays for the LFS storage, then we just calculate how many files were uploaded for this specific project, yeah. If it was forked, then we state that the user uses the same amount of data, the same amount of storage, yeah; it's deduplicated in the background, yeah, but to the user it's displayed as if, when you fork the project, you still have the full size of the project, so.
C
Yeah, yeah, I think that's the reason. Thank you, Eagle, that's really helpful. I think what you're pointing out is also what originally spurred the investigation into this, because, you know, we're effectively overcharging. But I do have another question; I'm happy to wait until someone else has a question first, though.
C
Okay, in that case, Jerry, would you mind just going back to the diagram?
C
So you mentioned something about Workhorse having already uploaded the file. So is this essentially in parallel: once the file comes up, it goes to Workhorse, and then Workhorse does the upload, because that upload could take some time, right, depending on how big the file is, and then it creates the pointer? Or how does that work, does anyone know?
C
Because, I'll just, you know, I mentioned basically, you know, a multi-megabyte, let's say a 50-megabyte file. So the push, even if it's going through Workhorse and to object storage, might still be slower than just pushing a small text file directly through Gitaly and GitLab Shell.
C
So does that whole thing get treated as a single, and maybe someone else on the call knows the answer to this as well, does that whole thing get treated as a single transaction from the terminal? And the same thing with the git pull: do we wait for the whole file to come back before we present it to the user?
B
Yeah, I think so, as far as I remember. It works like git clone, yeah: you can just see the progress, like, you know, of pushing the files, yeah, and you see the whole progress of the file being uploaded.
B
I don't remember the details of the newer user experience, but I guess to a user it doesn't seem to make much difference, yeah: they usually just see a file being uploaded, and it doesn't matter whether it's going to object storage or to the repository, yeah. The interface is mostly the same, the file is being uploaded; it's just that, in the background, it's not performed the same way.
E
Eagle, can you just explain to me again the security feature you mentioned? So, in my head, thinking about this, there's basically a big bucket of files, and each repository kind of points to, says, "this binary belongs to this repository", with a file with a hash in it, essentially, and that's how we don't have duplication of files when we fork repos, or, you know, any duplication across the complete system. What was it you mentioned?
B
Yeah. Like, the user uploads, for example, a file, yeah, where, for example, the user knows the SHA of the file but doesn't know the content of the file. What they can try to do:
B
is uploading an LFS pointer which contains that SHA, yeah, uploading it to the repository, and just uploading, like, a fake file, just to make it appear that some file is being uploaded. And if, for example, we didn't check the contents of the file, didn't verify the content, then, in order to deduplicate the file storage, we would just take the same SHA, find the object that we need, and say that you pointed at that file.
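In other words, the server has to hash what was actually received rather than trust the pointer. A hedged sketch of that check, not GitLab's actual code:

```python
import hashlib

def verify_upload(claimed_oid: str, claimed_size: int, body: bytes) -> bool:
    """Hash what was actually received instead of trusting the pointer:
    if the SHA-256 or size doesn't match, the pointer must not be linked
    to the existing object."""
    return (
        len(body) == claimed_size
        and hashlib.sha256(body).hexdigest() == claimed_oid
    )

# A forged pointer claiming someone else's oid with a fake body fails:
assert not verify_upload("a" * 64, 4, b"junk")
```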
E
And also, that same hashing process, it must be very memory-intensive to do, no?
C
And Jerry, what is it we're looking at on this terminal screen there?
A
Oh, this is just the actual thing that is uploaded to the repo: this is the hash of the file that's uploaded to LFS, and the size. And that's basically what's used to de-duplicate, so the file hash and the size are what kind of uniquely identify it.
E
Can I just, can I just, one more, sorry, Sean. What is the, like, how do we sell this to customers, like, how do they pay for it? Do they pay for, like, "I've got two gigabytes of storage for binary files", or is this an option they can add on? And the question is: if we have this deduplication system, who pays, the first guy or the second guy?
C
Yeah, that's a great question, and that's in fact what started the issue that Jerry was working on. In fact, there's even an upstream issue about how we are billing for this stuff and whether we should be charging, because, you know, object storage is cheaper than repo storage, so should we be charging separately? The current iteration is that we're just applying some type of factor, but I think we will eventually get to a more exact accounting for this stuff, yeah.
C
Yeah, Jerry, that was really awesome, a really awesome talk. Thank you so much for presenting it, really good. And yeah, I just want to make sure everyone knows that was Jerry's first presentation, so hats off.