From YouTube: Workhorse overview for the Dependency Proxy
A: All right, so I think today we're going to spend some time just going over a general discussion and overview of Workhorse. No real plan in particular, but touching on any general design points of interest, conventions that may exist, and then maybe getting a little bit into the package-related aspects.
A: As we work towards the general idea of moving the dependency proxy to Workhorse. So maybe I'll hand it over to Nick.
B: I realized that this was going to happen at approximately three o'clock today, when I looked at the agenda and thought: oh, I see, I've got to present on Workhorse and talk all about Workhorse, which in my head wasn't what was going to happen today. We were just going to chat about this feature in particular, so I'll apologize in advance if any of this is quite unstructured. I had a last-minute hunt around for references and I've popped those into the chat there.
B: I guess I'll share my screen just so that we're all on the same page, and I'll just talk and try to address these main points. If you have any questions, or anything you're particularly interested in as I go along, or you just want to interrupt and send me on a different path altogether, let me know. I don't want to be talking about things that are uninteresting; I want to be talking about things that are interesting to you.
B: So, in terms of the general design and structure of Workhorse, I guess we start with the overall architecture of GitLab. Workhorse is an HTTP reverse proxy. It sits in front of GitLab proper down here — Puma, GitLab Rails — and it intercepts every single HTTP request that goes to GitLab Rails. So everything you write in a controller, all that code, is handling HTTP requests and returning HTTP responses that have come through GitLab Workhorse.
B
Gitlab
workhorse
always
has
a
reverse
proxy
in
front
of
it.
Usually
that's
nginx.
Sometimes
people
install
from
source
they
put
apache
in
front.
Instead,
that's
not
something
we
support
very
well,
but
it
does
work
quite
often,
people
will
then
have
another
reverse
proxy
in
front
of
ending
x
or
apache.
Like
h,
a
proxy
gitlab.com
then
has
another
layer
in
front
of
that
which
is
a
cloudflare
or
whatever
our
cdn
is
at
the
moment.
I've
honestly
stopped
trying
to
keep
track
of
that.
So
there's
a
long
list
of
http
reverse
proxies.
B
Nginx
is,
unlike
most
of
the
others.
Sorry
gitlab
workhorse
is
unlike
most
of
the
others,
in
that
it's
very
tightly
coupled
to
gitlab
rails.
The
rest
are
quite
generic
servers.
Workhorse
is
full
of
code
that
makes
changes
that
gitlab
rails
depends
on
in
order
to
do
its
job
offload
work
to
workhorse
when
that's
required,
etc.
So
the
two
components
are
very
tightly
coupled,
which
is
not
normal
for
a
reverse
proxy.
Normally,
a
reverse
proxy
will
be
quite
generic,
and
it
won't
have
very
closely
coupled
perv
root
behavior
in
the
way
the
workhorse
does.
B: There's been lots of nodding and thumbs-ups; that's great. We'll keep on this for a little while. The most important thing about Workhorse is what it's doing with the requests and the responses, and I mapped this code a little while ago. So this is internal/upstream/routes.go, which is essentially where all the magic happens in Workhorse, and most of you on this call will have seen this bit of code already: u.route, etc.
B: So here we're pushing the body of the request up to object storage, usually, and then replacing the reference in the HTTP request that goes on to GitLab Rails with the location of the file. So GitLab Rails itself doesn't have to bother with the file at all; it only has to work with the reference to the file. That's how they're very closely coupled, as David Fernandez pointed out in the issue.
B
I
read
his
comments
about
five
minutes
before
this
meeting
started.
There
is
another
way
in
which
workhorse
behaves.
As
I
said,
we've
got
all
these
different
routes
which
have
special
logic
encoder.
Then
this
executes
code
before
the
request
is
sent
off
to
gitlab
rails.
It
can
also
execute
code.
After
the
request
has
been
sent
off
to
get
lab
rails.
B
Six,
just
here
changed
in
the
past
few
minutes.
We
have
this
idea
of
a
general
proxy
which
is
full
of
all
this
magic,
and
these
are
actually
response
filters,
rather
than
request
filters.
So
we
have
a
thing
called
send
data.
We
have
a
thing
called,
send
files
and
archive
said
bluffs
and
diff,
and
what
all
these
do
is
the
request
gets
passed
on
to
get
lab
rails
unmodified
via
this
standard
proxy
that
we
have
here.
It
just
takes
the
request:
hands
it
off
to
get
lab
rails
when
git
lab
rails
response
comes
back.
B
If
it
has
a
magic
header,
then
workhorse
will
perform
actions
that
are
dictated
by
that
header.
So
you
can
ask
it
to
send
the
file.
This
is
the
gitlab
workhorse
send
data
header
that
david
was
talking
about.
There's
a
similar
header
for
send
an
archive
from
the
git
repository,
send
a
blog
from
the
git
repository
or
diff,
and
originally
all
of
these
would
call
out
to
git
commands
they
would
run.
Git
show
object.
B
Now
they
do
italy
rpc
calls
instead
and
send
the
response
back
to
the
eventual
client
they're,
all
replacing
the
response
body
and
they're.
Quite
a
lot
more
restricted
than
what
they
can
do
compared
to
these
ones
down
here,
because
these
cannot
operate
both
before
and
after
the
requests
and
sent.
B: These are limited to modifying the response that comes back from the GitLab Rails upstream. They can do other things as well, but in general they can't modify the request; they can only modify the response that the client sees. And I know there was some discussion about whether we're going to use this approach or this approach when it comes to building the dependency proxy in Workhorse.
B
From
my
perspective,
either
can
work.
You
can
probably
make
it
work
with
either.
I
don't
know
if
perhaps
this
approach
is
a
little
more
appropriate,
but
it's
not
a
strong
feeling.
On
my
part,
I
I
could
quite
happily
see
her
an
implementation
to
buzz
either
yeah.
So
that's
the
general
idea
of
how
workhorse
functions
and
the
kinds
of
things
it
does,
but
does
that
make
sense
to
everyone?
Are
there
any
questions.
A: I'm kind of curious about the idea of Rails handing back headers. Is there a specific reason why it's better to use headers in the response, versus, you know, just handing back a response body for Workhorse to deal with?
B
I
think
that
the
answer
to
that
you
can
see
I'm
stalling
for
time.
It's
how
the
response
writer
works
inside
of
go
it's
very
hard
to
interpret
the
start
of
the
response
body.
It's
really
easy
to
interpret
the
response
headers
and
in
general,
you
load
all
of
the
response
headers
up
and
you
leave
the
body
unread
entirely
until
you've
decided
what
to
do
with
it
based
on
the
headers.
So
if
I
take
the
internal
send
data,
I'm
completely
off
piece,
it
might
be
in
here.
B
There
we
go
so
here's
where
all
the
magic
is
happening
at
the
point
where
we
run
this
we've
already
read
all
the
headers
from
gitlab
rails,
but
we
have
read
none
of
the
response
body.
The
response
body
can
be
an
arbitrary
size
and
it's
difficult
to
know
how
much
of
it
to
read.
So
I
think
it
ended
up
in
the
header
simply
because
it's
more
convenient
that
way
it
doesn't
have
to
be
there,
but
it's
just
what
we
happen
to
have
implemented.
B: All the code that you would write for the new dependency proxy would be called, maybe, inside this package, and you'd just create a new instance that can carry the state it needs around. Send data will take the metadata that GitLab Rails has written, parse it, match it with that injector — which is the new code you'd be writing — and then it would invoke it for you with the decoded data.
B
So
this
approach
is
quite
simple:
it's
a
bit
more
involved
than
this
kind
of
approach
where
you're
saying
just
run
this
function.
Whenever
you
see
this
route,
but
the
infrastructure
is
there
and
I
guess
what
I'm
trying
to
show
is
it's
not
difficult
to
add
new
things
here,
it's
not
a
large
overhead.
We
recently
added
this
new
image
resizer.
B
So
when
gitlab
sends
back
a
large
image,
we
dynamically
gitlab
doesn't
normally
send
back
images,
it
will
send
a
reference
to
the
image,
and
now
we
take
that
image
and
we
can
make
it
smaller
in
the
clients
so
that
it's
more
convenient
to
read.
This
is
only
out
of
a
few
months
ago,
so
it's
not
difficult
to
do
in
the
abstract.
All
the
difficulty
is
inside
this
new
hypothetical
function
here.
B: Back to the agenda: yes, that's the general design and structure of Workhorse. As I say, it just sits in front of GitLab Rails, interprets some requests, interprets some responses.
B: I did link to the original implementation merge request here, because I thought it was quite useful to refresh my memory on how the whole dependency proxy hangs together at the moment, and I'm assuming the code hasn't changed a great deal since it was written. But that's just the general idea of how it works.
B: Particularly important to me is making both of these happen in Workhorse. This one, the manifest, is just as important as the blob, because, yes, it's a small file, but if the upstream server is misbehaving, it can cause just as many problems as the large blob. It doesn't matter that it's a 100-byte file if it's taking an hour to be proxied from server down to client.
B
Yes,
so
at
the
moment
we
are
actually
using
the
send
upload,
which
is
a
workhorse
magic.
That's
using
the
same
data
header
that
we
talked
about
if
we
happen
to
have
the
blob
already
and
then,
if
not
we're,
just
returning
back
the
status
that
we
don't
have
it
proxy
about
the
error.
So
we
can
either
enhance
this
so
that
this
code
is
happening
in
workforce
beforehand
or
we
can
change
this,
send
upload
to
be
fetch
and
send
upload
or
something
similar
to
that
which
was
david's
suggestion.
B: Just recently. And as I said, I don't have a strong preference on either of those. Convention-wise, the good news about GitLab Workhorse is that it's very standard Go, and Go has a very opinionated set of rules about how to write Go code. There are some GitLab-specific things, but they're almost all encoded in the Makefile.
B: Which is in here — we've got a very large Makefile. We have make format, which will run fmt, and there's a large number of custom checks, and there are some special rules about what you can do inside Workhorse. We have a rule about using context, for instance: you can't just use context.Background anywhere, because if you do, the Makefile will complain that it's incorrect. We like everything to be descended from a single context, so you can cancel the entire process quite easily from a single point. Another rule is to do with logging.
B: We like the log messages to be structured; they're usually output as JSON with fields. So let's just see if I can give you an example in code.
B: We've got all these different error messages that we're emitting, and we'll only ever use Print or Warn or similar verbs here. We will never use format strings inside the error message: the message is always static, and then any fields, any context, get added on in here. And what that does — if I can find the resource...
B: This might be the access log, but the message is always static, so it's easy to search in Kibana, and then you have all of the interesting data — the things that might vary between different instances of the same log message — passed out as separate fields. It's just important for observability.
B: I guess I don't have a great deal more to say about Workhorse that's generally applicable. If you wanted to talk about the nitty-gritty of the dependency proxy, or if you had questions about what I've just said, or anything like that?
A: Yeah, so I think, with the dependency proxy, we've actually been thinking about — you said maybe we could use the response headers, or maybe we could use the request interception, and I think we've discussed the idea of using both, where we would probably hijack the response or the request in order to say: hey, Rails.
A: So if the image isn't present yet, then right now Rails makes a request for an access token from Docker Hub and then makes a request for the actual blob or manifest. And so I think the initial idea — and, you know, it's still just ideas — was to still allow Rails to either request that token or provide credentials to Workhorse if needed, and then allow Workhorse to make the request.
B: I forget what the setting is called just now, but the idea is that sometimes the object store is not directly accessible to the client. It might be, for instance, a NetApp appliance sat on private IP space, and when that happens — proxy download, it's called.
B: You can have in here proxy_download: false. When proxy download is false, you're always sending the object store URL directly back to the client; that's a redirect. But when proxy download is true, the object store is not visible to the client at all, and what happens is that GitLab Workhorse?
B
As
it
says
it
proxies
the
url
so
that
it's
hiding
the
implementation
details
of
the
object
store
from
the
eventual
client?
So
I
guess
someone
doing
how
that
approach,
where
you're
sending
back
the
pre-signed
download
the
url.
As
you
say,
how
is
that
going
to
work?
If
we
have
proxy
download
turned
on.
C
Yeah,
I
think
I
think
this
would
be
two
options,
one
without,
as
I
said,
one
without
proxying
one
with
proxing,
and
actually
the
way
this
is
laid
out
is
similar
to
how
the
container
registry
works.
There
is
also
a
similar
option,
as
well
and
for
github.com.
That
is
always
redirect.
Clients
and
self
managers
get
to
choose
if
they
want
that
or
not.
So
I
think
we
will.
We
would
have
to
provide
an
equivalent
option
for
for
this
as
well,
so
that
it
is
consistent
with
the
rest
of
the
artifacts
download
as
well.
B: Yeah. So essentially, then, the URL here would be gitlab.com-something, and we'd need to keep that state somewhere. When proxy download is turned off, it's completely stateless: everything you need is encoded in the URL, and that's going out to an external service which has the state. When proxy download is true, you need to generate and save some state somewhere to make this part work, and that's the bit that, I suppose, worries me a little bit about this approach. I'm sure it's solvable; I'm just thinking about it.
D: No, but when you download the package, you still access an object in object storage, and you can still configure whether you want the proxy enabled or not, and the endpoint will still work in both cases. Yeah.
B: I'm just going to page through. So here we have the work that goes on.
B
So
just
down
here,
I
guess
what
I'm
missing
from
here
are
the.
So
we
go
back
to
the
client
here
assume
this
thing
is
a
bit
small,
but
when
does
the
client
come
back
to
us?
There's
no!
There's
two
left
arrows
here.
Oh
here
we
are
right.
D: It's an option: it's in case of a cache hit, and in case of a cache miss. And so we will always send a single response to the client, which could be a redirect or not, depending on the proxy setting, I guess. But from the client's point of view, it will always be a request and a response.
A: I think from the client's point of view — I mean, well, not necessarily from the point of view — but I think things will generally always be served from object storage, maybe. If there's a cache hit, then we kind of treat it like we do with package downloads: we're just downloading something from object storage.
A: If it's not a cache hit, then Workhorse downloads it from the external source first, and then either serves that response and also caches it, or maybe we cache it first and then serve it directly from object storage.
B: Yeah, so we have request one here and then request two here, and that makes a lot more sense. There's work going on here — that's an alt, so this is what happens in the middle. So we don't send back the 307 temporary redirect until we've completed downloading the thing from Docker Hub or the container registry.
D: Yeah, a small note on the idea, on the comment I put on the issue. So on this interaction schema, we see that there is an authorize right after the first GET, so we are using a route — if I'm getting this correctly — meaning that Workhorse will intercept that GET and do something.
D: And my idea was to not do this, but instead use only the response injectors to tell Workhorse: hey, we have a file at this URL, you need these credentials, and you need to download the file and request an upload to this URL. So that would be like an internal URL backed by Rails, but: download from this URL with these credentials.
D: That's a lot of logic in a single response, but the nice thing is that once Workhorse has this response, it can fetch the data from the URL and then just follow the same logic as for file uploads, meaning: I have a file here and I need to upload it to this URL on Rails. So it will contact Rails on the authorize endpoint to get the object storage key or location, upload it there, and then, once that happens — which is the upload logic — we can send it back to the client.
B
Yeah-
and
I
was-
I
was
actually
quite
amused
when
I
saw
this
pop
up
because
it
essentially
recapitulates
the
discussion
me
and
jacob
had
two
years
ago,
where's
that
got
her,
so
I
have
to
keep
going
back
here,
because
zoom
keeps
hiding
so
essentially
back
in
april
2019
I
was
like
ooh.
We
should
use
the
authorized
approach
with
the
roots
in
order
to
make
this
happening
workforce
and
then
jacob
right
at
the
end
suggested.
Oh,
no,
let's
just
use,
get
lab
work
or
send
data
which
is
david's
suggestion.
B
So,
as
I
said,
I
think
either
can
work.
I
don't
have
a
strong
opinion
as
to
which
approach
to
use-
I
think
that's
probably
best
left
to
whoever
does
end
up
implementing
it,
which
probably
won't
be
me
because
I
am
disappearing
on
the
16th
of
october
for
paternity
leave.
So
chances
are,
somebody
else
will
implement
it
and
I
will
be
consulting
on
it
while
I'm
there.
But
it
wouldn't
be
very
good
for
me
to
write
this
and
then
disappear
and
just
kind
of
throw
it
in
everybody's
laps.
B
We
need
to
build
up
knowledge
in
other
people
in
source
code
for
workhorse,
but
yeah.
Whoever
does
end
up
coming
to
this
will
have
the
one-hour
approach
or
the
other,
and
I
think
that
decision
is
best
left
to
them
either
can
work.
There
are
pros
and
cons
to
both
approaches,
but
I
wouldn't
want
to
sit
someone
down
and
say
you're
implementing
this,
and
you
have
to
follow
this
plan
that
I've
devised.
I
would
much
rather
have
them
work
through
it
and
make
the
decision
for
themselves.
D: Now that we are talking about both approaches, I see a slight benefit to the response header approach, which is this need to tell Workhorse: hey, download this and cache it on object storage using this URL on Rails. This will be reused in some features from the package team, such as the dependency proxy for packages, or virtual registries, which are basically a package registry — a package registry endpoint that will...
D
That
will
gather
many
urls
that
could
be
internal
or
external
that
have
packages
and
we
could
be
implementing
caching
there.
So
using
the
response,
header
means
that
adding
a
new
route
to
use
this,
we
don't
do
we
don't
need
any
change
on
workhorse.
We
just
send
the
response
and
implement
the
authorize
and
the
uploading
point,
and
that's
it
using
the
new.
Well,
the
the
route
approach.
We
would
need
to
implement
a
new
route
on
workhorse
to
catch
the
the
request,
if
I'm
not
wrong,
yeah.
B
And
we've
observed
this
with
package
book
loaders
in
particular.
If
you're
about
to
root
stock
go,
we
can
see
every
time
we
add
a
new
type
of
package.
We
have
to
add
a
new
route
and
it
gets
long
and
it's
quite
awful,
and
if
these
were
instead
implemented
as
send
data
type
filters,
if
there
was
some
way
of
doing
that,
I
don't
think
there
is
for
those
package
uploaders.
B
Then
you
wouldn't
have
to
do
that.
You
would
just
be
able
to
send
the
response
out
there,
because
these
filter
every
single
response.
Every
single
response
is
checked
against
these,
whereas
here
you
have
to
link
it
to
a
specific
path,
so
this
is
best
for
very
specific
functionality.
That's
only
relevant
to
a
single
route,
whereas
this
is
best
for
something
that
any
rails
controller
could
conceivably
do.
B
So
were
there
any
more
questions
or
ideas,
I
I
don't.
As
I
said,
I
don't
really
want
to
come
out
of
this
with
a
firm
recommendation
and
say
we
should
do
it
this
way.
None
of
us
sat
here
at
the
beginning
of
this
issue
actually
know
what
the
challenges
are
going
to
be
once
we
get
halfway
down
so
yeah.
I
would
much
rather
leave
the
person
implementing
it
with
the
capability
to
change
their
minds
halfway
through
and
they
actually
have
tried
the
root
approach
and
it's
too
difficult.
A: Cool, yeah, I think that was a great little overview and intro. So thank you for coming to help us out. I'm sure it's probably going to be one of us, eventually, that will start digging in and working on this, so I'm sure we'll be pinging you or some of the other Workhorse folks for some help along the way.
B
I
think
sean
mentioned
that
he
thinks
by
sean
carroll
source
code.
Back-End
engineering
manager
mentioned
that
he
thinks
source
code
should
be
the
ones
to
implement
it.
I
don't
have
a
strong
opinion
either
way.
I
think
source
code
does
need
to
have
more
workhorse
expertise,
and
this
could
be
an
ideal
opportunity
to
build
that
expertise
in
source
code
by
having
someone
in
source
code
implement
it
for
a
bit
of
context.
B
We
only
really
have,
I
think,
it's
three
maintainers,
maybe
four
maintainers
now
of
work
course,
and
only
one
of
them
myself
is
in
source
code.
So
well
quite
often
when
there
are
things
that
need
to
be
done
in
work
course.
It's
somebody
outside
of
source
code
who
ends
up
doing
it,
just
because
we
don't
have
the
capacity,
and
this
would
be
an
ideal
approach
to
build
capacity
in
source
code.
B: Yes — if I show you the... I'll just share my screen again, very briefly. I know we're over time; are you running...?
B: This is upstream.go, which does the hard work of sending the requests to Rails and returning — yeah, I'm just looking for... We have this interface, which is common across all of Go: the ServeHTTP interface. By the time we hit our code, we've already read the HTTP request; this contains all the request headers already. The body is not yet read; the body is an io object that you can read when you choose to. But at the time when we're executing all of this code, this happens before we decide what to do.
B: For instance, we immediately forbid any CONNECT requests, because those are evil, and so on. This happens really, really, really early. We already have the request headers; we don't yet have the request body. Chances are the request body is still in the client — the eventual HTTP client hasn't sent a single byte of the body yet — but we're executing code that can work with the headers and can even start replying if it wants to.

D: Okay, so I'm asking because I hit a slight bug. We have packages for NuGet, where the client, nuget, will do something — not strange, but I guess expected: it will trigger an upload request without any credentials first, and if it receives a — I don't recall the status code — something...
D: ...authenticate, it will redo the upload request with the proper credentials. And what happens on staging and production: if you have a quite big file, like a one-gigabyte NuGet package, the first request seems to be uploaded entirely, because it's really, really long — like an eight-minute upload — and I was expecting the headers to be read quickly, so that the authorize endpoint is called, and then Rails will reply: no, this...
B: Where are we — a 401 response, very quickly, and then we have this 100 Continue from the client as well. So I think it's a similar problem to this, and I don't have an immediate answer, but I don't even know if we solved this one.
B: Your local setup in GDK? You did, okay. In that case, I would be inclined to blame it on the GitLab.com proxy or Cloudflare, one or the other, because it's definitely to do with request buffering. Something somewhere along the chain is buffering the entire request before continuing on, and — I'm not certain, but I think we solved that in Workhorse and nginx.
B
So
if
it's
working
as
you
expect
with
nginx
in
front
of
workhorse,
then
the
answer
has
to
be
either
gitlab.comproxy
or
our
cdn
network
is
offering
the
request
before
it
gets
to
us.
I
think
that's
quite
likely.
I
think
I
would
almost
expect
cloudflare
to
do
that.
B: But at least you're not the first person to have this problem; I'm pretty sure it's the same class of problem as this Kerberos authentication one. So that should help. Great, thanks, I will write that issue up. Guys, while I've been talking, there have been a couple of other things that have popped into my mind around this general design and structure of Workhorse that I think I should mention — just kind of a grab bag of things — and the most important one is almost like the philosophy of Workhorse.
B: GitLab Rails is in charge, and to an extent this gets blurred with the path-specific URLs, where we're doing work before the request gets to Rails. But the general idea is that we should be responding to things that Rails tells us to do; sometimes we just have to encode that, for efficiency reasons, in the code of Workhorse. And the other major thing, I guess I'd say, is that Workhorse doesn't really do background processing of any kind whatsoever.
B
I
mean
talking
about
this
whole
feature
as
I've
been
going
through.
I've
been
quite
conscious
of
the
fact
that
we've
got
a
long
running
process.
We
have
to
do
which
is
download
the
file
from
docker
hub.
We
then
get
those
bytes,
we've
already
streamed
them
to
the
client.
If
we're
being
efficient,
we've
finished
the
response
that
we're
giving
to
the
client
and
now
we've
got
finalization
work
to
do.
We've
got
to
finish
the
up
finish,
the
storing
of
this
into
the
cache,
if
we're
not
to
unnecessarily
delay
the
client.
B
So
usually,
when
we
have
background
work
to
do,
we
want
to
do
that
in
sidekick,
sometimes
that's
quite
difficult
to
get
to
work
at
the
same
time
as
having
the
course
in
the
mix.
I
do
think
this
absolutely
belongs
in
workhorse.
This
is
just
one
of
those
four
knee
problems
that
I'm
putting
in
the
middle
and
saying
nobody
twitch.
This
is
quite
hard.
You
know
we
have
to
somehow
handle
background
work
and
that's
quite
difficult
to
do.
B
I
think
the
stan
linked
us
to
a
particular
issue
to
do
with
object,
storage
and
the
finalized
call
where
essentially,
what
we
end
up
doing
is
uploading
the
file
to
a
temporary
location,
and
then
the
finalized
call
is
to
move
it
from
the
temporary
object
storage
book
to
its
final
location,
and
that
can
take
a
long
time
if
they're
different
buckets.
So
I
don't
think
that
blocks
this
work,
but
the
general
idea
of
there
being
large
pieces
of
work
to
do
at
the
very
end
of
the
response.
B: Well, I mean, if we can offload these things to Sidekiq, it's always better to do so. The general rule that we have for Workhorse is that features are best not done in Workhorse unless there's really no other way to do it. And as we're seeing on gitlab.com, we can't assume that Workhorse and Puma, the application, are upgraded at the same time.
B
We
have
to
be
absolutely
sure
that
this
code
works
when
it's
an
old
workhorse
and
a
new
gitlab
rails,
or
new
gitlab
rails
in
the
old
workhorse,
both
of
those
directions
really
matter,
and
we
have
to
make
sure
that
we
have
compatibility
there,
which
could
be
an
issue
if
you
just
always
think
about
it.
As
in
half
the
fleet
is
running
old,
workhorse
new
rails,
or
vice
versa.
A: All right, well, if there are no other questions from anyone: thank you so much for taking the time to get together with us, walk through some of this, and discuss some of these ideas. We really appreciate it.
B: That was good to chat, and if you have any questions or want any more involvement, then just, you know, poke me in Slack or in an issue and I'll respond as fast as I can. I'll bring it up with Sean tomorrow as well — I've got a regular one-to-one with him — and see if I can get an idea as to what the plan is from Source Code's side. Right.