From YouTube: Package office hour, June 2020
Description
No description was provided for this meeting.
A: Okay, we've been working through this epic and, from what I last heard, Ethan has the issues right now. The Go repository support is sitting behind the feature flag, and what Stan mentioned to me was that the features blocking that are: moving the archive generation to workhorse, fixing the SQL N+1 problem, and reducing the usage of Gitaly.
E: So there's a couple of MRs that haven't gotten a whole lot of response. There's one where I created a function specifically for Go in the... they created the workhorse method, I'm not sure what those are called specifically, for creating the archives. Because we were talking previously about filtering the zip that came back from the worker, that came back from Gitaly, but zips don't really work that way.
E: You pretty much have to write the whole thing to disk. So I made a custom endpoint that asks Gitaly for a tar, then filters the tar and constructs a zip on the fly. Constructing zips can be done without writing to disk; reading them, not so much. So it's a little custom function. I tried to reuse as much code as possible from the existing workhorse create-archive function.
E: Without going into extensive detail as to exactly how Go fetches things: if we add a single meta tag to one of the middlewares, that should mostly eliminate the need for configuration. With that small addition, Go should automatically pull from the proxy without the users having to do anything. So that's a nice feature.
E: What... okay, I missed the workhorse one, I had the wrong tab. So those are the two I was talking about. And then, separately, there's stuff worth discussing for follow-up, but maybe it's still too early for that, and also it's not personally high on my list of priorities: things like having GitLab implement a checksum database.
B: I thought that the zip archive generation within the package model in Rails was a good idea. We just need to plug it in when there is an action where we need to re-sync the Go modules with packages. Once you have that kind of synchronization, you do have the zip file within packages, and so the Go proxy wouldn't need to generate them; it would just query for it.
B: We just need to find the right package with the zip file, and we can return that. That was kind of a good approach, since it's all done in a worker, so we are not limited in time. And I wonder what makes the decision to move everything to workhorse... well, right now we have both ways open, but I guess in the end only one of them will survive.
E: But it's not... it's unlike, you know, npm, where you upload something, whereas Go is fetching from the repository. So they're coupled, but I wouldn't say they have a strong coupling, so to speak. The proxy endpoint at this point is just generating everything on the fly. Yeah, everything is generated as it scans: it's making Gitaly calls and scanning the repository. So moving the archive generation to workhorse came up when I was working on that MR, and...
E: ...the Go proxy says, you know: the user makes a request, the proxy checks Gitaly, which says "oh yeah, that's a tag, it has a go.mod file, cool, we'll do it", and then the Rails application generates the archive. That step could take a while, especially if the project is very large.
B: I totally understand the performance concern of having the archive generation inline with a web request, which isn't a great idea, I guess. But now that you're working on a refresh service or sync service that would pull all the Go modules and store them in the package files, the Go proxy could have this behavior where it checks the packages: "oh, I have this version."
B: But I guess I will bring up this question in the archive-generation-in-workhorse issue and check the others' opinions on that. What is sure is that we should not have two archive generations; that's not really good. I guess we will need to choose just one, but I don't know which one is better from what I'm seeing.
B: The Rails one seems to be simpler to implement, but perhaps there are some aspects that I don't grasp about the archive generation in workhorse, because in workhorse you need to implement some ping-pong between workhorse and the Rails app, so that workhorse receives a request from Rails saying "hey, I need an archive". You would need to implement that in both projects, whereas the archive generation in Rails is all within Rails, so it's more contained, I guess. But I will ask the question on the workhorse issue.
E: The concern with putting it in... well, with having it in Rails, which is where it currently is, is that if the repository is a very large repository, then the archive generation will be fetching blobs for potentially every single file for a given ref, which, if it's large, could take a while. So that's the concern of having it in Rails: it takes a while, and I assume it might also delay the request and tie up a worker for however long it takes.
E: One thing that I... I don't know... I would like to have package files: it would be nice to show the files that the Go proxy serves in the package interface. I wonder about putting those in the database, since they are all reconstructible from Git. So it's essentially redundant data, and I think it's valuable to have a cache, but I wonder if that should be in the database, you know.
B: Redis? Yeah, you might encounter issues with rebooting and such, and then you would need to reconstruct everything. Because if you take a project whose code base doesn't move, you would generate all the archives once and that's it. And we can implement something around generating archives for versions that don't have the package model, while everything else is just kept. I mentioned that in a comment I added on the MR I'm reviewing.
B: You can compare a Go module and the package and say: okay, this Go module is really this package, and since we have all the files below it, we don't need to regenerate the archive. Because I thought about this: since the Go proxy is all about Git tags, what happens if you remove a tag and then recreate it? Well, not recreate: you reuse the tag name, but it points to a different Git history. You would have a different archive.
B: So if we have this synchronization between the package model in Rails and the Go modules, we need a mechanism to verify that they're still in sync; just verifying that they exist is not enough. We need something more, and for me the commit SHA was a good one, but perhaps we can use something else. This way the worker can reason: okay, this module has already been generated, I don't need to deal with it, I can just take the next one.
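The staleness check being discussed could look something like this sketch; the type and field names are invented for illustration and say nothing about how the actual Rails models are shaped:

```go
// Sketch of the tag-vs-package staleness check: a stored package row
// remembers the commit SHA its archive was built from; if the tag now
// resolves to a different commit (e.g. the tag was deleted and
// recreated), the package must be rebuilt.
package main

import "fmt"

type storedPackage struct {
	Version   string // the tag name, e.g. "v1.2.3"
	CommitSHA string // commit the archive was generated from
}

// needsRebuild reports whether the cached archive for a tag is stale,
// given the commit the tag currently points at.
func needsRebuild(pkg *storedPackage, currentSHA string) bool {
	if pkg == nil {
		return true // never generated: create it
	}
	return pkg.CommitSHA != currentSHA // tag moved: destroy and recreate
}

func main() {
	pkg := &storedPackage{Version: "v1.2.3", CommitSHA: "abc123"}
	fmt.Println(needsRebuild(pkg, "abc123")) // false: still in sync
	fmt.Println(needsRebuild(pkg, "def456")) // true: tag was recreated
	fmt.Println(needsRebuild(nil, "abc123")) // true: never generated
}
```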
B: Or: this one has the files, has the package models, but they are not current, they don't match, the checksum doesn't match, so I need to destroy the package and recreate it. But well, that's all ideas for now. The starting point is throwing this question into the workhorse issue and seeing what the others' opinion on that is.
E: So I was thinking about... you know, one of the reasons the proxy is behind the feature flag is the Gitaly changes, and one of those is relatively... well, one of the minor changes is fairly straightforward, in that adding a filter... I'm trying to figure out which issue goes with that... adding a filter to one of the Gitaly endpoints would reduce the amount of data that it has to return, because the Go proxy only cares about go.mod files, but you can't... that function does not support globs or anything...
E: ...that would let you ask for only go.mod files, so you have to ask for everything, and that's a whole lot of extra data. So that's a fairly minor change, and there's a couple of those. There's another change that could potentially eliminate the need for the N+1 queries. That one is a significantly more ambitious change, in that it would basically be like search-files-by-name, but for a large set of tags.
E: That's effectively the optimization that is needed. You know, I have a proof of concept: it can scan Gitaly in one second, and that's still a fairly long period of time. So that one I would, you know... I would not mind having someone else take it over, because that's a pretty ambitious change to Gitaly, but it would really improve... it should eliminate the N+1 queries.
B: I haven't thought about something else, but, asking you two as Go developers: what if, for the first iteration, we just support go.mod files that are at the root of the project? This way you don't need to search within all the directories for go.mod files; you would just check if there is one at the root of the project. I asked this question on the Go channel, and yeah.
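The root-only simplification proposed here boils down to a one-line predicate; this sketch assumes repository-relative paths and is purely illustrative:

```go
// Sketch of the "first iteration" rule: only a go.mod at the repository
// root counts as a module definition; nested ones are ignored for now.
package main

import (
	"fmt"
	"path"
)

// isRootGoMod reports whether a repo-relative path is the top-level go.mod.
func isRootGoMod(p string) bool {
	return path.Clean(p) == "go.mod"
}

func main() {
	for _, p := range []string{"go.mod", "./go.mod", "v2/go.mod", "cmd/tool/go.mod"} {
		fmt.Println(p, isRootGoMod(p))
	}
}
```

Only the first two paths are accepted; the nested go.mod files would be picked up in a later iteration, as discussed.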
B: My question was whether it is common usage to have multiple go.mod files within the same code repository, and the answer was that it's not commonly used, but it is used. So if we ship without this feature, it would need to be implemented in, like, iteration number two, three, or four, really after the needs of the first one. But I don't know; it's a balance.
E: One of the ways to have a version two is to put a v2 folder in the repository, so I think it would be valuable to look for those. Because... well, I mean, one of the things is the proxy works. You know, the proxy is behind a feature flag, but it works, and it doesn't have... when you query the proxy, you query for a specific path, so it doesn't have to look through a set of tags, and that is the N+1 query.
E: But it's not... you know, it's only looking at a specific path; it's not scanning. The refresh worker is doing an ls-files, you know, a search-files-by-name on the whole repository and then treating any go.mod file as the root of a module, but the proxy only looks at the path the user enters. The request has a path, and it only looks at that path, so it doesn't have quite the same problem.
E: The proxy works, and, you know, it's behind a feature flag, but it does everything it needs to do. So by only looking in the root... if we're talking about the MR that we're working on, that is only going to affect what is displayed in the UI. It would be nice to have everything displayed in the UI, but it's not going to be a big functional impact if it's limited. So that makes sense.
B: Yeah, totally. I'm sorry, I was not explicit, but I was effectively talking about the worker that needs to sync all the Go modules of one project with package models. It has to scan the whole Git repository for go.mod files, and I was wondering if we could, for the first iteration, just scan for the root file.
B: There is another constraint I suggested: the API that receives the call to refresh packages has an optional parameter, which is the ref, and I wonder if we could just drop it for the first version and implement everything so that the API will simply sync all of the Git repository's Go modules with all the package models. And that's it: you don't have the choice of a ref or whatever, but we could be adding that at a later time, in a future iteration.
B: If we need that... but by having that, it simplifies things a bit: we will not need to have this ref support, and the whole thing will be just scanning for files, not so much looking for go.mod files in exact refs or exact commits. You would just need to scan everything.
E: Some of the queries, yes. I think I have not entirely internalized the minimal-iteration philosophy, so I do tend to think about what I am going to want in the future, and so some of the stuff in the finders... I think I can simplify those and tear out some of the logic that isn't actually being used right now. Yeah.
B: Yeah, that's correct. I think I saw a finder which was accepting a lot of different objects and classes: it can be a string, it can be a commit, it can be a Git commit. And if those usages are not within the MR, yeah, it's better to not have them, so that first you simplify the code and it's shorter, and then, you never know what the future holds, right?
E: I think we've talked about the main things that are important, you know, that are high priority right now. Oh, there is the... yeah, so in the MR that David and I are working on... I don't think so. The SQL stuff, this SQL N+1, I think that's probably a straightforward fix, but my ORM experience is with .NET stuff, with Entity Framework, and it's not translating to writing effective queries in Rails.
B: Yeah, I guess the more general feedback is that when you see that the MR size is growing really, really fast, it's usually time to split it up. But it happens to everyone, even me: we did a lot of big-size MRs where we needed to split them up. So yeah, that would be it: keeping an eye on the amount of changes, and when it's close to or above a reasonable limit, perhaps think about splitting things up, or dropping features that could be dealt with at a later time.
B: It just has to trigger the thought, I guess: is this MR too big? Can I split it into smaller ones? Can I drop a feature or a parameter or some functionality so I can make it smaller? And it really helps with the review times, because it's easier to digest and dig into, and the same goes for the maintainer, who will also have something easier to digest.
B: That's logical, right? So yeah, that would be general feedback for all these MRs: keeping an eye on the growth of changes, and perhaps, when it goes above, say, a 1k-changes mark, think about splitting it up or dropping features, leaving something for future usage if you don't need it right now.
E: It takes the module name and turns it into an HTTP request, attaches go-get=1 as a query variable at the end, and whatever server responds to that is expected to respond with a meta tag that points to the actual repository. So a vanity URL is when the canonical name of the module is not where it's stored. The canonical name... you know, one of the Go developers has a lot of stuff on his personal website.
E: Yeah, I think that's probably... yeah, I think that's probably the most straightforward way to do it. It could theoretically be handled automatically, but I think having a configuration like that... it's kind of like the way Pages validates that you actually have control over a domain when you try to add a custom domain. I think that would be a good way.
E: Because that's how the Go developers decided. It's kind of a semantic versioning thing, and this is really getting into Go philosophy, but the idea is: when you make backwards-incompatible changes to the API of the package, to the exposed interface of functions and types, that's a major version. One of the big things is that you should avoid that as much as possible... ideally, you should never make backwards-incompatible changes. When you do, the path change forces...
E: ...it forces the developer to treat it as if it is a different package. So, you know, if I have, as an example, "packparse", which is a packet parsing tool: if I released a version two of that, then when you import it, you would put v2 in the import path. So it is explicitly marked as: this is a new version.
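A small sketch of how the /vN import-path suffix follows the semver major version (the module name is hypothetical):

```go
// Semantic import versioning: v0 and v1 keep the bare module path,
// while v2 and above append /vN to the import path.
package main

import (
	"fmt"
	"strings"
)

// majorSuffix returns the import-path suffix for a semver tag like "v2.0.1".
func majorSuffix(version string) string {
	major := strings.SplitN(strings.TrimPrefix(version, "v"), ".", 2)[0]
	if major == "0" || major == "1" {
		return "" // no suffix for v0/v1
	}
	return "/v" + major
}

func main() {
	base := "example.com/packparse" // hypothetical module from the discussion
	for _, v := range []string{"v1.4.0", "v2.0.1", "v3.0.0"} {
		fmt.Println(v, "->", base+majorSuffix(v))
	}
}
```

So releasing v2 of the hypothetical packparse module changes the consumer's import path to example.com/packparse/v2, which is what marks the breaking change explicitly.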