From YouTube: Package: Internal customer call 03-24-20
A
Great, this is our internal customer conversation for the Package stage, and we're going to be working through an agenda which is not being shared. The first item on the agenda is: is there any objection to moving these meetings to occur on Wednesdays instead of Tuesdays? Dan has a conflict at this time, usually, on Tuesdays, so I was wondering if there's any objection, just making it so we can all attend each month.
A
Okay, I'll update that. The next item is related: I have a few updates for the dependency proxy. It is now available on GitLab.com; thank you, Alex, for scrambling to make sure we got that done. In the current milestone, we're adding the ability to purge the cache via the API, and then in the next milestone we'll add the ability to define a cadence on which to purge.
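For reference, a purge call like the one described would presumably be a DELETE against a group-level cache endpoint. A minimal sketch, assuming a hypothetical route and placeholder host/token (check the API docs for the actual path once it ships):

```python
import urllib.request

def build_purge_request(host: str, group_id: int, token: str) -> urllib.request.Request:
    """Build a DELETE request to purge a group's dependency proxy cache.

    The endpoint path is an assumption about what such an API could look
    like, not a confirmed route.
    """
    url = f"https://{host}/api/v4/groups/{group_id}/dependency_proxy/cache"
    return urllib.request.Request(
        url, method="DELETE", headers={"PRIVATE-TOKEN": token}
    )

req = build_purge_request("gitlab.example.com", 42, "<token>")
print(req.get_method(), req.full_url)
# → DELETE https://gitlab.example.com/api/v4/groups/42/dependency_proxy/cache
```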
There's currently a bug where images are not being pulled from the cache.
A
It's been verified by a support engineer, so we'll tackle that in the next milestone as well. What's happening now is: if you go to pull an image from Docker Hub, it pulls successfully, but then, when it goes to pull from the cache, it actually gets a failure.
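For context on that pull-through flow: the dependency proxy exposes upstream images under the group path, so a `docker pull` gets pointed at a rewritten reference roughly like the one below (host and group names here are placeholders):

```python
def proxy_image_path(gitlab_host: str, group_path: str, image: str) -> str:
    """Rewrite an upstream Docker Hub image reference into the group-level
    dependency proxy path. On a cache hit GitLab serves the cached layers;
    on a miss it pulls from Docker Hub and caches the result."""
    return f"{gitlab_host}/{group_path}/dependency_proxy/containers/{image}"

# The reference `docker pull` would use instead of plain `alpine:latest`:
print(proxy_image_path("gitlab.example.com", "mygroup", "alpine:latest"))
# → gitlab.example.com/mygroup/dependency_proxy/containers/alpine:latest
```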
So hopefully we can work on that in the next milestone. And then the next thing we want to start thinking about is adding support for caching NPM packages. We had David, one of our backend product developers, add some insight into that.
A
Okay, on the container registry front: over the past several milestones we've really invested effort in optimizing the garbage collection algorithm. We weren't able to optimize it to the point where we could run it for GitLab.com, but many of our customers have been unblocked by that performance work, and it's also informing how we start to work towards zero-downtime, or online, garbage collection. That work is starting now with defining, basically, the first step: to move the storage of the Docker manifests from object storage into a database. And there's some discussion.
D
I'm interested to see where it goes. As an EM, I'm not trying to get in and make technical decisions on the part of people that have a clear idea of how that stuff should work. But I think my concerns are the impact on performance of an unknown new system inside the same cluster/database that we might be using for GitLab.com and all of GitLab.
D
I don't know how tuned the existing instance is. We don't seem to have a good, clear way to measure the performance of the current setup, because it's using the file system right now. So how does that equate to, how would it translate into, using a data store for manifests? There's a bunch of factors here which I think we can work through, for sure, but those are the considerations. That said, with all of that, we've been given clear guidance of "one product, one data store", so, at some level.
A
Cool. We're also making progress: there's an epic now that details online garbage collection, and there's a proposed plan for how we can enable that. There will be some migration to switch not just the database but to actually switch versions of the container registry, and how do we actually run that. So Drama and Nayla are working through that plan, and we're adding issues as we go along. And I think, as Dan said, I believe we're targeting August for having that done and for having the cost of the container registry brought down.
D
I mean, we should have targeted mid-year, but we've sort of talked in terms of all of this just because we really have a functional team of two people, and so the risks are relatively high in terms of their impact, if not the probability, of issues arising. So yeah, we sort of agreed August was a likely delivery, but we really are trying to get something out there by July.
A
We ran garbage collection on dev, our dev GitLab instance, and we saw a 40 percent reduction in storage costs. That's not guaranteed, because we don't know that the behavior on dev is the same as on GitLab.com, but that's kind of the estimate I've been throwing out there: that we could see a 40% reduction in cost, which would be great.
A
Okay, we have also, on this front of lowering the costs of the container registry, been rolling out Docker expiration policies. This was a fun one: we rolled it out in 12.8 and basically only turned it on for new projects, as a way of testing and making sure there weren't going to be any performance concerns for the community of system administrators.
A
We've been working on the performance of the delete service that actually powers the expiration policies, and we've seen a 94% improvement in performance. So we're hoping to turn the feature on, or allow admins to control it, for all projects on self-managed, and then, assuming everything is going OK, in 13.1 we'll turn it on for GitLab.com.
A
And then, separately, in terms of the package registry: we're currently working on PyPI, supporting PyPI for Python, and we also have a community contribution for Go modules, where we're currently working with that person.
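For the PyPI support mentioned, the package registry exposes a per-project "simple" index that pip can be pointed at with `--index-url`. A small sketch of building that URL (host and project ID are placeholders, and the path layout should be checked against the package registry docs):

```python
def pypi_index_url(host: str, project_id: int) -> str:
    """Project-level PyPI 'simple' index URL for the package registry,
    usable as: pip install --index-url <url> <package>."""
    return f"https://{host}/api/v4/projects/{project_id}/packages/pypi/simple"

print(pypi_index_url("gitlab.example.com", 42))
# → https://gitlab.example.com/api/v4/projects/42/packages/pypi/simple
```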
Now, we have had a couple of meetings with them. I'm wondering: would anybody on this call be able to benefit from GitLab having a Go module repository? Would you be able to dogfood that at all?
A
We have a meeting with the contributor tomorrow, and Steve is working on this, or supporting that person. So if there are any questions that come to mind that are important for you to consider, or use cases, let me know and I can add those to the discussion. Or, I'll link to the MR here, so you can feel free to reach out in the MR and ask any clarifying questions; that'd be great.
A
What would be most useful in terms of dogfooding? I know for Ruby we would need more than just the package manager support; we would also need to proxy the requests as well. So I was just wondering: how would you choose to prioritize between Linux and Ruby for your own use cases?
C
I think we could dogfood both. Certainly, for it to have the most impact, Ruby would have to be proxying, but there are a few cases in which GitLab is using internal gems that don't necessarily have to be coming from RubyGems: style-guide gems and linting gems and stuff like that, which are GitLab-specific. So there's a little bit of dogfooding we could do even without the dependency proxy, but I think...
C
Yeah, I don't know. We could dogfood it, but I don't know that there would be a huge benefit to us, to the team, in using the included one versus what we're already doing. But I think, forward-looking, both for Debs and RPMs we're definitely at the mercy of our provider for the current package management there. So any steps we can take to slowly be able to do something ourselves, I think, are positive, from my side at least. Okay.