From YouTube: Monthly Internal Customer Call - August 2019
A
So this is our monthly internal customer call for the Package stage, and actually for this month there's a hackathon tomorrow that I'm presenting at, so I put together a little presentation for that and I thought I could share it. That's the first item on the agenda: just to quickly review the roadmap.
A
Let's see, I'll just jump to the two relevant slides here. First, I just wanted to show what we've launched recently; some of this stuff may address some problems. Back in 12.0 it was really just me and DZ doing some work for the Package stage. We launched a new template and we launched the dependency proxy. Since then, we've been improving NPM: we launched support for groups and subgroups and authentication with the GitLab token, and then we've changed the overall UI and added an API endpoint to list the images of a group. So the next thing we could do is start thinking about a group-level browsing UI for the container registry. We've also improved the delete process in the user interface, where you can now select multiple images, and a change coming out next week will resolve that problem.
A
Currently, deleting a single image deletes all images with the same ID; the fix for that is in review right now and should go out in the next couple of weeks, definitely by 12.3. And then, for what's on our plan, you can see from this table that the container registry is definitely the bulk of our work, and a big piece of that is lowering the cost of storage for us and for our customers.
A
So, a couple of smaller things at the top: giving users the option to delete images from CI. Before, the CI registry user didn't have permission to untag its own images; that's something that's in review now and should definitely be out in 12.3. I mentioned the improved deletion UI; that's done. This one is for improving the deletion logic I just mentioned, so that'll be in 12.3 as well. Okay, so the big stuff is: how do we improve garbage collection, at least for our customers?
A
We've been talking, I've been talking to some large enterprise customers who have many terabytes of storage that they're using for the container registry, and our garbage collection code won't work for them. So we've been going back and forth: do we fork Docker Distribution, or do we build our own? So we did.
A
We
did
decide
that
we
would
like
to
fork
docker
distribution
registry,
but
even
before
that,
we're
going
to
try
to
take
this
docker
pruner
code,
that
Camille
wrote
that's
an
experimental
phase
right
now
and
and
see
what
we're
investigating
what
has
to
be
true
for
that
to
be
used
in
production
and
by
customers.
We've
seen
some
customers
reported
that
they're
using
it
and
when
it
works
for
them
it's
really
fast
and
efficient,
and
then
others
say
it
breaks
backups
for
them
or
that
they
have
these
other
problems.
A
So this is what we're going to be working on in 12.4: figuring out what it would take to make that production-worthy. We're also planning on adding a bulk removal API at the group level, so those customers have options to clean up, or at least untag things and slate them for garbage collection. That'll be really valuable for companies like ours that have many groups and don't know every project that they're administering. And then here comes the work for actually forking the Distribution code. The first thing we'll do is optimize the garbage collection process. It looks pretty straightforward, and I've seen some MRs out there, or pull requests on GitHub, that optimize it by orders of magnitude, and Docker never really accepted those changes. So I'm not sure what's going to happen: if we make a change, will Docker accept those changes into the Docker registry? We don't know. Or do we have to maintain a fork?
A
Okay, yeah, that's one piece that we're going to have to figure out: if we are forking the Distribution code and we want to push it back, that's one thing, but if they don't want to accept our pushes, we're going to have to figure out a way to stay up to date with their security updates and their release tag updates. So I think that'll require some conversation. But once we do that and we optimize the garbage collection, we want to add in the ability to expire images from CI.
A
So then, in all of our GitLab CI YAML files, the idea is that, very similar to how we handle artifacts, you could just write "expire in seven days" for all those short-lived images and tags that we're building, which I've seen both of you are very guilty of. And then we'd add in the ability for self-managed instances to run garbage collection from the application, for users that don't have a ton of space.
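As an aside, the artifacts behavior being referenced already works like this in `.gitlab-ci.yml`; the image-expiry keyword sketched at the bottom is purely hypothetical, just an illustration of what the proposed feature might look like, not existing syntax:

```yaml
build-image:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  # Existing behavior: job artifacts can already be expired like this.
  artifacts:
    paths:
      - build.log
    expire_in: 7 days
  # Hypothetical keyword sketching the proposal above; an equivalent
  # expiry for the pushed image/tag does not exist in GitLab CI today.
  # image_expire_in: 7 days
```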
A
Okay
and
then
the
big
story
in
the
packaged
registry
is
we're
adding
in
Conan
for
C
C++,
that's
going
to
launch
in
12:3
and
nougat
is
the
our
next
most
highly
requested
feature.
We
I
just
wrote
a
an
issue,
an
epoch
for
that
issue,
for
adding
it
and
and
then
after
that,
we're
thinking
of
pipe
I
would
probably
be
the
next
one.
It's
not
on
this
list,
because
it's
not
scheduled
yet
and
the
dependency
proxy
we're
we're
not
really
actively
working
on
it.
A
It was going to be really high priority if we had decided we were going to build our own container registry, and then we were going to leverage the dependency proxy. But it's lower priority just because we have to solve some of those storage problems for our customers, who are currently paying and are very unhappy with the container registry, and we just don't have the resources to work on both.
B
Seems like it's going well. I'm not aware of a date on which we're switching customers, though. Okay, but it seems like it's been going well. Is it on canary yet? It was on staging, and has it made it to canary? I don't know about canary, but I do know that Devon Hobson is using it. Okay, just Devon. I don't think we've had any problems with it, though, right?
B
Yeah, for our use of Distribution, it certainly isn't... it's not a big priority. It's just that without it being turned on everywhere, we can't really use it. Of course, the bigger stuff we're looking for is that first column that you have there, and the container registry. Okay, cool, for sure.
A
I think the deletion process problem was that, from the UI, if you deleted a specific tag, it would delete all tags that have the same image ID. So some people, if they were reusing a lot of the same image ID but with different names, could delete like 20 at once by accident. But for that problem, the API is more specific.
A
It
would
make
the
UI
more
performant,
which
would
be
nice,
and
then
it
would
mark
all
of
those
things
for
deletion.
Soames.
Do
you
optimize
the
garbage
collection
code?
It
would
be
able
to
run.
It
would
be
able
to
clean
up
more
but
yeah
optimizing.
That
code
is
really
important
and
it
doesn't
seem
I,
don't
want
to
say
it
seems
easy,
because
there's
a
lot
of
logistical
things
about
how
we
make
those
changes
to
the
registry,
but
from
what
I've
heard
from
Dan
and
team
that
the
actual
optimization
may
not
be
that
challenging.
A
It's just that there are a lot of architectural decisions that need to be rolled in there. But when we talked to Harbor (Goharbor, or Harbor I should say), they never optimized their garbage collection, and they have huge customers. The way that they've handled it is that they've just always had the retention and expiration policies and the schedule right from the app, so it never got to the point where a customer could get up to 55 terabytes of storage, or where it gets up to...
A
I bet it's almost 2 petabytes of storage now, because we've never had that process in place. I think we need to do the cleanup, but the next thing will be making sure that we are expiring images appropriately and just running GC every 30 days. I don't know, for GitLab.com maybe it makes sense to run it every week or something.
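For context, running garbage collection on a self-managed instance today is a manual, offline step, which is part of why scheduling it from the application is attractive. A rough sketch of the commands involved (paths and flag support vary by version, so treat these as assumptions to check against the docs):

```shell
# Upstream Docker Distribution garbage collector; the registry should be
# read-only or stopped while it runs.
registry garbage-collect --dry-run /etc/docker/registry/config.yml  # preview only
registry garbage-collect /etc/docker/registry/config.yml            # remove unreferenced blobs

# Omnibus GitLab wrapper around the same process; -m additionally
# removes untagged manifests.
sudo gitlab-ctl registry-garbage-collect -m
```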
A
That would be cool, I'm excited. Yeah, and I'll let you know as we make more decisions and progress on the docker-pruner. If it's looking like we can make that production-worthy, that seems like our fastest route; if not, then we have to do the fork, optimize the code, and figure out how to make that work for us. So I'll let you know what happens over the next week or two; we should have some better answers. Okay!