From YouTube: Monthly Internal Customer Call 9-24-19
A
How's it going? This is our monthly internal customer call, so how's it going, internal customer? It's going well. I've added some items to the agenda. We can go through those, but if you have any questions or pressing matters, we could start with those as well, if they haven't been added to the agenda, yeah.
A
Yeah, no major changes on that front yet. I updated the roadmap presentation with some small changes: we added new themes overall for CI/CD, and those persist throughout the whole CI/CD section. As part of that we updated a bunch of our epics for the Package stage as well. One thing I'm really excited about is that we moved from planning milestone to milestone to having these epics laid out and covering milestones, like 12.4 for now through 12.7, and beyond.
A
So it's nice to have all these epics listed out and all of the issues at least slated. I'm sure things will move around, get reprioritized, or not get finished, but at least we have a plan, and that feels really good. It only took me four and a half months to get to where we could think beyond the first milestone, but we're finally getting there as a team. So that's good.
A
The core theme that we have going now is just to lower the cost of the container registry. I think I mentioned last month that we decided on the Docker distribution registry, and the first thing we're going to try to do is improve the garbage collection algorithm. We saw that there are some changes that were attempted to be pushed back upstream through Docker's GitHub project that were never accepted.
A
Those showed, like, orders-of-magnitude improvement in the process, so we're evaluating whether we could make those changes ourselves and, if possible, push them back upstream. If not, then we're going to have to start to think about maintaining a separate fork and what some of the implications of that will be. I think Dan has started a couple of conversations with people on the Distribution and Infrastructure teams, although I'm not sure how far those conversations have gone so far.
A
The other thing that we'll do is optimize the bulk delete API. We experienced this during the cutover from EE to CE, to, you know, a single codebase, and we were trying to delete 30,000 tags or something, and it was just hanging. I learned a lot about the process of how that bulk delete API works: it's actually just going and queuing deletions one at a time, synchronously or asynchronously, I can't remember, but it's problematic. It's one of the worst-performing API endpoints on GitLab.com.
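The pattern described above — a bulk delete that internally queues one tag deletion at a time — can be sketched roughly like this. This is a minimal illustration with hypothetical names, not the actual GitLab or registry code:

```python
import time

def delete_tag(repository, tag, latency=0.0):
    """Hypothetical single-tag delete: one registry round trip per tag."""
    time.sleep(latency)  # stand-in for the HTTP round trip to the registry
    repository.discard(tag)

def bulk_delete_serial(repository, tags):
    """Sketch of the problematic pattern: N tags -> N sequential requests.

    With ~30,000 tags, even a modest per-request latency adds up to a
    request that appears to hang, which matches the behavior described
    on the call.
    """
    for tag in tags:
        delete_tag(repository, tag)
    return len(tags)

# Tiny demonstration: every tag costs its own "request".
repo = {f"v{i}" for i in range(30_000)}
deleted = bulk_delete_serial(repo, list(repo))
print(deleted, len(repo))  # 30000 0
```

At 30,000 tags even a 50 ms round trip would mean roughly 25 minutes of serial work, which is why batching or a true bulk operation is the optimization target.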
A
So in 12.4 we have two big performance optimization tasks planned: one is the garbage collection algorithm, and the second is improving that bulk delete API. If we could improve that bulk delete API, then we could focus on introducing a group-level bulk delete endpoint, which would allow us to, you know, remove tags at a much broader level, and maybe even start to do that on GitLab.com. Even if we don't, it depends on how successful our optimization of the garbage collection algorithm is.
A
If we see it's several orders of magnitude and we get something that we could run with a bit of downtime, maybe it's something we could leverage, but more investigation and testing is needed for that. I mentioned the epics. These are the active epics that we're pursuing, in priority order. I mentioned "lower the cost of the container registry"; there are other things included in there, like being able to expire images via CI, similar to how we handle merge requests, where you can say "delete the source branch."
A
Good to know. Okay, so you could check out these issues if you're curious. The other remaining epic, creating visibility and transparency, is really about improving our overall user interface to make sure that it makes sense for GitLab. For the container registry, we're not showing which pipeline or Dockerfile created an image, and we're not including a lot of the metadata that's available to us, let alone the metadata that's available from the Docker manifest, or from npm, or anything like that.
A
So we have a survey out now, we have some more user research that our product designer is doing, and we have tasks to redesign the user interface and get that data showing up in the UI. I'm excited about that, and I'm just anxious to get it done, because I hope that's going to be really valuable. The next one is supporting additional package managers: Conan, which was planned for 12.3, slipped to 12.4.
A
Unfortunately,
it's
very
everything's
in
review,
but
it
just
couldn't
make
12:3
in
time.
We
have
nougat
that
we're
planning
on
starting
for
dotnet
developers
and
12/5,
and
we
have
a
community
contribution
for
composer,
which
is
the
PHP
package
manager
which
we're
going
to
try
and
push
over
the
line
if
possible.
But
if
it's
too
much
work
we'll
see
how
it
goes,
the
other,
the
ones
that
impact
us
python
is
is
next
and
then
and
then
ruby.
A
But we're trying to figure out a way to make the whole process go faster. Conan has taken us a couple of milestones to get done, and we really want to improve the process and make sure that when we start one, we have a one-milestone path to an MVC and maybe a two-milestone path to, you know, the feature being viable. And then there's the other—
C
I'm trying to think. I don't even know that my team would be the one to do that, but I have not heard talk of doing that. I did see an issue recently about tuning it, but I don't know. I think we might be doing it in staging; I'd have to verify that. Okay, yeah.
A
I wonder if any of you have use cases for the dependency proxy that we could test on dev or on staging, just because I don't want it to go to production and then find out that the feature doesn't work at any level of scale. I know Jason was asking me to help find some use cases we were working through; I know for our CI fleet, we're not caching any of those images.
B
Know
for
the
for
the
container
registry
dependency
proxy,
so
we
do
it
in
dev,
where
it
would
be
available
right
now.
We
do
actually,
basically
everything
that
we
do
in
comm.
We
actually
also
do
in
dev,
so
we
have
lots
of
containers,
but
most
of
them
are
our
own
there's.
We
actually
have
very
few
in
the
docker
registry
dependencies
on
docker,
hub
I,
think.
A
I'm going to keep working to try and find some use cases that we can leverage. Generally, in order, the next step for the dependency proxy will be to add authentication support, so it works for private projects, and I suspect there might be some performance tuning once we start to see it under load as it gets adopted.
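For context, the dependency proxy acts as a per-group pull-through cache for upstream images. A usage sketch might look like this; the host and group below are placeholders, not a real instance:

```shell
# Hypothetical instance and group: the dependency proxy exposes upstream
# images under a per-group path, so a CI runner would pull through the
# group's cache instead of hitting Docker Hub directly on every job.
proxy_image="gitlab.example.com/my-group/dependency_proxy/containers/alpine:3.10"
echo "docker pull ${proxy_image}"
```

Repeated pulls of the same upstream image would then be served from the cache, which is the kind of CI-fleet use case discussed above.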
D
I think I probably haven't made very many of these, if any, since I started. I've finally gotten myself into a position where I understand where the goalposts are and how the team works, and so now I'm sort of taking the next step to understand things like package dependencies and whatnot. So, having this call — I know, DJ, you've sat in on most of these and have a much better understanding of what the team needs in terms of packaging. I'm interested in the dependency proxy stuff.
D
We've
got
it's
not
happening
yet,
but
the
idea
of
being
able
to
build
our
cloud
native
images
completely
offline
in
an
air-gapped
style
environment
is
something
that
we're
working
through.
The
federal
team
wants
it
we're
saving
that,
towards
the
end
of
our
projects,
to
where
we've
got
a
working
thing,
it's
usable.
We
just
need
to
be
able
to
build
it
in
in
an
offline
environment,
so
we
may
have
some
new
requirements
or
ideas
of
things
that
we
need.
Maybe
you
know
in
the
coming
quarter.
I
would
say
how.
D
Think
I'm,
a
good
place
to
start
would
be
I'll.
Add
the
links
for
the
issues
that
we're
working
on
today.
A
big,
a
big
thing
that
we're
struggling
with
with
this
federal
project
in
particular,
is
the
lack
of
hearted
requirements,
it's
kind
of
like
hey.
We
know
we
want
hardening,
but
we
don't
know
exactly
what
we're
going
to
harden
and
so
we're
going
and
then
hey.
We
want
to
be
able
to
build
it
offline,
but
we
don't
have
all
the
details.
A
Yeah, that would be great. I just want to see if I can get involved sooner, because a lot of the dependency proxy work is sitting behind planned container registry work and some net-new integrations, so I want to make sure I have my priorities straight with what GitLab is trying to do. But they're aligned long term.
D
Term
gotcha,
yeah
and
I
think
you
know
the
dependency
proxy
on
the.com
stuff
makes
total
sense
to
me
ranks
we
can
say
you
know
the
money
infrastructure
we
have
to
run
but
I
think
I'm,
the
self
managed
stuff
it's
a
little
bit
more.
We
don't
have
anybody
doing
it
yeah
exactly
so
it
we
don't
have
any
been
like
kind
of
an
unsupported
model
to
look
at
yet
so
there's
there's
lots
of
questions
on
you
know.
Is
it
even
viable
right?
D
Yeah?
Definitely
that's
in
terms
of
where
our
that
project,
you
know,
meets
the
package
team.
That's
that
feels
like
a
and
an
effort
that
we're
gonna
be
taking
on,
like
I
said,
probably
probably,
and,
and
you
know,
October
November
will
be
ready
to
at
least
look
at
it.
Okay,.
B
You know, I think, definitely in the last month we've hit some pain points that would have been solved if we had the RubyGems package manager and a dependency proxy for the gems — really, both the RubyGems package manager and the dependency proxy play into that. Of course, that's much further out, but it's still top of mind for us as well. And then our team, Distribution, is now starting to take a look at Helm 3, which is somewhat related to one of the items for the packaging team as well. Alright.
A
Industry
we
yeah,
we
are
I'm,
I,
think
it's
relative,
I
shouldn't
say
ever
say:
anything's
easy,
but
I
think
it's
relatively
easy
compared
to
some
of
the
other
problems,
we're
tackling
to
add
in
helm,
support
to
the
container
registry.
We
have
an
issue
plan
that
I
just
moved
to
twelve
five
I
think
we
just
have
to
change
the
format,
accepted
format,
types
for
what's
in
the
container
registry,
to
support
helm,
charts,
yeah.
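Changing the accepted format types could be sketched as a simple media-type allowlist check. The media-type strings below are the published Docker/OCI/Helm identifiers, but the surrounding validation logic is a hypothetical illustration, not the registry's actual implementation:

```python
# Hypothetical sketch of an accepted-media-type check in a registry.
# The media-type values are real published identifiers; the validation
# logic itself is illustrative only.
ACCEPTED_MANIFEST_TYPES = {
    "application/vnd.docker.distribution.manifest.v2+json",
    "application/vnd.oci.image.manifest.v1+json",
}

# Extending the allowlist is the kind of change discussed on the call:
HELM_CONFIG_TYPE = "application/vnd.cncf.helm.config.v1+json"

def manifest_accepted(media_type, allow_helm=False):
    """Return True if the registry should accept the pushed media type."""
    accepted = set(ACCEPTED_MANIFEST_TYPES)
    if allow_helm:
        accepted.add(HELM_CONFIG_TYPE)
    return media_type in accepted

print(manifest_accepted(HELM_CONFIG_TYPE))        # False: rejected by default
print(manifest_accepted(HELM_CONFIG_TYPE, True))  # True: accepted once enabled
```

The point of the sketch is that chart support is mostly a matter of widening what the registry agrees to store, rather than a new storage backend.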
B
So
for
us
at
this
point,
where
we're
just
starting
the
the
research
task,
so
we
might,
we
might
as
a
result
of
that,
be
throwing
some
questions
in
this
wallet
the
issue
around
the
hosting
it
in
get
lavas
wall.
Okay,.